

Lidozin
Everything posted by Lidozin
-
Just to be clear — negative values of B4 in the drag polar formulation are physically invalid in most cases. They imply that drag would decrease at high lift coefficients, which contradicts fundamental aerodynamic behavior. If used without extreme care (or worse, by mistake), they can artificially boost L/D in regimes where a real aircraft would be well past its energy limits or approaching stall. For subsonic aircraft with straight wings, B4 should either be zero or small and positive — and only applied when supported by actual aerodynamic data. Anything else tends to produce unrealistic results and should be treated as a modeling error, not a tuning parameter. What’s more, estimating CD0, B and even B4 is relatively straightforward if you have access to basic performance benchmarks — such as maximum level speed, climb rate, sustained turn time, or overall energy–maneuverability diagrams. These give you plenty of reference points to constrain the polar in a physically consistent way, even without full wind tunnel data.
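To make the point concrete, here is a minimal sketch of the quartic polar form discussed above, CD = CD0 + B*CL^2 + B4*CL^4. The coefficient values are illustrative placeholders, not taken from any actual .lua file; the point is only to show how a negative B4 inflates L/D at high lift coefficients:

```python
# Quartic drag polar discussed above: CD = CD0 + B*CL^2 + B4*CL^4.
# Coefficient values below are illustrative, not from any actual .lua file.
def drag_coefficient(cl, cd0=0.022, b=0.062, b4=0.0):
    return cd0 + b * cl**2 + b4 * cl**4

def lift_to_drag(cl, **kwargs):
    return cl / drag_coefficient(cl, **kwargs)

# At high CL, a negative B4 lowers drag and artificially boosts L/D:
cl = 1.2
print(f"B4 = +0.01 -> L/D = {lift_to_drag(cl, b4=0.01):.1f}")   # plausible
print(f"B4 = -0.03 -> L/D = {lift_to_drag(cl, b4=-0.03):.1f}")  # invalid: drag collapses
```

With these placeholder values the negative-B4 polar more than doubles L/D at CL = 1.2, which is exactly the kind of unphysical boost described above.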
-
There are generally two ways to approach questions like this. One is the engineering-based approach, grounded in measurable quantities, physical laws, and repeatable analysis. The other is more perception-driven, relying on impressions, intuition, and speculative reasoning. In the recent discussion about the AI F4U, many participants spent considerable time debating whether the AI's flight behavior felt right or wrong, and what might be causing that perception. But just a quick look at the aerodynamic tables — using the first approach — and a couple of reference plots were enough to answer the questions concretely and resolve much of the confusion.

Of course, when you apply this kind of analysis, it can be hard to “convince” anyone in a thread where conclusions are often shaped by impressions and group consensus. After all, engineering models don’t win by vote count — they win by predictive accuracy and consistency with real-world data. That’s the challenge — and the strength — of sticking to a technical approach. It might not sway opinions immediately, but it builds a foundation that can be tested, reproduced, and improved over time.

P.S. Some have expressed doubts about the validity of results derived from aerodynamic tables, suggesting that the simulation might apply those tables differently than assumed in the calculations. That, again, reflects the second — intuitive — approach: questioning the outcome not through concrete counter-evidence, but through uncertainty about the internal mechanics. But from an engineering perspective, the solution is straightforward: you don't need to guess. Numerous forum posts over the years — including examples from developers and the community — have documented how the simulation reads and uses these tables. The logic is well-known, consistent, and has been independently confirmed. There’s no mystery here. This is exactly the difference: one approach raises questions based on feeling or possibility.
The other seeks out the implementation, reads the code or its documented behavior, and uses that to anchor the analysis. That’s not to say intuition has no place — it can help spot issues — but resolving them ultimately requires verifiable structure.
-
B and B4 are simply constants that define the shape of the drag polar (and hence the L/D curve). Cx0 is the zero-lift drag coefficient.
-
Glad that helped clarify things a bit. Just to add — the way the simulator uses the aerodynamic and engine data from the .lua files (such as thrust tables and drag polars) is actually well-understood and has been analyzed in detail over the years. It’s not a black box — the trajectory model applies this data in a consistent and predictable way, based on fairly straightforward physics. That’s precisely why it’s possible to compare AI behavior to real-world flight data and get meaningful results. When you match conditions (mass, altitude, airspeed), the outputs — like climb rate, turn rate, and energy loss — generally follow from the input tables in a transparent way. So yes, the numbers in the .lua files aren’t just decorative — they actually drive the simulation logic quite directly. You can instruct the AI to follow a route with maximum climb, and even without TacView, measure the resulting vertical speed at various altitudes. Alternatively, you can have the AI accelerate at maximum power while maintaining altitude, and determine its equivalent vertical speed — which allows you to cross-check the previous test using a different energy-based method. It would then be possible to compare the test results with calculated performance values, to see how closely they match.
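The level-acceleration cross-check described above rests on the energy-height identity Ps = dh/dt + (V/g)*dV/dt. A minimal sketch, with made-up numbers rather than a measured DCS track:

```python
G = 9.81  # m/s^2

def specific_excess_power(v, dv_dt, climb_rate=0.0):
    """Energy-height rate: Ps = dh/dt + (V / g) * dV/dt."""
    return climb_rate + (v / G) * dv_dt

# Level-acceleration test: the AI holds altitude at max power while TAS
# goes from 150 m/s to 160 m/s in 5 s (illustrative numbers only).
v_mid = (150.0 + 160.0) / 2
dv_dt = (160.0 - 150.0) / 5.0
ps = specific_excess_power(v_mid, dv_dt)
print(f"equivalent vertical speed: {ps:.1f} m/s")
```

The same function, fed a measured climb rate instead, covers the max-climb route test, which is what makes the two methods cross-checkable.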
-
If I had TacView, and wanted to make a well-supported case to the developers that the concern is valid, I would record a 1-on-1 duel and then analyze it in detail. If TacView is capable of plotting the derivative of total energy over time (which is equivalent to vertical specific excess power), as well as g-load, then it becomes possible to check a common suspicion: that the AI loses significantly less energy during high-g maneuvers than the player’s aircraft does. By comparing g-load and the energy rate side by side for both the AI and the player aircraft, we might be able to identify whether this is the case. Let’s try to shed some light on this together.
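If the track samples can be exported (from TacView or otherwise), the energy-rate comparison described above is straightforward to compute from the specific-energy definition Es = h + V^2/(2g). A sketch with an illustrative three-sample track:

```python
G = 9.81  # m/s^2

def energy_height(alt_m, tas_ms):
    """Specific energy Es = h + V^2 / (2g), in meters."""
    return alt_m + tas_ms**2 / (2 * G)

def energy_rate(times, alts, speeds):
    """Finite-difference d(Es)/dt between consecutive track samples."""
    es = [energy_height(h, v) for h, v in zip(alts, speeds)]
    return [(es[i + 1] - es[i]) / (times[i + 1] - times[i])
            for i in range(len(times) - 1)]

# Illustrative three-sample track (t in s, altitude in m, TAS in m/s):
# the aircraft climbs slightly while decelerating, and the negative net
# rate shows it is bleeding energy overall.
t = [0.0, 1.0, 2.0]
h = [1000.0, 1010.0, 1020.0]
v = [200.0, 198.0, 196.0]
print(energy_rate(t, h, v))
```

Plotting this rate against the recorded g-load, for the AI and the player side by side, is exactly the comparison proposed above.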
-
That's a very good observation. A TacView analysis would be a great complement to the video and could help clarify the energy profiles. In my own experience, 1-on-1 engagements between player and AI usually end in a stalemate: the human gains a slight angular advantage, the AI keeps an edge in energy, and neither can convert it into a win — sometimes for 30 minutes or more.
-
Thanks for the suggestion, but just to clarify — this thread is specifically about the flight model of the MiG-15. The comparisons being made here are between its AI flight behavior and known reference data from real-world documentation — including turn performance, climb rates, and energy metrics. So far, they align very well, which is the only point being addressed. What you're describing relates to a different aircraft — the MiG-21 — which isn't the subject of this discussion. If you believe the AI model for the MiG-21 exhibits unrealistic behavior like sustained 9G turns without energy loss, that’s certainly worth investigating — but it's a separate case. If you’d like, feel free to share the relevant .lua data for the MiG-21 AI model. That would make it possible to analyze its aerodynamic tables and compare them to available real-world performance data, just as was done here with YYY. It’s the only way to move from general impressions to something that can be tested and validated. As for whether I “play the game” — yes, I do. And I happen to enjoy 1v1 engagements with matching aircraft types, specifically because they reveal how well energy-based parameters are being applied. That’s why I focus on that context when evaluating AI behavior.
-
You’ve raised a number of observations about strange or buggy AI behavior — some potentially valid, others harder to verify — but I think we may be talking past each other. The original discussion wasn’t about general AI behavior across all mission stages. It was specifically about how the AI performs in a dogfight, and whether the AI's flight model during combat is based on real aerodynamic parameters that correspond to those of the real aircraft. So let me ask directly: What exactly, in your experience, seems unrealistic or broken in AI behavior during a 1-on-1 dogfight — with both human and AI flying the same aircraft, from the same initial conditions? If there’s a mismatch in energy performance, turn rate, climb, etc., under those circumstances — that’s something worth looking into. But if the concerns are about form-up logic, taxiing behavior, or scripted transitions, those are separate layers of the simulation, and not what’s being discussed when we refer to the AI using a physics-based trajectory model during combat. Let’s isolate the question to air combat maneuvering performance. That’s the only way to make progress on whether the model is being applied correctly — or not — in that context. Personally, I tend to focus on this specific aircraft and enjoy 1-on-1 dogfights in matching types, precisely because they allow a fair comparison of skill and energy management. I don’t spend much time observing AI in other scenarios, so I leave those potential bugs to others — for me, the duels are more than enough.
-
There is no magic involved here — just an implemented antigraviton-graphene whirlpool or, simply put, negative values of the coefficient B4 from the file kindly sent to me. If you plot the polar and aerodynamic efficiency graphs for Mach 0.2 (roughly 240 km/h), you can see the results of this method. Negative values of B4 have also been successfully applied at other Mach numbers. Maybe I was harshly pranked by having negative values put into the file sent to me, but anyone can check that for themselves.
-
I’d like to respond to your points, because I think there’s a fundamental misunderstanding here — not about opinions, but about how flight models actually work. First of all, a trajectory-based model (point mass model) is not something that can be "good" or "bad" in itself. It's simply a set of well-known, well-defined differential equations that describe the motion of an aircraft CoG under the influence of aerodynamic, thrust, and gravity forces. Solving these equations — through numerical integration — gives us the actual flight trajectory, including climb, acceleration, turn performance, and so on.
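For illustration, here is a minimal sketch of one integration step of such a point-mass model, restricted to the vertical plane. The force terms (thrust, lift, drag) are assumed to come from table lookups; the numbers in the sanity check are placeholders:

```python
import math

G = 9.81  # m/s^2

def step(state, thrust, lift, drag, mass, dt):
    """One Euler step of a vertical-plane point-mass (CoG) model.
    state = (V, gamma, h): speed m/s, flight-path angle rad, altitude m."""
    v, gamma, h = state
    dv = (thrust - drag) / mass - G * math.sin(gamma)          # along-path accel
    dgamma = (lift - mass * G * math.cos(gamma)) / (mass * v)  # path curvature
    dh = v * math.sin(gamma)
    return (v + dv * dt, gamma + dgamma * dt, h + dh * dt)

# Sanity check: in level, trimmed flight (L = W, T = D) the state is steady.
m = 5000.0  # kg, the nominal weight used in the comparisons in this thread
s = (150.0, 0.0, 1000.0)
print(step(s, thrust=12000.0, lift=m * G, drag=12000.0, mass=m, dt=0.1))
```

Integrating these three equations over time, with L, D, and T refreshed from the tables each step, is all it takes to produce climb, acceleration, and turn trajectories.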
-
I had absolutely no intention of defending anyone — neither dissatisfied users, nor, certainly, the developers. My goal is to establish the facts, based on knowledge — as lofty as that may sound. If this model had shown significant deviations from the reference documentation, I would have gladly published those findings. This particular aircraft caught my attention because it's one of the most frequently criticized by users, and at the same time, one of the best-documented — both in terms of source data, which undoubtedly gave the developers a solid foundation to work with, and in terms of flight performance, which provides a solid basis for comparing the model with the real aircraft in detail.
-
Let me clarify a few points, since there seems to be a misunderstanding about what trajectory-based (or “point mass”) flight models actually include. First of all, the notion that a trajectory model ignores angle of attack is simply incorrect. In fact, angle of attack (AoA) is one of the key input parameters used to define the aerodynamic polar — the lift and drag coefficients are tabulated precisely as functions of AoA. When a simulation computes the net force on the aircraft at any moment, it determines AoA from the velocity vector and attitude, then looks up the corresponding L and D values from that polar. This is standard in both AI models and performance simulation tools used in real-world aerospace. So yes — energy loss due to drag is inherently tied to AoA in such models. It’s not being ignored; it’s the foundation for how excess power, turn rate, and climb performance are derived.
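A sketch of that lookup step, with made-up AoA tables (the values are placeholders, not taken from any DCS file):

```python
from bisect import bisect_left

# Illustrative AoA (degrees) -> coefficient tables; values are invented.
AOA  = [0.0, 4.0, 8.0, 12.0, 16.0]
CL_T = [0.10, 0.45, 0.80, 1.10, 1.25]
CD_T = [0.022, 0.035, 0.065, 0.110, 0.180]

def interp(x, xs, ys):
    """Piecewise-linear table lookup, clamped at the table ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# AoA is derived from the velocity vector and attitude, then L and D follow:
alpha = 6.0
print(interp(alpha, AOA, CL_T), interp(alpha, AOA, CD_T))
```

This is the sense in which AoA is the foundation of the model: every drag (and therefore energy-loss) value the integrator sees comes out of a table indexed by AoA.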
-
As far as I understand from press releases, the advantage of GFM is that it provides a much more natural simulation of aircraft behavior during short-period motion — that is, rotation around the center of mass — while preserving the accuracy of the older, trajectory-based model. The very same accuracy we’ve been seeing demonstrated throughout this discussion thread. And that’s exactly why we’re looking forward to the new model with such anticipation. https://www.digitalcombatsimulator.com/en/news/2021-12-03/
-
You are fundamentally mistaken here. When we talk about AI aircraft — or even more advanced flight models — in the context of maneuvering characteristics (i.e. the ability to change trajectory), we're primarily discussing the motion of the aircraft in space as a point mass. Simulating this type of motion has been possible for a long time — even a ZX Spectrum or a PC XT was sufficient for solving such equations. And today, it's entirely feasible to simulate thousands of aircraft using an AI-level flight model in real time. A few lookup tables — lift and drag coefficients, thrust vs. altitude and speed — along with some auxiliary data, and voilà: you have a highly accurate trajectory and maneuvering model.
-
If the induced drag component of the aircraft's L/D polar is known (the polar itself being available from wind tunnel tests up to high values of CL), and the model's energy performance matches the real aircraft both at low lift coefficients (e.g. specific excess power in 1g flight) and at high CL (e.g. sustained turn at zero excess power), then it follows that energy loss during high-load, non-sustained maneuvers — where CL is even higher — will also closely match the real aircraft. I find it hard to believe that whoever tuned this flight model would have ignored such a rich dataset — especially given how thoroughly the design bureau compiled and published the aircraft’s aerodynamic characteristics in the official technical documentation.
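The underlying arithmetic can be sketched as follows: with a parabolic polar, the lift coefficient at load factor n is CL = nW/(qS), so drag and specific excess power at any g-load follow directly. All numbers here are illustrative placeholders, not MiG-15 data:

```python
def ps_at_load_factor(v, n, thrust, weight, wing_area,
                      rho=1.225, cd0=0.022, b=0.062):
    """Specific excess power at load factor n for a parabolic polar
    CD = CD0 + B*CL^2. All default coefficients are placeholders."""
    q = 0.5 * rho * v**2                 # dynamic pressure, Pa
    cl = n * weight / (q * wing_area)    # lift must equal n * W
    drag = q * wing_area * (cd0 + b * cl**2)
    return v * (thrust - drag) / weight  # energy height change, m/s

# Illustrative values only (weight in N, area in m^2, thrust in N):
W, S, T, V = 49050.0, 20.6, 22000.0, 150.0
for n in (1.0, 4.0, 6.0):
    print(f"n = {n}: Ps = {ps_at_load_factor(V, n, T, W, S):+.1f} m/s")
```

If the 1g value and the zero-Ps sustained-turn point both match the reference data, the whole Ps curve between and beyond them is pinned down by the same polar, which is the point made above.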
-
This is easy to verify with a simple test setup: position yourself relative to the AI aircraft so that you're firmly within its rear blind cone — specifically, low-six o'clock, about 100–200 meters behind and below, matching its speed to maintain position. Maintain pursuit while staying within that no-visibility sector. If the AI still detects you and reacts — despite following its flight plan and having orders to engage the first contact it sees — then it truly does have 360-degree situational awareness, which would be unrealistic. Next, briefly move just outside the blind cone — into a zone where a real pilot could reasonably acquire a contact visually — and observe the difference in reaction time and behavior.
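For anyone scripting this test, the geometric check itself is simple. Here is a sketch; the 30-degree cone half-angle is a pure guess for illustration, not a documented AI parameter:

```python
import math

def in_rear_cone(ai_pos, ai_forward, target_pos, half_angle_deg=30.0):
    """True if target_pos lies inside a cone opening rearward from ai_pos.
    Positions/vectors are (x, y, z) tuples with z as altitude; the
    30-degree half-angle is an illustrative guess."""
    rel = tuple(t - a for t, a in zip(target_pos, ai_pos))
    rearward = tuple(-f for f in ai_forward)
    dot = sum(r * d for r, d in zip(rel, rearward))
    norm = (math.sqrt(sum(r * r for r in rel))
            * math.sqrt(sum(d * d for d in rearward)))
    return math.degrees(math.acos(dot / norm)) <= half_angle_deg

# Pursuer ~158 m behind and below an AI flying along +x at 1,000 m:
print(in_rear_cone((0, 0, 1000), (1, 0, 0), (-150, 0, 950)))
```

Logging this flag alongside the AI's observed reactions would show whether detection correlates with the blind sector or happens regardless of geometry.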
-
Thus, the casual claims that the AI aircraft MiG-15 possesses "supernatural" power can be put to rest. The core energy-related parameters — both in 1g flight (such as maximum speed and specific excess power) and in sustained turns at zero excess power — show very strong agreement with available reference data.
-
For turn time at 1,000 meters altitude, the documentation provides calculated reference data. This point has been plotted on the graph of the AI aircraft’s computed turn time. To keep the analysis concise, we'll limit the comparison to two altitude points — 1,000 meters and 10,000 meters. For 10,000 meters, the documentation includes a turn performance chart obtained from actual flight tests; those reference points will be overlaid on the AI's calculated curve.
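For reference, the sustained level-turn time follows from the standard relation t = 2*pi*V / (g*sqrt(n^2 - 1)). A quick sketch with illustrative numbers, not the documented MiG-15 points:

```python
import math

G = 9.81  # m/s^2

def turn_time(tas_ms, load_factor):
    """Time for a full 360-degree sustained level turn:
    t = 2*pi*V / (g * sqrt(n^2 - 1))."""
    return 2 * math.pi * tas_ms / (G * math.sqrt(load_factor**2 - 1))

# Illustrative point only: 160 m/s TAS at a sustained 3 g.
print(f"{turn_time(160.0, 3.0):.1f} s")
```

Given the speed and load factor at which the AI settles in a sustained turn, this converts directly into the turn-time points plotted against the documentation.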
-
The next parameter to be evaluated is rate of climb. Unlike with the flight envelope, the documentation does include charts for both maximum rate of climb and the indicated airspeed at which it is achieved, across different altitudes. By plotting the AI aircraft’s rate-of-climb versus IAS at various altitudes, we can determine both the peak climb rate and the corresponding airspeed at three key points: sea level, 5,000 meters, and 10,000 meters. Once again, we observe a very close match between the AI’s energy performance and the documented data — strong evidence that the flight model behaves realistically in this regard as well. All calculations were performed for a nominal aircraft weight of 5,000 kg.
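The peak-finding procedure described above can be sketched as a simple scan over speed, using a parabolic polar with placeholder coefficients (not the actual .lua data) and ignoring the stall limit at low speed:

```python
G = 9.81  # m/s^2

def climb_rate(v, thrust, weight, wing_area,
               rho=1.225, cd0=0.022, b=0.062):
    """Steady climb rate Vy = V * (T - D) / W with a parabolic polar;
    coefficient values are placeholders, and the stall limit is ignored."""
    q = 0.5 * rho * v**2
    cl = weight / (q * wing_area)   # lift ~ weight in a shallow climb
    drag = q * wing_area * (cd0 + b * cl**2)
    return v * (thrust - drag) / weight

# Scan speed to find the peak climb rate and the speed where it occurs
# (illustrative weight/thrust, roughly in the MiG-15 class):
W, S, T = 49050.0, 20.6, 22000.0
speeds = [60.0 + 5.0 * i for i in range(49)]   # 60..300 m/s
best_v, best_vy = max(((v, climb_rate(v, T, W, S)) for v in speeds),
                      key=lambda p: p[1])
print(best_v, round(best_vy, 1))
```

Repeating the scan with the density and thrust for 5,000 m and 10,000 m gives the peak climb rate and its speed at each of the three comparison points.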
-
I've seen many claims here suggesting that the AI (aircraft type) possesses "unrealistic" or "supernatural" flight physics. However, since all the aerodynamic and performance data used by the AI are available in the .lua file, it's possible to compute its actual characteristics and compare them directly with those from the real aircraft’s technical documentation. Let’s start with the flight envelope. The original manual doesn’t provide a complete envelope chart, so we'll have to compare individual performance points mentioned throughout the text. When these reference points are plotted over the AI’s calculated envelope, the result speaks for itself. At the very least, in this aspect, there’s no evidence of any "supernatural" behavior — the AI’s performance stays well within the bounds of what's expected based on real-world data.
-
That's a really interesting question. I'm able to test the MiG-15 bot, but unfortunately I can't test the F4U bot because the module is missing. The data needed for the calculations is included in the file CoreMods\F4U-1D.lua.
-
One can check the AI FM using the data from F4U-1D.lua in CoreMods. Drag polars for the given Mach numbers can easily be plotted from the data in that file.
-
Could somebody share the file F4U-1D.lua from CoreMods?