Everything posted by Lidozin

  1. As far as I understand, the forum thread was discussing a recording from an online session, since the aircraft names match the nicknames of forum participants. Are there any references showing such large TacView discrepancies occurring in purely offline missions?
  2. Thanks for the materials! Small modifications to the aerodynamic polars — especially within the low-to-mid Mach range — have minimal impact on overall energy performance compared to relatively larger changes in thrust. That said, it’s reasonable to acknowledge that changes in thrust, and thus in the bot’s energy potential, can under certain conditions influence its combat logic or maneuvering behavior.
  3. This is quite interesting, but judging by the graphs posted by Curly, the aircraft's aerodynamics (i.e., the polars), which together with thrust fully determine energy performance, were barely changed in the low-to-mid Mach number region where your test took place. Therefore, in our case, thrust is essentially the dominant factor in energy gain. The maximum lift coefficient, unfortunately, has the opposite effect on energy: if you reduce it for the AI — as was done in the mod based on the reference document — the AI will actually preserve energy better than the default version. Unfortunately, I haven't been able to locate the mod file to try it myself. Would you be willing to share it? If you still have the track from your test, it would be quite valuable to see video of both runs — one using the default data file, and the other using the modified one. In that case, your piloting should remain the same, and the AI’s trajectory should presumably change. And, by the way, how can I make the AI perform such a maneuver in a mission?
  4. Does this refer to online TacView sessions, or are there known cases of such 100% mismatch in offline missions as well?
  5. Thank you — I agree it's good to see that SFM and PFM align well under stable conditions. However, zoom climbs of the type shown in your video are difficult to verify without detailed data. The results depend heavily on maintaining the same speed-energy profile and minimizing oscillations or excess control input. To make conclusive statements about energy performance in steep climbs, a TacView recording or a comparable export of time history for TAS, altitude, and G-load would be ideal — for both aircraft. That would allow direct comparison of energy rates and drag profiles. Even if we assume that the thrust curve in the low-speed regime was deliberately adjusted for some internal purpose (though I’d argue that it's not simply a "no-loss" curve, since the shape doesn't fully match that either), the difference in equivalent vertical velocity at the worst point (IAS ~250 km/h) does not exceed 8–9%. Given that the AI spends almost no time at those speeds, the contribution to its total energy gain is negligible. At higher speeds, the difference becomes virtually zero. So, even replacing the base thrust table with one that strictly matches the theoretical curve would not result in any substantial change to the outcome of dogfights against the AI.
  6. Thank you for the entertaining interlude — sincerely. It's always refreshing to see people stay engaged, even in parody. However, as enjoyable as it was to read, I couldn’t find in it anything resembling a technical counterpoint to the tested climb profile or the data comparison with the real-world reference chart. Regarding the 500 knots straight up — if that was a serious remark, I would kindly ask for clarification. A well-trimmed MiG-15 starting from 950+ km/h (which is about 510 knots) absolutely can convert kinetic energy into altitude for a short time — that's basic energy conservation and directly tied to its dynamic ceiling. There's nothing unnatural about it unless you're claiming the AI sustains it indefinitely, which is easily testable in TacView or even with a stopwatch and status bar, as already demonstrated. You’re very welcome to propose a reproducible test that demonstrates any claimed violation of physics. If it's testable, measurable, and repeatable — I'm all ears. Otherwise, I’d suggest we let the data speak. Because ten minutes of quiet measurement saves hours of speculative back-and-forth.
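     For reference, a quick sanity check of that zoom-climb claim, under the idealized assumption that all kinetic energy converts into altitude (thrust and drag ignored):

$$\Delta h \approx \frac{V^2}{2g} = \frac{(950/3.6\ \mathrm{m/s})^2}{2 \times 9.81\ \mathrm{m/s^2}} \approx 3550\ \mathrm{m}$$

     In reality thrust partially offsets drag during the pull-up, so a short, steep conversion of a few thousand metres is entirely consistent with basic energy conservation.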
  7. Most of the frustration and speculation expressed in this thread seem to stem from combat-related behavior. I haven’t come across many complaints about AI taxiing, takeoff, or landing. And to be honest, those phases don’t particularly interest me either, since I primarily view the AI as a sparring partner in aerial combat. Now, climb performance is a critical component of combat behavior: it is energy gain at 1 g, and while a pure climb doesn’t occur in isolation that often, the same physics fully defines acceleration in level flight and in shallow dives, both of which are common in real engagements. What I’ve shown is that in this regime, the AI follows the physics defined in its data tables and behaves exactly as the real aircraft would according to flight test documentation. That alone should dispel many doubts. What do we observe more often in dogfights? Sustained or transient turning flight with increased load factors, where energy is either lost or traded in ways governed by well-known aerodynamic relationships. I also showed that the aerodynamic data used for the FM (lift, drag, thrust) supports correct energy behavior in those turning regimes. So far, everything lines up. However, to completely rule out the suspicion that the AI is “cheating” in these cases, the next logical step would be to analyze a 1v1 fight recording where both the player and AI aircraft are of the same type. Specifically, you’d export the time history of TAS, altitude, and G-load for both. Using known energy equations, one can compute the specific excess power and compare it to the observed load factor (a sketch of that computation follows below). If the AI is cheating — by bypassing the FM or using hidden scripts — it would become immediately obvious. Either its energy behavior would be physically implausible, or you'd see clear discontinuities or artifacts in the data. I believe this type of empirical comparison would cut through all the theoretical debate and provide developers with a solid foundation for any investigation or follow-up.
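     A minimal sketch of that comparison, assuming the time histories have been exported to CSV. The file names and column names (time_s, tas_ms, alt_m, g_load) are hypothetical; adapt them to whatever your export actually produces:

```python
# Sketch: compare specific excess power (Ps) of AI vs. player from exported
# time histories. Columns and files are hypothetical placeholders.
import numpy as np

G = 9.80665  # standard gravity, m/s^2

def specific_excess_power(t, tas, alt):
    """Ps = d/dt (h + V^2/2g): rate of change of specific energy height."""
    energy_height = alt + tas**2 / (2.0 * G)
    return np.gradient(energy_height, t)

ai = np.genfromtxt("ai_track.csv", delimiter=",", names=True)
player = np.genfromtxt("player_track.csv", delimiter=",", names=True)

ps_ai = specific_excess_power(ai["time_s"], ai["tas_ms"], ai["alt_m"])
ps_player = specific_excess_power(player["time_s"], player["tas_ms"], player["alt_m"])

# If the AI bypassed the FM, its Ps at high g would be implausible for the
# shared aerodynamic tables, or would show discontinuities.
for name, g, ps in (("AI", ai["g_load"], ps_ai), ("player", player["g_load"], ps_player)):
    mask = g > 4.0
    if mask.any():
        print(f"{name}: mean Ps at g > 4: {ps[mask].mean():.1f} m/s")
```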
  8. This mod is unlikely to make a meaningful difference, if only because the thrust has been adjusted in the TAS region below 300 km/h — a regime the AI almost never flies in, even at low altitude. During climb, the AI typically maintains a TAS around 700 km/h, where the thrust values in the mod remain essentially unchanged. So any claimed improvements are unlikely to impact the AI’s actual climb behavior in a measurable way. Instead of relying on modifications, you can perform a direct test using the standard setup:
     • Place the AI-controlled MiG-15 ahead of your aircraft at a distance of 600 meters, both starting at sea level with a TAS of 700 km/h.
     • Assign the AI a route with waypoints that require a continuous climb to 11,000 meters at full power.
     • Position your own aircraft directly behind the AI (600 m), matching its speed and heading.
     • At mission start, apply full throttle and maintain level flight. Let your aircraft accelerate naturally until it reaches 700–705 km/h TAS, then initiate a gradual climb, maintaining 710 ± 10 km/h TAS throughout.
     • Use trim gently to hold pitch; avoid aggressive control inputs.
     The goal is not to match the AI’s pitch angle, but to fly a clean, energy-efficient climb profile.
  9. What you’re describing is exactly what separates a well-trained pilot — or a skilled virtual one — from someone just “flying it by feel.” Yes, it's hard. Yes, it takes discipline. That’s why real-world flight and combat manuals emphasize very specific energy management techniques:
     • Climbing at the most efficient airspeed.
     • Maintaining coordinated, smooth flight.
     • Avoiding unnecessary g-loading.
     • Turning at corner velocity.
     • Trimming properly and flying clean.
     These aren't theoretical details — they're core to real-world air combat doctrine, because that’s what allows you to stay fast, stay high, and stay alive. You don’t need to be a robot. But you do need to avoid wasting energy through unnecessary control inputs. And even if your airspeed control is only accurate to ±30–40 km/h, that’s often enough, as long as you don’t induce drag by chasing the fight with abrupt pitch changes. As for the AI: it simply flies by the tables with clean logic and no wasted motion. That’s not superhuman — it’s what happens when someone (or something) doesn’t bleed energy. In fact, I suspect that when some players meet another human online who does understand energy fighting, timing, and aerodynamic discipline — they’re likely to call them a cheater, too. That said, I’d like to remind everyone that the original goal of this analysis was not to examine AI behavior in terms of tactics or input realism, but simply to test the claim that the AI “doesn’t obey physics, or has physical performance beyond what a player-controlled aircraft can achieve.” The flight test results suggest otherwise. Let’s avoid shifting the discussion away from that specific and measurable question.
  10. I’m not suggesting that the AI behaves perfectly in every respect — only that, in this specific context, its energy performance in sustained climb matches both the manual and computed data to within a few percent. That’s not “superhuman” — that’s simply a correct implementation of aerodynamic tables. The formation example is a common misunderstanding: A wingman falling behind during a climb is not necessarily a sign of AI "superpowers", but often a result of human-induced energy loss — especially when trying to aggressively hold position by chasing pitch and throttle changes. In real flight, a lead aircraft never climbs at full power unless deliberately trying to leave the wingman behind. If you want to stay with the AI during a clean climb, fly exactly like it does: hold a stable profile, minimize control input, and don’t chase energy with abrupt g-load changes. This isn’t hypothetical — it’s perfectly doable in practice. In fact, in the test shown earlier, a human-flown aircraft matched the climb profile almost exactly by simply following the documented TAS and keeping the g-load near 1.0. Finally, if there are concerns about other aspects of AI behavior — like situational awareness or detection — that’s a separate discussion, and worth having. But let’s not conflate that with correctly modeled flight performance.
  11. If your goal is to tame the AI's energy performance, one practical approach is to keep the original polar coefficients while simply zeroing out the B4 term. This alone will reduce excessive climb and turn rates. For more accurate tuning, however, it's worth adjusting the values to better reflect real-world aerodynamic characteristics: Cx0 ≈ 0.025 and B ≈ 0.07. These values are typical for aircraft of this class and will result in a maximum L/D ratio of around 12 — a realistic and well-balanced figure (see the check below).
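     As a consistency check: for a parabolic polar $C_x = C_{x0} + B\,C_y^2$ (i.e., with $B_4 = 0$), the maximum lift-to-drag ratio follows directly from the two coefficients:

$$\left(\frac{L}{D}\right)_{\max} = \frac{1}{2\sqrt{C_{x0}\,B}} = \frac{1}{2\sqrt{0.025 \times 0.07}} \approx 11.95 \approx 12$$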
  12. It doesn’t really matter how many years someone has spent in DCS. What does matter is whether they took the time to find and translate the relevant section of the aircraft manual, where the correct TAS for best rate of climb is specified. What also matters is whether they were able to fly a test profile while accurately maintaining that target TAS, and doing so smoothly, without introducing pitch oscillations or g-loading fluctuations. Even small deviations from 1g can significantly increase induced drag and reduce the rate of climb. Now, with that done, we can simply compare the test points — both for the AI (green) and for the human-flown aircraft (orange) — directly against the official climb performance chart of the real aircraft. Ten minutes of calm, methodical testing often yield more clarity than hours of scholastic debate. It’s also worth noting that trying to “stay in formation” with an AI aircraft using maximum thrust is inherently flawed as a climb rate test. With both aircraft of the same type and mass, and one applying full power while the other attempts to follow, the follower will inevitably fall behind. Any such comparison will always bias against the trailing aircraft. That’s precisely why controlled solo climb tests flown at the documented best-climb TAS and held at near-constant 1g are the only valid method for evaluating energy performance in this context.
  13. I'm just lighting the gas to shed some light on the facts. A few posts above I was told to “shine alone in single player.” Well, I didn't quite go solo — I took ten minutes to run a simple, reproducible test in DCS. No TacView, no special tools, just a stopwatch and altitude readings every 1000 m. The result? Green markers on the reference chart you’ve all seen before — and yes, they sit right on top of the official climb performance curve. No magic, no mystery, no tweaking — just a clean test under ISA conditions at a mass of 5000 kg. Turns out, a bit of curiosity and a little diligence go a long way. Now, perhaps we can move from general feelings and philosophical concerns about “how software works” to concrete, measurable outcomes. The data’s there. The method’s simple. The mission is trivial to set up. Anyone can repeat it. And if the simulation produced results matching the calculations based on the data file, then there’s only one conclusion to draw: the implementation is consistent with the model. If the claim is that something’s broken, the burden of proof now lies with those making the claim.
  14. If presenting transparent methods, reproducible tests, and referencing documented code is “trolling,” then perhaps we've simply redefined what constructive technical discussion looks like. Let’s stick to substance.
  15. It seems you're contrasting well-documented facts — including code excerpts and consistent confirmation by developers and users alike — with subjective impressions, and then calling that "group consensus." But that's exactly the distinction between an engineering approach and opinion-based discussion: one relies on verifiable data, the other on votes. The real contradiction in your reply is that you question objective sources while accusing others of being overly confident without sufficient basis. The confidence in how the simulation behaves stems from well-established knowledge of the trajectory model in use. There is ample publicly available information describing the aerodynamic model applied in DCS, including the formulas for thrust, drag, and motion. These are not speculative; they’ve been consistently referenced and verified by many within the community. As philosophy reminds us, practice is the criterion of truth (Karl Marx). If there is any uncertainty, it can — and should — be addressed empirically. One simple and effective test is to task the AI-controlled aircraft with a maximum-rate climb and record the time at which it reaches each successive 1000-metre altitude increment. This approach avoids the need for specialized tools like TacView, requiring only careful observation and a notepad. It’s a modest investment of effort, but it yields clear data: either the simulation behaves as predicted by the aerodynamic tables, or it does not — and in either case, we move from speculation to grounded evaluation. The test can also be easily shared and repeated by others, allowing for open verification. Naturally, for consistency, the AI aircraft should have a mass of precisely 5000 kg, and the atmospheric conditions should correspond to ISA.
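     If it helps, here is a minimal sketch of how such notepad readings could be reduced to climb rates for comparison against the reference chart. The altitude/time pairs below are placeholders, not measured data:

```python
# Sketch: reduce stopwatch readings (time at each 1000 m mark) to average
# climb rates per altitude segment. The times below are hypothetical.
altitudes_m = [0, 1000, 2000, 3000, 4000, 5000]
times_s = [0.0, 22.0, 46.0, 72.0, 100.0, 131.0]  # placeholder readings

for i in range(1, len(altitudes_m)):
    # average vertical speed over the segment, m/s
    vy = (altitudes_m[i] - altitudes_m[i - 1]) / (times_s[i] - times_s[i - 1])
    print(f"{altitudes_m[i - 1]:>5}-{altitudes_m[i]:<5} m: {vy:5.1f} m/s")
```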
  16. Just to be clear — negative values of B4 in the drag polar formulation are physically invalid in most cases. They imply that drag would decrease at high lift coefficients, which contradicts fundamental aerodynamic behavior. If used without extreme care (or worse, by mistake), they can artificially boost L/D in regimes where a real aircraft would be well past its energy limits or approaching stall. For subsonic aircraft with straight wings, B4 should either be zero or small and positive — and only applied when supported by actual aerodynamic data. Anything else tends to produce unrealistic results and should be treated as a modeling error, not a tuning parameter. What’s more, estimating CD0, B and even B4 is relatively straightforward if you have access to basic performance benchmarks — such as maximum level speed, climb rate, sustained turn time, or overall energy–maneuverability diagrams. These give you plenty of reference points to constrain the polar in a physically consistent way, even without full wind tunnel data.
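     A small sketch of that effect, using the quartic polar form discussed in this thread. The coefficients are purely illustrative (not taken from any shipped data file), with the negative B4 chosen only to make the distortion visible:

```python
# Illustrative drag polar Cx = Cx0 + B*Cy^2 + B4*Cy^4.
# Coefficients are hypothetical, chosen only to show the effect of B4 < 0.
import numpy as np

def cx(cy, cx0=0.025, b=0.07, b4=0.0):
    """Drag coefficient from the quartic polar."""
    return cx0 + b * cy**2 + b4 * cy**4

cy = np.linspace(0.1, 1.4, 27)
for b4 in (0.0, -0.02):
    ld = cy / cx(cy, b4=b4)
    i = ld.argmax()
    print(f"B4 = {b4:+.3f}: max L/D = {ld[i]:.1f} at Cy = {cy[i]:.2f}")

# With B4 = 0, the maximum L/D (~12) occurs near Cy = 0.6 and drag grows
# steadily at high lift. With B4 < 0 the polar bends back: L/D keeps
# improving toward the stall region, which is exactly the unphysical
# energy boost at high lift coefficients described above.
```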
  17. There are generally two ways to approach questions like this. One is the engineering-based approach, grounded in measurable quantities, physical laws, and repeatable analysis. The other is more perception-driven, relying on impressions, intuition, and speculative reasoning. In the recent discussion about the AI F4U, many participants spent considerable time debating whether the AI's flight behavior felt right or wrong, and what might be causing that perception. But just a quick look at the aerodynamic tables — using the first approach — and a couple of reference plots were enough to answer the questions concretely and resolve much of the confusion. Of course, when you apply this kind of analysis, it can be hard to “convince” anyone in a thread where conclusions are often shaped by impressions and group consensus. After all, engineering models don’t win by vote count — they win by predictive accuracy and consistency with real-world data. That’s the challenge — and the strength — of sticking to a technical approach. It might not sway opinions immediately, but it builds a foundation that can be tested, reproduced, and improved over time. P.S. Some have expressed doubts about the validity of results derived from aerodynamic tables, suggesting that the simulation might apply those tables differently than assumed in the calculations. That, again, reflects the second — intuitive — approach: questioning the outcome not through concrete counter-evidence, but through uncertainty about the internal mechanics. But from an engineering perspective, the solution is straightforward: you don't need to guess. Numerous forum posts over the years — including examples from developers and the community — have documented how the simulation reads and uses these tables. The logic is well-known, consistent, and has been independently confirmed. There’s no mystery here. This is exactly the difference: one approach raises questions based on feeling or possibility. The other seeks out the implementation, reads the code or its documented behavior, and uses that to anchor the analysis. That’s not to say intuition has no place — it can help spot issues — but resolving them ultimately requires verifiable structure.
  18. B and B4 are simply constants that define the shape of the drag polar (and therefore of the L/D curve). Cx0 is the drag at zero lift. The formula is written out below.
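     Written out, assuming the quartic polar form these coefficients are associated with elsewhere in this thread:

$$C_x = C_{x0} + B\,C_y^2 + B_4\,C_y^4$$

     where $C_y$ is the lift coefficient and $C_x$ the total drag coefficient.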
  19. Glad that helped clarify things a bit. Just to add — the way the simulator uses the aerodynamic and engine data from the .lua files (such as thrust tables and drag polars) is actually well-understood and has been analyzed in detail over the years. It’s not a black box — the trajectory model applies this data in a consistent and predictable way, based on fairly straightforward physics. That’s precisely why it’s possible to compare AI behavior to real-world flight data and get meaningful results. When you match conditions (mass, altitude, airspeed), the outputs — like climb rate, turn rate, and energy loss — generally follow from the input tables in a transparent way. So yes, the numbers in the .lua files aren’t just decorative — they actually drive the simulation logic quite directly. You can instruct the AI to follow a route with maximum climb, and even without TacView, measure the resulting vertical speed at various altitudes. Alternatively, you can have the AI accelerate at maximum power while maintaining altitude, and determine its equivalent vertical speed — which allows you to cross-check the previous test using a different energy-based method. It would then be possible to compare the test results with calculated performance values, to see how closely they match.
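     The cross-check in that second test rests on a standard energy identity: at constant altitude, all excess power goes into acceleration, so the equivalent vertical speed is

$$V_{y,\mathrm{equiv}} = \frac{V}{g}\,\frac{dV}{dt}$$

     which can be compared directly with the climb rate measured at the same TAS in the first test.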
  20. If I had TacView, and wanted to make a well-supported case to the developers that the concern is valid, I would record a 1-on-1 duel and then analyze it in detail. If TacView is capable of plotting the derivative of total energy over time (which is equivalent to vertical specific excess power), as well as g-load, then it becomes possible to check a common suspicion: that the AI loses significantly less energy during high-g maneuvers than the player’s aircraft does. By comparing g-load and the energy rate side by side for both the AI and the player aircraft, we might be able to identify whether this is the case. Let’s try to shed some light on this together.
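     For clarity, the quantity in question is the time derivative of specific energy height:

$$P_s = \frac{d}{dt}\!\left(h + \frac{V^2}{2g}\right) = \dot{h} + \frac{V}{g}\,\dot{V}$$

     Plotted side by side with g-load for both aircraft, any systematic gap in energy loss at matched load factors would stand out immediately.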
  21. That's a very good observation. A TacView analysis would be a great complement to the video and could help clarify the energy profiles. In my own experience, 1-on-1 engagements between player and AI usually end in a stalemate: the human gains a slight angular advantage, the AI keeps an edge in energy, and neither can convert it into a win — sometimes for 30 minutes or more.
  22. Thanks for the suggestion, but just to clarify — this thread is specifically about the flight model of the MiG-15. The comparisons being made here are between its AI flight behavior and known reference data from real-world documentation — including turn performance, climb rates, and energy metrics. So far, they align very well, which is the only point being addressed. What you're describing relates to a different aircraft — the MiG-21 — which isn't the subject of this discussion. If you believe the AI model for the MiG-21 exhibits unrealistic behavior like sustained 9G turns without energy loss, that’s certainly worth investigating — but it's a separate case. If you’d like, feel free to share your relevant .LUA data for the MiG-21 AI model. That would make it possible to analyze its aerodynamic tables and compare them to available real-world performance data, just as was done here with the MiG-15. It’s the only way to move from general impressions to something that can be tested and validated. As for whether I “play the game” — yes, I do. And I happen to enjoy 1v1 engagements with matching aircraft types, specifically because they reveal how well energy-based parameters are being applied. That’s why I focus on that context when evaluating AI behavior.
  23. You’ve raised a number of observations about strange or buggy AI behavior — some potentially valid, others harder to verify — but I think we may be talking past each other. The original discussion wasn’t about general AI behavior across all mission stages. It was specifically about how the AI performs in a dogfight, and whether the AI's flight model during combat is based on real aerodynamic parameters that correspond to those of the real aircraft. So let me ask directly: What exactly, in your experience, seems unrealistic or broken in AI behavior during a 1-on-1 dogfight — with both human and AI flying the same aircraft, from the same initial conditions? If there’s a mismatch in energy performance, turn rate, climb, etc., under those circumstances — that’s something worth looking into. But if the concerns are about form-up logic, taxiing behavior, or scripted transitions, those are separate layers of the simulation, and not what’s being discussed when we refer to the AI using a physics-based trajectory model during combat. Let’s isolate the question to air combat maneuvering performance. That’s the only way to make progress on whether the model is being applied correctly — or not — in that context. Personally, I tend to focus on this specific aircraft and enjoy 1-on-1 dogfights in matching types, precisely because they allow a fair comparison of skill and energy management. I don’t spend much time observing AI in other scenarios, so I leave those potential bugs to others — for me, the duels are more than enough.
  24. There is no magic involved here, just an implemented antigraviton-graphene whirlpool, or, simply put, negative values of the coefficient B4 from the file kindly sent to me. If you plot the polar and aerodynamic efficiency (L/D) graphs for Mach 0.2 (about 240 km/h at sea level), you can see the results of this method. Negative values of B4 have also been applied at other Mach numbers. Maybe I was harshly pranked by someone putting negative values into the file sent to me, but anyone can check that for themselves.
  25. I’d like to respond to your points, because I think there’s a fundamental misunderstanding here — not about opinions, but about how flight models actually work. First of all, a trajectory-based model (a point-mass model) is not something that can be "good" or "bad" in itself. It's simply a set of well-known, well-defined differential equations that describe the motion of the aircraft's center of gravity under the influence of aerodynamic, thrust, and gravity forces. Solving these equations — through numerical integration — gives the actual flight trajectory, including climb, acceleration, turn performance, and so on.
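     One standard form of those equations, written in the velocity-axis system with flight-path angle $\gamma$, bank angle $\mu$, and load factor $n = L/(mg)$ (the exact formulation used by any given simulator is an assumption):

$$\dot{V} = \frac{T - D}{m} - g\sin\gamma,\qquad \dot{\gamma} = \frac{g}{V}\left(n\cos\mu - \cos\gamma\right),\qquad \dot{h} = V\sin\gamma$$

     Numerically integrating these from the initial conditions yields the trajectory, with $T$ and $D$ looked up from the thrust tables and drag polar at the current Mach and altitude.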