
Posted (edited)

@Lidozin In all fairness, your forum account isn't exactly mature. How long have you been playing DCS, if I may ask? Because you are aware that you sort of come off as a troll, especially when you start this thread and give yourself the solution? That's just bad form. 😂

Let's all just hope that the GFM will be released soon, or that, as someone claims, the CPU hit isn't that great even if the AI were to use real-time data. 😉 

Cheers!
 
Sent from my SM-A536B using Tapatalk
 

Edited by MAXsenna
Formatting
  • Like 5
Posted (edited)

Set up a dogfight between an Ace AI MiG-21 and something else, with you as an observer, and watch the thing pull sustained 9G turns without losing speed. Then come and tell us it's all fine because your data, equations and parameters say so.

The math is pretty interesting but it seems you don't play the game.

 

Edited by diego999
  • Like 2
Posted
On 7/8/2025 at 1:28 PM, Lidozin said:

Thus, the casual claims that the AI aircraft MiG-15 possesses "supernatural" power can be put to rest. The core energy-related parameters — both in 1g flight (such as maximum speed and specific excess power) and in sustained turns at zero excess power — show very strong agreement with available reference data.

In this controlled test, my MiG-15 has the same amount of fuel and the same speed as the AI MiG-15, but I just cannot keep up with the AI's climb.

 

Here is a brief thread about the issue 

 

  • Like 3
Posted (edited)
6 hours ago, diego999 said:

Set up a dogfight between an Ace AI MiG-21 and something else, with you as an observer, and watch the thing pull sustained 9G turns without losing speed. Then come and tell us it's all fine because your data, equations and parameters say so.

The math is pretty interesting but it seems you don't play the game.

 

Thanks for the suggestion, but just to clarify: this thread is specifically about the flight model of the MiG-15. The comparisons being made here are between its AI flight behavior and known reference data from real-world documentation, including turn performance, climb rates, and energy metrics. So far they align very well, which is the only point being addressed.
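For anyone who wants to check the arithmetic behind these energy metrics, everything reduces to specific excess power, Ps = (T - D) * V / W. Here is a minimal Python sketch; all the numbers in it are illustrative placeholders, not actual MiG-15 data:

```python
# Specific excess power: Ps = (T - D) * V / W.
# All numbers below are illustrative placeholders, not actual MiG-15 data.
G = 9.81  # gravitational acceleration, m/s^2

def specific_excess_power(thrust_n, drag_n, tas_ms, mass_kg):
    """Climb-rate-equivalent energy rate, in m/s."""
    return (thrust_n - drag_n) * tas_ms / (mass_kg * G)

# Example: 5 kN of excess thrust at 200 m/s true airspeed, 5000 kg mass.
print(round(specific_excess_power(22_000, 17_000, 200, 5_000), 1))  # 20.4 m/s
```

At zero excess power (sustained turn, or top speed in 1g flight) this quantity is zero, which is what makes those two regimes convenient anchors for comparison against reference charts.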

What you're describing relates to a different aircraft — the MiG-21 — which isn't the subject of this discussion. If you believe the AI model for the MiG-21 exhibits unrealistic behavior like sustained 9G turns without energy loss, that’s certainly worth investigating — but it's a separate case.

If you’d like, feel free to share the relevant .lua data for the MiG-21 AI model. That would make it possible to analyze its aerodynamic tables and compare them to available real-world performance data, just as was done here with the MiG-15. It’s the only way to move from general impressions to something that can be tested and validated.

As for whether I “play the game” — yes, I do. And I happen to enjoy 1v1 engagements with matching aircraft types, specifically because they reveal how well energy-based parameters are being applied. That’s why I focus on that context when evaluating AI behavior.

Edited by Lidozin
Posted
1 hour ago, Katmandu said:

In this controlled test, my MiG-15 has the same amount of fuel and the same speed as the AI MiG-15, but I just cannot keep up with the AI's climb.

 

Here is a brief thread about the issue 

 

That's a very good observation.

A TacView analysis would be a great complement to the video and could help clarify the energy profiles.

In my own experience, 1-on-1 engagements between player and AI usually end in a stalemate: the human gains a slight angular advantage, the AI keeps an edge in energy, and neither can convert it into a win — sometimes for 30 minutes or more.

Posted (edited)
44 minutes ago, Lidozin said:

A TacView analysis would be a great complement to the video and could help clarify the energy profiles.

I don't have TacView, but here are the mission file and the replay track of this test. 

 

44 minutes ago, Lidozin said:

In my own experience, 1-on-1 engagements between player and AI usually end in a stalemate: the human gains a slight angular advantage, the AI keeps an edge in energy,

I agree. The AI's energy advantage is due to its "supernatural" climb rate: the AI MiG-15 simply does a powered spiral climb to get out of trouble, and the player's MiG-15 falls out of the sky when attempting to follow it. Luckily we have access to the MiG-15 AI SFM file, and a mod with a more realistic AI flight profile is possible; see this post:

 

Edited by Katmandu
  • Like 1
Posted (edited)
Quote

If there’s a mismatch in energy performance, turn rate, climb, etc., under those circumstances — that’s something worth looking into. But if the concerns are about form-up logic, taxiing behavior, or scripted transitions, those are separate layers of the simulation, and not what’s being discussed when we refer to the AI using a physics-based trajectory model during combat.

Just to be clear, the guy you're talking to is a beta tester. His literal role, probably his job, is to find, observe and test how the game works. I don't know Pikey personally, but he probably knows a lot more about the game's inner workings - and issues - than most of us. I would take his word seriously.

And I've seen, for example, the Heatblur devs say similar things on their Discord. Those guys really know what they're doing. 

16 hours ago, Lidozin said:

Let’s isolate the question to air combat maneuvering performance. That’s the only way to make progress on whether the model is being applied correctly — or not — in that context.

Personally, I tend to focus on this specific aircraft and enjoy 1-on-1 dogfights in matching types, precisely because they allow a fair comparison of skill and energy management. I don’t spend much time observing AI in other scenarios, so I leave those potential bugs to others — for me, the duels are more than enough.

There's an issue here: you're not applying the same standards to those two aspects.

1. You use the lua numbers for your original calculation, which may well be accurate. It even makes sense that ED isn't putting fantasy values into the lua files. That actually makes them look a bit more sensible, and it's news to me, so thanks for that.

2. When challenged on how those numbers actually translate into the in-game physics and flight model, you use your 'observation' to judge the level of accuracy. That's a much less rigorous and scientific approach, especially if you hadn't considered this facet yet.

And I'm not going to pretend I know the exact issue, but the AI flight model can be deeply broken in very common A2A combat situations, in a way that clearly does not follow physics. To me the energy retention, somewhat during aggressive turns but especially during climbs, is the most obvious.

The video of the AI MiG-15 outclimbing a player-controlled one is a good example; a track file would be better, but this video doesn't even pass the smell test. It should be blindingly obvious that something is quite wrong there.

And talking about personal 1v1 experiences: I recently had an honestly quite funny situation where I had my clean F-4E, at good speed, IIRC half fuel and only Sidewinders left, do a hard AB climb on Syria trying to shake off an AI MiG-15. The MiG-15 was stuck on my back the entire climb, and even when my plane was approaching stall, the '15 was in stable flight, IIRC at fairly low AoA, and could easily maneuver even while climbing.

Do you know how absurd that situation is? A '50s-vintage non-afterburning swept-wing fighter keeping up with a 1975 3rd-gen that should have 4-5 times the nominal climb rate and is optimized for high-altitude flight? The F-4E can climb and handle high altitude better than a MiG-21bis, and that plane was also a much more powerful high-altitude interceptor than the '15.

I recommend that test to you: take a plane that should clearly have better climb rates and high-altitude performance than a MiG-15; any afterburning 3rd-gen or newer should do easily. Have the AI MiG-15 chase you up. Use F2 to observe the '15's speed, stability and AoA. You will see why most people don't even consider whether the AI is broken much of a topic of debate. It's that obvious during climbs.

Edited by Temetre
  • Like 4
Posted
3 minutes ago, Temetre said:

2. When challenged on how those numbers actually translate into the in-game physics and flight model, you use your 'observation' to judge the level of accuracy. That's a much less rigorous and scientific approach, especially if you hadn't considered this facet yet.

That's a good point. We can plot out LUAs all we like, but it's of no use if AI performance doesn't actually follow the LUAs. If the current AI FM is incapable of translating those curves into realistic performance, directly inputting real data is of no use. Hopefully GFM will be able to do a better job at that.

Of course, we also have to keep in mind that with vintage aircraft, it should be modeling a human pilot's inability to perfectly follow the curves. This human factor is difficult to simulate, but there are ways to fake it.

  • Like 3
Posted
1 hour ago, Katmandu said:

I don't have TacView, but here are the mission file and the replay track of this test. 

 

I agree. The AI's energy advantage is due to its "supernatural" climb rate: the AI MiG-15 simply does a powered spiral climb to get out of trouble, and the player's MiG-15 falls out of the sky when attempting to follow it. Luckily we have access to the MiG-15 AI SFM file, and a mod with a more realistic AI flight profile is possible; see this post:

 

If I had TacView, and wanted to make a well-supported case to the developers that the concern is valid, I would record a 1-on-1 duel and then analyze it in detail.

If TacView is capable of plotting the derivative of total energy over time (which is equivalent to vertical specific excess power), as well as g-load, then it becomes possible to check a common suspicion: that the AI loses significantly less energy during high-g maneuvers than the player’s aircraft does.

By comparing g-load and the energy rate side by side for both the AI and the player aircraft, we might be able to identify whether this is the case.
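If someone does have TacView, or even just raw telemetry samples, the check is simple to script. A rough sketch with made-up sample values (not from any actual track):

```python
# Sketch: specific energy height Es = h + V^2 / (2g) and its finite-difference
# time derivative from sampled telemetry. Sample values below are made up.
G = 9.81

def energy_height(alt_m, tas_ms):
    """Specific energy height, in meters."""
    return alt_m + tas_ms ** 2 / (2 * G)

def energy_rate(samples, dt):
    """dEs/dt between consecutive (alt_m, tas_ms) samples taken dt apart."""
    es = [energy_height(h, v) for h, v in samples]
    return [(b - a) / dt for a, b in zip(es, es[1:])]

telemetry = [(1000, 180), (1010, 181), (1022, 182)]  # one sample per second
print([round(r, 1) for r in energy_rate(telemetry, dt=1.0)])
```

Running the same computation over both the AI's and the player's tracks and overlaying the curves against g-load would show directly whether the AI bleeds less energy under load.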

Let’s try to shed some light on this together.

Posted (edited)
54 minutes ago, Dragon1-1 said:

That's a good point. We can plot out LUAs all we like, but it's of no use if AI performance doesn't actually follow the LUAs. If the current AI FM is incapable of translating those curves into realistic performance, directly inputting real data is of no use. Hopefully GFM will be able to do a better job at that.

To be fair, while I was thinking about that, Pikey did a really good job of actually putting that issue into words!

If you could easily make a framework where you just enter numbers and the result was realistic, then making simulations and games would be so much easier. I'm sure MSFS, for example, has a very, very complex framework that the Asobo devs put a ton of work into, but even then you can just 'feel' how most planes use that framework. They inherently feel like MSFS 2020/2024 planes.

And then having the AI make good use of that framework is another layer of complexity... but also a very important one in DCS.

54 minutes ago, Dragon1-1 said:

Of course, we also have to keep in mind that with vintage aircraft, it should be modeling a human pilot's inability to perfectly follow the curves. This human factor is difficult to simulate, but there are ways to fake it.

Oh, agreed, there is so much to AI and simulation. Personally I'd be happy if the GFM for the AI is just generally in the right ballpark with the performance numbers. And as you say, maybe with the performance numbers of an average well-trained pilot rather than the plane's theoretical maximum.

Stuff like seeing an AI F-14 do the roll reversal wobble would be pretty funny.

Edited by Temetre
  • Like 1
Posted
8 minutes ago, Temetre said:

If you could easily make a framework where you just enter numbers and the result was realistic, then making simulations and games would be so much easier.

Note that a simulation also needs to include some way to model abnormal conditions for which data does not exist. You can put a simulated plane into a flight regime in which testing it for real would be too dangerous, for instance. In DCS, this is further compounded by having to figure out how a plane would fly with various kinds of battle damage. That's one reason why DCS doesn't use lookup tables only. While that could be passable for something like CMO, where you don't actually fly the aircraft, it wouldn't do for DCS.

One thing the GFM does is simulate some of those abnormal conditions. The AI will be able to stall and depart the aircraft. Hopefully, ED will take the opportunity to look at the AI's decision-making process, and at the way it flies simple administrative tasks, as well.

  • Like 2
Posted (edited)
1 hour ago, Lidozin said:

If I had TacView, and wanted to make a well-supported case to the developers that the concern is valid, I would record a 1-on-1 duel and then analyze it in detail.

Two points:

1. ED doesn't accept TacView as evidence in cases like these, as TacView's telemetry outputs sometimes differ by a factor of two or three from the actual in-game data. For example, in-game acceleration would be 5G and TacView would report 14G, etc. 

2. Duels have very little use for rigorous analysis, as both planes have all sorts of differences in vectors and magnitudes at any given instant. It is more prudent to use simple tests where both planes' behaviors are nearly matched, like the climb test replay or the sustained-turn test. The latter is also available, btw: 

 

Edited by Katmandu
  • Like 2
Posted (edited)
3 hours ago, Temetre said:

And talking about personal 1v1 experiences: I recently had an honestly quite funny situation where I had my clean F-4E, at good speed, IIRC half fuel and only Sidewinders left, do a hard AB climb on Syria trying to shake off an AI MiG-15. The MiG-15 was stuck on my back the entire climb, and even when my plane was approaching stall, the '15 was in stable flight, IIRC at fairly low AoA, and could easily maneuver even while climbing.

Do you know how absurd that situation is? A '50s-vintage non-afterburning swept-wing fighter keeping up with a 1975 3rd-gen that should have 4-5 times the nominal climb rate and is optimized for high-altitude flight? The F-4E can climb and handle high altitude better than a MiG-21bis, and that plane was also a much more powerful high-altitude interceptor than the '15.

I recommend that test to you: take a plane that should clearly have better climb rates and high-altitude performance than a MiG-15; any afterburning 3rd-gen or newer should do easily. Have the AI MiG-15 chase you up. Use F2 to observe the '15's speed, stability and AoA. You will see why most people don't even consider whether the AI is broken much of a topic of debate. It's that obvious during climbs.

These kinds of hyperbolic reports are another reason why the AI didn't get fixed, haha 😉 Gen3 fighters absolutely dominate the AI MiG-15, overpowered or not. Here is just a quick demo in a fully fueled F-5 vs an Ace AI MiG-15; it just cannot keep up. The AI MiG-15 definitely outclimbs and outturns a human PFM MiG-15, as the tests above demonstrate, but let's keep it real: it poses little threat to Gen3 and Gen4 fighters 🙂

 

 

Edited by Katmandu
  • Like 1
Posted

Interesting discussion, and nice to see someone attempt to bring data into this. However, I wouldn't say that comparing the envelope and turn performance is definitive. I think at least some of the issue with the AI comes from transient maneuvers and edge-of-envelope performance.

The AI never seems to struggle near stall. It's especially visible with WWII fighters, as they can maintain a perfect climb under full power at virtually zero airspeed, and don't have to deal with prop torque effects at all, nor cooling issues as far as I can tell.

From experience, the SFM also seems to do weird stuff during transient maneuvers. For example, I'm not sure if there is any performance hit for the AI from holding a max sustained turn while also rolling. The AI also seems to have unnatural abilities when it comes to changing speed, like somehow magically decelerating while the afterburner is engaged. Stuff like that may not show up in simple flight tests.

 

It's also compounded by AI super SA. They know your speed at all times and will react to any change to your maneuvering even if they shouldn't be able to detect it.

  • Like 4

Awaiting: DCS F-15C

Win 10 i5-9600KF 4.6 GHz 64 GB RAM RTX2080Ti 11GB -- Win 7 64 i5-6600K 3.6 GHz 32 GB RAM GTX970 4GB -- A-10C, F-5E, Su-27, F-15C, F-14B, F-16C missions in User Files

 

Posted
4 hours ago, Temetre said:

 

2. When challenged on how those numbers actually translate into the in-game physics and flight model, you use your 'observation' to judge the level of accuracy. That's a much less rigorous and scientific approach, especially if you hadn't considered this facet yet.

 

Glad that helped clarify things a bit.

Just to add — the way the simulator uses the aerodynamic and engine data from the .lua files (such as thrust tables and drag polars) is actually well-understood and has been analyzed in detail over the years. It’s not a black box — the trajectory model applies this data in a consistent and predictable way, based on fairly straightforward physics.

That’s precisely why it’s possible to compare AI behavior to real-world flight data and get meaningful results. When you match conditions (mass, altitude, airspeed), the outputs — like climb rate, turn rate, and energy loss — generally follow from the input tables in a transparent way.

So yes, the numbers in the .lua files aren’t just decorative — they actually drive the simulation logic quite directly.

You can instruct the AI to follow a route with maximum climb, and even without TacView, measure the resulting vertical speed at various altitudes.

Alternatively, you can have the AI accelerate at maximum power while maintaining altitude, and determine its equivalent vertical speed — which allows you to cross-check the previous test using a different energy-based method.

It would then be possible to compare the test results with calculated performance values, to see how closely they match.
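The level-acceleration cross-check follows from the same energy balance: at constant altitude, the equivalent vertical speed is Ps = (V / g) * dV/dt. A quick sketch, with illustrative numbers only:

```python
# Level-acceleration cross-check: at constant altitude the energy balance
# gives Ps = (V / g) * dV/dt. Numbers below are illustrative only.
G = 9.81

def equivalent_climb_rate(tas_ms, accel_ms2):
    """Energy-equivalent vertical speed for a level acceleration run, m/s."""
    return tas_ms * accel_ms2 / G

# E.g. accelerating through 150 m/s at 1.2 m/s^2 in level flight:
print(round(equivalent_climb_rate(150, 1.2), 2))  # 18.35 m/s
```

If the climb test and the level-acceleration test yield matching Ps values at the same speed and altitude, the model is at least internally consistent; a mismatch would point at how the AI applies the tables.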

Posted
4 hours ago, Dragon1-1 said:

Note that a simulation also needs to include some way to model abnormal conditions for which data does not exist. You can put a simulated plane into a flight regime in which testing it for real would be too dangerous, for instance. In DCS, this is further compounded by having to figure out how a plane would fly with various kinds of battle damage. That's one reason why DCS doesn't use lookup tables only. While that could be passable for something like CMO, where you don't actually fly the aircraft, it wouldn't do for DCS.

One thing the GFM does is simulate some of those abnormal conditions. The AI will be able to stall and depart the aircraft. Hopefully, ED will take the opportunity to look at the AI's decision-making process, and at the way it flies simple administrative tasks, as well.

I think one thing people often miss is that a game, or even a simulation, cannot replicate reality. Reality is just infinitely complicated, and e.g. a plane might sometimes do specific things in specific environments that are hard to explain even in the real world, where even the designers can only make a good guess at why it happens once they start testing the plane.

You can't just expect a game engine to simulate that kind of stuff from the get-go. Hence every accurate plane in DCS has an insane amount of custom coding/scripting/etc. to make the flight model as realistic as possible despite the simplification.

  • Like 1
Posted (edited)
3 hours ago, Katmandu said:

These kinds of hyperbolic reports are another reason why the AI didn't get fixed, haha 😉 Gen3 fighters absolutely dominate the AI MiG-15, overpowered or not. Here is just a quick demo in a fully fueled F-5 vs an Ace AI MiG-15; it just cannot keep up. The AI MiG-15 definitely outclimbs and outturns a human PFM MiG-15, as the tests above demonstrate, but let's keep it real: it poses little threat to Gen3 and Gen4 fighters 🙂

It's well possible I'm misremembering the exact details, but I'd be careful about drawing quick conclusions. You can't assume a buggy flight model always performs the same. Unintended behaviour is the problem!

For example, the MiG-15 vs MiG-15 climb video showed the AI plane go past 7 km (23k feet) without too much of a slowdown. Yet in your video it seems to struggle a lot more.  

1 hour ago, Lidozin said:

Glad that helped clarify things a bit.

Just to add — the way the simulator uses the aerodynamic and engine data from the .lua files (such as thrust tables and drag polars) is actually well-understood and has been analyzed in detail over the years. It’s not a black box — the trajectory model applies this data in a consistent and predictable way, based on fairly straightforward physics.

That’s precisely why it’s possible to compare AI behavior to real-world flight data and get meaningful results. When you match conditions (mass, altitude, airspeed), the outputs — like climb rate, turn rate, and energy loss — generally follow from the input tables in a transparent way.

So yes, the numbers in the .lua files aren’t just decorative — they actually drive the simulation logic quite directly.

You can instruct the AI to follow a route with maximum climb, and even without TacView, measure the resulting vertical speed at various altitudes.

Alternatively, you can have the AI accelerate at maximum power while maintaining altitude, and determine its equivalent vertical speed — which allows you to cross-check the previous test using a different energy-based method.

It would then be possible to compare the test results with calculated performance values, to see how closely they match.

Let's be real: basically everyone in this thread is either neutral or disagrees with you, but you seem to see no reason to question your conclusions, and you even claim it's 'well understood' that you are correct?

It's fine to disagree, but it seems pointless to talk if your view is set in stone regardless of anything else. You won't convince anyone else with that either.

Edited by Temetre
  • Like 2
Posted
2 hours ago, Temetre said:

It's well possible I'm misremembering the exact details, but I'd be careful about drawing quick conclusions. You can't assume a buggy flight model always performs the same. Unintended behaviour is the problem!

For example, the MiG-15 vs MiG-15 climb video showed the AI plane go past 7 km (23k feet) without too much of a slowdown. Yet in your video it seems to struggle a lot more.  

Let's be real: basically everyone in this thread is either neutral or disagrees with you, but you seem to see no reason to question your conclusions, and you even claim it's 'well understood' that you are correct?

It's fine to disagree, but it seems pointless to talk if your view is set in stone regardless of anything else. You won't convince anyone else with that either.

There are generally two ways to approach questions like this.

One is the engineering-based approach, grounded in measurable quantities, physical laws, and repeatable analysis. The other is more perception-driven, relying on impressions, intuition, and speculative reasoning.

In the recent discussion about AI F4U, many participants spent considerable time debating whether the AI's flight behavior felt right or wrong, and what might be causing that perception. But just a quick look at the aerodynamic tables — using the first approach — and a couple of reference plots were enough to answer the questions concretely and resolve much of the confusion.

Of course, when you apply this kind of analysis, it can be hard to “convince” anyone in a thread where conclusions are often shaped by impressions and group consensus. After all, engineering models don’t win by vote count — they win by predictive accuracy and consistency with real-world data.

That’s the challenge — and the strength — of sticking to a technical approach. It might not sway opinions immediately, but it builds a foundation that can be tested, reproduced, and improved over time.

 

P.S. Some have expressed doubts about the validity of results derived from aerodynamic tables, suggesting that the simulation might apply those tables differently than assumed in the calculations. That, again, reflects the second — intuitive — approach: questioning the outcome not through concrete counter-evidence, but through uncertainty about the internal mechanics.

But from an engineering perspective, the solution is straightforward: you don't need to guess. Numerous forum posts over the years, including examples from developers and community members, have documented how the simulation reads and uses these tables. The logic is well-known, consistent, and has been independently confirmed. There's no mystery here.

This is exactly the difference: one approach raises questions based on feeling or possibility. The other seeks out the implementation, reads the code or its documented behavior, and uses that to anchor the analysis.

That’s not to say intuition has no place — it can help spot issues — but resolving them ultimately requires verifiable structure.


Posted (edited)
3 hours ago, Temetre said:

You can't assume a buggy flight model always performs the same. Unintended behaviour is the problem!

For example, the MiG-15 vs MiG-15 climb video showed the AI plane go past 7 km (23k feet) without too much of a slowdown. Yet in your video it seems to struggle a lot more.  

It's buggy, sure! But not in the sense that it is intermittently wrong; it is buggy in the sense that it is always wrong. The AI MiG-15 always flies to the same parameters specified in the lua file DCS World OpenBeta\CoreMods\aircraft\MiG-15bis\MiG-15bis.lua. The AI MiG-15 always outclimbs the human PFM MiG-15, for example. But it never outclimbs a Gen3 fighter, provided they share the starting conditions in my test mission, of course.

Here is the controlled climb test of a human-controlled F-5 vs an Ace AI MiG-15:

And here is the human-controlled MiG-15 vs the Ace AI MiG-15 again:

 

I also attach the mission file, so anyone can try it with any plane vs any AI plane. Feel free to try the MiG-15 AI vs the F-4.

To sum up, the AI MiG-15 is unquestionably overpowered vs the human/PFM MiG-15 (which is bad when we fly the F-86 Sabre, of course). But that is not to say the AI MiG-15 is overpowered vs Gen3 fighters; those are in a different league.

AI_test_climb_Mig15.miz

Edited by Katmandu
  • Like 2
Posted (edited)
42 minutes ago, Katmandu said:

It's buggy, sure! But not in the sense that it is intermittently wrong; it is buggy in the sense that it is always wrong. The AI MiG-15 always flies to the same parameters specified in the lua file DCS World OpenBeta\CoreMods\aircraft\MiG-15bis\MiG-15bis.lua. The AI MiG-15 always outclimbs the human PFM MiG-15, for example.

If the LUA file is correct, and if the AI MiG-15 flies exactly as the LUA file says it should, then the only logical conclusion given the above is that the human-flown PFM MiG-15 underperforms. Perhaps a test to check whether it does is in order, but I'd be surprised if that were the case. Since it's been shown the LUA is pretty much spot on compared to the charts, either the AI flies better than the LUA implies it should, or the player doesn't fly well enough. 

Also keep in mind that we might also be looking at something like with Reflected's warbird formations. There, it was possible to climb with the AI, just really hard. You had to be perfectly on speed, have trims set in a perfect way, and never, ever fall behind even a bit. Then, you'd stay in formation, despite AI performing a max performance climb. So we also need to exclude the scenario in which the AI simply hits the (arguably unrealistic in practice) theoretical maximum and the human doesn't.

Edited by Dragon1-1
  • Like 1
Posted
On 7/10/2025 at 7:53 PM, Lidozin said:

If there’s a mismatch in energy performance, turn rate, climb, etc., under those circumstances — that’s something worth looking into. But if the concerns are about form-up logic, taxiing behavior, or scripted transitions, those are separate layers of the simulation, and not what’s being discussed when we refer to the AI using a physics-based trajectory model during combat.

The examples provided prove physics is not observed 100% of the time. Only that, nothing else.

You can state any physics observation you like, but it doesn't serve as evidence on a question of software.

What makes you so certain that the software is being used properly and consistently when you never wrote it and don't have access to it? 

A model can be correct; physics can be proved. I'm happy that the model is safely beyond reproach. What I'm not convinced of is that it's applied correctly or consistently. You cannot read the software; it runs beyond your eyesight. Software does not observe laws, therefore you cannot use physics to prove that software conforms.

Now, you marked yourself as the solution in this thread. I don't care about the arrogance of that, but it's a sign that you don't consider any previous or future argument to be of value. So, since you are the solution to your own thread, I think you can dispense with everyone else in the world and go back to single player. It's where you shine.
 

  • Like 4
  • Thanks 2

___________________________________________________________________________

SIMPLE SCENERY SAVING * SIMPLE GROUP SAVING * SIMPLE STATIC SAVING *

Posted (edited)
9 hours ago, Dragon1-1 said:

If the LUA file is correct, and if the AI MiG-15 flies exactly as LUA file says it should, then the only logical conclusion given the above is that human-flown PFM MiG-15 underperforms.

Lua file tables provide only the input to the flight model equations. For example, there are no table values like "climb at 25 m/s at a pitch angle of 45 deg". The tables provide thrust 20000 at alt 5000, a wing area constant, drag and lift coefficients for specific Mach numbers, and so on. The actual aircraft behavior, like the climb rate at some particular altitude, speed, attitude and angle of attack, is not directly read from the tables; it is computed using ED's SFM model. And because the SFM is more simplistic than the PFM, the two models predict different behavior. Jarringly different, in the case of the MiG-15.

One way to address the problem (outside of waiting for ED's GFM, which is X years away):

1) Assume that the PFM is the much more realistic model of the two, because it actually models the fluid dynamics, etc.

2) Tweak the SFM input tables in the lua for a closer match in behavior in controlled tests. Yes, the input would deviate from the real-life table data, but the actual SFM behavior would be closer to the more detailed PFM.
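To make point 2 concrete, an SFM-style computation is essentially table interpolation plus a force balance, so tweaking the tables shifts the output in a predictable direction. A toy Python sketch (the table values are invented for illustration and are NOT from MiG-15bis.lua):

```python
# Toy SFM-style climb-rate estimate driven by lookup tables.
# Table values are invented for illustration; they are NOT MiG-15bis.lua data.
import bisect

G = 9.81

def interp(xs, ys, x):
    """Piecewise-linear table lookup, clamped at the table ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

alt_table = [0, 5000, 10000]             # altitude, m
thrust_table = [26_000, 20_000, 14_000]  # full-power thrust, N (invented)

def climb_rate(alt_m, tas_ms, mass_kg, drag_n):
    """Steady climb rate from the usual force balance, m/s."""
    thrust = interp(alt_table, thrust_table, alt_m)
    return (thrust - drag_n) * tas_ms / (mass_kg * G)

print(round(climb_rate(2500, 170, 5300, 14_000), 1))  # 29.4 m/s
```

Lowering a thrust-table entry here lowers the computed climb rate at the corresponding altitude, which is exactly the lever a corrective mod would pull.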

Edited by Katmandu
  • Like 1
Posted
vor 15 Stunden schrieb Katmandu:

It's buggy, sure! But not in the sense that it is intermittently wrong; it is buggy in the sense that it is always wrong. The AI MiG-15 always flies to the same parameters specified in the lua file DCS World OpenBeta\CoreMods\aircraft\MiG-15bis\MiG-15bis.lua. The AI MiG-15 always outclimbs the human PFM MiG-15, for example. But it never outclimbs a Gen3 fighter, provided they share the starting conditions in my test mission, of course

You definitely have a strong point there. I probably confused or misremembered the MiG-15 for a MiG-19 or -21, my bad!

  • Like 1
Posted (edited)

 

@Lidozin Frankly, this thread says more about psychology than about any aspect of mechanics and simulation. You wrote these two things in the same post:

First this:

Quote

Of course, when you apply this kind of analysis, it can be hard to “convince” anyone in a thread where conclusions are often shaped by impressions and group consensus

And then this:

Quote

Numerous forum posts over the years, including examples from developers and community members, have documented how the simulation reads and uses these tables. The logic is well-known, consistent, and has been independently confirmed. There's no mystery here.

In one you say everyone else is wrong, because factual arguments have a hard time against group consensus (which is a funny thing to say, btw).

In the second you say you don't need to present facts, analysis or evidence regarding the applicability of your analysis, because group consensus supports your position (also funny in the context of this topic).

 

How do you rationalize these two contradicting lines of argumentation?

 

Edited by Temetre
  • Like 2