
powerload
Members · 20 posts

  1. There's a VSN F-4 mod that seems more complete.
  2. In case you are not aware, there is a reasonably nice VSN F104 Starfighter mod floating around.
  3. I'm not disputing the importance of other factors, but it would still be very interesting to see just what difference the shape makes.
  4. It means that if the model shape is accurate, ray-traced radar would realistically reduce the range at which a 5th-generation fighter can be detected.
  5. Radar ray tracing is a very interesting idea. If it is true ray tracing against the actual surfaces of the aircraft model, it would provide some very interesting insights into a few things. There are now mods available adding F-22, F-35 and Su-57 to DCS. Make the radars work based on real ray tracing and... I'm sure you can see where I'm going with this...
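For context on why the shape matters so much (my own back-of-envelope addition, not from the thread): the standard radar range equation says detection range scales with the fourth root of the target's radar cross-section, so even a large RCS reduction shrinks detection range much more modestly. A quick sketch, with purely illustrative RCS numbers rather than real figures for any aircraft:

```python
# Detection range scales with the fourth root of radar cross-section
# (radar range equation: R_max is proportional to RCS^(1/4)).
# The RCS values below are illustrative placeholders, not real data.

def relative_detection_range(rcs_new, rcs_ref):
    """Ratio of detection ranges for two targets, all else being equal."""
    return (rcs_new / rcs_ref) ** 0.25

# A shape returning 1/1000 of the reference RCS is still detected at
# roughly 18% of the reference range:
ratio = relative_detection_range(0.005, 5.0)  # hypothetical m^2 values
print(f"{ratio:.3f}")  # → 0.178
```

This is why an accurate low-observable model shape fed into true ray tracing would show realistic, but not magical, reductions in detection range.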
  6. With all the bouncing, 100 landings is more like one approach with 100 bounces.
  7. +1 The MiG-29 in FC3 bounces like a bastard on landing, specifically on the nose gear. It's as if there is a shock amplifier rather than a shock absorber on the front landing strut. No other aircraft in DCS (I have most of them) behaves even remotely similarly on landing. I have no way of telling whether it's a faithfully modelled design fault or a simulation issue.
  8. Anyone aware of such a thing? I'd settle for it being just a 29S with the MFD from the Su-27/J-11, smokeless engines with an extra 7% of thrust and an arrester hook for carrier landing.
  9. Since you asked... Full disclosure: I run in a VM. The issue was a somewhat complex one, but in a nutshell:

1) As of some 18xx build of Windows 10, MS changed the way timer interrupts work. Instead of being moderated and timer-dependent, it went to a statically set 2,000 timer interrupts per second. While it was already doing that for some RTCs, it didn't do that for HPET; what changed was that it started doing it for HPET as well. This in turn made the VM suck up about 7-10% of each CPU core it was given when idle. The solution was to fix that. How is a little more complex.

2) Expose the Hyper-V clock device:

<clock>
  <timer name='hypervclock' present='yes'/>
</clock>

This fixed the huge CPU drain caused by the ridiculous timer interrupt rate. The problem is that doing so implicitly exposes the hypervisor's presence, which makes Nvidia's driver refuse to initialise the GeForce GPU in the VM. Nvidia implement a really asinine form of product differentiation in the driver: if you pass a GPU through to a VM via PCI passthrough, the driver refuses to initialise a GeForce GPU (but will work if you have a Tesla or Quadro). Nvidia have in the past claimed this is a bug, but it is completely deliberate - there's a whitelist in the driver, and it is updated when new GPUs are released, so it's definitely a feature rather than a bug. Since this is one way the Nvidia driver detects the presence of a hypervisor, it has to be neutered (other ways were already neutered). The way to do that for the Hyper-V clock source is:

<features>
  <hyperv>
    <synic state='on'/>
    <stimer state='on'/>
    <vendor_id state='on' value='null'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>

The KVM hidden state was there before (another way to keep the Nvidia driver from noticing the VM). The synic and stimer features are necessary to fix the clock CPU drain issue. Setting vendor_id to null is the other necessary step to neuter the GeForce driver's ability to detect the VM.
Between those, I got back an extra 35% of CPU per core passed to the VM, which completely cured the problem, and probably provided a more stable clock to boot. Granted, VMs are not zero-overhead (even if the overhead isn't showing up as idle CPU usage), but with this fix it's back in "good enough" territory for the foreseeable future. So on the whole, this is probably not applicable to most people hitting CPU starvation issues in DCS, but I hope it's useful to anybody who finds this post in the future.
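Pieced together, the two fragments above sit in the libvirt domain definition roughly as follows. The `<domain>` wrapper and the `offset` attribute on `<clock>` are my additions based on libvirt's domain XML schema; the element names and values are as quoted in the post:

```xml
<domain type='kvm'>
  <!-- ... rest of the domain definition omitted ... -->

  <!-- Expose the Hyper-V reference clock so the Windows guest stops
       hammering HPET with 2,000 timer interrupts per second -->
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
  </clock>

  <features>
    <hyperv>
      <!-- synic and stimer back the hypervclock source -->
      <synic state='on'/>
      <stimer state='on'/>
      <!-- a non-Microsoft vendor_id keeps the GeForce driver from
           recognising the hypervisor -->
      <vendor_id state='on' value='null'/>
    </hyperv>
    <kvm>
      <!-- hide the KVM CPUID signature from the guest -->
      <hidden state='on'/>
    </kvm>
  </features>
</domain>
```

Note that libvirt requires `stimer` to be paired with `synic`, which matches the combination used in the post.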
  10. Indeed. What surprised me most is that this is the very first workload I have found for which a Xeon X5690 (3.6GHz, 6 cores, top of the line for its Nehalem/Westmere generation) isn't massively overspecified. It never occurred to me that there could be a workload on which it doesn't utterly annihilate a 2.8GHz i3 instead of just about barely matching it. Either way, I guess I've managed to put off an upgrade for another year or two. :-)
  11. OK, I think I have solved, or at least largely alleviated, the problem with a bit of CPU-related tuning. It turns out that DCS doesn't degrade gracefully with CPU performance. When it hits 100% CPU usage, performance doesn't degrade by just the fraction of extra CPU it is short of - it craters completely. If its CPU requirements fit into the envelope of what's available, it's fine; when it needs even 1% more CPU than it has, performance drops to effectively zero - in my case from 50-60fps down to under 1fps. Thanks for your help. :-)
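This cliff-instead-of-slope behaviour is characteristic of a fixed-timestep game loop that must fully catch the simulation up before it may render. A toy model of that dynamic (my own illustration under that assumption, not DCS internals - the costs and tick period are made-up numbers):

```python
# Toy model (NOT DCS internals): a catch-up game loop. Each frame pays a
# small render cost, then runs simulation ticks until simulated time has
# caught up with wall time. While a tick costs less wall time than the
# sim time it advances, FPS is steady; once a tick costs even slightly
# more, the backlog grows every tick and the frame rate collapses
# instead of degrading proportionally.

def frames_rendered(tick_cost_ms, wall_time_ms=1000.0,
                    tick_period_ms=16.7, render_cost_ms=1.0):
    """Count frames rendered during wall_time_ms of a catch-up loop."""
    now = 0.0        # wall-clock time consumed so far
    sim_time = 0.0   # how far the simulation has advanced
    frames = 0
    while now < wall_time_ms:
        now += render_cost_ms  # draw one frame
        frames += 1
        # catch the simulation up to wall time before the next frame
        while sim_time < now and now < wall_time_ms:
            now += tick_cost_ms       # wall time spent on the tick
            sim_time += tick_period_ms  # sim time the tick advances
    return frames

# Within budget (15ms tick vs 16.7ms period): a healthy frame rate.
# Just over budget (18ms tick): the loop spends essentially the whole
# second ticking and renders almost nothing.
print(frames_rendered(15.0), frames_rendered(18.0))
```

A 20% shortfall in CPU doesn't cost 20% of the frame rate - it costs nearly all of it, which matches the observed drop from 50-60fps to under 1fps, and also why slowing game time down (reducing sim work per wall-clock second) restores smooth framerates.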
  12. Thanks. I'll try to get a better, higher resolution measurement on CPU usage to prove or disprove the CPU starvation hypothesis.
  13. I see. But the i3 is dual core; the Xeon has the advantage that the rest of the software stack can run on the cores not used up by the 2 DCS threads, so realistically it will do better than the single-thread benchmark suggests. As another reference point, while I have this problem with Fortress Mozdok mission 3 in the Su-27, I haven't experienced any issues with F-14 missions so far, and the F-14 should be sucking up a lot more CPU with its much more detailed modelling. Also, the CPU load doesn't get much past 80-90% on the two most heavily used cores; the graph definitely isn't flatlined. And I would expect the performance to degrade gracefully rather than just tank from 60fps to under 1fps. It doesn't make sense.
  14. What is this "pts" unit you speak of? I struggle to believe that a 3.6GHz Nehalem isn't substantially faster in single thread performance than a 2.8GHz i3.
  15. I have a strange issue, not dissimilar to what is discussed in other threads, but massively more pronounced. My setup is as follows:

CPU: Xeon X5690 (boosting to 3.6GHz, power management disabled)
RAM: 24GB
GPU: 1080Ti (tweaking makes no difference; it seems to stay below 65C and clocks look good). Still using the 419.xx driver since I heard 430.xx has CPU hogging issues.

The game runs perfectly smoothly to begin with, and most of the time, but at times, particularly when getting into a furball, everything grinds to a halt. I'm trying to complete Fortress Mozdok in single player, and as soon as I get into missile range of the F-16s in mission 3, the game goes from a perfectly smooth 60fps to about 1 frame every 2-3 seconds - and it never recovers. Or at least it doesn't recover in the time it takes for the AMRAAMs to blow me away (because 0.5 fps).

I tried reducing all graphics settings to a minimum, including terrain detail, and disabled all the mods - nothing helps. The GPU doesn't seem to be getting anywhere near throttling temperatures, and the CPU doesn't seem to be getting hot enough to hit thermal throttling. I disabled HT, and that didn't seem to help at all. I know a top-of-the-line Westmere Xeon isn't exactly bleeding edge, but the most bleeding-edge CPU also definitely isn't 50x faster, which is what it would take to make things playable. Looking at the Windows performance meter, all the CPUs are used and none are 100% maxed out - two of the cores hover around 80-90%, the rest are much lower. This is hardly an underpowered machine, and Fortress Mozdok mission 3 doesn't seem to be that demanding - there are fewer than 10 aircraft flying.

What else can I try? I haven't seen this kind of sudden performance cratering in any other game or application. It all works fine right up to the point where I am in missile range of the F-16s, and as soon as the first missiles are exchanged, everything grinds to a complete halt over the following few seconds.
Edit: What seems to help is slowing down game time by a notch (LAlt+Z). Even just one notch slower, and the FPS goes from 1fps back to 60fps. Obviously that isn't a solution, but I'm hoping it might help identify the root cause.