Everything posted by Spectre1986

  1. Your 7800X3D doesn't have a second, non-cached CCD. All of your cores are on the same CCD with the huge cache, so your CPU should be performing near its peak. You could try Process Lasso to contain DCS to the even-numbered physical cores for a potential boost on the CPU side, but according to the DCS frame counter in your screenshot you're GPU limited. That 7900 XT isn't much better than the 3090 Ti, especially in VR, and DCS is an odd one that may prefer Nvidia and effectively charge an "AMD tax" on the GPU side. It may be worth swapping GPUs and testing again.
  2. I did some more testing using Process Lasso. First, I used it to always limit DCS to the first eight physical cached cores: 0, 2, 4, 6, 8, 10, 12, and 14. That excludes the entire non-cached CCD and also the virtual cores (SMT/Hyper-Threading) on the first CCD. This provided the best experience yet: nearly locked at 90 FPS, with very short and rare dips down to 45 FPS. The experience was noticeably better, as it's now mostly held back by the FPS limit I've set. The test is the Huey Hard Quick Start; after I've killed everything, I land facing the column of burning and exploding units, then take the screenshot. I'm in the Pimax Crystal, and the software running is Pimax, SimShaker for Aviators, SimShaker Sound Module, Process Lasso, Discord, and SimAppPro (along with DCS). These apps do not have affinity set, only DCS. Task Manager confirms DCS is only loading the physical cores on the cached/gaming CCD, so Process Lasso is working.

Here are the results without any affinity set in Process Lasso; this is just regular DCS using the standard code. It's the same mission, same settings, and a screenshot of the same burning units, but with nothing currently exploding (so it should be even easier to render). The performance is not only cut in half, the frame time also has much more "noise": instead of holding a constant FPS, the graph is rougher and the experience is VERY noticeably worse. Task Manager confirms DCS is mostly using the wrong CCD, maybe ONE core on the correct CCD, then dumping all the data across the slow Infinity Fabric...

The DCS developers have failed to address this issue despite plenty of feedback, expanded-logging testing, and log delivery. Even though every modern gaming CPU has a hybrid design (Intel has P and E cores, AMD has gaming vs. productivity CCDs), they're stuck in the past and appear to assign threads to nonsensical physical and virtual cores at random. Luckily we're able to take things into our own hands and force the correct behavior with Process Lasso. I'm using the free version, but I'm not sure if there are limitations that'll eventually force me to buy it. (For anyone who'd rather script the affinity than click through Process Lasso, see the sketch below.)
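A minimal sketch of the same affinity fix in Python, for anyone who'd rather script it. This assumes the psutil package and the 7950X3D layout from my logs (logical CPUs 0-15 on the gaming CCD, even numbers being the first SMT thread of each physical core); it's not anything DCS or Process Lasso ships, just the same mask applied by hand:

import psutil

# Pin every running DCS.exe to the physical cores of the cached CCD.
# Assumed layout: logical CPUs 0-15 = gaming CCD, even IDs = physical cores.
GAMING_CCD_PHYSICAL = [0, 2, 4, 6, 8, 10, 12, 14]

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "DCS.exe":
        proc.cpu_affinity(GAMING_CCD_PHYSICAL)      # set the mask, like Process Lasso does
        print(proc.pid, "->", proc.cpu_affinity())  # read it back to verify

Run it (as administrator) after DCS is up; the printed mask should match what Task Manager shows.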
  3. I resolved the fluctuating FPS issue by "resetting defaults" in the OpenXR Toolkit via the in-sim Ctrl+F2 menu. I did this when I initially enabled quad views per the instructions, but I think I changed some settings while troubleshooting DLSS visual-quality issues and it needed the additional reset. The sim is still using the wrong cores. Other games can see a 25% performance loss when using both CCDs, and these hybrid CPUs are the present and future; this sim needs to handle them correctly. I suspect the sim is picking the cores that clock higher, since the non-cached/non-gaming CCD cores boost higher. We could try downclocking the cores on the second CCD to make them slower than the gaming CCD cores, so the fastest cores would also be the cached gaming cores. Maybe DCS would handle it better, and the devs could use this info to fix the programming?
  4. Thanks for the input Deanzsyclone, we have the same config, but my issue seems to start in the main menu and persists through to in-sim at any settings. Can you load the sim up to the main menu, press RCtrl+Pause to show the frame time graph, take a screenshot (Print Screen on the keyboard), and paste it here (Ctrl+V)? I'd like to see if you have the same fluctuations. I'll also test your shadow settings recommendation and report back. I just did the latest BIOS update for my Asus TUF Gaming X670E-Plus WiFi motherboard and then did all available driver updates, including GPU. It made no change for me; it's still loading the wrong cores and giving a terrible experience. I have included the screenshot from the "Huey Hard Ground Attack" quickstart mission on the default map. Note: I'm not loading any extra software (SimShaker, SimAppPro, etc.) for these tests, only Pimax and DCS. Here is my newest dcs.log file with the expanded logging mentioned previously.
  5. My Xbox Game Bar app did need an update, but updating did not help. DCS 2.9 still uses the wrong cores, and you can see the fluctuations that persist from the main menu through to in-sim. The top graph is the menu, the bottom is in-sim with minimum settings. In case you didn't know, you display this graph with RCtrl+Pause. I even tried some heavy undersampling to really reduce the load on my top-end PC; it still fluctuates just like the main menu.
  6. Here is my log file, dcs.log (from C:\Users\<youruser>\Saved Games\DCS.openbeta\Logs). I used the mt.lua file for expanded logging (it makes your log file more useful to the devs when you drop it in C:\Users\<youruser>\Saved Games\DCS.openbeta\Config). Here is my DxDiag.txt (Run > dxdiag, let it finish, then "Save All Information"). Here is a screenshot from in-sim with very low settings; it shows the same repeating fluctuations from 45-90 FPS that we get in the main menu. My PC should easily provide 90 FPS with these settings, especially in the menu. You can also see in Task Manager that DCS is loading up most of my non-gaming cores (16-31). Please find and squash these bugs; 45 FPS in VR looks pretty bad when you're accustomed to 90 FPS, and the constant fluctuation between the two is even worse. 7950X3D / 4090 / 64GB @ 6GHz
  7. As you should know, AMD's new 7950X3D and 7900X3D have two CCDs (clusters), with one dedicated to gaming. The gaming CCD (the first 8 cores of the 7950X3D) has a massive 96MB of cache; the other CCD only has 32MB and is reserved for productivity (non-gaming) tasks. DCS 2.9 MT is using many cores from both CCDs, potentially causing low FPS and FPS fluctuations far worse than anything seen on previous MT versions. All games, including OLD non-MT DCS, are automatically isolated to the gaming CCD as they should be; on the 7950X3D those are the first 16 threads in Task Manager. The problem is that DCS 2.9 MT (and the new 2.9 single-threaded build) uses many cores from both CCDs. This is wrong. There is a substantial latency penalty when communicating between CCDs that can cause stutters. It does NOT matter how minor or insignificant you think these tasks are; simply using ANY cores on the second CCD for ANYTHING incurs the latency penalty, because the Infinity Fabric gets used. DCS needs to use ONLY the first CCD's cores on the top-end AMD X3D CPUs, for all threads.

I believe this is causing the new massive FPS fluctuations (45-90) that I'm seeing, even in the main menu. You can see this in my screenshot; note that my graphics settings are set to the minimum possible for this main menu screenshot. The main menu always held 90 FPS in versions prior to 2.9, and these same fluctuations happen in-sim at any graphics settings. Something needs to be fixed here. From what I understand, AMD uses Xbox Game Bar to determine whether a load is a game or a productivity workload. If this is true, it seems Xbox Game Bar detects the old standard DCS as a game, but 2.9 ST and 2.9 MT as productivity workloads.

Unrelated, but many performance metrics seem to be missing; for example, I only get 0 for the "Simulation" frame time when pressing RCtrl+Pause twice and expanding the block of data, though this frame time does show on the graph. Before running this test and capturing this screenshot I cleared the fxo and metashaders2 folders and deleted all of my multiplayer tracks; it didn't make a difference. 7950X3D / 4090 / 64GB @ 6GHz
  8. Just for reference, this first screenshot is what the CPU load looks like in Task Manager with everything EXCEPT DCS running: SimShaker for Aviators, Windows Mixed Reality, MS Paint, and Chrome. The second screenshot is all of that plus regular non-MT DCS. It stays on the first CCD as it should. You can see that the overall FPS reads lower, but the CPU frame time (Simulation) is much better and more consistent. They both "feel" about the same despite MT reporting a higher FPS, probably because GPU shortfalls are handled more gracefully and are much harder to notice than the hard stutters caused by CPU latency. Note: the attached screenshots and logs in this post are for the NON-MT version of DCS, referred to here as the standard version. dcs.log
  9. Here are the results with the latest update. I've added foveated rendering through the OpenXR Toolkit, so the performance is better compared to my other screenshots. I do have DCS.exe added to the Windows 11 Graphics Settings and set to High Performance. It looks like the load is still spread across all cores, using both CCDs and the slow communication between them. The CPU is still the bottleneck, with some pretty bad 18ms spikes. That's too high to maintain 90 FPS, which means it's not a good experience in VR.

It may be worth noting that the two CCDs also have different core performance. The first "gaming" CCD actually has lower maximum boost frequencies than the second "productivity" CCD: the extra cache is stacked on top of the cores, limiting their thermals, so the cores run a little slower on the gaming CCD. This may have something to do with which cores are selected for certain loads. It may seem better to pick the faster cores on CCD 2 for some tasks and the cores with the massive cache for others, but in reality this introduces latency because of the slow connection between the two CCDs. It would be better to pick one or the other for everything. History has proven that the massive cache is a huge advantage in flight simulators, but if you want to code it both ways I can test each. I'll run a complex mission and record FPS with whatever recording tool is recommended (I have experience with FRAPS) to show max, min, 1% lows, etc.

Here is the relevant portion of the attached log, to match the previous poster's info:

2023-06-15 01:32:11.642 INFO EDCORE (Main): CPU: AMD Ryzen 9 7950X3D 16-Core Processor [2x L3 caches]
2023-06-15 01:32:11.642 INFO EDCORE (Main): CPU caches have different sizes: [32-96] MB
2023-06-15 01:32:11.642 INFO EDCORE (Main): Cores sharing L3 cache 96 MB: {8, 9, 10, 11, 6, 7, 4, 5, 12, 13, 2, 3, 0, 1, 14, 15}
2023-06-15 01:32:11.642 INFO EDCORE (Main): Cores sharing L3 cache 32 MB: {18, 19, 22, 23, 26, 27, 24, 25, 30, 31, 16, 17, 20, 21, 28, 29}
2023-06-15 01:32:11.642 INFO EDCORE (Main): all CPU cores have the same efficiency class 0
2023-06-15 01:32:11.642 INFO EDCORE (Main): CPU cores have different performance classes: [0-14]
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 7: {8, 9}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 6: {10, 11}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 5: {6, 7}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 4: {4, 5}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 3: {12, 13}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 2: {2, 3}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 1: {0, 1}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 0: {14, 15}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 14: {18, 19}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 14: {22, 23}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 13: {26, 27}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 12: {24, 25}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 11: {30, 31}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 10: {16, 17}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 9: {20, 21}
2023-06-15 01:32:11.642 INFO EDCORE (Main): logical cores with performance class 8: {28, 29}
2023-06-15 01:32:11.642 INFO EDCORE (Main): common cores: {12, 13, 2, 3, 0, 1, 14, 15}
2023-06-15 01:32:11.642 INFO EDCORE (Main): render cores: {8, 9, 10, 11, 6, 7, 4, 5}
2023-06-15 01:32:11.642 INFO EDCORE (Main): IO cores: {18, 19, 22, 23, 26, 27, 24, 25, 30, 31, 16, 17, 20, 21, 28, 29}
2023-06-15 01:32:13.106 INFO EDCORE (Main): Create boot pool.
2023-06-15 01:32:13.107 INFO EDCORE (Main): Created boot pool: n:32
2023-06-15 01:32:13.108 INFO APP (Main): Command line: "D:\Program Files\Eagle Dynamics\DCS World OpenBeta\bin-mt\DCS.exe" --force_enable_VR --force_OpenXR
2023-06-15 01:32:13.108 INFO APP (Main): DCS/2.8.6.41066 (x86_64; MT; Windows NT 10.0.22621)
2023-06-15 01:32:13.108 INFO APP (Main): Application revision: 221066
2023-06-15 01:32:13.108 INFO APP (Main): Renderer revision: 24140
2023-06-15 01:32:13.108 INFO APP (Main): Terrain revision: 24104
2023-06-15 01:32:13.108 INFO APP (Main): Build number: 385
2023-06-15 01:32:13.108 INFO APP (Main): CPU cores: 16, threads: 32, System RAM: 64654 MB, Pagefile: 4096 MB
2023-06-15 01:32:13.318 INFO EDCORE (Main): (dDispatcher)enterToState_:0
2023-06-15 01:32:13.318 INFO Dispatcher (Main): 2023/6/14 20:32 V1803061700
2023-06-15 01:32:13.341 INFO ASYNCNET (Main): ProtocolVersion: 327
2023-06-15 01:32:13.345 INFO ED_SOUND (Main): Using driver: wasapi

dcs.log
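For what it's worth, that log already contains everything needed to pick the right cores. Here's a small sketch (my code, not ED's) that parses the "Cores sharing L3 cache" lines out of a dcs.log and selects the cores behind the largest cache, i.e. the gaming CCD on an X3D part:

import re

# Pull the L3 cache domains out of dcs.log and pick the cores that
# share the largest cache (the gaming CCD on an X3D chip).
pattern = re.compile(r"Cores sharing L3 cache (\d+) MB: \{([\d, ]+)\}")

domains = {}  # L3 size in MB -> logical core ids
with open("dcs.log", encoding="utf-8", errors="ignore") as log:
    for line in log:
        m = pattern.search(line)
        if m:
            domains[int(m.group(1))] = [int(c) for c in m.group(2).split(",")]

if domains:
    gaming_cores = sorted(domains[max(domains)])
    print("cores behind the largest L3:", gaming_cores)
    # against my log above this prints 0-15, exactly the mask DCS should use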
  10. Actually, there's an even newer BIOS. It's a beta, but it's essential to keep your X3D chip from EXPLODING! I'm testing with it now.
  11. DCS version 2.8.4.39313 with default Windows settings, running MT in VR (Reverb G2) on the Ryzen 9 7950X3D with the latest BIOS and chipset drivers. I'm still seeing the load spread across both CCDs instead of contained to the gaming CCD with the extra cache. I also see a frame time improvement when disabling the non-gaming CCD; the experience is noticeably better when DCS is forced to threads 0-15. dcs.log
  12. Did you measure 1% and 0.1% lows by chance, with frame times from both the CPU and GPU, in 2D or VR, in a complex mission? I ask because you only mention averages, and averages do not show the microstutters we'd expect to see, and do see (though that could just be ED's code). Component demand can shift massively between scenarios in DCS; you'll want to test with a mission, module, weather, and time of day specifically designed to stress the CPU/RAM, and verify you're not testing at settings that cause a GPU bottleneck. As DCS updates and improves, the CPU will only become more important. Let's not make excuses; let's expect ED to do it right like everyone else. (A quick sketch of how 1% lows fall out of raw frame times is below.)
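For anyone unfamiliar with the terminology: 1% and 0.1% lows are just the average FPS across your worst frames, which is exactly what plain averages hide. A quick sketch of the math, given a list of per-frame times in milliseconds from whatever capture tool you use (fpsVR, a FRAPS frametimes dump, etc.):

# Average FPS plus 1% / 0.1% lows from a list of frame times in ms.
def lows(frame_times_ms, fraction):
    worst = sorted(frame_times_ms, reverse=True)     # slowest frames first
    n = max(1, int(len(frame_times_ms) * fraction))  # size of the worst slice
    return 1000.0 / (sum(worst[:n]) / n)             # average FPS of that slice

times = [11.1] * 980 + [22.2] * 20   # mostly 90 FPS with a few 45 FPS dips
print("average :", round(1000.0 / (sum(times) / len(times)), 1))  # ~88 FPS, looks fine
print("1% low  :", round(lows(times, 0.01), 1))                   # 45 FPS, shows the stutter
print("0.1% low:", round(lows(times, 0.001), 1))                  # 45 FPS

The average says 88 FPS while the lows expose the 45 FPS dips; that's why averages alone can't be trusted here.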
  13. Do not look at percent utilization for the CPU; that hasn't meant anything since single-core CPUs were a thing about 20 years ago. Look at frame times instead. They're built into DCS, or you can use tools like MSI Afterburner or fpsVR. Lower frame times are better. Press RCtrl+Pause in-sim to get the basic graph; hover your mouse over it and you'll get the frame time numbers coming from your CPU (includes RAM) and your GPU. They do not add together; it's a race, and the component with the higher number is your bottleneck and determines your maximum FPS (see the sketch below). If they're unbalanced in a demanding scenario, you can turn up the settings that load the stronger component for free visual upgrades (hover your mouse over each setting and it'll tell you which component it loads). The words don't match what I'm seeing in the numbers either; it's bugged, I'm GPU limited but it doesn't say so. Also, if you press RCtrl+Pause a second time and expand the data, you'll see more missing data (like the "Simulation" frame time) that normally shows up in non-MT. Ryzen 7950X3D / 64GB@6GHz / 4090 / Reverb G2
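To make the "race" concrete, here's a trivial sketch of how the two frame times relate to your FPS cap (illustrative numbers, including the 18ms CPU spikes I mentioned earlier):

# CPU and GPU frame times don't add together; the slower one caps your FPS.
def fps_cap(cpu_ms, gpu_ms):
    limiter = "CPU" if cpu_ms > gpu_ms else "GPU"
    return 1000.0 / max(cpu_ms, gpu_ms), limiter

fps, limiter = fps_cap(cpu_ms=18.0, gpu_ms=9.0)
print(f"{fps:.0f} FPS max, {limiter} limited")   # 56 FPS max, CPU limited: no 90 FPS VR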
  14. The thread usage didn't seem to change with that file. Here is a screenshot, and I've attached the log. dcs.log
  15. Here are mine, and here are directions for everyone else.
DxDiag information: press Windows Key + R, type "dxdiag" (without quotes) in the Run box, and press Enter. Give it a minute to finish running, then click "Save All Information...", pick a save spot, and drag the file into a post here or otherwise attach it.
DCS log: navigate to the DCS log folder and drag the file called "dcs.log" into the same post. The folder is usually here: C:\users\<your user folder>\Saved Games\DCS.openbeta\Logs
dcs.log DxDiag.txt
  16. Complete speculation, but it seems like they could be doing some sort of CPU lookup and assigning workloads to cores arbitrarily. For example, they see that the AMD 7950X3D has 32 threads and have manually mapped certain workloads to specific cores. Could that explain why DCS fails to start with cores disabled? I've found DCS MT always loads the same 7 threads, which could support this theory. If this is true, the fix could be as simple as the developers detecting X3D CPUs and assigning these workloads to threads 0-15, or even limiting them to the first 8 even-numbered physical cores (a rough sketch of that selection logic is below). It'd be a little more work, but we live in a world of hybrid CPUs from both AMD and Intel; the same would need to be done for Intel 12th gen and up. And if it's really Microsoft's, AMD's, or Intel's fault, why does non-MT work as expected? Task Manager clearly shows every game and DCS non-MT running on the correct cores; you can trust it. We shouldn't need to purchase and run third-party software to fix this (long term), and ED doesn't need our CPUs in-house to investigate. I volunteer to test fixes.
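To be concrete about the fix I'm suggesting, this is the shape of the rule (my sketch, not ED's code), built on the cache-domain info the engine already logs. It prefers the large-cache CCD and can optionally drop the SMT siblings, assuming logical cores (2n, 2n+1) share a physical core:

# Sketch of the suggested rule: prefer the large-cache CCD, optionally SMT-off.
# cache_domains: L3 size in MB -> logical core ids, as already printed in dcs.log.
def pick_game_cores(cache_domains, physical_only=False):
    if len(cache_domains) > 1:                    # asymmetric CCDs (X3D-style)
        cores = sorted(cache_domains[max(cache_domains)])
    else:                                         # one cache domain: use everything
        cores = sorted(c for cs in cache_domains.values() for c in cs)
    if physical_only:                             # keep one logical core per SMT pair
        cores = cores[::2]
    return cores

domains = {96: list(range(16)), 32: list(range(16, 32))}   # my 7950X3D, per the log
print(pick_game_cores(domains))                            # threads 0-15
print(pick_game_cores(domains, physical_only=True))        # cores 0, 2, ..., 14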
  17. I appreciate it, thanks! I have a feeling it's just a matter of making sure Xbox Game Bar (or whatever) knows this is a game. However, the other reports are a little concerning. If Intel P/E cores are having issues, people who disable their E cores can't start the sim, and Process Lasso / setting CPU affinity doesn't work, there may be an issue in the code itself.
  18. Can you reassure me you've read my title or post? Your response indicates you have not. Standard DCS uses these new AMD X3D CPUs correctly, parking the productivity cores and using only the gaming cores with their massive cache. This large cache has proven to dominate in flight simulators by a large margin. MT DCS uses random cores on both CCDs; this is wrong. You (or Xbox Game Bar) are currently sabotaging this bleeding-edge architecture with this beta MT version. Even if you feel you can blame the stutters on something else, it isn't working correctly, and our expensive hardware is being handicapped. I'm sure you can fix it easily, as standard DCS already works.
  19. I notice something similar with my new Ryzen 7950X3D, 64GB@6GHz, 4090 rig with the Reverb G2. In my case it could be that MT uses random cores when it should be limited to the first 16 threads; there's extra latency involved when the game has to move between CCDs.
  20. Here is the standard single-threaded DCS, which stays on the first 16 threads. The overall FPS is worse, but the stutters are not as bad as MT. Others have reported that Process Lasso can't even successfully contain DCS MT to the gaming CCD, so it appears some work needs to be done. The X3D chips are the best flight simulation CPUs available by a large margin, so it's important DCS handles them correctly. Rig specs: 7950X3D with PBO / 64GB@6GHz / 4090 / 4TB WD Black NVMe
  21. AMD's new 7950X3D and 7900X3D have two CCDs (clusters), with one dedicated to gaming. The gaming CCD on my 7950X3D has 8 cores and 16 threads sharing a massive 96MB of cache; the other CCD only has 32MB and is reserved for productivity (non-gaming) tasks. All games, including non-MT DCS, are automatically isolated to the gaming CCD as they should be, which is the first 16 threads in Task Manager. The problem is that DCS MT uses SOME cores from both CCDs, and there is a latency penalty when communicating between CCDs that can cause stutters. These seem most prevalent when burning units move into view, which causes the sharp green CPU spikes on the performance graph. Note: the overall FPS is higher with MT, but there is a noticeable stutter.

From what I understand, AMD uses Xbox Game Bar to determine whether a load is a game or a productivity workload. If this is true, it seems Xbox Game Bar detects standard DCS as a game but MT DCS as a productivity workload, which could explain the stutters I experience in MT that are not present in standard. Many performance metrics also seem to be missing; for example, I only get 0 for the "Simulation" frame time when pressing RCtrl+Pause twice and expanding the block of data, though it does show on the graph.
  22. Vulkan made a MASSIVE difference in my XP11 VR FPS and overall experience on day one, and similar experiences were widely reported. I'm not sure what was wrong with your system if we're talking about the same sim...
  23. Just an FYI for anyone who's not aware: CPU and GPU utilization percentages tell you somewhere between "very little" and "nothing". You want the frame times from the GPU and the frame times from the CPU+RAM; lower is better. You can get these with free tools like MSI Afterburner (enable it deep in the settings), paid tools like fpsVR, or even built into DCS. I believe the default is RCtrl+Pause twice, which shows a block of data. IIRC GPU frame time is just called "Frame Time" (shown on or near the first line) and CPU+RAM frame time is near the center of the block, labeled "Simulation". You will need a very complex single-player mission or a busy multiplayer server to realistically test CPU+RAM performance in DCS. Make sure you add things like MLRS, cluster munitions, and dozens (hundreds with Raptor Lake?) of ground units moving off-road to really tax the CPU+RAM; quick-start or simple-mission testing is pointless and will give bad results. Also, disabling Hyper-Threading and the little cores is always worth testing with DCS. Obviously overclocking always helps too, but it's more limited with Raptor Lake. I'm very close to upgrading but need the data for VR. Thanks in advance. i7-8086K@5.1GHz / 32GB@3.6GHz / 3090@stock / 2TB NVMe / Reverb G2 / Index
  24. Does anyone have a reference for how the 5800X3D compares with any Intel CPU in DCS VR (G2 and Index)? I'm on an i7-8086K@5.1GHz / 32GB@3.6GHz / 3090. The i7-8086K performs about the same as an i5-10600K, so it's still pretty good. I'm planning on waiting for 13th gen / the new Ryzen before doing a new build, so I'm just curious where my rig lines up.