doright Posted August 26, 2013 I'm very frustrated with the frame rate on my fairly fast AMD-based machine, and rather than post my components here and have another round of "go update this", "that hardware is so last quarter", and "have you tried turning everything down to 8-bit and running a Nintendo emulator?", I thought I would ask a simple question, since so many people with AMD hardware seem to have problems, along with some observations of oddities in frame rate and hardware usage monitoring. Does DCS use a compiler with AMD optimizations enabled, or just Intel (or, worse yet, the Intel compilers)? If not, could DCS please try a beta with an AMD-optimized executable!
ED Team BIGNEWY Posted August 26, 2013 I have a feeling it is optimised for NVIDIA, but I've never heard anything official. Forum rules - DCS Crashing? Try this first - Cleanup and Repair - Discord BIGNEWY#8703 - Youtube - Patch Status Windows 11, NVIDIA MSI RTX 3090, Intel® i9-10900K 3.70GHz, 5.30GHz Turbo, Corsair Hydro Series H150i Pro, 64GB DDR @3200, ASUS ROG Strix Z490-F Gaming, PIMAX Crystal
Feuerfalke Posted August 26, 2013 Don't think it's optimized for anything other than Tianhe-2. On the other hand: of its 3.2 million cores, DCS would still use just one. MSI X670E Gaming Plus | AMD Ryzen 7 7800X3D | 64 GB DDR4 | AMD RX 6900 XT | LG 55" @ 4K | Cougar 1000 W | CreativeX G6 | TIR5 | CH HOTAS (with BU0836X-12 Bit) + Crosswind Pedals | Win11 64 HP | StreamDeck XL | 3x TM MFD
Pepec9124 Posted August 26, 2013 (edited) Intel is faster in core-vs-core comparisons. Without multithreading support, AMD users will suffer. Isn't there some way to make a multi-core CPU look like a single-core monster? It would require some kernel hacks, right? Edited August 26, 2013 by Pepec9124
doright Posted August 26, 2013 (Author) Intel is faster in core-vs-core comparisons. Without multithreading support, AMD users will suffer. Isn't there some way to make a multi-core CPU look like a single-core monster? It would require some kernel hacks, right? :doh: Ask a simple question and the discussion flushes right back down the 'it's your hardware' hole.
sobek Posted August 26, 2013 Isn't there some way to make a multi-core CPU look like a single-core monster? It would require some kernel hacks, right? Well, basically that is what your operating system does. Unfortunately, that is also where it gets very complicated (synchronisation). Good, fast, cheap. Choose any two. Come let's eat grandpa! Use punctuation, save lives!
Pilotasso Posted August 26, 2013 I have a feeling it is optimised for NVIDIA But never heard anything official I double dare you to find anything in these forums remotely associating ED with NVIDIA! :) .
ED Team BIGNEWY Posted August 26, 2013 I double dare you to find anything in these forums remotely associating ED with NVIDIA! :) http://www.digitalcombatsimulator.com/shop.php?end_pos=1322&scr=shop&lang=en Recommended system requirements: Operating system 64-bit: Windows Vista and 7; Processor: Core 2 Duo E8400, AMD Phenom X3 8750 or better; Memory: 4+ GB; Hard disk space: 7 GB; Video: Shader 3.0 or better; 896 MB NVIDIA GeForce GTX260 DirectX 9.0c or better; Sound: DirectX 9.0c-compatible; DirectX: 9.0c; requires internet activation. Minimum system requirements: Operating system: Windows XP, Vista or 7; Processor: Core 2 Duo 2.0 GHz; Memory: 3 GB; Free hard disk space: 7 GB; Video: 512 MB RAM card, DirectX 9-compatible; Sound: DirectX 9.0c-compatible; requires internet activation. Do I get a prize! :megalol:
Pilotasso Posted August 26, 2013 oh wow, you went far to do it but it was so much closer you didn't even see it! :D .
tute Posted August 26, 2013 I wonder if Wags ever reads this kind of thread...
ED Team BIGNEWY Posted August 26, 2013 I wonder if Wags ever reads this kind of thread... Of course he does, just for the entertainment value lol
doright Posted August 27, 2013 (Author) Of course he does, just for the entertainment value lol Does he have a white cat in his lap to pet, and a good evil laugh too? "No, Mr. AMD user, I expect you to die (to frame rate related issues)."
Slazi Posted August 27, 2013 (edited) If you're talking about CPUs: I'm not exactly sure how they would optimize it for AMD CPUs... Do they have feature sets that Intel doesn't? Multicore concurrency optimizations, on the other hand, would be much appreciated by all. Edit - Seems AMD have been having some problems with Intel compilers recently. Interesting. Hadn't heard about that before. But still, I don't think it's common for companies to compile .exes for both CPUs. Edited August 27, 2013 by Slazi
aussieboy Posted August 27, 2013 There appears to be a misunderstanding in this thread of exactly what optimizing an exe for AMD or Intel actually entails. The main difference between the two CPUs is the instruction set architecture (ISA), which is slightly different between Intel and AMD. I'm still under a confidentiality contract, so I can't go into too much detail in this area. In Visual Studio, for example, programmers have the option to compile for AnyCPU, x86 or x64; it's a click of a menu option. Multithreading, on the other hand, relates to the lines of code in the program, and converting a program from x86 to "true" 64-bit software requires a heap of work; from my own experience it's a right pain in the butt. The x64 ISA is purely an extension of the old x86 ISA and nothing more.
Slazi Posted August 27, 2013 Interesting. You mean the Platform specified in the Configuration Manager - Win32 / x64? I can't see an option for AnyCPU. I compile my C++ with Visual Studio 2012 (v110), if that makes a difference. Is it a .NET compile option only, or is it also for C++? The new engine should be friendlier towards concurrency, when that finally gets released. - Sorry if I'm derailing your thread a little. I'm just curious :)
aussieboy Posted August 27, 2013 (edited) Compiling with /clr:safe will give the same result. Also, for 64-bit you have these options too: /favor:blend, /favor:AMD64, /favor:INTEL64. Edited August 27, 2013 by aussieboy missed the favour/option
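For anyone following along, these are real MSVC compiler switches that apply when targeting x64; an illustrative command line (the file name here is made up) might look like:

```
rem Tune generated x64 code for AMD processors (hypothetical source file):
cl /O2 /favor:AMD64 main.cpp

rem The default: a blend that runs reasonably on both vendors:
cl /O2 /favor:blend main.cpp
```

Note these switches only tune instruction scheduling for the named microarchitecture; the output still runs on any x64 CPU.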
Harrysound Posted August 27, 2013 I've gone from 1920x1280 with an i7 2.6 GHz, 12 GB of triple-channel RAM and a 7850 2 GB OC, to 2560x1440 on an iMac with an i7 3.4 GHz, 16 GB of RAM and a GeForce 680MX 2 GB. My frame rates are very good now, going from around 30 to around 40-60 fps on very high details. I would just say that the biggest frame-rate leap came when I stopped using FreeTrack and started using TrackIR; my FPS probably doubled right there. I get serious FPS chug when dropping bombs or rockets, though. It's almost like the game engine can't handle the particle effects of the explosion, as if the pixel fill rate dive-bombs.
Slazi Posted August 27, 2013 Thanks, aussieboy. Interesting information. Harry - Yeah, particles are a huge issue for many players right now. EDGE will hopefully fix this. 2.6 GHz to 3.4 GHz is quite a jump. Odd that FreeTrack has such a negative effect on FPS. I haven't looked at that before.
SkateZilla Posted August 27, 2013 (edited) I think a lot of the FPS issues are DX9 and display driver related, as DX9c does have object/draw-call limits beyond which it causes severe CPU workloads. AMD has also expressed no immediate interest in checking the drivers against DCS World to see if there's a Catalyst Application Profile adjustment they could make to help, and AMD is neglecting a lot of DX9c issues in their drivers generally. The more I read the DirectX 9/10 developer docs, the more it makes sense when you link it with other articles. DX9 doesn't use modern general-purpose shader processors efficiently at all: it relies on vertex and pixel drawing to render objects, and on shaders to apply effects/alterations to the image. So on modern GPUs, DX9c uses some shaders for vertex and pixel drawing and some shaders to apply shader effects; when a particular scene requires extremes of one and not the other, the GPU cores idle and wait for work. DX9 also has a high CPU overhead per draw call, with a maximum of about 500 draw calls before that overhead has a significant effect on performance; the only way around this is to group several objects into one draw call. DX9's average CPU overhead is 40%, DX10's is less than 20%. DX10+ is designed around general-purpose shader processors: it no longer splits vertex and pixel drawing across fixed units, instead using unified shaders to do all the work (vertex and pixel drawing as well as image shading), so every shader on your GPU can be used in every scene/frame. That lets devs issue fewer draw calls with less CPU overhead. DX10 uses a unified architecture and tighter hardware standards, allowing DX10 code to run efficiently on all DX10-compatible hardware, and it reduces the CPU overhead of draw calls. DX10 also introduces instancing, which allows the same object to be rendered multiple times using one draw call (so all those AI objects, trees, buildings, etc. now cost one draw call per type instead of one per object), removing significant CPU overhead. And this doesn't even take into account DX11 features. Edited August 27, 2013 by SkateZilla Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2), ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9) 3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs
gavagai Posted August 27, 2013 I have a feeling it is optimised for NVIDIA But never heard anything official I double dare you to find anything in these forums remotely associating ED with NVIDIA! :) And this thread is about CPUs, not GPUs. Flight simmers are full of conspiracy theories about hardware. :joystick: P-51D | Fw 190D-9 | Bf 109K-4 | Spitfire Mk IX | P-47D | WW2 assets pack | F-86 | Mig-15 | Mig-21 | Mirage 2000C | A-10C II | F-5E | F-16 | F/A-18 | Ka-50 | Combined Arms | FC3 | Nevada | Normandy | Strait of Hormuz | Syria
SkateZilla Posted August 27, 2013 The era of separate Intel x86 and AMD x86 compilers ended long ago. This was a big topic when AMD K6/K6-II/K6-III chips were outperforming Intel Pentiums, but Intel had benchmark programs compiled to run more efficiently on Intel architectures.
ED Team BIGNEWY Posted August 27, 2013 So, long story short: DCS and AMD suck at the moment... we can only hope things change when the engine is updated; until then we just have to put up with it. :)
xracer Posted August 27, 2013 I think a lot of the FPS issues are DX9 and display driver related... [snip] Yes, I've also read articles from experienced developers which, in short, say that AMD is not taking the job of optimizing drivers too seriously. But I was looking for a way to force any newer DirectX application to use DX9.0c. This is usually done by devs supplying a switch for the exe. So what do you do if DX11 is installed and the exe doesn't have the switch? After a lot of Google work there was this apparently easy solution. I didn't find any verification of this, but if you tick compatibility mode for an earlier version of Windows, you may be able to force DX9.0c on a Windows 7 install. I tried Windows Server 2008 SP2 compatibility on DCS.exe and got an immediate FPS reduction of 6-8 fps in-game. So if DX9.0c was being used there, this in a way proves that there are some DX11 calls in DCS which make DX11 more efficient. Btw, does anyone remember, from ED products way back, whether the difference between AMD and Nvidia was always as big as it is today? How was the difference back in the Lock On days? I've just started to try out Nvidia and always used AMD before, so I don't remember. System spec: Intel Core i7 920@4.2Ghz (stable, 65degC fully loaded), EVGA GTX-780, Asus P6T Deluxe V2 v.5.04 BIOS, Saitek X52, 1TB/500GB WD HD for system/storage. Kingston SSD 120 GB for DCS, 250GB Samsung 840 SSD for the rest. 16GB Kingston KHX1600C9D3 Memory, 9 GB Pagefile, EK HFX-240 Watercooling, Corsair HX-1000 PSU. HAF-932 Tower, TrackIR-5, Win64Ult
SkateZilla Posted August 27, 2013 (edited) Yes, I've also read articles from experienced developers which, in short, say that AMD is not taking the job of optimizing drivers too seriously... [snip] Compatibility modes will reduce your FPS regardless of DirectX version, due to emulation overhead on certain hardware. Back in the Lock On days we had GPUs with separate geometry processors and only a handful of shader processors (like 128 SPs). The GPU would run at around 575-650 MHz, and the shaders would run at up to 1500 MHz. This is what DirectX 9 was designed to run on; or rather, the cards were designed to run DX9, lol. DX10 cards kept the separate geometry processor / stream processor design for about two generations after that (for nVidia: the DX9-generation cards, then the DX10-generation 8x00 GTS/GTX and 8800GT through 9800GTX, all had geometry cores and separate stream processors; they were all the same architecture). When nVidia introduced the 400 series (Fermi), they jumped the general-purpose shader count but kept the geometry processor and general-purpose processors separate in the Fermi architecture. The 500 series was the same as the 400, just with a few more general-purpose shaders. When the 600 series launched, they finally changed the architecture to purely general-purpose shaders and dropped the geometry processor (jumping from 512 general-purpose shaders to 1536). AMD switched to general-purpose shaders long before nVidia, which is why the 580s decimated the AMD cards in DX9 games: they still had the individual geometry processor to do the main rendering, and shaders operating at ~1.4 GHz to do the image processing. Edited August 27, 2013 by SkateZilla