Aerophobia Posted August 1, 2022 (edited)

Before I start, let me just say that I have done a lot of research and talked to a lot of people about these two cards, and from what I've determined they are practically equal in performance outside of DCS. The main spec difference I've found is that the 6700 XT has a narrower bus (192-bit) but more VRAM (12GB), while the 3060 Ti has a wider bus (256-bit) but less VRAM (8GB). Half of the people I've talked to say to get the 6700 XT because the extra VRAM matters more than the bandwidth, and the other half say the opposite and advocate for the 3060 Ti. Honestly, the specs mean nothing to me at this point. I just want to know which would perform better in DCS, from those who've actually used these cards.

The setup I'll be using:
Ryzen 5 5600
32GB 3200 DDR4
2560 x 1080 100Hz monitor (a 1080p ultrawide)

I also want to take VR into consideration. I have a high tolerance for stutters, lower fps, etc., and I have a Rift S, which has a lower resolution than most VR headsets, so I believe I'll be fine playing on it with a GPU from this price range.

If anyone here has used both cards, your input would be greatly appreciated!

EDIT: Another thing to consider is that, under current sales in Australia, the RX 6700 XT is $600 AUD (~$420 USD) and the 3060 Ti is $680 AUD (~$475 USD).

Edited August 1, 2022 by Aerophobia: add info on what prices are available for the GPUs.
AngleOff66 Posted August 1, 2022

Here is a DCS video from 2021 with the 6700 XT. It is in Portuguese, but the auto-translated English subtitles seem to work well; you may have to watch it on YouTube to get the auto-translate. I also checked Reddit and searched "DCS 6700 XT": a bunch of results came up that may be worth a read. Bandwidth vs. VRAM, geesh, that's a question indeed.
LucShep Posted August 5, 2022 (edited)

@Aerophobia - this thread may help:

FWIW, I went through the same choice/decision after over two years with an RX 5700 XT 8GB and all the AMD quirkiness, and I feel I made the right decision going with the RTX 3060 Ti 8GB instead of the RX 6700 XT 12GB.

DCS largely prefers Nvidia, that much is certain in my experience. Overall, the Nvidia drivers are still better (to me), and the larger bandwidth does seem to help the GPU. Performance of the RTX 3060 Ti is better than expected (though it's no RTX 3090 Ti, that's for sure), and with undervolting (for example) performance is the same as a stock RTX 3070.

On the VRAM matter, even the 12GB of the RX 6700 XT will be fully used (and then some) just the same, so that line of argument quickly falls apart when you see that, depending on module/map/settings, it still won't suffice for DCS (yes, the game is that unoptimized)...

Edited August 5, 2022 by LucShep

CGTC - Caucasus retexture | A-10A cockpit retexture | Shadows Reduced Impact | DCS 2.5.6 - a lighter alternative
Win10 Pro x64 | Intel i7 12700K (OC@ 5.1/5.0p + 4.0e) | 64GB DDR4 (OC@ 3700 CL17 Crucial Ballistix) | RTX 3090 24GB EVGA FTW3 Ultra | 2TB NVMe (MP600 Pro XT) + 500GB SSD (WD Blue) + 3TB HDD (Toshiba P300) + 1TB HDD (WD Blue) | Corsair RMX 850W | Asus Z690 TUF+ D4 | TR PA120SE | Fractal Meshify-C | UAD Volt1 + Sennheiser HD-599SE | 7x USB 3.0 Hub | 50'' 4K Philips PUS7608 UHD TV + Head Tracking | HP Reverb G1 Pro (VR) | TM Warthog + Logitech X56
Aerophobia (Author) Posted August 17, 2022 (edited)

Ultimately went for the 6700 XT. Some of the things that sold me were:
1. In Australia, it was over $100 cheaper.
2. One of my friends had experience with Radeon drivers and spoke pretty positively about them.
3. The rumours are that the RTX 4070 will have a 192-bit bus but more VRAM. That sold me on VRAM being more important than bandwidth; also, if the new generation is going for lower bandwidth, getting more VRAM rather than more bandwidth is likely more future-proof.

Since I got my RX 6700 XT I've optimised my settings and played around with my old GTX 1060 presets. Went from a 56 fps average on the 1060 to 140 fps, and have since adjusted the settings a little more to give a constant ~100 fps. It looks great with Radeon sharpening; it legitimately looks like a higher resolution. Often I don't actually fill all my VRAM, and usage gets to around 9-10GB, so if I'd gotten the 3060 Ti I would've been bottlenecked.

Also played with VR, with no issues driver-wise. It's a shame that none of the Radeon Adrenalin features work in VR, but I'm also kind of happy about it because it means there are fewer things to reduce my VR performance. Did my usual optimisations and used FSR heavily through vrperfkit (on the Rift S: set the resolution to 2000 horizontal in SteamVR, set the render scale to 0.75 in vrperfkit), and was getting great quality and a constant maximum frame rate (80 fps), with only slight dips below when right above trees and when servers cause spikes. SAM / Resizable BAR seems to give slightly lower FPS but a smoother experience.

Ultimately very happy with the card. Tried out some ray tracing in other games; it's definitely not worth going for the 3060 Ti for the ray tracing, such a massive drop in performance for not a whole lot. I will say that if you can get a good deal on it, and you have the time to do some optimising, the 6700 XT will not disappoint.

Edited August 17, 2022 by Aerophobia
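To make the vrperfkit numbers above a little more concrete, here is a rough sketch of what that setup implies. Only the 2000-pixel horizontal target and the 0.75 render scale come from the post; the vertical target is a made-up placeholder, and the assumption that the scale applies per axis (as FSR-style upscalers typically are configured) is mine, not something stated in the thread.

```python
# Illustration only: values marked "from the post" are Aerophobia's settings,
# everything else is an assumption for the sake of the example.
TARGET_WIDTH = 2000     # per-eye horizontal resolution set in SteamVR (from the post)
TARGET_HEIGHT = 2240    # hypothetical per-eye vertical resolution (placeholder)
RENDER_SCALE = 0.75     # vrperfkit render scale (from the post)

# Assuming the scale is applied per axis, DCS renders at a reduced resolution
# and FSR then upscales (and sharpens) back up to the SteamVR target.
render_w = round(TARGET_WIDTH * RENDER_SCALE)
render_h = round(TARGET_HEIGHT * RENDER_SCALE)
print(f"Rendered per eye: {render_w} x {render_h}")
print(f"Upscaled to:      {TARGET_WIDTH} x {TARGET_HEIGHT}")
```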
SkateZilla Posted August 21, 2022

On 8/17/2022 at 6:41 AM, Aerophobia said: Ultimately went for the 6700 XT. Some of the things that sold me were: (...)

The 192-bit bus width is more than enough at that frequency and RAM chip size. The reason it's 192-bit is that Samsung and Micron GDDR6/GDDR6X VRAM modules are now 2GB per chip, and each VRAM chip has a 32-bit interface, so 6x 2GB chips = 12GB.

The RTX 4070 has 6x 2GB chips at 32-bit each, for a total of 192-bit bus width.
The RTX 3090 Ti has 12x 2GB chips at 32-bit each, for a total of 384-bit bus width.
The RTX 3080 Ti has 12x 1GB chips at 32-bit each, for a total of 384-bit bus width.
The RTX 3080 has 10x 1GB chips at 32-bit each, for a total of 320-bit bus width.
The RTX 3070 has 8x 1GB chips at 32-bit each, for a total of 256-bit bus width.
The RX 6700 XT has 6x 2GB chips at 32-bit each, for a total of 192-bit bus width.

A higher bus width simply means more VRAM modules, so the interface is wider to address all the modules; the bus width just tells you how many RAM chips you are getting on the PCB. Wherever the rumor that a wider bus width offers more performance started or came from, I honestly don't know. The performance of 12x 1GB chips on a 384-bit bus vs. 6x 2GB chips on a 192-bit bus would be the same, and with 6 chips instead of 12 you'd have data spread across fewer chips, lower power consumption, less heat, and likely higher clocks. You also wouldn't be able to put 6 chips on a 384-bit bus, as each chip only has a 32-bit bus, so the "extra" 192 bits would go unused.

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2), ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9) 3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs
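As an aside, the chips-to-bus relationship described in the post above is easy to sanity-check. This is just a minimal sketch of that arithmetic, using the chip counts quoted in the post (the RTX 4070 figures were still rumours at the time):

```python
# Total bus width is (number of VRAM chips) x 32 bits, and capacity is
# (number of chips) x (GB per chip). Card figures are taken from the post
# above, not independently verified.
cards = {
    # name: (number of chips, GB per chip)
    "RTX 4070 (rumoured)": (6, 2),
    "RTX 3090 Ti": (12, 2),
    "RTX 3080 Ti": (12, 1),
    "RTX 3080": (10, 1),
    "RTX 3070": (8, 1),
    "RX 6700 XT": (6, 2),
}

BITS_PER_CHIP = 32  # each GDDR6/GDDR6X chip exposes a 32-bit interface

for name, (chips, gb_per_chip) in cards.items():
    bus_width = chips * BITS_PER_CHIP
    capacity = chips * gb_per_chip
    print(f"{name}: {capacity} GB on a {bus_width}-bit bus")
```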
Svsmokey Posted August 21, 2022

Thanks for that. Good to know...

15 hours ago, SkateZilla said: The 192-bit bus width is more than enough at that frequency and RAM chip size. (...)

9700k @ stock , Aorus Pro Z390 wifi , 32gb 3200 mhz CL16 , 1tb EVO 970 , MSI RX 6800XT Gaming X TRIO , Seasonic Prime 850w Gold , Coolermaster H500m , Noctua NH-D15S , CH Pro throttle and T50CM2/WarBrD base on Foxxmounts , CH pedals , Reverb G2v2
LucShep Posted August 21, 2022 (edited)

19 hours ago, SkateZilla said: The 192-bit bus width is more than enough at that frequency and RAM chip size. (...) A higher bus width simply means more VRAM modules, so the interface is wider to address all the modules; the bus width just tells you how many RAM chips you are getting on the PCB. Wherever the rumor that a wider bus width offers more performance started or came from, I honestly don't know. (...)

I'm not sure I can agree with you there, as it doesn't account for other important aspects, some of which have been observed in benchmark analysis through the years.

I'm most likely missing more info but, AFAIK, the memory controller load is directly correlated with GPU load, which as we know is also correlated with resolution. We can assume that the higher the maximum GPU load, the higher the memory bandwidth usage will be, and vice-versa. Which may explain, even if not entirely, why the Nvidia RTX 3000 models with a bigger bus and bandwidth are, in fact, noticeably more performant at higher resolutions (and also at higher settings) beyond 1440P (so at 4K, in VR, or with triple screens) than the AMD RX 6000 equivalents.

The "RX 6700 XT vs RTX 3060 Ti" is a very interesting versus case because it's like a repeat of the "R9 380 vs GTX 960" of 2015, though this time the bigger memory bus and bandwidth are on the opposite side of the Green (Nvidia) and Red (AMD) camps. Looking back, we all know the GTX 960 was creamed all over the place by the R9 380, noticeably once you hit anything higher than 1080P, used supersampling, or raised settings to "Ultra" in certain games. The smaller bus and bandwidth of the former were stated as limiting factors (though the GTX 960 wasn't all that powerful to begin with), even when 4GB of VRAM was later added to it (originally it was 2GB) and the tests were repeated, only to reach the same results. It may well be a case of horses for courses, but a bigger memory bus and bandwidth can trump bigger VRAM, depending on the game/application.

DCS World is an odd case, a bit of an anomaly (monstrosity really), as it devours everything that you throw at it, graphics-wise. It always wants more VRAM (blame the devs for using oversized texture sizes and format techniques, as documented on these forums more than once), but it also wants a bigger memory bus and bandwidth for performance, more so once you go over 1440P (4K, VR or triple screens). For instance, check facts with people on these forums who have had both an RX 6800/RX 6900 XT and an RTX 3080/Ti/3090, who swapped the former for the latter and never looked back.

Again, in reality I could be hugely wrong here, but it has been brought up by many well-established people in the area (IIRC, Steve from Hardware Unboxed + TechSpot, the other Steve from Gamers Nexus, etc.), and when their arguments are backed up by factual data (loads of gaming benchmarks), I tend to go along with that.
Edited August 21, 2022 by LucShep
SkateZilla Posted August 22, 2022 (edited)

7 hours ago, LucShep said: I'm not sure I can agree with you there, as it doesn't account for other important aspects, some of which have been observed in benchmark analysis through the years. (...)

The bus width doesn't directly affect render performance. It **can** help in memory-intensive render applications, where 12GB as 6 modules on a 192-bit bus is slower than 12GB as 12 modules on a 384-bit bus, simply because there are more lanes for the data to go through, at the expense of power consumption, heat, and data fetching/fragmenting between modules. But the difference is barely noticeable in practical environments and only shows a large gap in synthetic benchmarks. And that's assuming the VRAM modules are rated to run at the same speed; the trade-off in cost and power consumption between the two is likely not worth it, and there's no GPU ever created that would house double the VRAM modules and have the same TDP. If the two PCBs aren't running the same VRAM speeds but do have the same TDP, then the 12GB as 6 modules on the 192-bit bus is likely running fast enough to overshadow the difference in total bandwidth of the 12-module configuration (that, or the 12x 1GB modules are throttled enough to lose any total bandwidth advantage over the 6x 2GB configuration).

However, GDDR6/GDDR6X has a high enough bandwidth that the bus width really doesn't matter anymore. No GPU manufacturer is going to continue to put 12 VRAM chips on a GPU when it's cheaper to use 6, and assuming Micron's 4GB modules are cost-effective for GDDR6X next year, you'll likely start seeing entry-level 8GB cards with 64-bit bus widths, mainstream at 16GB on 128-bit, 24GB on 192-bit for high end, and 32GB on 256-bit for enthusiast.

DCS World doesn't rely on bus width; DCS World relies on draw call execution, which at the moment is a hard ceiling for DX11.

The overall bandwidth is bus width times VRAM speed. However, you can't just slap 3x 2GB VRAM chips on a card and give it a 256-bit bus; it doesn't work that way, as each chip only registers 32 bits, nothing more (each chip is technically 16 bits, but since it's 2-way, it's 2x, so 16 x 2 = 32 bits).

So:
RTX 3070 at 1750MHz on 256-bit: 1750 x 2 x 4 = 14,000 MT/s, or 14 Gbps per pin; 14 Gbps x 256-bit interface / 8 = 448 GB/s.
Assuming rumors hold true, RTX 4070 at 2625MHz on 192-bit: 2625 x 2 x 4 = 21,000 MT/s, or 21 Gbps per pin; 21 Gbps x 192-bit interface / 8 = 504 GB/s.

So bandwidth actually goes up ~12.5% despite the bus width decrease. As bandwidth and capacity go up, bus width will continue to decline.

Edited August 22, 2022 by SkateZilla
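For anyone who wants to reproduce that arithmetic, here is a small sketch of the same bandwidth calculation. The clocks and bus widths are the ones quoted in the post above (the RTX 4070 values were rumours at the time, not confirmed specs):

```python
def memory_bandwidth(mem_clock_mhz, bus_width_bits):
    """Per-pin data rate (Gbps) and total bandwidth (GB/s) for GDDR6/GDDR6X.

    The x2 accounts for double data rate and the x4 for quad-pumped signalling,
    matching the arithmetic in the post above.
    """
    per_pin_gbps = mem_clock_mhz * 2 * 4 / 1000        # e.g. 1750 MHz -> 14 Gbps per pin
    total_gb_s = per_pin_gbps * bus_width_bits / 8     # spread across the bus, bits -> bytes
    return per_pin_gbps, total_gb_s

# Clock and bus figures as quoted in the post (RTX 4070 numbers were rumoured).
for name, clock_mhz, bus_bits in [("RTX 3070", 1750, 256), ("RTX 4070 (rumoured)", 2625, 192)]:
    pin, total = memory_bandwidth(clock_mhz, bus_bits)
    print(f"{name}: {pin:.0f} Gbps per pin x {bus_bits}-bit bus = {total:.0f} GB/s")
```

Run as-is, this prints 448 GB/s for the RTX 3070 and 504 GB/s for the rumoured RTX 4070 configuration, i.e. the ~12.5% increase mentioned above.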
LucShep Posted August 22, 2022 (edited)

12 hours ago, SkateZilla said: The bus width doesn't directly affect render performance. (...)

That's a different subject you're discussing now. I understand what you mean there and, yes, the current trend is to cut cost and also aim for (somewhat) lower power consumption and temperatures, instead of developing different ways to stack memory or to increase bus and bandwidth. Like always, compromise is the point in any business, as ROI for investors is what mandates decisions, and GPUs are no exception. But that is a different matter.

The previous point was whether bus and bandwidth size affect performance or not. You said, and I quote, "Wherever the rumor that a wider bus width offers more performance started or came from, I honestly don't know." Whereas I said that there is some evidence to indicate that it can and does affect performance, depending on the situation/variables. It could be verified that it did in the past (GTX 960 vs R9 380, like I mentioned) and it can be verified to some extent in the current gen of AMD vs Nvidia (RTX 3080 vs RX 6800 XT, or RTX 3090 vs RX 6900 XT, also RTX 3070/3060 Ti vs RX 6750/6700 XT).

There's a very old article (a decade+ old) that discusses the matter for dedicated laptop GPUs. Even if things are different there, maybe you'll find it interesting: https://www.realworldtech.com/gpu-memory-bandwidth/

I'm now digressing but, on this matter, HBM was actually hugely favorable in terms of memory bus and bandwidth, just far too expensive in comparison to GDDR6. Who knows, maybe we'll see an iteration of HBM2 again in the future, considering how impressive it was in that regard (up to a 4096-bit bus!).

My initial point is, and still stands, that DCS blatantly favours Nvidia over AMD. That much I could attest to after testing my RX 5700 XT vs an RTX 2070 (same performance, according to reviews) with the same exact in-game settings and respective driver settings. The latter ran far better than the former in DCS 2.7, which is why I decided on the RTX 3060 Ti (instead of an RX 6700 XT), a choice I certainly do not regret so far. Plus, it seems, those who have swapped their RX 6800/6900 XT for an RTX 3080/Ti/3090 can also confirm that the usual parity of all these models is thrown out of the window once you run DCS at 1440P+ resolutions, in VR, or with triple screens.

Be it drivers, size of bus and bandwidth, or simply bad optimization of this game, what I'll say is that, considering straight comparisons and feedback from users/owners (myself included) who have used both, I'd always recommend Nvidia over AMD if getting a GPU specifically for DCS World, every single time.

Edited August 22, 2022 by LucShep
SkateZilla Posted August 22, 2022 (edited)

4 hours ago, LucShep said: That's a different subject you're discussing now. (...) My initial point is, and still stands, that DCS blatantly favours Nvidia over AMD. (...)

That's a tale of drivers. I'm still on a now over-ten-year-old R7970 Lightning, and even the latest legacy drivers have built-in, driver-level presets that just kill DCS performance. The same problem exists across most DX11 games, which is why AMD tried to put out a specific DX11 driver a while back, and it didn't really do much in heavy draw-call engines.

DX11_0 is legacy status now for both AMD and nVidia; however, nVidia's driver stack/pipeline for DX11 was always superior to AMD's. AMD's driver overhead, plus DX11's API overhead, causes performance issues. Plus, AMD continued to use GDDR5 when nVidia had already moved to GDDR6, and then AMD went to GDDR6 when nVidia moved to GDDR6X. Their HBM and HBM2 experiments set them behind in the memory department.
Edited August 22, 2022 by SkateZilla
LucShep Posted August 22, 2022 (edited)

1 hour ago, SkateZilla said: That's a tale of drivers. (...)

Indeed, that seems to be the case.

Now I'm wondering, have you tried DXVK? Specifically, the latest DXVK ASYNC version (FROM HERE) gave performance results with my AMD RX 5700 XT that were interesting, to say the least, though there are some graphical bugs, such as contrails becoming continuous puffs of white smoke, among a few others. Maximum frames were not increased, but the stuttering down low was almost completely solved, with much improved minimum framerates, incredible at times. It didn't do much for either the GTX 1070 I also had or the RTX 3060 Ti I'm using, though (it seems not as effective with Nvidia).

If it were not for the few graphical bugs, this would be a pretty good "hack" to bring up some "minimum FPS" performance in DCS out of AMD GPUs, old or new, I believe.

Edited August 22, 2022 by LucShep
Flappie Posted August 22, 2022

(Small parenthesis: the continuous puffs of white smoke bug is usually solved by increasing the tessellation level to the max.)
DishDoggie Posted August 24, 2022

System Manufacturer: Gigabyte Technology Co., Ltd.
System Model: X570 GAMING X
System Type: x64-based PC
Processor: AMD Ryzen 9 3950X 16-Core Processor, 3501 MHz, 16 core(s), 32 logical processor(s)
BaseBoard Manufacturer: Gigabyte Technology Co., Ltd.
BaseBoard Product: X570 GAMING X
Platform Role: Desktop
Installed Physical Memory (RAM): 64.0 GB
GPU: AMD Radeon RX 6900 XT 16GB

I run DCS on an extra-wide Acer monitor (3440 x 1440, 144Hz), but in VR 99% of the time. In 2D flat-screen mode I can get over 100 FPS; in VR you will always take a hit well below the flat-screen FPS. I also use every VR mod advantage I can get my hands on and cut down on memory-hog programs loading in the background when running VR, like your VR home, no matter which headset you run. So for me SteamVR is gone; I use OpenXR Toolkit (and the OpenXR Toolkit Companion), OpenComposite, WMR, and SkySpaces (a low-memory VR home environment). Around 3GB is used just for the VR home page that you only use to launch DCS from, so why let it eat your GPU memory up?

My point here is: no matter how hot a system you have, VR will kick its virtual butt. I will admit that, with all of this, I still need to run with motion reprojection on in VR in DCS, for no other reason than to not have to deal with video problems, so I run at 45 FPS with it on, no less than 45. My settings, I will say, are on High, and I fly low to the ground in the Huey 99% of the time. If I go WWII flying I can turn it off and get as high as 90 fps up in the air. At ground level it will drop back to 50 or the high 40s if the map is not a problem and the airfield is stock.

I went with AMD this time as a test because they were making what looked to be a good card. Is it the best for DCS? I will not say that, but what I have is getting it done for now in VR with no regrets. I don't think I would go lower than an RX 6900 XT on the AMD side, and I would go with a 3080 on the Nvidia side; all the testing done by Gamers Nexus alone shows it is a better card overall. If you must stick with the two cards you have picked, go with the 3060 Ti.

My RX 6900 XT Red Devil is a power hog requiring a 1000-watt PSU to run. It looks like a Con Ed substation; the power cables are a full 3"-wide mass of wires coming from the PSU. Lol. I am very happy with it, but I'm not sure an RX 6700 XT could make me feel that way in VR in DCS. In flat-screen games my card rocks Ultra settings all the time.
DishDoggie Posted August 25, 2022

Also, I would hold off on the GPU if you can. Prices are dropping, and Gamers Nexus is saying that in just a few more months this will happen again, because there will be some new high-end cards coming out and prices will drop further. Nvidia cards are just now starting to drop a little, but they will drop more if you can wait.
SkateZilla Posted August 27, 2022 (edited)

Consider that an RTX 4060 will beat the RTX 3080, and even the RTX 3090 in some numbers, and that the RTX 4080 will reportedly require a 1500W Gold PSU. From what I've seen from nVidia and AMD, there will be a generational leap in render performance this time around; every 4 or 5 years we get a massive jump, and this is that year for both GPU companies. Granted, there wasn't a big leap from GTX to RTX outside of ray tracing; they are essentially the same design, and Tensor cores were already on GTX cards, just disabled via laser cut.

GPU cycles are usually:
1. New architecture (usually a large performance leap in one or more categories over the previous generation) -> e.g. RTX 2080 Ti (Turing) / RX 5700 XT (rDNA)
2. Architecture refinements + die shrinks (higher clocks and lower TDP) -> RTX 3080 Ti (Ampere) / RX 6900 XT (rDNA2)
3. Architecture refinements + higher yields (higher clocks and lower per-unit cost) -> RTX 3090 Ti (Ampere) / RX 6950 XT (rDNA2)

Sometimes there's a 4th refinement cycle, but lately it's been three and out, except for AMD: they stuck with GCN for way too long and they know this. Believe it or not, that was a management decision, same as sticking with CMT cores from the Bulldozer architectures.

So nVidia is moving on to Lovelace and AMD to rDNA 3. AMD is moving to a chiplet design, where a chiplet fabricated at a smaller size (so more usable chips per wafer) can be linked via an interposer layer to create larger chips at a fraction of the cost, versus the traditional method of trying to fabricate large chips with fewer units per wafer due to size and wafer defects. This allows AMD to limit production lines to mainstream chips, with higher yields per wafer and lower costs, and to run fewer fabrication lines (i.e. one for entry / eSports, another for mainstream and enthusiast). Instead of 4+ lines they are running two: entry-level GPUs would have one small chiplet and eSports would have two; mainstream would have one medium-sized chiplet and enthusiast would have two.

As a rough example: if AMD's medium-sized chiplets get 32 dies per wafer, and a wafer has 2 defects, you would likely still get 30 usable dies for 30 (or 15) cards. If you make a larger monolithic die and only get 16 dies per wafer with 2 defects, you get 14 usable dies for 14 total high-end cards, which would cost more due to dies per wafer and wafer cost.

Essentially AMD is going to micro-chiplets, where entry level is 1 chiplet, eSports would be 2, mainstream would be 3, enthusiast would be 4, etc. One manufacturing line producing small dies at 6-7x the yield per wafer would drive costs down on even the high-end cards. Die wafers are circular, so larger dies also waste more space at the edges; a smaller die, even a perfect 1/4-size die, would yield 6-7x the dies of the larger one, due to the wasted edge space and the size of the die.

So, assuming a fictional number where a wafer costs $10,000, you get: 10 usable high-end chips out of it, each chip at a base price of $1,000; or 66 usable chiplets out of it, each at a base price of ~$150. Now, 4 of those chiplets would statistically equal 1 high-end chip. So, assuming that chip cost, a fictional VRAM cost (let's say $150), an overall PCB cost (let's say $75), and a minimum 30% markup margin: a single high-end GPU would have a unit cost of $1,225 and likely sell for a $1,599 MSRP, while a chiplet GPU of the same spec would have a unit cost of $825 and likely sell for a $1,099 MSRP. So, fictionally, that's about a 30% decrease in cost for statistically the same performance (a small worked sketch of this arithmetic follows below this post).

Then there are also the power-saving features: the card can turn off entire chiplets when they're not being used and, in combination with the dynamic clock rates used by current GPUs, you'd cut your high-performance card to the same power draw as an entry-level card by disabling 3/4 of the chiplets and running reduced clocks.

Now the only thing AMD has to do is make sure the chiplets perform as they should and that there are no issues with the interposer layer. Seeing what their chiplet-powered AI processors are doing, I don't think that will be a big problem. I foresee some game and driver issues, but nothing major.

Edited August 27, 2022 by SkateZilla
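The worked sketch referenced above: this simply replays the fictional numbers from the post (wafer cost, usable die counts, VRAM/PCB costs, 30% markup), so it illustrates the reasoning rather than real manufacturing data.

```python
# All figures are the illustrative numbers from the post above, not real data.
WAFER_COST = 10_000          # fictional wafer cost
VRAM_COST, PCB_COST = 150, 75
MARKUP = 1.30                # minimum 30% margin assumed in the post

# Monolithic: 10 usable high-end chips per wafer -> $1,000 per chip.
mono_chip_cost = WAFER_COST / 10
mono_unit_cost = mono_chip_cost + VRAM_COST + PCB_COST

# Chiplet: 66 usable chiplets per wafer -> ~$151 each, rounded to $150 as in
# the post; 4 chiplets are treated as equal to 1 high-end chip.
chiplet_cost = round(WAFER_COST / 66, -1)
chiplet_unit_cost = 4 * chiplet_cost + VRAM_COST + PCB_COST

print(f"Monolithic GPU: unit cost ${mono_unit_cost:.0f}, MSRP around ${mono_unit_cost * MARKUP:.0f}")
print(f"Chiplet GPU:    unit cost ${chiplet_unit_cost:.0f}, MSRP around ${chiplet_unit_cost * MARKUP:.0f}")
print(f"Unit cost reduction: {(1 - chiplet_unit_cost / mono_unit_cost) * 100:.0f}%")
```

This prints unit costs of $1,225 and $825; the $1,599 and $1,099 MSRPs in the post are retail-style round-ups of those costs times the 30% markup, and the roughly one-third unit-cost reduction is the "about 30%" figure mentioned above.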