BranchPrediction Posted April 25, 2020 xD hahah. Must be nice to be rich tho, shiiit. Just buying the best there is. Damn. My whole PC cost 1030ish euros. 2080 Ti price xD.
Gnadentod Posted April 25, 2020 Good to know. Thanks!

You need the MorePowerTool by igorslab to be able to downclock VRAM; it's not an option in the normal driver.

Since we're talking about the Navi drivers from AMD: Hardware Unboxed, a YouTube channel, did a survey about the new AMD driver that supposedly fixed the issues. 35% of people said they had no issues to begin with, 45% said the new drivers fixed their issues, and 20% said they still have issues. You can fact-check these numbers on their YouTube channel, and it has probably only gotten better since that survey. Navi is fantastic performance for a good price. On Nvidia's side, 17% had issues AT LEAST AT SOME POINT in the card's lifetime. That 17% is different from AMD's 20% because it counts anyone who had issues at any point in the card's lifetime. And Turing cards died at launch with the 'UFO' artifacts. In any case, hope that made sense. Cheers.

Of course the drivers get better and fix small or even big stuff here and there, and that goes for both sides. But the wave of people screaming at AMD about black screens under load is not a driver problem but too highly clocked VRAM. Earlier this century this was more common knowledge and people knew how to handle it, but since today every kiddo has access to computers and the internet, the outcry is a lot bigger. It also doesn't help that those crying at AMD about their drivers feel comfortable in their echo chamber: even if you give them the solution to their problem, most of the time they don't even try it out and keep screaming at AMD for "bad drivers"...
BranchPrediction Posted April 25, 2020 Is the VRAM really clocked too high? Why did AMD clock it up so high?
deadpool Posted April 25, 2020 You need the MorePowerTool by igorslab to be able to downclock VRAM, it's not an option in just the normal driver. […] Wait a moment? In other threads you are defending broken modules because they are EA and because you got what's on the packaging. And here you are not defending people for expecting the clock speeds that were mentioned on the packaging? Make up your mind, DerHirte. Lincoln said: "Nearly all men can stand adversity, but if you want to test a man's character, give him power." Do not expect a reply to any questions, 30.06.2021 - Silenced by Nineline
DocSigma Posted April 25, 2020 The cost is so prohibitive. As much as I would want a Ti card, I just can't justify the new-normal super-high price of the Ti. Maybe if they upped the VRAM on the card from 11GB or something, but still probably not. Ryzen9 5800X3D, Gigabyte Aorus X570 Elite, 32Gb Gskill Trident DDR4 3600 CL16, Samsung 990 Pr0 1Tb Nvme Gen4, Evo860 1Tb 2.5 SSD and Team 1Tb 2.5 SSD, MSI Suprim X RTX4090 , Corsair h115i Platinum AIO, NZXT H710i case, Seasonic Focus 850W psu, Gigabyte Aorus AD27QHD Gsync 1ms IPS 2k monitor 144Mhz, Track ir4, VKB Gunfighter Ultimate w/extension, Virpil T50 CM3 Throttle, Saitek terrible pedals, RiftS
Wdigman Posted April 25, 2020 Will these new RTX 3000 cards be PCI-E 3.0 or 4.0? I imagine that will make a big difference too.
BranchPrediction Posted April 25, 2020 PCIe 3.0 is still enough, though. But just maybe the top cards will benefit from PCIe 4.0. I doubt it tho.
Svsmokey Posted April 25, 2020 More VRAM will be one of my primary considerations. I expect DCS Vulkan will land during the next card's life-cycle. 9700k @ stock , Aorus Pro Z390 wifi , 32gb 3200 mhz CL16 , 1tb EVO 970 , MSI RX 6800XT Gaming X TRIO , Seasonic Prime 850w Gold , Coolermaster H500m , Noctua NH-D15S , CH Pro throttle and T50CM2/WarBrD base on Foxxmounts , CH pedals , Reverb G2v2
BranchPrediction Posted April 25, 2020 You think the top cards will get 16GB? I hope AMD does an HBM2 or HBM2E card. But that would almost certainly cost at least a thousand.
DocSigma Posted April 25, 2020 Don't forget Intel's Xe discrete cards are set to drop this summer. This would be really good if they can put out a competitive card.
derneuemann Posted April 27, 2020 The advantages of PCIe 4.0: for a single GPU with enough memory there is almost no difference, barely even measurable. But when the VRAM is full, it can make the difference between playable and not playable. I5 13400F, 32GB DDR5 6200 CL30, RTX4070ti Super 2x 1tb m.2 (PCIe4.0)
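The point above can be put into rough numbers. A back-of-the-envelope sketch (the per-lane rates are approximate effective figures after 128b/130b encoding overhead, and the 2 GB spill size is just an illustrative assumption):

```python
# Back-of-the-envelope: why the PCIe generation matters once VRAM overflows
# and assets have to be streamed over the bus from system RAM.

def pcie_bandwidth_gb_s(gen: int, lanes: int = 16) -> float:
    """Approximate effective bandwidth in GB/s for a PCIe link."""
    per_lane = {3: 0.985, 4: 1.969}  # GB/s per lane after encoding overhead
    return per_lane[gen] * lanes

def overflow_transfer_ms(overflow_gb: float, gen: int) -> float:
    """Time in ms to shuttle `overflow_gb` of spilled assets over the bus."""
    return overflow_gb / pcie_bandwidth_gb_s(gen) * 1000

for gen in (3, 4):
    print(f"PCIe {gen}.0 x16: {pcie_bandwidth_gb_s(gen):.1f} GB/s, "
          f"2 GB spill takes ~{overflow_transfer_ms(2.0, gen):.0f} ms")
```

Roughly 15.8 GB/s vs 31.5 GB/s: either way the bus is an order of magnitude slower than on-card VRAM, which is why spilling hurts, but doubling it can be the difference between a stutter and a slideshow.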
Drag80 Posted April 27, 2020 ^ True You will need 4 goats to break even Goat: $300 2080Ti: $1200 hahaha this post is great. with proof.
SkateZilla Posted April 27, 2020 (edited) Revealed in August, not for sale. RTX 2000 was revealed in August 2018; Founders Editions were released Sept. 20, with cards being generally available by December. And I'd still wait for rDNA 2 cards. RTX 3000's "up to 50% faster" line is deceiving, as it applies mainly to ray tracing. If AMD's rDNA 2 cards with RT compete, it will force nVidia to drop prices to match. Edited April 27, 2020 by SkateZilla Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2), ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9) 3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs
BranchPrediction Posted April 27, 2020 Revealed in August, Not For Sale. RTX2000 Was Revealed in August 2018, Founders Editions were Released Sept. 20, with Cards being Generally Available By December. And not to mention that the 20-series cards were dying. Remember those 'UFO' artifacts xD. A 1000€ card and it just works/dies. Noice nvidia, noice.
SkateZilla Posted April 27, 2020 And not to mention that the 20 series cards were dying. Remember those 'UFO' artifacts xD. 1000€ card and it just works/dies. Noice nvidia, noice. Like every Titan card since the GTX Titan, the launch cards always seem to overheat and die, and there's press coverage of EVERY generation doing this. Flocking to launch cards on day 1 was never a smart thing IMHO; those cards are basically test units, and after articles and user feedback about problems, they change manufacturing procedures. AMD does the same thing: the 5700 XT 50th Anniversary Edition had the same problems, as did every large AMD chip before it. Me, once rDNA2 and RTX 3000 come out, I'll likely nab a clearance 5700 XT for cheap, or two. Crossfire/SLI is dead; Vulkan/DX12 mGPU isn't, and both of those communicate through the PCIe bus. The main difference between Turing and Ampere is twice the ray-tracing shaders and about 10-15% more CUDA cores depending on the model. However, due to the die shrink and heat, those cores may run up to 5% slower. You'll see jumps in ray-tracing performance, and games that use RT will see a small increase in performance. Outside of ray tracing, you may see a 5-10% performance increase from the 100-200 more cores, and maybe a faster memory bus. 10% more performance for $1000+ is not worth the price-to-performance jump.
BranchPrediction Posted April 27, 2020 Only 100 to 200 more CUDA cores? If that is so, then it will lose against rDNA2. Or maybe the performance will come from a better architecture. Ray tracing is cool; Turing just isn't good at it. I'd love to see cockpit global illumination and reflections being ray traced. Like in the Huey you have green-tinted glass above the pilots, but in DCS you don't see light being colored green by that glass.
Gryzor Posted April 27, 2020 Like Every Titan Card since the GTX Titan, The Launch Cards always seem to overheat and die […] 10% Performance for $1000+ is not worth the price to performance jump. Cores at 7nm are expected to run at higher MHz, not lower. Why do you say the opposite? I expect a gain that is more than marginal, maybe 40%, not only 10%. From the 1080 Ti to the 2080 Ti I gained about 30% in FPS in general use, without any changes in die size (the same 12nm in the 1080 Ti / 2080 Ti). It would be wise of you to cross-check the information a bit more.
Gnadentod Posted April 27, 2020 (edited) Wait a moment? In other threads you are defending broken modules because they are EA and because you got what's on the packaging. And here you are not defending people for expecting the clock speeds that were mentioned on the packaging? Make up your mind, DerHirte. An EA module is never a "broken" module. It's literally the developers developing the module with you as a tester. If you're not able to grasp this, then this isn't my fault and never will be. So I'm not "defending a broken module", because it isn't a "broken module". Everyone paid into this EA module willingly; I've yet to meet someone who was coerced into buying the F-16. And how does downclocking the VRAM by 50 MHz impact your performance and life in any meaningful way? It doesn't. You've seen how much people cry about the AMD drivers when all their problems could be solved by downclocking 50 or 100 MHz, which doesn't even measurably impact performance? It's almost impossible to even measure it. Back to refining your contextualizing skills, thank you very much. Edited April 27, 2020 by Der Hirte
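For scale, the bandwidth cost of such a downclock can be estimated. A quick sketch, assuming 5700 XT-class memory (14 Gbps GDDR6 on a 256-bit bus, base memory clock 1750 MHz); the numbers are illustrative, not a claim about any specific card:

```python
# Estimate how much memory bandwidth a small VRAM downclock costs.
# GDDR6 transfers 8 bits per pin per memory clock (QDR on a DDR interface).

def gddr6_bandwidth_gb_s(mem_clock_mhz: float, bus_bits: int = 256) -> float:
    """Effective bandwidth in GB/s for GDDR6 at a given memory clock."""
    data_rate_gbps = mem_clock_mhz * 8 / 1000  # per-pin data rate in Gbps
    return data_rate_gbps * bus_bits / 8       # whole-bus rate, bits -> bytes

base = gddr6_bandwidth_gb_s(1750)  # stock:  448 GB/s
down = gddr6_bandwidth_gb_s(1700)  # -50 MHz
print(f"{base:.0f} -> {down:.0f} GB/s ({(1 - down / base) * 100:.1f}% loss)")
```

A 50 MHz downclock costs under 3% of memory bandwidth, which supports the point that the stability fix is essentially free in benchmarks.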
SkateZilla Posted April 28, 2020 (edited) Cores at 7nm are expected to run at higher MHz, not lower. Why do you say the opposite? […] A 12nm -> 7nm die shrink, yes: core count stays the same, lower voltages, more TDP headroom, etc. A 7nm rebuild, where core counts are increased for both CUDA cores and RT cores, not so much. So you're going from 12nm to 7nm, which is a 43% reduction. Now you're taking the Turing design, adding likely 768-1024 CUDA cores at each level (3080 Ti, 3080, 3070), and doubling the RT cores completely. 10xx to 20xx was a big jump: an entirely new micro-architecture, with 4 years of development between the two. 20xx to 30xx is a revision and die shrink of the same micro-architecture, adding some SM blocks and doubling the RT cores in the space gained from the die shrink. rDNA2 is going to add RT, as well as VRS and a few other features, on top of cleaning up the power efficiency of rDNA1, increasing performance per watt by 50-65%. Unless the rDNA2 cards pack more SMs and cores, I don't see it keeping up with RTX 3000: a 50-65% increase in performance per watt is roughly a 33-40% increase over the 5700 XT, assuming they keep the same 40 C.U. and just ramp up the clocks with the new TDP headroom. That isn't perfect scaling: performance drops off at some point, overclocking gains level off as clocks increase, and power draw begins to spike. Adding say 256-512 more cores plus the RT cores would bring it up to the expected RTX 3000 performance level. So, rDNA1 vs rDNA2: rDNA1 40 C.U. (2560 cores) at 225w TDP vs rDNA2 40 C.U. (2560 cores) in ~150w TDP. A full-die rDNA2 card is reported to have an 80 C.U. max. That will likely never happen with rDNA1, as it would be 450-500w TDP. But now, if you want to sit under a 350w TDP, a 60-80 compute unit rDNA2 chip is possible depending on clocks. Edited April 28, 2020 by SkateZilla
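As a sanity check on the perf-per-watt arithmetic above, a toy model (the 1.50-1.65 gains and the 225 W baseline are the post's speculation, not measured figures, and real scaling is not this linear):

```python
# Toy model: performance ~ (perf per watt) x (power budget).
# Illustrates the claimed RDNA1 -> RDNA2 gains; speculative numbers.

def relative_perf(perf_per_watt_gain: float, power_ratio: float) -> float:
    """Performance vs baseline, given a perf/W multiplier and a
    new-TDP / old-TDP ratio, assuming perfectly linear scaling."""
    return perf_per_watt_gain * power_ratio

# Same 225 W budget as a 5700 XT, with +50% and +65% perf/W:
print([round(relative_perf(g, 1.0), 2) for g in (1.50, 1.65)])

# Same performance target instead: power needed falls to 1/gain,
# e.g. 225 W -> 150 W at +50% perf/W, matching the ~150w figure above.
print(round(225 / 1.50), "W")
```

The model also shows why "50-65% perf/W" need not mean "50-65% faster": if part of the gain is spent on lowering TDP (power_ratio below 1.0), the performance uplift lands in the 33-40% range the post estimates.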
derneuemann Posted April 28, 2020 a 12nm -> 7NM Dieshrink yes, core count stays the same, lower voltages, more TDP headroom, etc. […] AMD and Nvidia have each announced roughly 50% better efficiency. More specifically, Nvidia has announced that it will be much more efficient, with up to 50% more performance. First of all, these are advertising promises that have to be proven by independent tests. The fact is that AMD could not beat Turing with RDNA, even though it uses the newer process.
SkateZilla Posted April 28, 2020 (edited) AMD and Nvidia have announced 50% better efficiency. […] rDNA did not even field a fully enabled, full-size GPU; AMD kept the largest at 2560 cores due to power (they didn't want to go above 250w TDP), and a full-size rDNA chip was never fabbed for consumers. rDNA was the introduction of the new micro-architecture, so AMD did not want to push it, and they had a stockpile of GCN chips left to sell. So the 5700/5600 used rDNA, while the VII and other 5x0 cards used GCN. rDNA2 is considered a refresh of rDNA1, although rDNA2 adds RT and VRS. rDNA2 offers 50% better performance per watt, an IPC increase, as well as higher frequencies (likely due to GPU design tweaking as well as 7nm process node improvements). So, like I said, the same 2560 cores sucking 225w (260w realistically under load) would now only use 150w (190w realistically) while gaining IPC and higher clocks. That by itself would be enough for an "rDNA2" to match a 2080 Ti. Now take into account that the full-size rDNA2 is 5120 cores. So it's likely the 6700XT/6700 would be refreshes of the 5700XT/5700 cards with RT, VRS, and better performance. But now you can get 5120 cores under a 300w TDP ceiling (350w realistically). So I'd expect AMD to push out, if needed: 5800XT/5800 with core counts in the 3840/3456-core area, and 5900XT/5900 with core counts in the 5120/4608-core area. These parts would, on paper, easily match if not exceed the RTX 3080/3080 Ti cards. You might not even see a 5900 series: if the 5800 matches up to the RTX 3080/3080 Ti closely enough, AMD will undercut the prices and sell cheaper with less than 10% performance difference in benchmarks. Which is smart, as a full-size rDNA2 GPU would cost more to fabricate and take up more wafer space. Fab the GPUs that sell; plus, with smaller GPUs you can fab more per wafer. Like I said before in another thread, nVidia doesn't care about fab costs. They will make a HUGE GPU that sucks 400w just to say they have the fastest GPU in the world, and end users flock to nVidia just because they say that, but hardly anyone will buy said GPU. Everyone here might have 2080/1080/Tis. But the truth is, if you look at GPU sales, the 2080/Tis and the 1080/Tis of the previous gen were outsold by the 1060/1060 Ti and 2060/2060 Ti by a ratio close to 300:1, if not more. RTX sales are nowhere near what nVidia wanted, and you can blame the prices for that, as well as smaller-than-expected performance jumps. Like I said, the price:performance ratio wouldn't warrant me buying an RTX 2080 to upgrade my GTX 1080, and so on. Edited April 28, 2020 by SkateZilla
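The core-count guesses in the posts above can be put into the same kind of toy model (all figures are the posters' speculation: a 2560-core 5700 XT-class baseline and a hypothetical 5120-core full die, with naive linear scaling):

```python
# Toy estimate: performance scales with core count x clock versus a
# 5700 XT-class baseline. Purely illustrative; real GPUs scale sub-linearly.

def est_relative_perf(cores: int, clock_ghz: float,
                      base_cores: int = 2560, base_clock: float = 1.9) -> float:
    """Naive linear performance estimate relative to the baseline part."""
    return (cores / base_cores) * (clock_ghz / base_clock)

# A speculated 80-CU (5120-core) part at similar clocks:
print(f"{est_relative_perf(5120, 1.9):.2f}x")         # prints "2.00x"

# Apply a rough sub-linear-scaling factor to temper the paper number:
print(f"{est_relative_perf(5120, 1.9) * 0.85:.2f}x")  # prints "1.70x"
```

The 0.85 factor is an arbitrary illustration of why "double the cores" never quite means double the frames: memory bandwidth, power limits, and workload parallelism all take their cut.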
derneuemann Posted April 28, 2020 The 2080 Ti is the old man now... RTX 3000 is the new answer to RDNA2... So, we will see.
Nightstalker Posted April 28, 2020 The cost is so prohibitive. As much as I would want a ti card, just cant justify the new normal super high price of the Ti. Maybe if they upped the VRAM on the card from 11Gb or something but still probably not. I absolutely love my 2080 Ti. I agree the prices have gotten stupid, but remember one thing: it affects the price of older cards as well. I sold my 980 Ti for almost the same price I paid for it three years before, due to people not wanting to pay the price of the new cards. That cut the cost of my new one almost in half. Getting into the top-end market is the kick in the nuts, but after that it's much easier to deal with. There's nothing like running DCS on max settings at 100+ frames per second in 2K (at least pre-2.5.6, which I hope to get sorted out). My whole reason for not stepping to VR is that I never want to go back to 40-ish or fewer frames per second. No thanks, I want the smooth feel and all of the eye candy.
BranchPrediction Posted April 28, 2020 2080TI is the old man now... The RTX3000 is the new answer to the RDNA2... So, we will see. RDNA2 will be competing with the 30 series, not Turing. And this is just great; we need competition.
DocSigma Posted April 28, 2020 I absolutely love my 2080ti. I agree the prices have gotten stupid but remember one thing, it affects the price of older cards as well. […] That's just it. I run DCS with all settings maxed at 2K and I am over 100 FPS. Maybe performance in VR might be a little better, but I use the Rift, and with that I run high settings and get very good performance. If I were set on running things at 4K or greater resolution, I'd think a Ti would be tempting, but as it is, and with my current rig, I wouldn't experience any difference between a Ti and my Super. Frames are evil.. I only look at them if I experience stutters or slowdowns. I checked my frames when I got the system up and running to gauge performance, but now I don't bother, as I get no slowdowns. Again, it's 2K or the Rift S.