TZeer Posted February 11 Seems the 5090 is even worse than the 4090 when it comes to potentially melting cables. 130-150 degrees Celsius on the actual connection. Uneven power distribution through the cable. Does not look good.
Nightdare Posted February 11 600W cable, GPUs pulling 630 watts consistently... what could possibly go wrong? Intel I5 13600k / AsRock Z790 Steel Legend / MSI 4080s 16G Gaming X Slim / Kingston Fury DDR5 5600 64Gb / Adata 960 Max / HP Reverb G2 v2 Virpil MT50 Mongoost T50 Throttle, T50cm Base & Grip, VFX Grip, ACE Interceptor Rudder Pedals w. damper / WinWing Orion2 18, 18 UFC & HUD, PTO2, 2x MFD1 / Logitech Flight Panel / VKB SEM V / 2x DIY Button Box
LucShep Posted February 11 (edited) 58 minutes ago, TZeer said: Seems the 5090 is even worse than the 4090 when it comes to potentially melting cables. 130-150 degrees Celsius on the actual connection. Uneven power distribution through the cable. Does not look good. Yep. And he already did a video on the RTX 4090 before, explaining the issues.* If the 5090 is now pushing up to 25% more power than the previous 4090, then of course it's now a problem of "when", not "if". Undervolting the 5090 is an absolute "must do", even more than before. To put it simply, this connector should never have been used. And if they insisted on it, there should have been two of them on the 5090 and 4090. * If you're short on time or patience, skip to the 13:27 mark of the video: Edited February 11 by LucShep CGTC - Caucasus retexture | A-10A cockpit retexture | Shadows Reduced Impact | DCS 2.5.6 - a lighter alternative Spoiler Win10 Pro x64 | Intel i7 12700K (OC@ 5.1/5.0p + 4.0e) | 64GB DDR4 (OC@ 3700 CL17 Crucial Ballistix) | RTX 3090 24GB EVGA FTW3 Ultra | 2TB NVMe (MP600 Pro XT) + 500GB SSD (WD Blue) + 3TB HDD (Toshiba P300) + 1TB HDD (WD Blue) | Corsair RMX 850W | Asus Z690 TUF+ D4 | TR PA120SE | Fractal Meshify-C | UAD Volt1 + Sennheiser HD-599SE | 7x USB 3.0 Hub | 50'' 4K Philips PUS7608 UHD TV + Head Tracking | HP Reverb G1 Pro (VR) | TM Warthog + Logitech X56
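[Editor's note] For anyone acting on the undervolting advice above, the only part that is easily scripted is capping the board power limit with nvidia-smi. This is a minimal sketch, not LucShep's method; power limiting is not the same thing as a V/F-curve undervolt (done in a tool like MSI Afterburner), only a related stopgap. The GPU index and the 450 W figure are placeholder assumptions, not recommendations.

```python
# Minimal sketch: cap the GPU's software power limit via nvidia-smi.
# NOT a true undervolt; it only limits how much power the board may draw.
# Values are illustrative assumptions, not recommendations.
import subprocess

GPU_INDEX = 0       # first GPU in the system (assumed)
TARGET_WATTS = 450  # example cap; must stay within the card's allowed range

# Requires administrator/root rights; -pl sets the power limit in watts.
subprocess.run(
    ["nvidia-smi", "-i", str(GPU_INDEX), "-pl", str(TARGET_WATTS)],
    check=True,
)
```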
Panzerlang Posted February 11 Hopefully somebody (or a class-action) will sue the spivs.
scommander2 Posted February 11 Questions: 1. Will a DIY cable make any difference compared to the Nvidia-provided cable? 2. Will DCS draw that level of power from the PSU? Thanks. Spoiler Dell XPS 9730, i9-13900H, DDR5 64GB, Discrete GPU: NVIDIA GeForce RTX 4080, 1+2TB M.2 SSD | Thrustmaster Warthog HOTAS + TPR | TKIR5/TrackClipPro | Total Controls Multi-Function Button Box | Win 11 Pro
SharpeXB Posted February 11 (edited) The 50-series cards can also use the improved 12V-2x6 design. Definitely a worthwhile replacement if you're going that route. I'm sure these YouTubers know that despite the clickbait titles. https://www.corsair.com/us/en/explorer/diy-builder/power-supply-units/what-power-cable-does-the-nvidia-geforce-rtx-5090-use/ 41 minutes ago, Panzerlang said: Hopefully somebody (or a class-action) will sue the spivs. Already happened; it was dismissed: https://www.classaction.org/news/nvidia-geforce-rtx-4090-melting-class-action-alleges-graphics-card-sold-with-defective-power-cable-plug-socket#:~:text=Filed: November 11%2C 2022 ◆,org's free weekly newsletter here. Edited February 11 by SharpeXB i9-14900KS | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | ASUS TUF GeForce RTX 4090 OC | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5
TZeer (Author) Posted February 11 In general: 1: No, the cable is similar. The cable is nothing high-tech. The problem in the video is an extremely uneven distribution of power through the different wires. Some of the 3rd-party cards have monitoring to check for these problems, but the Nvidia Founders card does not. Regardless, it's a <profanity>ty design and extremely poor QA on Nvidia's side. 2: It depends on how much load there is on the GPU.
scommander2 Posted February 11 (edited) 3 hours ago, TZeer said: 2: It depends on how much load there is on the GPU. Thanks for the feedback... I totally agree that objects on the terrain add more load: things like trees, AI planes, etc. I have to salute the 50-series pioneer owners who share their issues so that Nvidia and other vendors can improve the products. Edited February 11 by scommander2 Spoiler Dell XPS 9730, i9-13900H, DDR5 64GB, Discrete GPU: NVIDIA GeForce RTX 4080, 1+2TB M.2 SSD | Thrustmaster Warthog HOTAS + TPR | TKIR5/TrackClipPro | Total Controls Multi-Function Button Box | Win 11 Pro
EightyDuce Posted February 11 Part of the issue, in that particular case, seems to stem from Nvidia not actually using any monitoring of the sense pins to do proper load balancing. The cable/connector spec is rated for 660W by Molex (600W certified by Nvidia), while the 5090's stock TDP is 575W. Real tight margins... Basically, the 5090 should have two connectors with proper load balancing. Windows 11 23H2| ASUS X670E-F STRIX | AMD 9800X3D@ 5.6Ghz | G.Skill 64Gb DDR5 6200 28-36-36-38 | RTX 4090 undervolted | MSI MPG A1000G PSU | VKB MCG Ultimate + VKB T-Rudders + WH Throttle | HP Reverb G2 Quest 3 + VD
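[Editor's note] To make the "tight margins" point concrete, here is a back-of-envelope sketch (my own illustration, not from the video) using the figures quoted above and assuming a 12 V rail with six current-carrying pins sharing the load evenly:

```python
# Back-of-envelope margin check for a single 12V-2x6 / 12VHPWR connector.
# Figures are the ones quoted in this thread; the even six-way split is an
# idealised assumption, which is exactly what the cards don't enforce.
TDP_W      = 575    # RTX 5090 stock board power
NVIDIA_W   = 600    # Nvidia's rating for the connector
MOLEX_W    = 660    # Molex's rating for the connector family
RAIL_V     = 12.0   # nominal rail voltage
POWER_PINS = 6      # 12V pins that share the current

total_amps   = TDP_W / RAIL_V            # ~47.9 A in total
amps_per_pin = total_amps / POWER_PINS   # ~8.0 A per pin, if perfectly balanced

print(f"Total current at {TDP_W} W: {total_amps:.1f} A")
print(f"Per-pin current with an even split: {amps_per_pin:.1f} A")
print(f"Headroom vs Nvidia's 600 W rating: {(NVIDIA_W - TDP_W) / NVIDIA_W:.0%}")
print(f"Headroom vs Molex's 660 W rating:  {(MOLEX_W - TDP_W) / MOLEX_W:.0%}")
```

Run as-is this prints roughly 48 A total, about 8 A per pin, and only around 4% / 13% headroom against the two ratings, before transients or manufacturing tolerances are considered.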
Aapje Posted February 11 Nvidia first makes a design with almost no margin for error, and then they don't even build in any safeguard to prevent all the power going over one or two wires. It's an absolute disgrace that this is what you get for (at least) $2k.
TheBiggerBass Posted February 11 This excessive power consumption - and price tag - are the main reasons why I don't want a 5090 or even a 4090 in my system. My 4070 Ti Super already draws about 300W. That has to be enough. DCS: A-10A Flaming Cliffs, F-4E Phantom II, F/A-18C, Normandy System: HP Z2 Tower, Win11 24H2, i9-14900K, 64GB RAM, 2TB SSD (M2) + 18TB HDD (Sata), GeForce RTX4070 TI Super 16GB VRAM, Samsung Odyssey 57" curved monitor (main screen) + BenQ 32" UW3270 (secondary screen), VelocityOne Flight Desk
BitMaster Posted February 11 HOW STUPID must Nvidia be to NOT BE ABLE to produce a reliable 600W cable. No rocket science, no nuclear tricks, no 10k kPa, PLAIN NOTHING that rests on their side of the scale. Almost ANY household appliance I have draws 600+ watts, just to give you a picture, guys. No, Nvidia refuses to learn the lesson. This is so sad. And I will not suggest, sell, install, or fix any of those cards. Nada. Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Sapphire Nitro+ 7800XT - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus XG27ACG QHD 180Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X
Panzerlang Posted February 11 7 hours ago, SharpeXB said: Already happened; it was dismissed: https://www.classaction.org/news/nvidia-geforce-rtx-4090-melting-class-action-alleges-graphics-card-sold-with-defective-power-cable-plug-socket#:~:text=Filed: November 11%2C 2022 ◆,org's free weekly newsletter here. Quite clearly the judge took a brown envelope from NVidia. There was no legitimate defence available for NVidia and absolutely no possible way for that case to be legitimately dismissed. Yet another egregious example of corrupt corporations leveraging a corrupt judiciary.
SharpeXB Posted February 11 12 minutes ago, Panzerlang said: Quite clearly the judge took a brown envelope from NVidia. There was no legitimate defence available for NVidia and absolutely no possible way for that case to be legitimately dismissed. Yet another egregious example of corrupt corporations leveraging a corrupt judiciary. They admit guilt by putting a better connector on their next card. I got to experience this one firsthand, and now I'm the owner of the world's most expensive drink coaster. i9-14900KS | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | ASUS TUF GeForce RTX 4090 OC | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5
Pilotasso Posted February 11 (edited) Yeah, I had issues with the 4090 (ASUS STRIX). The 12VHPWR cable adapter that came in the box caused me stability issues. The card showed a red LED at the power plug indicating something was wrong, but it was still capable of running benchmarks even. Then I bought a CableMod 12VHPWR-to-PCI-e cable with 4 connections to the PSU. The LED still showed red and the problems continued. Then I bought an ATX 3 PCI-e-to-12VHPWR cable from the PSU manufacturer (it wasn't initially available); the LED went off and it has been rock solid ever since. The cables don't heat up. Just don't use 3rd-party cables. If you don't have an adapter from the PSU brand, just save yourself the trouble and buy a new 1000W+ PSU with one dedicated 12V plug and cable. Edited February 11 by Pilotasso .
LucShep Posted February 12 (edited) Honestly, the more I read about this, the more I find it (far) worse than what happened with Intel Raptor Lake CPUs. 8 hours ago, EightyDuce said: Part of the issue, in that particular case, seems to stem from Nvidia not actually using any monitoring of the sense pins to do proper load balancing. The cable/connector spec is rated for 660W by Molex (600W certified by Nvidia), while the 5090's stock TDP is 575W. Real tight margins... Basically, the 5090 should have two connectors with proper load balancing. Exactly. No safety monitoring and too little margin for error. The power port on the 5090 overheats and burns because the current isn't evenly distributed across the pins. In addition, it should have had at least a second connector (not just one). As the Der8auer tests show, one of the 12-pin connector's wires draws 23 amps (over double what it's supposed to carry?!?), causing temperatures to spike to 150°C. That's crazy. A connector heating up to 70°C while the connector on the PSU side gets extremely hot at 150°C, all in just a few minutes of benchmark testing, is perhaps all that needs to be seen. And now think about this - if the mentioned headroom of 5% (Nvidia 600W rating) or 15% (Molex 660W rating) is already too little in itself, then one has to wonder about the possible transient spikes, possibly going over 750W(?), in that puny cable and connector on each end, all prone to manufacturing tolerances... What I wonder now is, seeing how this can get dangerous real quick, how could this have passed safety testing for the consumer market, etc.? 7 hours ago, Aapje said: Nvidia first makes a design with almost no margin for error, and then they don't even build in any safeguard to prevent all the power going over one or two wires. It's an absolute disgrace that this is what you get for (at least) $2k. Always good videos from Buildzoid. Edited February 12 by LucShep CGTC - Caucasus retexture | A-10A cockpit retexture | Shadows Reduced Impact | DCS 2.5.6 - a lighter alternative Spoiler Win10 Pro x64 | Intel i7 12700K (OC@ 5.1/5.0p + 4.0e) | 64GB DDR4 (OC@ 3700 CL17 Crucial Ballistix) | RTX 3090 24GB EVGA FTW3 Ultra | 2TB NVMe (MP600 Pro XT) + 500GB SSD (WD Blue) + 3TB HDD (Toshiba P300) + 1TB HDD (WD Blue) | Corsair RMX 850W | Asus Z690 TUF+ D4 | TR PA120SE | Fractal Meshify-C | UAD Volt1 + Sennheiser HD-599SE | 7x USB 3.0 Hub | 50'' 4K Philips PUS7608 UHD TV + Head Tracking | HP Reverb G1 Pro (VR) | TM Warthog + Logitech X56
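[Editor's note] A short illustration of why that single 23 A wire matters so much: resistive heating in a wire scales with the square of the current, so the exact wire resistance (assumed below) matters less than the ratio between the measured and the nominal current.

```python
# Resistive (I^2 * R) heating in one 12V wire: nominal vs the 23 A Der8auer measured.
# The resistance value is an assumed placeholder; only the ratio is the point.
NOMINAL_A  = 8.0     # roughly one sixth of ~48 A, i.e. an even split
MEASURED_A = 23.0    # worst wire in the Der8auer test
WIRE_R_OHM = 0.01    # assumed wire + contact resistance, for illustration only

p_nominal  = NOMINAL_A ** 2 * WIRE_R_OHM    # ~0.6 W dissipated in that wire
p_measured = MEASURED_A ** 2 * WIRE_R_OHM   # ~5.3 W dissipated in that wire

print(f"Dissipation at {NOMINAL_A:.0f} A:  {p_nominal:.2f} W")
print(f"Dissipation at {MEASURED_A:.0f} A: {p_measured:.2f} W")
print(f"Heating ratio: {(MEASURED_A / NOMINAL_A) ** 2:.1f}x")
```

With these assumptions the overloaded wire dissipates roughly eight times the heat of a balanced one, which is consistent with one end of the cable running away toward 150°C while the rest stays merely warm.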
Dogmanbird Posted February 12 It's been 6 months of disappointment: MSFS 2024, Assetto Corsa Evo, and now the RTX 5090. They usually come in threes, so hopefully the next thing to come out will be a winner. Maybe AMD?
Aapje Posted February 12 11 hours ago, Panzerlang said: Quite clearly the judge took a brown envelope from NVidia. There was no legitimate defence available for NVidia and absolutely no possible way for that case to be legitimately dismissed. Yet another egregious example of corrupt corporations leveraging a corrupt judiciary. The reporting suggests that Nvidia simply paid off the person who filed the suit, and they withdrew the complaint. There was never a ruling.
Aapje Posted February 12 (edited) 17 hours ago, LucShep said: What I wonder now is, seeing how this can get dangerous real quick, how could this have passed safety testing for the consumer market, etc.? In the EU, the CE marking is merely a legal construct. Putting it on the product is considered to be a promise by the manufacturer that they ensured that their product complies with the EU standards. So if they put the CE marking on the product without actually doing that, the courts can conclude that it was not a mere oversight, but a willful act of not following the law. But there is no testing required by an independent/government body. Testing it would be hard anyway, since the Low Voltage Directive just uses generic language stating that the product should be safe to use and connect, and that the manufacturer should recall or fix the device if this turns out not to be the case (despite a solid effort to make it safe). Presumably, the courts would create jurisprudence, or it already exists, on what is considered to be safe enough, based on expert testimony or the assessment by national agencies, and when a recall is warranted. In the US, the laws seem more centered around empowering agencies to make and enforce rulings, but GPUs are probably not on the radar of any safety agency right now. So in practice, we probably either need a sufficiently big scandal with people dying for this to get on the radar of the agencies, or people need to sue on their own. Edited February 12 by Aapje
Panzerlang Posted February 12 (edited) I'm going to guess that unless NVidia publicly acknowledges the flaw and just as publicly shows the fix, sales of the 50 series will take a very serious hit. However, I wouldn't be at all surprised to learn it's a deliberate ploy to get out of the gaming market with less damage to their rep than simply saying "screw all you gamers", so they can go all-in with chips for AI only. It would explain the lack of cards at launch too: they wanted to minimize the chances of actually causing death by fire before they got their manufactured excuse to bail. Edited February 12 by Panzerlang
LucShep Posted February 12 (edited) 2 hours ago, Aapje said: 12 hours ago, LucShep said: What I wonder now is, seeing how this can get dangerous real quick, how could this have passed safety testing for the consumer market, etc.? In the EU, the CE marking is merely a legal construct. Putting it on the product is considered to be a promise by the manufacturer that they ensured that their product complies with the EU standards. So if they put the CE marking on the product without actually doing that, the courts can conclude that it was not a mere oversight, but a willful act of not following the law. But there is no testing required by an independent/government body. Testing it would be hard anyway, since the Low Voltage Directive just uses generic language stating that the product should be safe to use and connect, and that the manufacturer should recall or fix the device if this turns out not to be the case (despite a solid effort to make it safe). Presumably, the courts would create jurisprudence, or it already exists, on what is considered to be safe enough, based on expert testimony or the assessment by national agencies, and when a recall is warranted. In the US, the laws seem more centered around empowering agencies to make and enforce rulings, but GPUs are probably not on the radar of any safety agency right now. So in practice, we probably either need a sufficiently big scandal with people dying for this to get on the radar of the agencies, or people need to sue on their own. Woooaaa... I had no idea it was that lenient! (and thanks for the explanation) I suppose that, yes, it'll have to get far worse before it gets any better. The coming months will be revealing, I guess. 16 hours ago, BitMaster said: HOW STUPID must Nvidia be to NOT BE ABLE to produce a reliable 600W cable. No rocket science, no nuclear tricks, no 10k kPa, PLAIN NOTHING that rests on their side of the scale. Almost ANY household appliance I have draws 600+ watts, just to give you a picture, guys. No, Nvidia refuses to learn the lesson. This is so sad. And I will not suggest, sell, install, or fix any of those cards. Nada. +1. Ditto 58 minutes ago, Panzerlang said: I'm going to guess that unless NVidia publicly acknowledges the flaw and just as publicly shows the fix, sales of the 50 series will take a very serious hit. However, I wouldn't be at all surprised to learn it's a deliberate ploy to get out of the gaming market with less damage to their rep than simply saying "screw all you gamers", so they can go all-in with chips for AI only. It would explain the lack of cards at launch too: they wanted to minimize the chances of actually causing death by fire before they got their manufactured excuse to bail. It'd be a shame to be without future GPUs from the leading hardware/software manufacturer in this area. But then, if this is really their modus operandi, releasing products on the verge of price gouging while being potentially faulty (and dangerous), then I honestly think we'd be better off without them (good riddance). There'd still be AMD and Intel (and possibly others who'd venture in, like in the past) picking up where it'd be left off.
Edited February 12 by LucShep CGTC - Caucasus retexture | A-10A cockpit retexture | Shadows Reduced Impact | DCS 2.5.6 - a lighter alternative Spoiler Win10 Pro x64 | Intel i7 12700K (OC@ 5.1/5.0p + 4.0e) | 64GB DDR4 (OC@ 3700 CL17 Crucial Ballistix) | RTX 3090 24GB EVGA FTW3 Ultra | 2TB NVMe (MP600 Pro XT) + 500GB SSD (WD Blue) + 3TB HDD (Toshiba P300) + 1TB HDD (WD Blue) | Corsair RMX 850W | Asus Z690 TUF+ D4 | TR PA120SE | Fractal Meshify-C | UAD Volt1 + Sennheiser HD-599SE | 7x USB 3.0 Hub | 50'' 4K Philips PUS7608 UHD TV + Head Tracking | HP Reverb G1 Pro (VR) | TM Warthog + Logitech X56
Nightdare Posted February 12 1 hour ago, LucShep said: It'd be a shame to be without future GPUs from the leading hardware/software manufacturer in this area. On the other hand, it would force game coders to start programming and optimizing their games more efficiently, because relying on ever-increasing hardware performance for fidelity would be an admission of defeat. Intel I5 13600k / AsRock Z790 Steel Legend / MSI 4080s 16G Gaming X Slim / Kingston Fury DDR5 5600 64Gb / Adata 960 Max / HP Reverb G2 v2 Virpil MT50 Mongoost T50 Throttle, T50cm Base & Grip, VFX Grip, ACE Interceptor Rudder Pedals w. damper / WinWing Orion2 18, 18 UFC & HUD, PTO2, 2x MFD1 / Logitech Flight Panel / VKB SEM V / 2x DIY Button Box
scommander2 Posted February 12 1 hour ago, LucShep said: it'll have to get far worse before it gets any better. Yup, everything is going to be laid out on the table. Spoiler Dell XPS 9730, i9-13900H, DDR5 64GB, Discrete GPU: NVIDIA GeForce RTX 4080, 1+2TB M.2 SSD | Thrustmaster Warthog HOTAS + TPR | TKIR5/TrackClipPro | Total Controls Multi-Function Button Box | Win 11 Pro
SharpeXB Posted February 12 (edited) It would sure seem to be the best move for anyone getting this card to use the 12V-2x6 connector, not the 12VHPWR. https://www.corsair.com/ww/en/explorer/diy-builder/power-supply-units/evolving-standards-12vhpwr-and-12v-2x6/ Edited February 12 by SharpeXB i9-14900KS | ASUS ROG MAXIMUS Z790 HERO | 64GB DDR5 5600MHz | iCUE H150i Liquid CPU Cooler | ASUS TUF GeForce RTX 4090 OC | Windows 11 Home | 2TB Samsung 980 PRO NVMe | Corsair RM1000x | LG 48GQ900-B 4K OLED Monitor | CH Fighterstick | Ch Pro Throttle | CH Pro Pedals | TrackIR 5
LucShep Posted February 12 (edited) 11 hours ago, SharpeXB said: It would sure seem to be the best move for anyone getting this card to use the 12V-2x6 connector, not the 12VHPWR. https://www.corsair.com/ww/en/explorer/diy-builder/power-supply-units/evolving-standards-12vhpwr-and-12v-2x6/ There's still no safety monitoring. The current is still not evenly distributed across the pins. There's still too little margin for error with one single connector. The problem remains; the new cable makes a difference, but the issues are only better disguised. Edited February 13 by LucShep CGTC - Caucasus retexture | A-10A cockpit retexture | Shadows Reduced Impact | DCS 2.5.6 - a lighter alternative Spoiler Win10 Pro x64 | Intel i7 12700K (OC@ 5.1/5.0p + 4.0e) | 64GB DDR4 (OC@ 3700 CL17 Crucial Ballistix) | RTX 3090 24GB EVGA FTW3 Ultra | 2TB NVMe (MP600 Pro XT) + 500GB SSD (WD Blue) + 3TB HDD (Toshiba P300) + 1TB HDD (WD Blue) | Corsair RMX 850W | Asus Z690 TUF+ D4 | TR PA120SE | Fractal Meshify-C | UAD Volt1 + Sennheiser HD-599SE | 7x USB 3.0 Hub | 50'' 4K Philips PUS7608 UHD TV + Head Tracking | HP Reverb G1 Pro (VR) | TM Warthog + Logitech X56
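[Editor's note] The kind of per-pin supervision being asked for here is not exotic; as TZeer noted earlier in the thread, some 3rd-party cards already have monitoring for this. The sketch below is purely hypothetical logic (threshold, readings, and behaviour are all made up for illustration) of what a card or PSU could do with per-pin current readings; it is not any vendor's actual firmware.

```python
# Hypothetical per-pin current supervision, assuming the board can read each
# 12V pin through a shunt. All numbers and thresholds are illustrative only.
PIN_LIMIT_A = 9.5   # example per-pin ceiling used in this sketch

def check_balance(pin_currents_a: list[float]) -> str:
    """Classify one sample of per-pin currents for a 6-pin 12V bundle."""
    worst = max(pin_currents_a)
    mean = sum(pin_currents_a) / len(pin_currents_a)
    if worst > PIN_LIMIT_A:
        return f"ALERT: {worst:.1f} A on one pin exceeds {PIN_LIMIT_A} A, throttle or shut down"
    if worst > 1.5 * mean:
        return f"WARN: imbalance, worst pin {worst:.1f} A vs mean {mean:.1f} A"
    return "OK: load shared evenly"

print(check_balance([8.1, 7.9, 8.3, 8.0, 7.8, 8.2]))   # healthy, even split
print(check_balance([2.1, 1.8, 23.0, 2.3, 1.9, 2.0]))  # the failure mode in the video
```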