BitMaster
Members · Posts: 7752 · Days Won: 4

Everything posted by BitMaster

  1. It's many stupid things coming together, lol, but Igor's photos and explanation are the best so far for what has happened to the 4-in-1 adapters. Still, the 30x mating-cycle limit leaves a bad taste, even if the adapters are not foremost a fire hazard.
  2. Oh well, then it's up to Nvidia to fix this. Still, I doubt that another adapter or ATX 3.0 PSU cable will be completely safe; the way the pins are arranged cries for such things to happen. BTW, the 30x in-and-out limit applies to all parts involved, not only the adapters, incl. the GPU socket. The high load also didn't jump onto that cable one inch before the socket; it came down that wire straight from the PSU. I would double- and triple-check my PSU pins, as they have been carrying the exact same load as the wires that melted, etc. etc. OK, I am gonna stop here, ain't gonna buy one anyway. EDIT: Igor seems to have found the real problem, which is actually not the pins (I still dislike them) but a bridge plate onto which the cables are soldered for load distribution.
  3. Afaik the issue occurred also on MSI, Gigabyte and Asus boards, and again, afaik, those ship their own adapters. You can't have a female terminal that isn't closed all around: as soon as the male pin wiggles, puts stress on it, or goes through 30x in-and-out cycles, the opening becomes so wide that there is no longer full contact. And that, I am afraid, comes with all those adapters, since that kind of male/female terminal is what they use, and I think the only kind they can use. So the failure is only more or less likely from manufacturer to manufacturer, but the error is in the chosen design. The design is OK for much lower ampere loads. I used to fly high-powered electric R/C planes, most roughly around 12 V, give or take. Up to 250 W, a G2 2 mm gold contact is OK; 250-500 W takes a G2.5; and G4 is for higher loads, up to 200 A and at least 13 kW is what I have seen working with G4 gold contacts. Those not-closed-loop female pins are the culprit imho. Anyway, the damage is done: Nvidia will sell less just because of that, and AMD might gain some customers just because of that. What a dumb decision they took.
  4. I think the problem is down to how the female part of the connector is made, not saying the male couldn't be better too. It's unbelievable no one saw this problem before it was too late. How can you make a female socket like that for such a use case??? What it needs is a solid, hollow female that cannot pry open, and a better, solid male. 100 watts through one of those pin pairs... freaking suicide (a quick sanity check follows below). When you do electric motors, you roughly know how much wire gauge you need for a continuous 100 W and what the plug should look like: more like this, with feathers (spring leaves) on the male and a solid female. This is a really bad show for all involved. IT SHOWS HOW MUCH THEY CARE but want your money!
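
A quick sanity check of that per-pair figure, as a minimal Python sketch. The 600 W limit, 12 V rail and six current-carrying pairs are the commonly cited 12VHPWR numbers, not figures taken from the post itself:

# Rough sanity check of the per-pin load on a 12VHPWR connector,
# assuming (not from the post) a 600 W limit, a 12 V rail and
# six current-carrying pin pairs.
TOTAL_WATTS = 600      # commonly cited 12VHPWR power limit
RAIL_VOLTS = 12.0      # nominal supply voltage
CURRENT_PAIRS = 6      # six 12 V pins paired with six grounds

watts_per_pair = TOTAL_WATTS / CURRENT_PAIRS
amps_per_pin = watts_per_pair / RAIL_VOLTS

print(f"{watts_per_pair:.0f} W per pin pair -> {amps_per_pin:.1f} A per pin")
# -> 100 W per pin pair -> 8.3 A per pin; any extra contact resistance
#    from a sprung-open female terminal heats as I^2*R at that current.

At roughly 8 A per pin, a partially mated or worn terminal turns its contact resistance straight into heat, which is consistent with the melting described above.
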
  5. Yes, you do, the ONE button at the VERY BOTTOM LEFT of your screen. I am located in Germany. See if you can get it done; if not, there is TeamViewer, which is way better than any phone call can ever be. Send me a PM if you need remote assistance to get that file sorted. You will need to download TeamViewer, install it, and send me your ID and password via PM, NOT IN THIS PUBLIC CHAT!!!!
  6. Easy, right-click on the Start button and choose Command Prompt or Terminal as Administrator. You do not need to be in C:\Windows\SysWOW64 to execute the command. Open the terminal and execute the command, that's it.
  7. This should fix it:
     1. Download the correct file for your OS version here: https://www.dll-files.com/qt5core.dll.html
     2. Copy it to C:\Windows\SysWOW64
     3. Open a command prompt as admin and execute: regsvr32 qt5core.dll
  8. AMD's point of view on this, from Guru3d, minutes ago: AMD shares in a tweet that the 12+4-pin ATX 12VHPWR connector will not be used in any of its upcoming Radeon RX 7000 series next-generation graphics cards.
  9. I watch many of his videos and tbh, I may understand 10-20% of that electronics stuff (VRMs, caps, SPSs, MOSFETs... all OMGs to me), but it's enough to get what he tries to say. The hardest part was trying to learn better RAM OC from him; that took many, many videos and well over a year to get a better understanding, and I'm still far away from his level. Regarding that stupid connector... I am just waiting for the first fellow pilot to post: "Had to eject, flight computer caught fire."
  10. To be fair, it must be said that if you undervolt aka power-limit the 4090, it consumes less than most high-end 3000-series cards while still delivering more fps. The elephant in the room is not the card itself, it's the price. If tuned right, I have to admit it's a step towards more efficiency. Out of the box it's just tuned for max performance, not common sense.
  11. It highly depends on the board's circuits and the CPU's integrated memory controller; it's less a concern of the RAM itself. I have 2 kits of 4-module B-die and they run great, IN THE RIGHT BOARD. If you are unlucky, not much will make them stable except reducing MHz; 3000 is a safe value in my experience. For example: my 3600-16-16-16-36 32GB (4x8) kit ran up to ~4000 MHz in my 7700K/Z270 combo, but it only runs without BSODs at 3000 MHz in my 8700K/Z370 combo, no matter the volts. If I take 2 modules out of the Z370 they again run like hell, just not all 4 in that particular board. Temps were never an issue.

      With my 5900X rig I cut RAM corners, so to say, and hoped to compensate with luck & skill. I bought 3200-14-14-14-34 and oc'ed them to 3600-14-14-14-34 @ 1.45 V (reads as 1.48 V in HWiNFO etc.). They can get pretty warm when stressed, around 60ish °C; idle temps are mid-40s °C. To date they have passed every test I could throw at them; most important, the rig is stable and has never crashed since I own it.

      To top it off, and because I was bored last night, I downloaded AMD Ryzen Master, hit the Curve Optimizer and let it run for 90 min to tune my voltage curves, raised the PPT, TDC and EDC to north of 250 W / 250 A, and let it run Linpack Xtreme and Prime95 with AVX etc. ON: stable. It boosted to 4.6ish GHz all-core under stress, consumed ~200 W with a 230 W peak; CPU temps were in the high 80s for loads that fit in cache, but it passed. I didn't fear the CPU crashing so much as the overclocked RAM while the IMC is stressed by OC and its heat, but it worked out. Was it skill? No. Was it LUCK? Yes.

      Unless you dial in outright wrong values, it's mostly the parts: they either can do it or not, and you only find out by testing. Imho it's more like the board or the IMC playing foul in your case.
  12. Well, you can build a budget 5800X3D rig on a B450, with more RAM kits to choose from. I am not convinced: in performance per watt and per dollar they all lag behind the X3D. The big question is how much more AMD's 7000-series X3D will deliver. My hope is it dominates like the 5800X3D does.
  13. So you are saying that you pull 520 W through TWO PCIe cables? It doesn't really matter if they are dual-headed or single; it remains 2x 8-wire cables, and each cable has a rating of 150 W max. You should not use the dual-head config: use 4 cables with 8 wires each, and if needed, buy a new PSU. You run that thing way above safe limits (quick numbers below). Just my 2 cents.
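
For illustration, a minimal sketch of the headroom, using only the numbers from the post (520 W drawn, two cables, 150 W rating each):

# Quick headroom check for the setup described above: 520 W drawn
# through two PCIe 8-pin cables rated (per the post) at 150 W each.
DRAW_WATTS = 520
CABLES = 2
RATING_PER_CABLE = 150  # PCIe 8-pin connector rating

rated_total = CABLES * RATING_PER_CABLE
overload = DRAW_WATTS / rated_total

print(f"Rated: {rated_total} W, drawn: {DRAW_WATTS} W "
      f"({overload:.0%} of spec)")
# -> Rated: 300 W, drawn: 520 W (173% of spec) -- hence the advice
#    to split the load over four separate cables instead.
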
  14. Starting to crawl back; a bit late for my taste. Let this ripen: the last word on "what GPU can I actually afford with future electricity bills in mind" hasn't been said yet, nor have those astronomically high kWh rates reached all those who will eventually get them and open their eyes, paired with a gas bill that will likely scale the same way. At -5k€/year on the energy side you will have to rethink, whether you like it or not. There is no room for such idiotism in the near future for most users. A gaming rig that consumes ~500 W total while, listen, GAMING will become a different taste for most in central Europe, where energy prices skyrocket (rough numbers below). The US might be different in that regard, but if it hits Europe hard it will affect every market.
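
To make that concrete, a small sketch with openly hypothetical numbers; the daily hours and the €/kWh rate are my assumptions, not figures from the post:

# Illustrative annual energy cost of a ~500 W gaming rig; the hours
# and the EUR/kWh rate are hypothetical, not from the post.
RIG_WATTS = 500
HOURS_PER_DAY = 3          # assumed gaming time
DAYS_PER_YEAR = 365
PRICE_EUR_PER_KWH = 0.40   # assumed central-European rate

kwh_per_year = RIG_WATTS / 1000 * HOURS_PER_DAY * DAYS_PER_YEAR
cost = kwh_per_year * PRICE_EUR_PER_KWH
print(f"{kwh_per_year:.0f} kWh/year -> {cost:.0f} EUR/year just for gaming")
# -> 548 kWh/year -> 219 EUR/year under these assumptions
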
  15. Just checked; heck yes. Though mine doesn't crash like yours, it refuses to maximize again: I need to click the main program icon to make it visible again. Kinda strange. Latest Win 11 updates & latest Nvidia driver, btw.
  16. Well, it's your money that you save, but also your risk.
  17. You should not run pre-builds, dev editions or such on a PC that you rely on!!! Use Hyper-V, VMware or VirtualBox, just to name a few, and install them virtually. Buy some Win10 keys off eBay and use those for the extra licenses you need; it's 5€ per key... and your PC remains stable, let the VMs crash. I run 5x Windows desktops just for tinkering in VMware, along with many others: firewall, NAS, Windows Server, Linux server, Linux desktops, MS-DOS and some other freaky OSes. It's fun too, you learn a lot.
  18. DRAM overheating can occur when certain criteria come together: 4 modules + little fan power + a GPU that dumps a lot of heat + WORST, the GPU's infrared heat radiation, which you can't fight properly with air alone. Buildzoid has a nice video about that issue. If your RAM or board reports DRAM temps (mine, for example, does), you can measure them at different loads. Mine can get pretty warm if I stress them, but they don't fail. As a rule of thumb, don't let them go much over 60°C. Your case is a perfect showcase for others asking RAM questions: adding RAM is not as easy as many think and should only be your second choice.
  19. Nice. I just tried to recreate what you specify as "all maxed out"... among other things it adds a 3080 Ti and one of the BIG processors, either a 5950X or 7950X, and keeps the 750 W SFF PSU; there is no PSU choice. If that is the case, I wouldn't do a full-tilt stress test, it may exceed your PSU's limits (see the rough budget below)!
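
A rough, hedged power budget for that configuration; the component figures are ballpark assumptions on my part (typical published numbers), not vendor-verified specs:

# Ballpark power budget for the "all maxed out" config above; the
# per-part figures are assumptions, not vendor-verified specs.
GPU_WATTS = 350       # typical RTX 3080 Ti board power
CPU_WATTS = 142       # Ryzen 5950X package power limit (PPT)
REST_WATTS = 100      # board, RAM, drives, fans (rough allowance)
PSU_WATTS = 750

load = GPU_WATTS + CPU_WATTS + REST_WATTS
print(f"~{load} W sustained on a {PSU_WATTS} W PSU "
      f"({load / PSU_WATTS:.0%}), before GPU transient spikes")
# -> ~592 W (79%); brief GPU spikes can push past that, so a
#    full-tilt stress test on a 750 W SFF unit is cutting it close.
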
  20. Your board features DDR4, not DDR3! https://us.msi.com/Motherboard/Z170A-GAMING-M3/Specification If you want to play multiplayer with big maps & missions, you will basically have to replace at least motherboard + CPU + RAM + GPU, maybe the PSU too, and 1-2 NVMe drives would be nice unless you already have enough SSD space. DCS can outgrow a 500 GB SSD if you buy all maps & modules, mind that. I would take a 1 TB drive for DCS/games and a second drive of 500 GB or bigger for your OS (Win 10/11). There is no upgrade path with your board; you are stuck with 6th-gen Intel, and even if 7th gen would work, it would make zero sense. If your budget allows, keep this PC as a functional entity and build yourself a new one for DCS.
  21. It's a symptom of falling empires: the parties get wilder and expenses are of no concern. For me personally, the 4090 is of no interest at all; I would betray my family if I bought one. Der Krug geht zum Brunnen, bis er bricht (the pitcher goes to the well until it breaks).
  22. Reviewers like Guru3d are now officially allowed to publish their reviews. https://www.guru3d.com/articles-pages/geforce-rtx-4090-founder-edition-review,1.html