

Everything posted by kksnowbear
-
This is true, provided of course the 4-cable power "tail" is configured correctly. (There have been cables provided - with 1000W PSUs - that still limit the GPU to 450W). There are also three-cable variants that signal the GPU that the PSU can provide 600W. I know this because I have one.

As explained above, it is wise to leave room as 'overhead' in sizing a power supply - how much will depend on a lot of factors, including desired efficiency and overall system configuration. Certainly nothing excessive about a 1000W unit (or even 1200W). As noted, the manufacturer's recommendation for some 4090s is 1000W, and for good reason.

Actual load on a PC switched-mode PSU cannot be measured effectively by a UPS or a plug-in 'wall wart' type watt meter. They are useful, I'm not saying they're not - I have two of the watt meters and at least a dozen UPS units. But they have limitations when it comes to measuring power, and those limitations are often overlooked or misunderstood (as above in this thread).
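To put a rough number on why the wall reading isn't the load the PSU actually sees: a UPS or wall meter reads the line side, before the PSU's conversion losses. A minimal sketch of that conversion is below - the efficiency figure is an assumption picked purely for illustration, not a spec for any particular unit:

```python
# Minimal sketch: a line-side (wall/UPS) reading vs. the PSU's DC output load.
# The efficiency figure is an assumption for illustration, not a measured value.

def dc_load_from_wall(wall_watts: float, efficiency: float) -> float:
    """Estimate the DC (load-side) power from a line-side reading."""
    return wall_watts * efficiency

wall_reading = 658.0        # W shown on a wall meter or UPS display (line side)
assumed_efficiency = 0.90   # hypothetical efficiency at that load

dc_load = dc_load_from_wall(wall_reading, assumed_efficiency)
print(f"Estimated DC load on the PSU: {dc_load:.0f} W")
# ~592 W actually delivered to the components; the other ~66 W is conversion loss,
# so the wall figure alone never tells you exactly what the PSU is supplying.
```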
-
When I asked where and by what means the load was being measured, I meant at what physical point, and using what type of device. You are not measuring load physically at the GPU itself; you're using a UPS display (or software) to read the load at the line side of your PSU. This is outside the chassis, and in fact isn't even measuring the actual load on the PSU itself.

Again, it is not appropriate to size a PSU on anything other than full system load, and definitely not while running a single game (even if that's the only game you ever play on the PC). You cannot (to repeat, cannot) determine actual PSU load by reading UPS software or meters/panel displays - such displays/software are not designed to sample or 'catch' max transients, as I already explained. It's just not possible using that measurement. Among other things, your UPS is not on the load side of a switch-mode power supply that is always running at <100% efficiency, so it's not even capable of telling you the exact load on your PSU - and certainly not in real time. (BTW, if the "high accuracy" of measurement you're referring to comes from the free PowerPanel Personal software that your UPS supports, then I'd say it's not likely anywhere near as accurate or capable as you think it is.)

If you have a 4090 that's power limited to 450W, great. But that doesn't mean that every 4090 will be limited the same way. As a specific example, my Asus TUF unit is set up for 600W (both in VBIOS and via a proper 12VHPWR cable). Again, there have been documented tests showing 4090 power excursions to 700W - which your UPS is unable to catch and report. The power reported by these types of measurement devices is absolutely limited by how fast they can sample, and the transient excursions can happen and be gone faster than the sample rate of those devices. Not only that, but transients like this are absorbed/obscured by the electronics inside the PSU; anyone who's ever waited on DC LEDs inside a PC to extinguish after switching off a PSU has witnessed this. Such GPU power excursions will never be felt at the line side of a PSU, much less at the measuring circuit in a UPS that powers the PC - but they will be felt at the 12V (low voltage/load) side of the PSU itself, which is why the PSU must be able to handle them.

The reason your system might not experience this type of GPU load is precisely because your GPU is power limited to 450W. Typically, that's done because the manufacturer knows they used weaker VRMs in the hardware, and they limit the power to what those weaker components can handle. Not all 4090s are that way. This is also the exact same reason you cannot base any/every 4090 scenario on your experience: you don't know what kind of PSU, cables, or exact model of 4090 someone else might have, and you cannot say that your setup is representative of all others. As I describe above, my own 4090 doesn't have the same power limit yours does - and neither do many others. That is as factual and "real world" as it gets.

Also, 1200W is by no means "overkill" for a 4090-based system. It allows for roughly twice the maximum total system load, which means the PSU itself will run at max efficiency, per the 80Plus spec. Factually, if you run a PSU at a higher load than 50%, its efficiency decreases. If my device has actually measured 658W max, and if I were using an 850W PSU, then I'm certainly not operating at 50% load and thus definitely not at max efficiency. That means more heat, higher cost, etc.

These may be small factors, but they are absolutely meaningful nonetheless. This isn't to say a 1200W PSU is required, but it's certainly not overkill if you're trying to achieve max efficiency - and that's a fact. You're also dismissing that all three of the biggest GPU manufacturers in the US have recommended 1000W PSUs for some of their 4090 models. So if someone happens to get one of those models of 4090, and follows your intimation that 850W is enough, they're actually disregarding the manufacturer's recommendations - and thus could be denied warranty service if there's ever a problem. In fact, if the manufacturer recommends a 1000W PSU and someone has an 850W unit, the manufacturer's support team would absolutely be within their rights to refuse any support. As much as you dismiss the specs, they exist for a reason. It's foolish to ignore that reason when designing power systems, and professionals know better than to do that. Never mind that it makes zero sense to spend thousands on a PC with a 4090, but cheap out on the PSU to save 2% of the cost.
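On the sampling point specifically, here's a minimal sketch of how a brief excursion can come and go between readings of a slow meter or UPS display. Every number in it is made up purely for illustration:

```python
# Illustration only: a millisecond-scale power excursion vs. a meter that samples
# once per second. All values are invented for the sake of the example.

# 10 seconds of power draw at 1 ms resolution: ~450 W steady, one 5 ms spike to 700 W.
trace_ms = [450.0] * 10_000
for t in range(4_000, 4_005):          # 5 ms excursion starting at t = 4.000 s
    trace_ms[t] = 700.0

true_peak = max(trace_ms)

# A display that updates once per second only ever sees 10 of those 10,000 points.
sampled = trace_ms[::1000]
reported_peak = max(sampled)

print(f"True peak:     {true_peak:.0f} W")      # 700 W - what the PSU must survive
print(f"Meter reports: {reported_peak:.0f} W")  # 450 W - the spike fell between samples
```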
-
Yup lol couldn't agree more - what a PITA. Especially if you've gone to great lengths to clean it all up and tie everything down. I just went through that not too long ago, having changed cables from the hideous 4090 'tail' to a custom 600W 12VHPWR cable. To be clear, though, there are models of the 4090 for which an 850W PSU is adequate - they will likely be power limited in VBIOS, and this is a 'hard' limitation in that it is usually based on the GPU's voltage regulators (VRMs), which of course cannot be readily changed. Some people have realized the limits in VBIOS can be 'cheated' by loading different VBIOS with higher limits, but it should be obvious why that can be dangerous. So if you picked your 4090 carefully...no need to change PSUs for adequate capacity (though both efficiency and overhead are different matters which should be considered). Best of luck!
-
You don't specify where 340W is being measured, or by what means...and "full load in VR" sounds like you mean in-game (i.e., while running DCS), which is not the same thing as "full load" for an entire system, nor necessarily even full load for only the GPU. Regardless, it's not appropriate to use a figure like average power, or one game, to properly size a PSU. Among other things, regardless of efficiency, the TDP of a 4090 is fully 100W higher than a 3090, and that's per Nvidia specs.

I've measured 658W total system load, albeit with a comparatively unsophisticated measuring device. Again, these devices cannot accurately sample the type of excursions that are documented to occur with a 4090. But if my "basic" device shows 658W, it is entirely likely the max transients are higher. And if we're talking about efficiency, you need a PSU with roughly twice the capacity of the typical max load to run at its highest efficiency. In my case, that works out to over 1200W.

EDIT: Sorry, don't mean to hijack the thread and this PSU discussion is somewhat off the thread topic. That said, the question of power supply was raised by someone else, and it does seem prudent to correct inaccurate information.
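For anyone who wants to plug in their own numbers, a back-of-the-envelope version of that sizing rule of thumb is sketched below. The 50% (and 60%) load targets are assumptions taken from the efficiency argument above, not hard requirements, and the 658W figure is my own wall-side reading:

```python
# Back-of-the-envelope PSU sizing using the "run near 50% load for best efficiency"
# rule of thumb discussed above. Target load fractions are assumptions, not requirements.

def recommend_psu_watts(max_system_load_w: float, target_load_fraction: float = 0.5) -> float:
    """PSU capacity that puts max_system_load_w at the chosen load fraction."""
    return max_system_load_w / target_load_fraction

measured_max = 658.0  # W, wall-side reading - so this errs slightly on the generous side

print(f"~50% load point: {recommend_psu_watts(measured_max):.0f} W PSU")       # ~1316 W
print(f"~60% load point: {recommend_psu_watts(measured_max, 0.6):.0f} W PSU")  # ~1097 W
```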
-
That's actually not entirely accurate. In fact, it depends on the model of the GPU, the PSU, and the connection between them (which you don't specify). The GPU could possibly be power limited in its VBIOS, or by the cable connection. Some 4090s are limited to 450W in their VBIOS, and some cables limit the GPU to 450W... some power supplies, even 1000W units, have shipped with 450W cables. It just depends. Your system might be such that 850W is adequate, but that absolutely does not mean all systems can run any 4090 GPU with any 850W PSU. Factually, a 4090 has been proven to draw 700W by itself, and that's measured at the GPU by accurate equipment, not from software or wall plug monitors, which are unable to accurately sample rapid excursions. Also, all of the big three GPU mfrs in the US (Asus, MSI, and Gigabyte) specifically recommend 1000W PSUs for their top of the line 4090 models. TBH it makes no sense, if you can afford a card like a 4090, to save $50 by cheaping out on the PSU. (For the record, yes, I do own a 4090.)
-
I'll see your Token Ring, and raise you file transfer over a parallel cable (which was an upgrade over serial port LOL)...I actually had a plain old hub before I ever got my first router c.2000 (the 'outside' connections were still dialup before that...then we got 1.5Mb DSL woo-hoo!!) All the cable I've bought and strung has been CAT6...but of course, that one run is the bottleneck. I had hoped to be out of this place before it became an issue
-
Smart...in my case, the house was already prewired by someone else and I can't get into the walls etc. Or at least, I've been too lazy so far to do it. Can't say I've had any trouble with lengths. As long as I've respected the distances within reason, I've gotten rated speed or better (unless something else was at issue).
-
Getting ready to update board and cpu but need help
kksnowbear replied to Jeb's topic in PC Hardware and Related Software
Also a good point, though I might argue that DLSS isn't for everyone. I would also point out that newer AMD GPUs (since the 5000 series) support what's called RSR (Radeon Super Resolution), which does not require any specific game support or implementation, unlike DLSS. It just works - and, based on my own first-hand experience, it works well. In any game. (Note this is not to be confused with AMD's FSR, which does require specific support/implementation. IIRC, RSR doesn't support GPUs other than their own 5000 series + cards, whereas FSR actually supports even some Nvidia cards) -
Getting ready to update board and cpu but need help
kksnowbear replied to Jeb's topic in PC Hardware and Related Software
Not a bad idea, and I couldn't agree more about Nvidia, but OP doesn't specify whether AMD is an acceptable alternative. Many people won't even consider it - which I think is really unfortunate and misguided. -
Cat5e is good enough for even gigabit (Gb) speeds. As mentioned above, I am also doubtful you actually have 5Gbit service (especially in a residence). Not impossible, just not at all common. Cat6 is good enough to 10Gb, which is well beyond what most homes have (even if they're on fiber, usually ~2Gb is the upper limit).

The house I'm in was prewired with Cat5e. I have (only) one cable going from the closet where the service enters to upstairs where most all the computers are; this run is probably over 100-150'. I have Gb service, and routinely get 1.25Gb on the machines upstairs via that Cat5e cable. That wouldn't be possible unless Cat5e is capable of gigabit speed (the distance limit is 328'/100m).

The connection going to the router is limited by your ISP, either from the modem (if you have one), from the outside interface panel, or whatever is next in the 'up line' - and is not likely greater than gigabit... Realistically, the only thing Cat8 cable will do in your scenario (with the assumptions listed) is most likely cost more.
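If it helps, here's a very rough rule-of-thumb lookup for what each category typically carries over a full run. The figures are the usual textbook values (and the function is just mine for illustration), not a substitute for the actual cabling standards:

```python
# Rough rule of thumb: highest common BASE-T speed per cable category over a full
# 100 m / 328' run. Textbook values only - check the actual TIA/ISO specs for real work.

MAX_GBPS_AT_100M = {
    "Cat5e": 1.0,    # 1000BASE-T; 2.5GBASE-T often works too, cable quality permitting
    "Cat6":  5.0,    # 5GBASE-T to 100 m; 10GBASE-T only on short runs (~55 m)
    "Cat6a": 10.0,   # 10GBASE-T to the full 100 m
    "Cat8":  25.0,   # 25/40GBASE-T, but only rated to ~30 m runs anyway
}

def good_enough(category: str, needed_gbps: float) -> bool:
    """Very rough check: does this category carry the needed speed on a long run?"""
    return MAX_GBPS_AT_100M.get(category, 0.0) >= needed_gbps

print(good_enough("Cat5e", 1.0))   # True  - gigabit over Cat5e is fine
print(good_enough("Cat6", 10.0))   # False - not over a long run like mine
```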
-
Getting ready to update board and cpu but need help
kksnowbear replied to Jeb's topic in PC Hardware and Related Software
TBH it really depends on your goal. Your situation is interesting, because - *if* we look at the CPU usage plots as accurate over time - then both the CPU and GPU are working hard, probably because of your settings. (BTW, on that note, I'd consider lowering the settings to ease things up a bit if it were me.) I'd be very curious, also, about the CPU usage shown in your pic. The graphs are great, but don't really show anything about what's causing all that load, and I somehow doubt it's all because of DCS. This needs investigation IMO.

That said, back to your goal: (Note the assumption here is you want new hardware where it is available; obviously buying used stuff can be much cheaper, but the exact cost depends on a lot of factors.)

If you need to get a decent upgrade now on a tight budget, then I'd say it might be best to find a good deal on a 3070Ti. They can be had for as little as $425 new (maybe less). That's a considerable step up from the 2070S (~35%, though that doesn't always translate directly to a proportional increase in FPS). Also, if there's budget for it, I'd consider changing the 9600K to a 9900K. There's a cost (~$300), but in this 'budget priority' scenario, it's less than a new motherboard (which will also require new RAM). Speaking of RAM, I'd consider 64G at some point (although this is not urgent and is optional from a strictly budget standpoint). Doing these things will ultimately result in the best upgrade you're going to get for the cost IMHO.

If, OTOH, your goal is to replace the entire system via incremental upgrades, then yes, you could start with the new motherboard/CPU/RAM - with the understanding that the 2070S will be the constraining factor at that point and will need upgrading soon thereafter. It's worth noting that this last point (2070S being the limiting factor) essentially returns us to the first argument above for doing a GPU first, even if your intent is to replace everything, because a better GPU will work well enough with what you have now, and can then be re-used when you finally do change motherboard/CPU/RAM.

But, in any case, it depends on your goal - which almost always depends on your budget and timeline. HTH -
Well...I might not consider the GPU unimportant lol but I think I see your point. PSU is definitely important. 1200W isn't necessarily too much, really...there is a legitimate argument that running the PSU at 50% load is most efficient (therefore less power used and less heat generated). If a 4090-based system runs ~650W, then a 1200W PSU puts you right around 50% load when gaming. The exception is when you're *not* gaming; the power supply is hardly loaded at all and efficiency drops...but there's not much you can do about that, and it's a different discussion anyhow.

The only 'problem' there might be with a top-end, very high wattage PSU is cost. But, as discussed above, if you spend the kind of money to have a 4090 in the first place, then a more expensive PSU isn't outrageous. It doesn't make sense to 'cheap out' on a PSU to save $50 when the balance of the system exceeds $2000.
-
If I'm looking at it correctly, yours is one of the aforementioned GPUs for which the manufacturer recommends a 1000W PSU (the "Gaming OC" model?). It appears that model uses a VBIOS that supports a 600W power limit. If (whatever cable arrangement you use) supports 600W sideband signalling, and/or you have a PCIe 5.0-compliant PSU with a 12VHPWR connector, you should be all set. The lesser Windforce model has a PSU recommendation of 850W and, accordingly, appears to have a 479W power limit in VBIOS (or possibly 500W, not 100% certain and I don't have one to test). This is almost certainly because of lower spec hardware (VRMs) used on that model. Again, whether the 'extra' power allowed for in the 600W cables increases performance at all is not at all certain (and definitely not at all if the limit is lower in VBIOS, regardless of cable or PSU).
-
The terms "minimum" and "recommended" are, pretty much universally, two different things. At least according to the specifier, you must have at least the minimum, but you really should have the recommended. Although I'm admittedly not checking specs right this minute, I believe the 'big three' GPU manufacturers still left in the game in the US since eVGA bailed (i.e. Asus, Gigabyte and MSI) specify an 850W PSU as minimum for some of their 4090 models - but they also recommend a 1000W unit for some of their high-end models. Of course, some of the 'experts' here have tried to insist 850W is enough, but the actual load is going to depend a lot on the exact PSU and also on the circumstances. Considering it's proven that a 4090 can draw 700W *by itself* in the right circumstances, plus factoring in the kind of money someone will have spent to have a 4090 in a system in the first place (and the balance of the system's cost as well)... ...I can't imagine why anyone would want to 'cheap out' on a minimal power supply. Never a good idea TBH. I have a 4090 myself, and I have measured 650W as a max from the wall - but there are a *lot* of "howevers". One is that most consumer measuring devices simply are not going to be accurate/sensitive enough to 'catch' the extremes that usually occur faster than most meters can sample. This applies equally to software, and/or displays driven by software (such as on front of a UPS). Any professional meter good enough to measure this kind of thing is going to be extremely expensive, and not something any typical consumer would ever own. Additionally, many people (for various reasons) use the 'tail' that came with their GPU, because their PSU doesn't have a 12VHPWR connector. Some of these tails are set up to limit what the GPU can draw, by omitting the sideband signalling pins. But, anyone can buy a 600W adapter cable and slap it on a PSU that cannot deliver that kind of power to the GPU. Whether the 600W cable makes a significant difference in performance might be arguable...but what *is* clear is that it's stupid easy to 'fool' a GPU into thinking it's OK to draw 600W, even if the PSU isn't up to it. Finally, there is also the VBIOS. The GPU's VBIOS will have a certain power limit programmed in it. Usually, this value is based on actual hardware, because by design it considers components (like voltage regulators) on the card itself. So a GPU built with VRMs that can only handle 450W will have a corresponding power limit. Low end units will have lower power limits than high end units, which are typically the only ones you see with 600W power limits in VBIOS. Thus, even *if* you program a card with VBIOS that has a higher power limit value, you still don't change what the hardware itself is capable of, and you could damn well wind up frying your card (or 'bricking' it altogether) by screwing around with something you shouldn't be. I see a real potential here for people who don't know what they're doing - because they think it's a "free" GPU performance upgrade - to buy these 600W cables and fit them to a PSU that might not be appropriate to it, and/or with a GPU that is VBIOS limited anyway. Stupid and dangerous, IMHO (I've already seen one member on this very forum who has done exactly this...there was a thread about it but he got butthurt when I pointed out issues in his results, and he deleted the thread...lol which of course doesn't change the facts). Actually, that depends on how the cable is made. 
You can create a cable that will 'tell' the GPU that it can draw 600W, regardless of the actual PSU and cable...it could be a hardwired SATA power cable on a 400W PSU and it can be 'fooled' by a specifically constructed adapter/cable. I AM NOT RECOMMENDING ANYONE SHOULD DO THAT - EVER - But the reality is that it's entirely possible. Of course it won't work very well, if at all...and this is what's wrong with people just buying these connectors/adapters without understanding the other factors involved. As usual, just because someone sells something that will connect two otherwise incompatible devices doesn't mean it's going to work (and possibly, spectacularly so). Whether you attribute it to George Carlin or WC Fields, there's a saying: “If you nail two things together that have never been nailed together before, some schmuck will buy it from you.” Hardware makers - especially cable manufacturers, it seems - are constantly 'nailing two things together' and uninformed or misguided people will line up to buy them. Factually, just because you can plug this into that doesn't mean it's going to work, nor that it was ever intended to be connected that way. And again, you can damage stuff doing this. Anyhow, as always, the foregoing is strictly my opinion - which is typically based on verifiable fact
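For the curious, this is roughly what that 'sideband signalling' boils down to electrically: the cable either grounds or leaves open two sense pins, and the GPU reads that combination as the PSU's advertised capability - nothing is actually measured. The mapping below is the commonly published PCIe 5.0 CEM / ATX 3.0 table as I understand it; treat it as an assumption and check the actual spec before relying on it:

```python
# Sketch of 12VHPWR sideband signalling: the GPU infers the PSU's advertised power
# purely from which sense pins the cable ties to ground. Mapping per the commonly
# published PCIe 5.0 CEM / ATX 3.0 table - verify against the spec before trusting it.

ADVERTISED_LIMIT_W = {
    # (SENSE0 grounded, SENSE1 grounded) -> advertised sustained power
    (True,  True):  600,
    (True,  False): 450,
    (False, True):  300,
    (False, False): 150,   # also the fallback if the sideband pins aren't connected at all
}

def advertised_power(sense0_grounded: bool, sense1_grounded: bool) -> int:
    """Power the GPU believes the PSU can deliver, based purely on cable wiring."""
    return ADVERTISED_LIMIT_W[(sense0_grounded, sense1_grounded)]

# Which is exactly why a cheaply made adapter can "tell" a GPU that 600W is available
# from a PSU that can't actually deliver it - the pins are just wired one way or the other.
print(advertised_power(True, True))    # 600
print(advertised_power(True, False))   # 450 - the limit some bundled 'tails' impose
```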
-
FWIW, my opinion is that the 64G will do you more good than any perceived loss from the difference in latency. There are numerous threads about DCS using over 32G of RAM, and if you land in that position even momentarily, the adverse effects of running low on RAM will hurt far more than the (almost certainly imperceptible) difference between CL14 and CL16 at the same clock could ever make up for. Don't expect a frame rate increase; it isn't likely at all. The persistent claims of stutter improvements are all subjective IMO. Once again, the foregoing is entirely my opinion. An informed, professionally trained opinion based on 40+ years' experience specific to this discipline - but an opinion nonetheless. Best of luck.
-
Yes. It's been documented that 4090s can draw in excess of that (though obviously not continuously). I know of at least one instance where the excursions were measured at 700W, if I recall correctly. If the cable isn't set up for 600W sideband signaling via the 12VHPWR cable, the GPU will inherently limit what it draws from the PSU. This behavior is by design, allowing the GPU to "know" how much power the PSU can provide. Why do you ask?
-
DDR4-3600 CL18 is 10 ns absolute or 'first word' latency. He's got CL36 DDR5 now; running at 3600 with default timings, that's *double* the absolute latency (20 ns). A lot of people went to DDR4-3600 CL16 as a 'sweet spot', which is 8.88 ns, faster still than the CL18 stuff. I believe this is among the reasons early DDR5 reviews were not very flattering. DDR4, with its inherently lower CAS levels, will often outperform DDR5 with CAS levels up around 36(+). Even DDR5-8000 at CL36 still has higher latency (9 ns) than DDR4-3600/CL16. It's not a new thing, but basically the chip manufacturers can really only increase the speed of the 'devices' on the memory sticks with a proportional increase in CAS level. (The quick arithmetic behind these figures is sketched after this post.)

About the stuttering with 4 modules: Precisely.

About the RGB lighting...yeah, it's unfortunate but I know exactly what you mean. Still not clear if the shop only sold the memory or the motherboard too...if the latter, I'd be for getting a refund myself, and going with a different board. It might not make things better - but it isn't likely to be any worse! I had a totally different experience with an Asus board and memory that was very similar.

And I think all the hype about the burning CPUs was exactly that - a couple of instances where loudmouth 'reviewers' sensationalized a very (very) isolated fraction of the whole, and made a public spectacle out of it - mostly, I'm sure, to further their own notoriety as opposed to any other goal. These 'influencers' have gotten to the point where they're not much better than the news media, which can always be trusted to sensationalize everything.
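The quick arithmetic behind those latency figures, for anyone who wants to check other kits (the little function is just an illustration of the standard CL-to-nanoseconds conversion):

```python
# First-word (absolute) latency: CAS cycles divided by the actual clock. Since DDR
# transfers twice per clock, that works out to CL * 2000 / data_rate (MT/s) in ns.

def first_word_latency_ns(data_rate_mts: int, cas_latency: int) -> float:
    return cas_latency * 2000 / data_rate_mts

kits = [("DDR4-3600 CL18", 3600, 18),
        ("DDR4-3600 CL16", 3600, 16),
        ("DDR5 at 3600 CL36", 3600, 36),
        ("DDR5-8000 CL36", 8000, 36)]

for label, rate, cl in kits:
    print(f"{label}: {first_word_latency_ns(rate, cl):.2f} ns")
# -> 10.00, 8.89, 20.00 and 9.00 ns respectively, matching the figures above.
```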
-
Well, I am truly sorry for your woes, but we must caution ourselves against inaccurate advice in public forums, as it misleads people who (often) tend to look at it as gospel. I understand what you're trying to convey, and I appreciate your making the distinction.

Interestingly, the memory you cited above is very similar to what I used, and I got four sticks to run just fine (see pic below). In fact, I think one of the few differences in yours and mine is that mine is not even EXPO, yet it worked OK in my case. I was trying to avoid being too specific, so as not to step on toes - but I honestly believe it's the ASRock motherboard that's the biggest (if not only) problem here. Just my opinion.

My point about the shop was that you shouldn't be without recourse. If they talked you into something, they should make it right - up to and including a full refund on *everything* they sold you. (If you had some parts already, got them elsewhere, etc...well, can't fairly blame that on the shop. Again, I ran memory like that at higher than 3600 no problem.)

I think your advice is perhaps the best: Use two modules, avoid the potential for problems. Sorry if it seems I'm arguing with that, because I'm not. I'm just saying that there's a lot more to it and (as a professional system builder myself) that it matters who you work with. FWIW, I still disagree that 3600 will automatically cause stutters, regardless of VR or not. Something else is going on - in my opinion. As I explained previously, if that were true, then logically everyone with 3600 (even two sticks) would have stuttering - and I just really don't think that's true. Good luck.

The RAM in the build I showed earlier, which successfully ran 4 passes (6 hours) in MemTest86 at 4800, is shown below. That's in addition to the FireStrike/TimeSpy stress tests I ran, countless benchmarks, and (perhaps most important) zero call-backs from the client. I'm actually fairly sure it's been running at rated speed/6000 ever since, but I can't readily prove that ATM. I don't have another AM5 setup right now, so it would mean asking that client to bring his back...ain't gonna happen
-
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Well, that might be true (see below*)...but the monitor we're discussing doesn't support GSync - it supports Freesync Premium Pro. At least that's what I see in the specs.

* FWIW, I have read that there are some rare units that will actually support GSync Compatible over HDMI 2.1...but I'm not first-hand familiar with them. -
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Well, to be accurate your 3090 should have a single HDMI 2.1A port, which is enough to support what the 57" Samsung requires. Yeah, I got a wall mount for two monitors to use with my 49. The wall bracket is bigger, and I intend to use both VESA mounts on the mount for the one monitor. Mounted that way, it won't move if the house was to blow over. -
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Yes. The ultrawide aspect means the vertical dimension is less problematic than typical 16:9. This is also one reason that monitors that have adjustable height (both up and down) cost a bit more. TVs typically won't have this, nor will cheaper monitors. Finally, as I said above, a VESA mount is worth considering. Though not necessary (and not necessarily cheap lol), there are units that are fully articulating and can adjust height through a fairly broad range. I actually got a fairly cheap dual head mount (total something like $50 I think) to use the two mounts combined for my G9...just haven't got that far yet lol -
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Uhhhh....yeah. I just last Christmas got "approval" for the 49" G9 lol...somehow I don't think that's happening again any time soon. It looks magnificent though.

I will say that monitors this big take up a lot of space, so I would suggest some type of VESA mount arrangement if at all possible. Also, it does take some getting used to...especially when *not* gaming; it's nice to have the equivalent of two 27" screens side by side, but you do wind up turning your head a lot, and for quite a distance. And, it's a totally stupid software thing...but some apps don't know how to stay put when resuming from sleep/being minimized. Instead of 'snapping' back to their original left or right half-screen position, they'll just pop up in the middle somewhere or off one side. Easy to correct, but recurrent and annoying as hell.

EDIT: You know, I was looking over the specs for this monitor...it requires DP 2.1 for 'full-speed' refresh rate (240Hz) performance at native resolution... I'm using an Asus TUF 4090 ATM, and I also have a couple 3090s (eVGA, Asus) and a 3090Ti...all very high-end GPUs...and none of them has DP 2.1 support. WTH kind of card would you have to have to use DisplayPort with this thing? Yes, the cards and the monitor support HDMI 2.1, so it would still work...but all this time the standards in monitors are moving away from HDMI and toward DP. Just look at the ports: typically 2 or 3 DP, but as time goes on, as few as a single HDMI connector.