

Everything posted by kksnowbear
-
That's actually not entirely accurate. In fact, it depends on the model of the GPU, the PSU, and the connection between them (which you don't specify). The GPU could be power limited in its VBIOS, or by the cable connection. Some 4090s are limited to 450W in their VBIOS, and some cables limit the GPU to 450W... some power supplies, even 1000W units, have shipped with 450W cables. It just depends. Your system might be such that 850W is adequate, but that absolutely does not mean all systems can run any 4090 GPU with any 850W PSU.

Factually, a 4090 has been proven to draw 700W by itself, and that's measured at the GPU with accurate equipment, not with software or wall-plug monitors, which are unable to accurately sample rapid excursions. Also, all of the big three GPU manufacturers left in the US market (Asus, MSI, and Gigabyte) specifically recommend 1000W PSUs for their top-of-the-line 4090 models.

TBH, if you can afford a card like a 4090, it makes no sense to save $50 by cheaping out on the PSU. (For the record, yes, I do own a 4090.)
-
I'll see your Token Ring, and raise you file transfer over a parallel cable (which was an upgrade over the serial port LOL)... I actually had a plain old hub before I ever got my first router c. 2000 (the 'outside' connection was still dialup before that... then we got 1.5Mb DSL, woo-hoo!!). All the cable I've bought and strung has been Cat6... but of course, that one run is the bottleneck. I had hoped to be out of this place before it became an issue.
-
Smart... in my case, the house was already prewired by someone else and I can't get into the walls, etc. Or at least, I've been too lazy so far to do it. Can't say I've had any trouble with lengths: as long as I've respected the distances within reason, I've gotten rated speed or better (unless something else was at issue).
-
Getting ready to update board and cpu but need help
kksnowbear replied to Jeb's topic in PC Hardware and Related Software
Also a good point, though I might argue that DLSS isn't for everyone. I would also point out that newer AMD GPUs (5000 series and later) support what's called RSR (Radeon Super Resolution), which, unlike DLSS, does not require any specific game support or implementation. It just works - and, based on my own first-hand experience, it works well. In any game. (Note this is not to be confused with AMD's FSR, which does require specific support/implementation. IIRC, RSR doesn't support GPUs other than AMD's own 5000 series and later cards, whereas FSR actually supports even some Nvidia cards.)
-
Getting ready to update board and cpu but need help
kksnowbear replied to Jeb's topic in PC Hardware and Related Software
Not a bad idea, and I couldn't agree more about Nvidia, but OP doesn't specify whether AMD is an acceptable alternative. Many people won't even consider it - which I think is really unfortunate and misguided.
-
Cat5e is good enough for even gigabit (Gb) speeds. As mentioned above, I am also doubtful you actually have 5Gbit service (especially in a residence). Not impossible, just not at all common. Cat6 is good to 10Gb, which is well beyond what most homes have (even on fiber, ~2Gb is usually the upper limit).

The house I'm in was prewired with Cat5e. I have (only) one cable going from the closet where the service enters to upstairs where most all the computers are; this run is probably 100-150'. I have Gb service, and routinely get 1.25Gb on the machines upstairs via that Cat5e cable. That wouldn't be possible unless Cat5e were capable of gigabit speed (the distance limit is 328'/100m).

The connection going to the router is limited by your ISP, either at the modem (if you have one), at the outside interface panel, or whatever is next in the 'up line' - and is not likely greater than gigabit... Realistically, the only thing Cat8 cable will do in your scenario (with the assumptions listed) is most likely cost more.
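Purely as an illustration of the lookup involved, here's a rough sketch of the commonly cited category ratings (figures quoted from memory, so treat them as approximate; real-world results also depend on run length, terminations, and interference, not just what's printed on the jacket):

```python
# Rough sketch, not an authoritative reference: commonly cited Ethernet
# cable category ratings (rate in Gbit/s, supported run length in metres).
CABLE_RATINGS = {
    "Cat5e": (1, 100),    # gigabit out to the full 100 m / 328 ft
    "Cat6":  (10, 55),    # 10GbE, but typically only on shorter runs
    "Cat6a": (10, 100),   # 10GbE out to the full 100 m
    "Cat8":  (40, 30),    # 25/40GbE on short runs - data-centre territory
}

def cable_is_already_enough(category: str, service_gbit: float) -> bool:
    """True if the cable's rated speed already meets or exceeds the ISP service speed."""
    rated_gbit, _rated_metres = CABLE_RATINGS[category]
    return rated_gbit >= service_gbit

# With gigabit service, even Cat5e isn't the bottleneck - so Cat8 mostly just costs more:
print(cable_is_already_enough("Cat5e", 1.0))   # True
```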
-
Getting ready to update board and cpu but need help
kksnowbear replied to Jeb's topic in PC Hardware and Related Software
TBH it really depends on your goal. Your situation is interesting, because - *if* we take the CPU usage plots as accurate over time - then both the CPU and GPU are working hard, probably because of your settings. (BTW, on that note, I'd consider lowering the settings to ease things up a bit if it were me.) I'd also be very curious about the CPU usage shown in your pic. The graphs are great, but don't really show anything about what's causing all that load, and I somehow doubt it's all because of DCS. This needs investigation IMO.

That said, back to your goal. (Note the assumption here is that you want new hardware where it's available; obviously buying used stuff can be much cheaper, but the exact cost depends on a lot of factors.)

If you need a decent upgrade now on a tight budget, then I'd say it might be best to find a good deal on a 3070Ti. They can be had for as little as $425 new (maybe less). That's a considerable step up from the 2070S (~35%, though that doesn't always translate directly into a proportional increase in FPS). Also, if there's budget for it, I'd consider changing the 9600K to a 9900K. There's a cost (~$300), but in this 'budget priority' scenario, it's less than a new motherboard (which will also require new RAM). Speaking of RAM, I'd consider 64G at some point (although this is not urgent and is optional from a strictly budget standpoint). Doing these things will ultimately get you the best upgrade for the cost IMHO.

If, OTOH, your goal is to replace the entire system via incremental upgrades, then yes, you could start with the new motherboard/CPU/RAM - with the understanding that the 2070S will be the constraining factor at that point and will need upgrading soon thereafter. It's worth noting that this last point (the 2070S being the limiting factor) essentially returns us to the first argument above for doing the GPU first, even if your intent is to replace everything, because a better GPU will work well enough with what you have now and can then be re-used when you finally do change motherboard/CPU/RAM.

But, in any case, it depends on your goal - which almost always depends on your budget and timeline. HTH
-
Well... I might not consider the GPU unimportant lol, but I think I see your point. PSU is definitely important.

1200W isn't necessarily too much, really... there is a legitimate argument that running the PSU at ~50% load is most efficient (therefore less power used and less heat generated). If a 4090-based system runs ~650W, then a 1200W PSU puts you right around 50% load when gaming. The exception is when you're *not* gaming; the power supply is hardly loaded at all and efficiency drops... but there's not much you can do about that, and it's a different discussion anyhow.

The only 'problem' there might be with a top-end, very high wattage PSU is cost. But, as discussed above, if you spend the kind of money to have a 4090 in the first place, then a more expensive PSU isn't outrageous. It doesn't make sense to 'cheap out' on a PSU to save $50 when the balance of the system exceeds $2000.
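Just to show the arithmetic behind the "~50% load" point (numbers taken from the post above, not measurements of any particular system):

```python
# Back-of-the-envelope PSU load calculation using the figures quoted above.
system_draw_w = 650     # rough full-load draw of a 4090-based system while gaming
psu_1200_w = 1200
psu_850_w = 850

print(f"1200W PSU: {system_draw_w / psu_1200_w:.0%} load")  # ~54% - near the mid-load sweet spot
print(f" 850W PSU: {system_draw_w / psu_850_w:.0%} load")   # ~76% - far less headroom for spikes
```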
-
If I'm looking at it correctly, yours is one of the aforementioned GPUs for which the manufacturer recommends a 1000W PSU (the "Gaming OC" model?). It appears that model uses a VBIOS that supports a 600W power limit. If whatever cable arrangement you use supports 600W sideband signalling, and/or you have a PCIe 5.0-compliant PSU with a 12VHPWR connector, you should be all set.

The lesser Windforce model has a PSU recommendation of 850W and, accordingly, appears to have a 479W power limit in VBIOS (or possibly 500W; not 100% certain, and I don't have one to test). This is almost certainly because of the lower-spec hardware (VRMs) used on that model. Again, whether the 'extra' power allowed for by the 600W cables increases performance at all is not at all certain (and definitely not if the limit is lower in VBIOS, regardless of cable or PSU).
-
The terms "minimum" and "recommended" are, pretty much universally, two different things. At least according to the specifier, you must have at least the minimum, but you really should have the recommended. Although I'm admittedly not checking specs right this minute, I believe the 'big three' GPU manufacturers still left in the game in the US since eVGA bailed (i.e. Asus, Gigabyte and MSI) specify an 850W PSU as the minimum for some of their 4090 models - but they also recommend a 1000W unit for some of their high-end models.

Of course, some of the 'experts' here have tried to insist 850W is enough, but the actual load is going to depend a lot on the exact PSU and also on the circumstances. Considering it's been proven that a 4090 can draw 700W *by itself* in the right circumstances, plus factoring in the kind of money someone will have spent to have a 4090 in a system in the first place (and the cost of the balance of the system as well)... I can't imagine why anyone would want to 'cheap out' on a minimal power supply. Never a good idea TBH.

I have a 4090 myself, and I have measured 650W as a max from the wall - but there are a *lot* of "howevers". One is that most consumer measuring devices simply are not going to be accurate/sensitive enough to 'catch' the extremes, which usually occur faster than most meters can sample. This applies equally to software, and/or displays driven by software (such as on the front of a UPS). Any professional meter good enough to measure this kind of thing is going to be extremely expensive, and not something any typical consumer would ever own.

Additionally, many people (for various reasons) use the 'tail' that came with their GPU, because their PSU doesn't have a 12VHPWR connector. Some of these tails are set up to limit what the GPU can draw, by omitting the sideband signalling pins. But anyone can buy a 600W adapter cable and slap it on a PSU that cannot deliver that kind of power to the GPU. Whether the 600W cable makes a significant difference in performance might be arguable... but what *is* clear is that it's stupid easy to 'fool' a GPU into thinking it's OK to draw 600W, even if the PSU isn't up to it.

Finally, there is also the VBIOS. The GPU's VBIOS will have a certain power limit programmed into it. Usually, this value is based on the actual hardware, because by design it considers components (like voltage regulators) on the card itself. So a GPU built with VRMs that can only handle 450W will have a corresponding power limit. Low-end units will have lower power limits than high-end units, which are typically the only ones you see with 600W power limits in VBIOS. Thus, even *if* you program a card with a VBIOS that has a higher power limit value, you still don't change what the hardware itself is capable of, and you could damn well wind up frying your card (or 'bricking' it altogether) by screwing around with something you shouldn't be.

I see a real potential here for people who don't know what they're doing - because they think it's a "free" GPU performance upgrade - to buy these 600W cables and fit them to a PSU that might not be appropriate for them, and/or to a GPU that is VBIOS limited anyway. Stupid and dangerous, IMHO. (I've already seen one member on this very forum who has done exactly this... there was a thread about it, but he got butthurt when I pointed out issues in his results and deleted the thread... lol, which of course doesn't change the facts.)

Actually, that depends on how the cable is made. You can create a cable that will 'tell' the GPU it can draw 600W, regardless of the actual PSU and cable... it could be a hardwired SATA power cable on a 400W PSU, and the GPU can still be 'fooled' by a specifically constructed adapter/cable. I AM NOT RECOMMENDING ANYONE SHOULD DO THAT - EVER - but the reality is that it's entirely possible. Of course it won't work very well, if at all... and this is what's wrong with people just buying these connectors/adapters without understanding the other factors involved.

As usual, just because someone sells something that will connect two otherwise incompatible devices doesn't mean it's going to work (and when it doesn't, it can fail spectacularly). Whether you attribute it to George Carlin or WC Fields, there's a saying: "If you nail two things together that have never been nailed together before, some schmuck will buy it from you." Hardware makers - especially cable manufacturers, it seems - are constantly 'nailing two things together', and uninformed or misguided people will line up to buy them. Factually, just because you can plug this into that doesn't mean it's going to work, nor that it was ever intended to be connected that way. And again, you can damage stuff doing this.

Anyhow, as always, the foregoing is strictly my opinion - which is typically based on verifiable fact.
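For anyone curious about the mechanics of that 'fooling', here's a sketch of how I understand the 12VHPWR sideband ('sense') pins advertise a power budget to the GPU. The values reflect my reading of the ATX 3.0 / PCIe CEM 5.0 material, so double-check the spec before relying on them:

```python
# Sketch (my understanding of the ATX 3.0 / PCIe CEM 5.0 12VHPWR scheme - verify
# against the spec itself): each sideband "sense" pin is either tied to ground
# by the cable/PSU or left open, and the combination advertises a power budget.
SENSE_TO_LIMIT_W = {
    # (SENSE0, SENSE1): power limit the GPU is told it may draw
    ("gnd",  "gnd"):  600,
    ("gnd",  "open"): 450,
    ("open", "gnd"):  300,
    ("open", "open"): 150,
}

def advertised_limit_w(sense0: str, sense1: str) -> int:
    """Power budget the GPU assumes, based purely on how the cable wires the sense pins."""
    return SENSE_TO_LIMIT_W[(sense0, sense1)]

# Which is why a cheap adapter that simply grounds both pins can 'tell' the GPU
# it's fine to pull 600W, regardless of what the PSU behind it can actually deliver:
print(advertised_limit_w("gnd", "gnd"))   # 600
```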
-
FWIW, my opinion is that the 64G will do you more good than any perceived loss from the difference in latency. There are numerous threads about DCS using over 32G of RAM, and if you land in that position even momentarily, the adverse effects of running low on RAM will hurt far worse than the (almost certainly imperceptible) difference between CL14 and CL16 at the same clock could ever offset. Don't expect a frame rate increase; it isn't likely at all. The persistent claims of stutter improvements are all subjective IMO. Once again, the foregoing is entirely my opinion - an informed, professionally trained opinion based on 40+ years' experience specific to this discipline, but an opinion nonetheless. Best of luck.
-
Yes. It's been documented that 4090s can draw in excess of that (though obviously not continuously). I know of at least one instance where the excursions were measured at 700W, if I recall correctly. If the cable isn't set up for 600W sideband signaling via the 12VHPWR connector, the GPU will inherently limit what it draws from the PSU. This behavior is by design, allowing the GPU to "know" how much power the PSU can provide. Why do you ask?
-
DDR4-3600 CL18 is 10ns absolute, or 'first word', latency. He's got CL36 DDR5 now; running at 3600 with default timings, that's *double* the absolute latency (20ns). A lot of people went to DDR4-3600 CL16 as a 'sweet spot', which is 8.88ns - faster still than the CL18 stuff. I believe this is among the reasons early DDR5 reviews were not very flattering: DDR4, with its inherently lower CAS latencies, will often outperform DDR5 with CAS latencies up around 36(+). Even DDR5-8000 at CL36 still has higher latency (9ns) than DDR4-3600 CL16 (quick math on that below). It's not a new thing, but basically the chip manufacturers can really only increase the speed of the 'devices' on the memory sticks with a proportional increase in CAS latency.

About the stuttering with 4 modules: precisely.

About the RGB lighting... yeah, it's unfortunate, but I know exactly what you mean. Still not clear if the shop only sold the memory or the motherboard too... if the latter, I'd be for getting a refund myself and going with a different board. It might not make things better - but it isn't likely to be any worse! I had a totally different experience with an Asus board and memory that was very similar. And I think all the hype about the burning CPUs was exactly that - a couple of instances where loudmouth 'reviewers' sensationalized a very (very) isolated fraction of the whole and made a public spectacle out of it, mostly, I'm sure, to further their own notoriety as opposed to any other goal. These 'influencers' have gotten to the point where they're not much better than the news media, which can always be trusted to sensationalize everything.
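Here's the quick math behind those latency figures, in case anyone wants to check them:

```python
# First-word (absolute) latency: the CAS latency in clock cycles divided by the
# actual memory clock, which is half the MT/s transfer rate (DDR = two transfers
# per clock). Equivalent to 2000 * CL / (MT/s).
def first_word_latency_ns(transfer_rate_mts: int, cas: int) -> float:
    clock_mhz = transfer_rate_mts / 2
    return cas / clock_mhz * 1000

print(first_word_latency_ns(3600, 18))   # 10.0 ns  - DDR4-3600 CL18
print(first_word_latency_ns(3600, 16))   # ~8.89 ns - DDR4-3600 CL16 'sweet spot'
print(first_word_latency_ns(3600, 36))   # 20.0 ns  - DDR5 CL36 run at 3600
print(first_word_latency_ns(8000, 36))   # 9.0 ns   - DDR5-8000 CL36
```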
-
Well, I am truly sorry for your woes, but we must caution ourselves against inaccurate advice in public forums, as it misleads people who (often) tend to take it as gospel. I understand what you're trying to convey, and I appreciate your making the distinction.

Interestingly, the memory you cited above is very similar to what I used, and I got four sticks to run just fine (see pic below). In fact, I think one of the few differences between yours and mine is that mine is not even EXPO, yet it worked OK in my case. I was trying to avoid being too specific, so as not to step on toes - but I honestly believe it's the ASRock motherboard that's the biggest (if not only) problem here. Just my opinion.

My point about the shop was that you shouldn't be without recourse. If they talked you into something, they should make it right - up to and including a full refund on *everything* they sold you. (If you had some parts already, got them elsewhere, etc... well, you can't fairly blame that on the shop. Again, I ran memory like that at higher than 3600 with no problem.)

I think your advice is perhaps the best: use two modules, avoid the potential for problems. Sorry if it seems I'm arguing with that, because I'm not. I'm just saying that there's a lot more to it and (as a professional system builder myself) that it matters who you work with. FWIW, I still disagree that 3600 will automatically cause stutters, regardless of VR or not. Something else is going on - in my opinion. As I explained previously, if that were true then logically everyone with 3600 (even two sticks) would have stuttering - and I just really don't think that's true. Good luck.

The RAM in the build I showed earlier, which successfully ran 4 passes (6 hours) in MemTest8 at 4800, is shown below. That's in addition to the FireStrike/TimeSpy stress tests I ran, countless benchmarks, and (perhaps most important) zero call-backs from the client. I'm actually fairly sure it's been running at its rated speed/6000 ever since, but I can't readily prove that ATM. I don't have another AM5 setup right now, so it would mean asking that client to bring his back... ain't gonna happen.
-
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Well, that might be true (see below*)... but the monitor we're discussing doesn't support G-Sync - it supports FreeSync Premium Pro. At least that's what I see in the specs.
* FWIW, I have read that there are some rare units that will actually support G-Sync Compatible over HDMI 2.1... but I'm not first-hand familiar with them.
-
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Well, to be accurate, your 3090 should have a single HDMI 2.1a port, which is enough to support what the 57" Samsung requires. Yeah, I got a wall mount for two monitors to use with my 49". The wall bracket is bigger, and I intend to use both VESA mounts on it for the one monitor. Mounted that way, it won't move even if the house were to blow over.
-
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Yes. The ultrawide aspect means the vertical dimension is less problematic than typical 16:9. This is also one reason that monitors with adjustable height (both up and down) cost a bit more; TVs typically won't have this, nor will cheaper monitors. Finally, as I said above, a VESA mount is worth considering. Though not necessary (and not necessarily cheap lol), there are units that are fully articulating and can adjust height through a fairly broad range. I actually got a fairly cheap dual-head mount (something like $50 total, I think) to use the two mounts combined for my G9... just haven't gotten that far yet lol
-
Samsung 57" Odyssey Neo G95NC
kksnowbear replied to rapid's topic in PC Hardware and Related Software
Uhhhh... yeah. I just got "approval" for the 49" G9 last Christmas lol... somehow I don't think that's happening again any time soon. It looks magnificent, though.

I will say that monitors this big take up a lot of space, so I would suggest some type of VESA mount arrangement if at all possible. Also, it does take some getting used to... especially when *not* gaming; it's nice to have the equivalent of two 27" screens side by side, but you do wind up turning your head a lot, and for quite a distance. And, it's a totally stupid software thing... but some apps don't know how to stay put when resuming from sleep/being minimized. Instead of 'snapping' back to their original left or right half-screen position, they'll just pop up in the middle somewhere, or off to one side. Easy to correct, but recurrent and annoying as hell.

EDIT: You know, I was looking over the specs for this monitor... it requires DP 2.1 for 'full-speed' refresh rate (240Hz) performance at native resolution... I'm using an Asus TUF 4090 ATM, and I also have a couple of 3090s (eVGA, Asus) and a 3090Ti - all very high-end GPUs - and none of them has DP 2.1 support. WTH kind of card would you have to have to use DisplayPort with this thing? Yes, the cards and the monitor support HDMI 2.1, so it would still work... but all this time the standards in monitors have been moving away from HDMI and toward DP. Just look at the ports: typically 2 or 3 DP but, as time goes on, as few as a single HDMI connector.
-
Absolutely. (It's worth mentioning here that there are 64G kits made up of two modules - so it is attainable, even within the AMD-specified limits.)

I'm not sure it's bad RAM, or even dissimilar modules (although I believe it is accurate to say you can't actually get an AMD EXPO kit with 4 modules, due to the limitations discussed)... I actually wonder if it doesn't have more to do with the quality of the motherboard, and/or that of the RAM itself, and/or the combination of RAM and motherboard. That being the case, it is most likely, as BitMaster said above, not something you could work around - with that hardware.

Which is what I was saying: whoever designed/spec'd the build is responsible, and if it was a shop then they should be willing to work out a solution. I'm trying very hard to be diplomatic and avoid a pissing contest if I say anything too specific about the hardware itself - but again, if I sold it from my shop, I'd stand behind what I sold. This support is the single most important reason it matters who does the build/who you work with, especially if you don't have experience.

My original comment (in this part of the thread) was "There are actual technical reasons it can be problematic, but just as well, there are real technical solutions." The technical reasons include hardware selection, and the solution - albeit in retrospect - might well involve different hardware. Still, for me, it comes down to who's designing the system, particularly if it was a "shop" (though it's not clear in this case if that means just a retail parts store or a place that actually builds machines). Retail parts salespeople are usually morons, but a business that builds machines should know better.
-
Yes, I am aware of all that... yet the results I've shown above are conclusive proof that the official specs are not absolute. And 4800 is a damn sight beyond the 'guaranteed 3600' - a full 33% more, to be exact, which is definitely significant (and in fact just as close to 6000 as it is far from the 'guaranteed 3600', so hardly 'significantly closer'). In case you hadn't noticed, that's a Zen4 CPU in the picture I posted. Looks like "AMD Zen4, 4 sticks = outperforming factory specs" to me.

There's also no proof at all that any system running at 3600 will have issues with stuttering. I'm sure, if I were to try, I could find someone who runs DDR5 at 3600 and has no issues with stutters. What makes me so sure? Well, among other things, at least according to AMD, that's the max speed you can get with 4 modules - but I'd bet there are at least a few people running DCS with four modules on an AM5 board without having stutters. If running 4 modules means no more than 3600, and 3600 means stutters... then logically, everyone with 4 modules would have stutters. Somehow I don't really think that's the case.
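The arithmetic, for anyone keeping score (figures straight from the post):

```python
# 'Guaranteed' speed per AMD with 4 modules, what the build actually ran, and the kit's rating.
guaranteed, achieved, rated = 3600, 4800, 6000

print((achieved - guaranteed) / guaranteed)      # 0.333... -> 33% above the 'guaranteed' 3600
print(achieved - guaranteed, rated - achieved)   # 1200 and 1200 - equidistant from both
```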
-
Which is why the person who designed the system should have been aware - and should stand behind it, as I described. Doesn't change anything: like I said, if I sold it, I'd make it work as represented or give a refund.

You *can* run 4 modules, and it doesn't have to be at 3600. Below is a picture of a build I did earlier this year, running MemTest 8: 64G, all four slots populated (second pic), running at 4800** (not 3600). If I'd played with it more, I'm fairly certain I'd have gotten more out of it; I just didn't have time, and the client was happy to get better than what AMD says is officially the max and glad to take the system without waiting for me to fiddle with it further.

** Yes, I know it's 6000 RAM, but that ain't the point lol... AMD says no more than 3600 with 4 modules - which clearly isn't always true. It simply depends on a number of factors.
-
Naturally, the "technical solutions" I referred to include hardware choice, albeit that is unfortunately in retrospect at this point. If this was the result of working with a competent professional, then they should be willing either to demonstrate that it can work or to issue a refund (provided it's within a reasonable time frame and no misuse or damage has occurred).