

Everything posted by kksnowbear
-
Not so sure about that. I tested mine just now, and it boots in ~20 sec (past the GPU LED) even though I just turned the PSU switch off and left it for several minutes. Granted, I suppose it *could* still be training memory in that short a time span - but when I've seen 'training', it usually takes much longer. Several minutes. My impression is that this is what Power Down Enable does, as opposed to just Memory Context Restore. It makes sense that some people could run PDE off and think it wasn't different from MCR alone, if they aren't turning off the PSU (who does?). My take is that Asus probably wouldn't bother with two separate settings if they weren't different things.

Now, again, I know it was changed recently, so maybe it didn't work like that all along. Can't say I've had to mess with it that much; I always set both on and that's that. I also did not specifically test what happens if I turn off PDE *and* switch the PSU off, so that would be necessary to say 100% it causes retraining. Maybe some day when I have time.

I believe it's accurate to say that, for the boards which have this issue to begin with, the BIOS typically features a setting for a training voltage - so you don't have to do a 'blanket' voltage increase just to solve training issues. I still sometimes use a slight increase on RAM voltage, especially if all the slots are populated (which I almost always do, for various reasons). Not usually more than 5% (~0.07v at 1.35v). This does seem to help with stability, but that's been the case for many generations of hardware now, not just the AM5 stuff. Voltage "fanout" is a known electrical phenomenon, and it definitely applies to motherboard RAM slots.
-
But you've mentioned several times that your 4090 gives you a frame rate in DCS that exceeds your monitor's 120Hz refresh rate, so you cap frames at 117. Even if a 5090 were an improvement, how's that going to increase your monitor's refresh rate? You'll be generating frames that your monitor cannot display. As far as the future and other games go...well, unless you're getting a native render rate around 100-120, MFG is likely to do more harm than good (please see the HUB video linked above, and note that this is more likely in games like flight sims, where your view is constantly and quickly changing). And if you *are* getting 100-120 "non-magic" frames, why would you pay $2150 (possibly much more) for more frames, when your monitor cannot display more than 120 regardless?
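The arithmetic here is simple enough to sketch in a few lines of Python (a toy illustration using the numbers from this thread, not anyone's actual benchmark):

```python
def displayed_fps(render_fps: int, refresh_hz: int) -> int:
    """A monitor can only show as many frames per second as it refreshes;
    anything the GPU renders beyond the refresh rate is never displayed."""
    return min(render_fps, refresh_hz)

# Illustrative numbers from the discussion: a 120Hz panel, with the GPU
# either capped at 117 fps or rendering 200 fps on a faster card.
print(displayed_fps(117, 120))  # 117 - capped just below refresh
print(displayed_fps(200, 120))  # 120 - the extra 80 frames are wasted
```

In other words, once the render rate exceeds the refresh rate, a faster card changes nothing the monitor can actually show.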
-
Obviously. Realistically, not for anything of any statistical significance. Beyond a frame rate you've indicated several times you already cap, because your 4090 exceeds your monitor's 120 refresh in DCS? So you'll be buying a new monitor (to get a higher refresh rate; see the $4300 breakdown above)...or generating frames your monitor can't physically display. Alrighty then.

You keep trying to impose a grade-school explanation of simple supply-and-demand economics... Nvidia is limiting supply so that demand drives up the price. Pretty simple, really. Most people seem to get what's happening, anyway. No, you wouldn't necessarily see AMD undercut them, because some people won't buy an AMD GPU regardless. AMD is smart enough to realize there's no point cutting their own throats to entice people who aren't going to buy anyway.

Only if you impose a simple 'supply and demand' argument. Look around online; there are plenty of reputable sources discussing how Nvidia is limiting supply to levels not seen before. Again, that 'law' exists only as a means to explain an inversely proportional relationship at a grade-school level. It only explains what happens *when* supply goes down; it doesn't account for *how*. Nvidia isn't controlling the entire GPU market, and I didn't say that. They're controlling the Nvidia 50-series GPU market.
-
Your own personal experience buying a few GPUs does not a market make. Being on a wait list (still) doesn't guarantee a card. (And yes, we all now understand you'll be buying a 5090, as I'm sure was already the case before this discussion ever started...see above re: target market. Can't wait to see your new sig...lol) No one said anything about a monopoly. And what Nvidia is doing with 50-series supply has zero to do with AMD or the 4080. As I said already, Nvidia is perfectly happy if some people believe that simple supply and demand explains their behavior. Some of us know better.
-
And yet, somehow, none of that changes the fact that being on a wait list doesn't guarantee a card. Demand will still be high only because it's relative to supply, which everyone already knows Nvidia is manipulating to control the market. As I (and others) already explained, the simple "supply and demand" argument doesn’t apply here. That "law" exists as a means to explain an inversely proportional relationship to grade-schoolers, not to account for questionable ethics and business practices where intelligent adults are concerned. Nvidia is perfectly happy if some people believe that simple supply and demand explains their behavior. Some of us know better. Besides that, most of what's driving demand in this case is people who will pay exorbitant prices to secure bragging rights - and Nvidia figured out long ago that those people will pay, even though the facts show that in the overwhelming majority of cases, the real value just isn't there.
-
There is no statement from Nvidia regarding 'target consumers'. You are expressing an opinion which is not based on factual knowledge. (You'll be good enough to tell us if you do, in fact, have first-hand access to such knowledge from Nvidia. I suspect you do not.)

Factually, 'fake frames' don't improve performance *at all* unless supported in a given game. AI does nothing for sheer horsepower, and 'fake frames' come with drawbacks, such as input lag and "artifacts". Factually, the gain in performance without the smoke-and-mirrors nonsense is small, and even then it only applies to those running 4k - which, factually, are far and away in the minority. The vast majority won't see even these 'disappointing' gains (per Steve at HUB), and again, as I posted previously: even at 4k, we're seeing "no improvement in cost per frame".

Nvidia has not in any way limited 5090 sales specifically to 4k monitor users, nor indeed restricted sales of those units to any particular 'target' consumers. In my view, the 'target' is anyone who has money enough to blow on being able to brag about getting a 5090 before anyone else, even though unless they meet very specific conditions, they aren't getting anywhere near what they're paying for. (And that's *assuming* MSRP isn't a joke that may yet be rendered moot by politics.) Being on a wait list doesn't put a card in anybody's hands. As many (many) learned with other GPU releases, orders/pre-orders can be cancelled by vendors with zero notice or explanation. And there are also quality issues in many titles.

I watched a video yesterday from HUB saying that they recommend MFG (the 'smoke and mirrors' that the 50 series brings) *only* when you're getting a base render rate of 100-120 FPS anyway:

https://www.youtube.com/watch?v=B_fGlVqKs1k

...so if you're already getting 100-120 FPS without *any* 'magic', why would anyone need to introduce problems like input lag, artifacts, and poor image quality? (Particularly if their 4k monitor only refreshes at 120 or even 144 lol.) Do we just need to brag about 350 FPS that badly, when the monitor cannot physically display that many frames? So now the 4k monitor needed to realize any real benefit from 50-series smoke and mirrors should also be a high-refresh model?

If someone has consistently said their beautiful 48" OLED monitor is 120Hz, and their 4090 gets over 120 FPS, so they cap frames just below the 120 refresh of their monitor... how's a 5090 supposed to make the monitor display more frames than it's even physically capable of?

So now, if we assume the best case for the card itself (MSRP at $2000), even conservative taxes put you up near $2150...plus how much for a 48" 4k OLED high-refresh monitor, another $2150 after taxes? So, $4300 all in? Someone please tell me I've got the math wrong here... (BTW that's just a price for a 144Hz monitor I found...so you'd still only get 144 at most; it seems a given that a 240Hz model would be even more outrageous...but I had trouble finding a 48" 240Hz 4k OLED unit, though admittedly I didn't look too hard.)
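For what it's worth, the back-of-the-envelope math above checks out. A quick Python sketch (the ~7.5% tax rate is a placeholder assumption, as is pricing the monitor at the same $2000):

```python
# Illustrative figures from the post: $2000 MSRP card plus ~7.5% tax,
# and a similarly priced 48" 4k OLED high-refresh monitor.
TAX = 0.075
card = 2000 * (1 + TAX)     # ~2150
monitor = 2000 * (1 + TAX)  # ~2150
total = card + monitor

print(round(card), round(monitor), round(total))  # 2150 2150 4300
```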
-
I think you misunderstood. I said before (essentially) what you said after, in terms of Nvidia abusing the mechanism. I've said previously it's not really explained by the classic 'supply and demand' argument. Please see here: The 'other discussions' I consider off topic are, as I described: These are clearly not on the topic, particularly once the title was changed. I believe we agree in principle at least, on the questions of "why can't I afford the one I want?" and "do I need that overpriced crap at all?" (although I might not express the questions exactly in those words).
-
IMO it's a fallacy to blindly apply "supply and demand" here, when it's already well known that Nvidia is intentionally manipulating things by throttling the supply. I think it's misleading to suggest that, in the classic sense, this can be accounted for with simple "supply and demand" argument. Pretty sure that "law" is intended to account for/explain the relationship between supply and demand, not to justify questionable ethics and business practices.
-
Sorry if it wasn't clear. It happens I agree with you about costs, prices, etc - as it applies to the 50 series GPUs. I was referring to nonsense about how many FPS humans can see, claims of USAF test reports that cannot be proved, and whether/why there are no modular GPUs that can be upgraded. The business about supply and demand is misplaced, I think, as well. The topic is "Nvidia 5 Series cards" as per title change.
-
I have four AM5 boards in my shop lately. Three are Asus; one is MSI. The MSI setup is not on a bench right now, but I've had the three Asus boards accessible most recently. When I updated the BIOS on these Asus boards, I noted that all of them now switch Power Down Enable on whenever Memory Context Restore is switched on. I can tell you for absolute certain it was not like this in older BIOS versions; the reason I know is because previously, you always had to change both. Now when you flip Memory Context Restore on, Power Down Enable changes automatically. In fact, if you even just click Enable for MCR (even though it's already Enabled), PDE will switch if it's not already Enabled.

It's not just that the settings behave as they do now, but the fact that this behavior was changed, recently. That's very telling. This would seem to indicate what Asus thinks is the best configuration, at least for the majority/in most cases. I seriously doubt Asus would expend the resources to make a change like this unless it was appropriate/needed.

I am unaware of any problems with this configuration, having used it on (at least) six different AM5 builds now...though I don't know what 'memory latency benchmarks' refers to specifically. I use memtest on every build and it passes fine. Never seen an unstable machine pass it. Then again, I personally quit messing with memory overclocking, as I realized that whatever tiny gain you might get from it is not worth the hassle and cost IMO. So I guess my perspective is "don't ask for problems". I cannot afford to get bogged down in instabilities for free, and (so far) *very few* people have been willing to pay for the time associated with overclocking as an extra. A couple frames more on 100 is not worth it to them either, it would seem.

"Overclocking" has always carried the potential for unusual behavior. The manufacturers have all said this, pretty much since they even acknowledged overclocking to begin with. Any individual can overclock if they want, but it's inaccurate and misleading to say a certain behavior is "normal" if you intentionally change the config for the sake of overclocking. (LOL If you want proof, just try to get warranty service/return on a board because your overclock fails, or because booting takes too long when you switch off Context Restore so you can overclock... aaaaaand we're done here.)
-
Then - precisely as I said up front - your own configuration is what's causing the issue you have, and you're basically choosing to accept the long boots. Entirely your choice, of course. However, these BIOS settings exist to avoid this behavior (within the limits of the hardware in use... BTW there are two settings, not just Context Restore). It's not "normal" to have that behavior. And not updating the BIOS isn't helping with the problem either.

It is a myth that memory training (and the associated long boot times) is 'normal' and unavoidable on AM5 platforms, provided they're properly built and configured. If someone chooses to overclock, that's entirely up to them. But it is perfectly 'normal' for these machines to have issues like these if overclocking isn't done within the abilities of the hardware. You've pushed the overclock so far that, with the memory you're using, you're forcing memory training every boot. Even in the BIOS settings, it actually says Context Restore will avoid training "when possible" (this is also in the BIOS manual, at least for my ROG Strix X670E model).

It's not the hardware's fault that your configuration causes this; you can't have it both ways. The switch is in the BIOS, and it's your choice whether or not you use it. It works just fine, provided you're not trying to exceed what the hardware is capable of. Incidentally, AMD actually was very forthcoming about all of this with the Zen4 release. Generally speaking, it will work even beyond what they actually specify. Like I said, it's only a problem when someone gets too crazy and/or has an improper configuration for the hardware being used.
-
No, memory training isn't always 'normal', and the machine can be configured to skip it (which can in turn stop long boot times, if other factors are in order). And yes, I'm familiar with everything that article says, but it's absolutely misleading, because it doesn't tell you anything about what you can do to skip training, which the boards support - pretty sure any ROG Strix board will. The article you linked basically says "just get used to it", which obviously doesn't change anything. If you read that and nothing else on the subject, then yeah, you're going to be stuck with it.

Then it's absolutely true to say your specific setup is causing the issue - as I said in my first post. And, as I said, with that setup, changing the CPU won't likely make any difference. But it is not true to say it's normal, nor that it happens on all AM5 CPUs/boards. That's like saying "I put 5,000 pounds of bricks in my sub-compact 1500cc car, and it's slow... therefore all 1500cc cars are slow".

If by "overclocked" you mean running XMP or EXPO profiles, that's rated performance and it should still work fine - again, unless the builder gets too crazy and/or it's being caused/aggravated by configuration issues. But hey, have it your way
-
You're referring to memory training, and it's not necessarily 'normal' or a given on these machines. I've built several - including my own ROG Strix X670E-F and others, including builds for DCS players - and the slow booting isn't normal or automatic, I can tell you. (All of these have had four modules, BTW.) What speed is your RAM? CAS latency? Have you made the relevant BIOS changes? It is true that if you populate all four slots with certain modules and/or certain speeds/CAS latencies, you can run into problems. But that doesn't necessarily make it 'normal' or impossible to change. It can work just fine, with the right setup.
-
If so, you probably have something in your setup that's causing or aggravating the problem. And, if that's the case, it won't be any better on the same board just by changing the CPU. If you're willing to provide details, you might be able to improve this. Where did you get the machine? (pre-built, DIY, etc)
-
Of course, you have a perfectly valid point. Some people might get the base model FE cards, but some may prefer better models from AIBs, for their own perfectly valid reasons. That's subjective. Some won't be able to get base models at all, due to historically unprecedented poor availability for these units. They'll wind up paying more.

Factually, this happened several times in the past, too, with prior generations of cards. Given the very poor availability, it seems practically assured this time around as well. (There might have been different causes... but the end result is the same: higher prices.) Some (myself included) feel this is just another way to drive higher prices while still advertising a lower MSRP, even though very few will likely get cards for that low price. (There is very little way for anyone to prove this isn't true, BTW.) My guess - which seems to be echoed by recognized, expert reviewers - is that the vast majority will not get a 5090 (of any variety) for the $1999 MSRP cited for only the FE model.

Some prefer to only look at the lowest possible cited price - the Nvidia MSRP - since doing so artificially inflates the value of performance vs cost. Of course, Nvidia likes people seeing it this way, as it makes their product look better, even though some recognized experts are saying otherwise (and your common sense is telling you the same thing lol). Some people are very upset that the 50 series so far appears to be proving it's not all Nvidia wants us to believe. The reality is that not all 5090s are created equal, thus it's inaccurate to assign one MSRP to all 5090s.

Please stop trying to pick a fight with me. The thread is about 50-series GPUs, and not just pricing.
-
I really hate to say so... but if you're still using the Seasonic 760W PSU in your sig, then I am all but certain it's too small for a 4090. 4090s are known to draw surges of 700W (GPU alone). This is among the reasons new PSU specs require units to withstand "excursions" of 2-3 *times* the rated output of the PSU. Unless I'm mistaken, any 4090 will carry a manufacturer's recommendation of at least 850W, and some actually recommend 1000W for certain models. I personally have tested two different name-brand, high-end 850W PSUs that shut down when loaded by a 4090. I have built machines around, or installed 4090s in, five machines now, and have never recommended or used less than a 1000W PSU, nor would I. I hope you can accept this is a genuine effort to be helpful, and I wish you the best.
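To put rough numbers on it, here's a little Python sketch of PSU sizing. The surge figure, rest-of-system draw, and 20% headroom are illustrative assumptions on my part - always defer to the GPU manufacturer's recommendation:

```python
import math

def min_psu_watts(gpu_surge_w: float, rest_of_system_w: float,
                  headroom: float = 1.2) -> int:
    """Rough sizing: cover the GPU's transient surge plus the rest of the
    system, add headroom, and round up to the next 50W tier."""
    needed = (gpu_surge_w + rest_of_system_w) * headroom
    return int(math.ceil(needed / 50) * 50)

# Illustrative numbers: ~700W GPU transient + ~200W for CPU/board/drives.
print(min_psu_watts(700, 200))  # 1100
```

Even this crude estimate lands well above 760W, which is why an 850W unit can still trip its protection under transient load.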