Everything posted by kksnowbear

  1. Most likely the two utilities are just using different types of loads (for example AVX vs not). I believe you can configure, at least to some extent, the testing that P95 does (rough sketch below). Don't know about the CPU-Z stress test as I don't use CPU-Z for that.
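     For what it's worth, a rough sketch of how P95's load type can be changed. Newer Prime95 builds let you untick the AVX/AVX2/AVX-512 boxes right in the Torture Test dialog; older builds read overrides from a local.txt in the Prime95 folder. The option names below come from Prime95's undoc.txt, so treat them as an assumption and verify against your version:

        ; local.txt - placed next to prime95.exe before starting the torture test
        ; Setting these to 0 makes Prime95 skip the heavier vector instruction sets,
        ; which usually lowers power draw and temperatures considerably.
        CpuSupportsAVX512F=0
        CpuSupportsFMA3=0
        CpuSupportsAVX2=0
        CpuSupportsAVX=0

     With those disabled the load looks a lot more like a 'plain' stress test; with them enabled you get the much hotter AVX-type load, which would easily explain two utilities behaving very differently.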
  2. Well, at least it's more realistic. As I've said more than once, by all means put whatever GPU you can afford in it...just accept that it's going to create a situation where the rest of the machine is hurting the GPU. And, if I'm being honest, that was *before* you said this is a PCIe 2.0 system. I have several Z68 boards, but I made sure when acquiring them that they all support Gen3. That's going to enter into this as well (some rough numbers below). A 1080Ti will be affected, a 2080Ti more so. There's nothing wrong (necessarily) with putting a too-powerful GPU in a build like yours...just that (for lack of a better way to put it) it probably won't "feel like" the upgrade you thought you paid for. IMHO your 2600K platform was already doing well to keep up with a 1070. (Note that a 1070 won't be affected by PCIe 2.0 vs 3.0; it's not fast enough.) But just like with the other parts...the faster a GPU you put in it, the more adversely all these other factors will affect the GPU. That's my honest opinion, FWIW.
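     To put rough numbers on the Gen2 vs Gen3 point, here's a quick back-of-the-envelope calculation (plain Python, nothing board-specific; real transfer rates land a bit lower once protocol overhead is counted):

        # Approximate usable PCIe x16 bandwidth:
        #   transfer rate (GT/s) * encoding efficiency * lanes / 8 bits per byte
        # PCIe 2.0 uses 8b/10b encoding (80% efficient); PCIe 3.0 uses 128b/130b (~98.5%).
        def x16_bandwidth_gb_s(gt_per_s, encoding_efficiency, lanes=16):
            return gt_per_s * encoding_efficiency * lanes / 8

        gen2 = x16_bandwidth_gb_s(5.0, 8 / 10)      # ~8.0 GB/s
        gen3 = x16_bandwidth_gb_s(8.0, 128 / 130)   # ~15.75 GB/s
        print(f"PCIe 2.0 x16: {gen2:.1f} GB/s vs PCIe 3.0 x16: {gen3:.2f} GB/s")

     Roughly half the bus bandwidth is why a genuinely fast card notices a Gen2 slot, while something like a 1070 never really pushes the link hard enough to care.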
  3. So you cannot provide any authoritative reference for your own statement that it's "not recommended". Likewise, you cannot provide any reference to corroborate your statement that what I'm doing is "Not something AMD says you should be able to do". And you still clearly don't know what I'm talking about. As a hint, you can't do it on a 5800X3D (to my knowledge)...so you're now saying you actually have done it on the 7800X3D, which you're saying you also have? PS I read and comprehend just fine, thanks. What I don't do is fall for it when someone pees on my leg and tells me it's raining: If you had just corrected yourself, and if it were just a "minor terminology slip" as you now say, then it wouldn't be necessary to say in the very same post that it's "not recommended". See? You didn't just correct yourself for an isolated slip. You 'doubled down' on yet another inaccuracy, just to make it sound as if you were right in the first place. And then you furthered it with your statement about what AMD said. It's not really just correcting yourself when you throw more inaccurate assertions on top. But let's assume I'm wrong...by all means, tell us: Not recommended by whom, exactly? And where does AMD say what you indicated?
  4. Absolutely none of which is the point. You said X3D CPUs have a fixed clock, and that's not accurate. I didn't say anything about whether it's the same as what people do with these Intels. I didn't say anything about what levels can be reached with unlocked multipliers, nor whether you do or don't. All that is just plain old distraction. I said your statement about X3D CPUs having a fixed clock is inaccurate. And it is. You said what I mentioned was "Not recommended"...but never indicated by whom. You now say it's "Not something AMD says you should be able to do". Where do they say that, exactly? By all means, enlighten us all. See, what you're doing there is trying to sound authoritative...but I doubt there's any real accuracy in these comments. I could be wrong, but I doubt it. LMAO And after all that, it's pretty obvious that you have no idea. Got it.
  5. Nope. I'm talking about increasing clocks on a 7800X3D, same as overclocking has always been done. And it's not a fixed clock either way. Nothing to do with undervolting, directly, nor locked multiplier. (lmao Save us both a lotta time if you just go ahead now and acknowledge you don't have any idea...)
  6. This is fairly accurate, at least according to the test results I have: the (3DMark FireStrike) Graphics Scores are ~45-47% higher, as are frame rate scores in the graphics-heavy Game Tests. (Note this was a 1070Ti.) However: This is also true, and not at all good. The 50% figures above are only valid when comparing the performance of a 1080Ti directly with a 1070, and do not consider the rest of the system. When the CPU is factored in, as I described above with the Combined Test figures, the improvement suffers significantly. Overall scores only increased ~25% on all three of the platforms I discussed above. Not that 25% is bad - but the 1080Ti should be closer to a 50% gain; it's just being held back by the CPU...which kind of illustrates the point about mismatching the GPU and the balance of the platform. And remember: the least powerful CPU in my results (a 4790K) was still ~15% better than the OP's 2600K.
  7. I bought a Crucial T705 at the end of March for $282. Even 4 months ago they were less than $300 including tax (6% here)...but now they're $266 at Amazon (before tax). Better drive, nearly 50% faster than all the Gen5 drives that only read ~10,000 (quick math below), and cheaper. Not really any point paying more, when you can get the best drive on the market for less. Also not much point IMO in paying for storage speed, just to kill that performance with file compression.
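     Quick math on the "nearly 50%" claim, assuming the T705's rated ~14,500 MB/s sequential read (check the spec sheet for your capacity; the smaller models are rated a bit lower):

        # Hypothetical rated sequential reads in MB/s - swap in the numbers from the actual spec sheets
        t705_read = 14_500
        typical_gen5_read = 10_000
        print(f"~{(t705_read / typical_gen5_read - 1) * 100:.0f}% faster")   # ~45% faster

     Sequential numbers are the headline figure, of course; random I/O and real game load times won't scale anywhere near that much.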
  8. Is it going to work in this old system? Almost certainly (unless you have something fairly unusual going on)...just not as well as if it were better matched. Is it even going to fit? It's a big bugger. Couldn't tell ya. Is there going to be an improvement or not? Again: almost certainly (unless you have something fairly unusual going on)...just not as well as if it were better matched.
  9. Not yet, that I know of...but you can bet your a$$ there are lawyers out there as we speak, just drooling
  10. Nothing "not recommended" about it. (Not recommended by whom? You?) Perfectly safe and stable with proper cooling. Plenty of info online from *very* reputable, far more knowledgeable sources, saying it can be done. Actually the CPU runs better (less power and heat, once it's dialed in properly) and performs better. Been doing it now for many months on my primary machine, no problems whatsoever - even as compared side-by-side with a like platform that doesn't support the feature and thus can't benefit from the same overclock. It's still overclocking, and it's (still) inaccurate to say X3Ds have a 'fixed clock', which is what you said. (PS Probably not really a good time to compare Intel chips to...well, anything lmao) The 7800X3D is already a better gaming CPU anyway, and doesn't destroy itself after 6 months
  11. Just because I actually have a fair amount of data from decades of testing, I thought I'd go back just to see if I had anything that might help illustrate the effect I'm referring to. It happens I have records - they're benchmark results, which I know aren't necessarily going to translate directly to DCS results, but benchmarks exist for comparisons exclusive of specific game(s), so they're still valid; arguably better in some respects. Anyway, I have records from testing where I had upgraded three platforms from a 1070Ti to a 1080Ti: a 4790K, a 6700K, and a 7600K. Same board and CPU each time; the only change was swapping the GPU from a 1070Ti to a 1080Ti. If you use the right data, this shows very clearly the effect of pairing a more powerful GPU with less powerful CPUs. Going from a 1070Ti to a 1080Ti on an i5-7600K yields a 14.61% increase in 3DMark FireStrike's Combined Test* frame rate score. On an i7-6700K, changing a 1070Ti for a 1080Ti only increases the Combined Test frame rate score by 6.5%. Same GPU swap, same test - less than half the increase. Going down to an i7-4790K CPU, with the same change in GPUs (1070Ti > 1080Ti), the increase in frame rate score drops to 4.6%. That's now down to less than 1/3 the gain with the same change in GPUs - just because of the lesser CPU. (* I am using the Combined Test results here because that considers both CPU and GPU in FireStrike; the math behind these percentages is sketched below.) You really start to see the effect, even with two GPUs that aren't that far apart, and a comparatively small range of CPUs. Obviously, the delta between a 1070 and a 2080Ti (as in the OP's case) is going to result in an even greater difference in "bottleneck" than from a 1070Ti to a 1080Ti (as much as I do really hate that term). Just as obviously, the gain from changing GPUs on the platforms in my test data will continue to diminish as the CPU's performance decreases. My test data goes from a 7600K down to a 4790K, but the effect of a mismatched GPU will be even greater when paired with a 2600K (since the 4790K's performance is somewhere around 15% more than the 2600K's). As I mentioned earlier, anyone is free to drop a 4090 into a 2600K platform if that's what they want to do. I don't really "have a dog in that fight" lol. I'm just providing this data to further illustrate and quantify the point about mismatching CPU and GPU, both as it relates to this specific discussion/thread and for future reference where others will likely see this. I think this is often misunderstood or underestimated, and hopefully this data helps avoid that.
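     For reference, the percentages above are just simple relative gains. A minimal sketch of the calculation, with made-up placeholder scores chosen only so the output lines up with the figures quoted above (they are not my actual benchmark results):

        def pct_gain(before, after):
            # relative improvement of the 'after' score over the 'before' score
            return (after - before) / before * 100

        # Placeholder FireStrike Combined Test scores: (with 1070Ti, with 1080Ti)
        runs = {"i5-7600K": (9000, 10315), "i7-6700K": (9500, 10118), "i7-4790K": (8700, 9100)}
        for cpu, (with_1070ti, with_1080ti) in runs.items():
            print(f"{cpu}: {pct_gain(with_1070ti, with_1080ti):.1f}% from the GPU swap")

     Same GPU swap every time; the only thing changing is how much headroom the CPU leaves the new card, and the gain shrinks accordingly.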
  12. That's not accurate. I have a 7800X3D with an X670E board, and even if we exclude what the CPU and board do on their own, some boards allow additional control over CPU clock (within reasonable limits, as with *any* board/CPU). I bought and use the board I do for this exact feature (among others). Not true to say you can't overclock an X3D CPU.
  13. Absolutely (although I'd have to say go with AMD rather than Intel, as much as I might hate having to say it). To the OP: No one's saying you can't upgrade the GPU while still using the 2600K - you most assuredly can. Put a 4090 in it if you want lol... ...but you have to consider the effects of an imbalance in GPU/CPU. Get the GPU for now if it's all your budget allows for and if you really want to approach it that way (I did ask about budget already, never got a response BTW)...upgrade the rest of the machine later. No law against it. You just have to accept that it's going to be even more mismatched by putting in a more powerful GPU, unless/until you bring the balance of the platform up to the level of the GPU. TBH my opinion is even a 1080Ti is going to be mismatched to a 2600K, at 1080p. Maybe if the resolution were higher (1440p) it would offset the imbalance somewhat, but you can only get so far with that approach. Upgrading gaming computers is not as simple as reading a number on the screen and throwing a better GPU at it (if it's done properly, that is).
  14. Yeah...DLSS is remarkable technology in a lot of ways... ...but I'm not sure this is really an ideal "use case" (at least not with a 2600K CPU and at 1080p). Using Afterburner is not a bad idea for monitoring usage/load generally - but you have to be aware it can also be misleading if its output data is misinterpreted. All it does is give you the data; accurately figuring out what the data means is sometimes not as straightforward as reading numbers. I have seen, first hand, numbers displayed that could easily be misinterpreted (and were being misinterpreted by less experienced people).
  15. Ah. Now we're getting somewhere... I'm not sure you are reading it correctly, or rather, interpreting it correctly. I'm not an expert on that particular tool, so someone else might explain better than I can why/how it works, but I've heard more than once that it's not accurate. I've also heard (in a sort of left-handed way) that it's preferred to have that show GPU bound. That aside, where I feel my expertise is proven is in system building/hardware...and from that perspective I can absolutely, 100% assure you: your CPU is - by far - the weak link in your system.
  16. I really find that extremely unlikely. What are you basing it on?
  17. Agree 100%. A very concise and accurate description, and a reasonable suggestion for dealing with it.
  18. BTW your sig also lists a "Samsung M2 NVMe"... How is this connected to a motherboard that supports a 2600K CPU (and thus doesn't have M.2 slots)? Please do not say USB...
  19. If you're referring to me, what I would suggest is first, establish a budget. This means money now calls the shots (as it usually does anyway), but more importantly, allows for allocating amounts to the various components, and winding up with a more 'balanced' system. What comes next will honestly depend on the budget you have available. If you've only got money for the GPU you've asked about, that makes it a much trickier decision. It's worth noting here that the 1070 you have now is already somewhat overmatched to a 2600K CPU. Putting an even more powerful GPU in such a system isn't really a good idea.
  20. I would, however, strenuously reiterate my earlier point concerning mismatched components. Doesn't matter if it's been OK with the 1070 you have. You can actually make the symptoms of this mismatching worse than they were, by going to a better GPU.
  21. Then yes, without going into a ton of detail here, I'd say the 2080Ti is a better deal.
  22. Again, it depends on cost. A free 1080Ti is much better than a 2080Ti that costs $200 (this is only an example). If cost is not a concern, then the 2080Ti is the better GPU.
  23. That has nothing to do with performance. The 1080Ti is still an awesome GPU, especially at 1080p, which is what most gamers by far use. And factually, a 2080Ti performs on a level with a 3070, plus has more memory and a wider bus. These are two workhorses that are still plenty viable, even with newer games.
  24. As a preface, I am assuming you mean a 1080Ti and a 2080Ti. The 2080Ti is a more powerful card, but which one is 'better' would obviously depend on cost. They are both great cards; if the 1080Ti is a lot less, it's the better buy. If they are anywhere close to the same cost, then the 2080Ti is a better value. All that being said, it should be mentioned that *either* of those cards will be grossly mismatched to the CPU your signature lists. If your sig is current, I'd say it's way past time to update the balance of the system. It's really not a good idea to create an imbalance in a system by using a CPU or GPU that is a lot more powerful than the other - they need to be at least reasonably 'matched', otherwise it will cause a serious "bottleneck" (and I dislike that term, but there it is).