Everything posted by Waxer

  1. You mean in DCS? Probably marginal. I think 3080 is really ideal as a 4K card. But sure... let's see the actual results. We'll find out soon.
  2. In terms of CUDA cores / the GPU chip itself, I don't think Nvidia have anything between the 3090 and 10GB 3080. I think the only thing in between will be the 20GB 3080. And I don't think that will be a six month wait. More like a 1-2 month wait. Probably either just before the 6900 XT launch (to spoil the launch) or soon after once they know the pricing of the AMD part. And we already know there is something over and above the 3090, except that it will cost >$4,000 and not be supported by game drivers.
  3. It is not quite as simple as that. In DCS there will always be a bottleneck somewhere. The simplistic statement that many people make is "the CPU is the bottleneck" or "the GPU is..." or whatever. In some people's systems, where components are not well optimised, it might well be one component of the PC that is consistently the bottleneck. In a well-optimised system the bottleneck will move from component to component depending on what you are doing at any particular time. It is generally true that most people's systems sit more often towards the CPU end of the spectrum, or something related to it like memory bandwidth (feeding information to the CPU from memory). But the further you travel from 1080p, to 1440p, to 4K, to 5K (my setup), to 8K, the more you shift the bottleneck towards the GPU more of the time. (There is some guy on here that keeps droning on about CPU bottlenecks, and while he is not wrong in general terms, many recent threads have been focusing on the GPU bottleneck. So his repeated comments about the CPU are singing a song that a lot of people already know the words to.) I digress. If your CPU is the bottleneck 100% of the time (at your monitor resolution and settings), you can move from a 980 Ti to a 3090 and it will literally do nothing for you. That is one possible reason why you could be seeing no GPU scaling. On the thing that the OP just mentioned - Ampere CUDA cores are not equal to Turing CUDA cores - this is true. But Nvidia have a very good and legitimate reason for counting CUDA cores as they do, independent of the marketing spin. For details listen to this, but be warned that it gets rather technical. The long preamble is partially necessary in order to understand the part where he starts to explain the difference with Ampere; otherwise you will not understand what he is talking about.
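As a back-of-envelope illustration of why the quoted counts doubled (my own sketch from the publicly quoted SM configurations, not taken from the linked video):

```python
# Illustrative only: why Ampere "CUDA core" counts doubled versus Turing.
# Figures are the publicly quoted SM configurations for these cards.

def fp32_lanes(sm_count: int, fp32_per_sm: int) -> int:
    """Total FP32 lanes (what Nvidia markets as 'CUDA cores')."""
    return sm_count * fp32_per_sm

# Turing: each SM has 64 FP32 units (plus separate INT32 units).
rtx_2080_ti = fp32_lanes(sm_count=68, fp32_per_sm=64)    # 4352

# Ampere: each SM has 64 dedicated FP32 units plus 64 units that can run
# either FP32 or INT32, so Nvidia counts 128 FP32-capable lanes per SM.
rtx_3080 = fp32_lanes(sm_count=68, fp32_per_sm=128)       # 8704
rtx_3090 = fp32_lanes(sm_count=82, fp32_per_sm=128)       # 10496

print(rtx_2080_ti, rtx_3080, rtx_3090)
# The doubled count only pays off fully in pure FP32 workloads; mixed
# FP32/INT32 work shares the second set of lanes, which is why per-"core"
# performance is lower than Turing's.
```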
  4. I have a very high level of interest in a high-quality F-16 throttle & grip. And if it were high enough quality I would be prepared to pay a realistic price for it. (In other words, I know it would not be cheap like buying a Warthog throttle.) But they need to get the quality sufficiently high.
  5. AMD will be announcing Zen 3 in two and a half weeks. And... it is going to be good. Very, very, very good. Will it beat Comet Lake in gaming... yes, I believe so. Will it beat Rocket Lake? Maybe, but even if it doesn't it is likely to be a dead heat in anything above 1080p. Big IPC gains and closing the clock speed gap with Intel by another 200MHz or so. I would be tempted by a 10600K, but knowing Zen 3 is coming out, I think waiting for it will be a no-brainer. (Well, only if you intend to upgrade, that is).
  6. No. Your maths is wrong: Pay 114% more for 4-11% gain in games. :megalol: https://videocardz.com/newz/nvidia-geforce-rtx-3090-gaming-performance-review-leaks-out 20% is just for synthetic benchmarks. And, no. You don't need to say anything.
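To spell out that maths (assuming the US MSRPs of $699 and $1499; actual street prices will vary):

```python
# Rough sketch of the value maths, assuming US MSRPs of $699 (3080) and
# $1499 (3090); street prices will differ.
msrp_3080, msrp_3090 = 699, 1499

price_ratio = msrp_3090 / msrp_3080            # ~2.14x the money, i.e. ~114% more
print(f"price premium: {price_ratio - 1:.0%}")

for gain in (0.04, 0.11):                      # leaked 4-11% gaming uplift
    value_ratio = (1 + gain) / price_ratio
    print(f"{gain:.0%} faster -> {value_ratio:.2f}x the frames per dollar of a 3080")
# Works out to roughly 0.49x-0.52x, i.e. about half the performance per dollar.
```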
  7. The video is 95% useless. All he does is multiply the 3080 results by 20% on the basis of synthetic benchmark results. Meanwhile, a Chinese site has leaked in-game results at 4K on a Z490 / OCed 10900K system. The bottom line is 4-10% better fps in a selection of AAA games. Not 20%.
  8. There is nothing fundamentally wrong with this shopping list. In some areas what has been selected is overkill for DCS, but if the PC is being used for other applications too then there could be good reasons for this. Some specifics:
     1. Getting K not KF = good suggestion. Not enough price differential. Makes near as no difference to overclock potential. It also gives you the option to use the computer with the iGPU if you are without a graphics card while you wait for a new GPU to be delivered. (You can still get online, web browse etc.)
     2. No need for 10 cores. An i7 8-core is fine too. And now that hyperthreading is enabled on the i5, the 6-core 10600K is a great, inexpensive CPU too. For DCS you would be giving nothing up getting the i5. (It would also be a bit easier to keep cool than the 8 or 10 cores.)
     3. Memory: 2x32GB is overkill. 2x16GB is plenty for DCS. Also I'd get the 3200 CL14 or the 3600 CL16 as the OP also advised. Get Samsung B-die if you can... typically G.Skill or Team Group. Not Corsair, as their secondary timings are loose.
     4. I used to own a Corsair HX1000i. It blew up. No reason. It just blew up. Big bang. Blue smoke. You don't need 1.2kW, but 1.2kW is fine as the PSU will hardly break a sweat. 850W, 1000W, 1200W are all fine. I'd probably pay the extra for Gold or Platinum, and I typically favour Seasonic or EVGA.
     5. Nice Asus Hero motherboard chosen, but it is more than you need to spend. The MSI MEG Unify is also good for the price (a bit cheaper). If I was going to spend more than the Unify I would get the Apex in preference to the Hero. But that is personal preference, not a big deal or difference really.
     6. I would stick to M.2 drives... just less messing around with cables and all.
     Basically what you got is great; you could just save a bit if you wanted and make a few slightly better choices. Nothing major.
  9. We've been through it multiple times and it is getting very, very boring. I want information on the new GPUs with DCS and VR. I am not interested in opinions about the CPU limiting DCS. I simply ask that people who want to share their opinions about that do it in a thread about CPU vs GPU throttling, separate to this one. Is that too much to ask?
  10. Indeed. And most people already understand that. And they also know, based on their own use case, how much of the time they are CPU bound and how often it is the GPU that is limiting their performance. Hence the wish to get BACK ON TOPIC. And the desire to learn from the experience of lucky community members that have taken delivery of an Ampere card and are now using it in our preferred application.
  11. I think that you are correct: RTX features use GPU RAM. Currently most RTX titles only have a couple of effects active. But DOOM, for example, uses a number of RTX features and this is one of the reasons it hammers the memory hard. 24GB certainly future-proofs you. But so would the 20GB version of the 3080. If you don't want to wait and you want the extra 5-10% performance... that is fine. There is also the consideration that Ampere's fps horsepower might still not be enough to run many RTX features at high settings at the same time... in that case the speed of the card would likely become an issue before 16GB (3070 16GB), 20GB (3080 20GB) or 24GB (3090) became an issue.
  12. Please can people stay on topic and discuss the 3080 and 3090? If you want to talk about CPU bottlenecks please start a new thread.
  13. Thanks for posting that link, FoxTwo. If true - we can't be sure what is actual and what is synthetic - then RIP. That really makes an additional £750 over and above the 3080 price very, very hard to justify. At least for all but the most price-insensitive. And yes... the extra heat... RIP.
  14. Yeah. It is worth it. At least on Intel's side: there is low-hanging fruit and you can easily get a decent few percent increase in DCS performance and all-round performance from clicking a few buttons in the BIOS. And provided that your PC's ventilation and CPU cooling are not completely potato, there is very little cost and no risk in doing so. Marginal downside (a bit of extra power consumption and Saint Greta of Stockholm is angry with you, that's all), small upside. But pushing a high overclock starts to involve compromises and is best left to someone that knows what they are doing. On AMD the built-in algos are so effective that, no, overclocking is not worth it and more often than not you go backwards.
  15. Of course not. And I don't and never have advocated buying power-hungry components for the sake of it. In fact, if you check my recent posts you will find that I have been critical of Ampere for being a bit of a power hog. (Problems: noise, heat, energy bill, polar bears.) That is why I have a relatively modest i7-7800X 6-core CPU (which I bought at 50% of RRP) and overclock it to 4.8GHz. I think the 10600K is a great-value CPU. I see loads of people with that CPU and a mid-range motherboard getting 5.0GHz overclocks. For relative peanuts. If I was buying a CPU for DCS and I didn't actually have anything already, that is what I would pick if I had to buy this week. But I would still prefer to wait a few weeks to see what Zen 3 comes up with, given clock speeds are rumoured to be reaching 4.9GHz boost, boost is likely to be maintained for longer, and IPC is rumoured to be improved ~20% over Zen 2. But the point is that my PSU operates in silent mode with passive cooling only. And my system is easily upgradable to ANYTHING I throw at it without having to upgrade the PSU. I don't even have to think about it. It is a non-issue for a component I only have to buy once, where technology is moving relatively slowly. And - personal choice - I built my own custom PSU cables to the correct lengths. Not changing PSU for a long time means that I can stick with my custom PSU cables without worrying about any potential compatibility problems. Likewise, I spent a lot on my case. But it means that I never have to buy another PC case. Like ever. Someone that a year ago was sat there with an online calculator that spat out the answer of a 650W power supply, and is now looking to upgrade to a 3080 - or heaven forbid a 3090 - is now facing the prospect of buying a new PSU as well as a new GPU. Whereas if they had only bought an 850W PSU to begin with they would only have to upgrade one component. That is my point.
  16. My understanding is that the application leaves some headroom for non-application system use. But the allocation that it has taken can run out.
  17. It allocates 10GB of 11GB (if available). That does not mean it uses 10GB. There is plenty of discussion of this online. Unfortunately there is no easy way of telling how much is used until you actually run into a memory wall and your frame rates tank. I wish one of the Youtubers would do an analysis on this. It would be a popular video.
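If anyone wants to watch the reported figure themselves while flying, a minimal sketch along these lines should work (it assumes nvidia-smi is on the PATH; remember the number it reports is memory allocated to processes, not memory genuinely in use, so treat it as an upper bound):

```python
# Poll reported VRAM while a mission is running. nvidia-smi, like most
# overlays, reports memory *allocated*, not the working set actually touched.
import subprocess
import time

while True:  # Ctrl+C to stop
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    used_mib, total_mib = (int(x) for x in out.split(", "))
    print(f"{used_mib} / {total_mib} MiB allocated")
    time.sleep(5)
```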
  18. I think that is the issue. DLSS fills in the blanks and makes guesses. The nature of DCS - outside the cockpit at least - is that you are scanning the sky for a bandit, who is a couple of fast-moving pixels. Maybe the software would work if you could just write a script instructing the algo to leave bandit dots alone and render them fully. RTX ray tracing and DLSS would certainly be nice in cockpits, however. But I'd rather see better multicore CPU optimisation as a priority than GPU bells and whistles like those. CPU generational performance is moving much more slowly than GPU performance.
  19. With respect, this is a perfect example of someone who has armed himself with an online PSU calculator and made sensible choices for a mid-range PC. Nothing wrong with that. But back to my point: upgrading to "balls to the wall" components like an overclocked 10900K and a 3080 OC or 3090 OC (we are on a thread titled "3090? 3080? AMD!" after all) will require an upgrade to the PSU. How do you do that on your 600W PSU? Nvidia themselves make the point about customers underestimating the impact of PSU age on peak performance, and about underestimating the importance of transient response for PSUs. People making PSU choices based on wattage alone can expect to run into trouble. Both of these points were made explicitly in the "deep dive Q&A" that Nvidia gave prior to Ampere's launch. RMAs of Ampere will be higher than average for an Nvidia release.
  20. I wish that I had £1 for every time that some well-meaning, but ill-informed, person advised me to get a 550-600W PSU for my PC builds. Variations of "ohh, you don't need more than that for 1 CPU and 1 GPU. And there is no point in SLI anymore." This flawed advice always ignores headroom for power-hungry components, headroom for adding pumps, fans etc, and headroom for overclocking. It also assumes a saving of £10-20 on a PSU makes a significant difference. Frankly, saving that little is peanuts in the context of a rig, so it is far more important to me to get quality components that will last through multiple rebuilds. This works out cheaper than buying multiple barely sufficient PSUs. It ignores the fact that PSUs degrade in efficiency and performance as their components age. It ignores that PSUs are at peak efficiency if you run them around half their maximum rated capacity. It ignores that PSUs run quiet if you run them at low loads relative to their rating. It ignores that bigger units have nice modular cables as a rule, not an exception. It ignores that PSUs running at currents well below their peak will cope with large and sudden transients far better than PSUs running towards their limit. In fact this last point is something that is likely to cause problems with the 3080 and 3090 Ampere launches: so much so that Asus has installed LEDs next to the power connectors on its cards to signal power delivery failures where the transient response was not fast enough to deliver the load asked for by the GPU. I am guessing that the RMA return rate is going to be quite high on Ampere, with this likely being one of the reasons. (Another being the GDDR6X memory.)
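To make the headroom point concrete, here is a back-of-envelope sizing sketch; only the 320 W figure is Nvidia's stated RTX 3080 board power, every other number is an illustrative assumption, so adjust for your own parts:

```python
# Back-of-envelope PSU sizing sketch with assumed round numbers.
cpu_w        = 250   # overclocked high-end CPU under heavy load (assumed)
gpu_w        = 320   # RTX 3080 total board power (Nvidia's stated figure)
rest_w       = 100   # motherboard, drives, fans, pumps, USB devices (assumed)
spike_factor = 1.35  # allowance for millisecond-scale transient spikes (assumed)
target_load  = 0.55  # aim to sit near 50-60% of rating for efficiency and noise

steady_state   = cpu_w + gpu_w + rest_w          # ~670 W sustained draw
transient_peak = steady_state * spike_factor     # ~905 W momentary spikes
recommended    = steady_state / target_load      # ~1220 W rating by this rule

print(f"steady state ~{steady_state} W, transient peak ~{transient_peak:.0f} W, "
      f"suggested PSU rating ~{recommended:.0f} W")
```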
  21. Yeah, I agree with you. If I had a 1080 Ti or older I would be upgrading to something this year for sure. But since I already have a 2080 Ti, which I am fundamentally quite happy with, I've been crunching the numbers and looking at early reviews. My conclusion is that - even though I can afford it - the 3080 is a nice upgrade, but not great value at £800-850 including the cost of a water block. And 10GB is a step backwards. And I am not convinced that £750 extra for a 3090 is worth it. I am also not impressed by the heat generation of these cards. Heat = noise and also uncomfortable rooms during the summer. Especially so wearing VR kit. So my plan is to wait for the G2, use it initially with my 2080 Ti and be happy. Only then, if I am crying out for more GPU performance, will I consider one of the upgrade options. By the time I have assessed the G2 paired with the 2080 Ti we should also know about the performance of the 6900 XT, and Nvidia should have announced their 20GB 3080. Waiting is the smart play right now for 2080 Ti owners. I might even sit this cycle out. Likewise, Zen 3 should be very, very good with IPC gains and clock speed gains (base and sustained turbo). But I don't fancy investing in the last of the AM4 chips ahead of an architecture change to DDR5 and all the other bells and whistles from Zen 3's successor. Again, my Haswell is reaching 4.8 GHz and getting the job done for peanuts, and it is a fully depreciated sunk cost anyway.
  22. $900 for a 10GB 3080? There will be a 16GB 6900 XT announced in October, and so Nvidia will announce a 20GB 3080 for c.$1000 in October or November... It is not long to be patient. And yes, I know... there will most likely be some delays getting supplies of these products too, but still.
  23. Oh good grief. Not this tangential discussion again. Please! We've been through this before. And you've made your opinion clear. Okay? Let's move back on to a discussion of the 3080.
  24. Hardware Unboxed specifically addressed this question. There are one or two slides that summarise the results of their testing.
     1) They tested an X570 board using PCIe 4.0 x16, then tested PCIe 3.0 x16. Result: at 4K, literally no difference; at 1440p, c.2%; at 1080p, c.4%. But literally no one will be buying a 3080 for 1080p, and even for 1440p a 3080 is overkill unless you are using it for e-sport first-person shooters. At the 4K resolution the 3080 is best suited to, there is no difference.
     2) They also tested X570 vs Z490. Result: at 4K, near-identical performance as both are GPU bound; and below 4K the speed of the Intel chips cancelled out the small advantage of PCIe 4.0. So effectively a dead heat at 1080p, 1440p and 4K.
     Nvidia's RTX IO SSD tech could change the maths on that for new titles.
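For context, the raw link-bandwidth arithmetic behind that result looks roughly like this (approximate usable per-lane throughput after encoding overhead):

```python
# Rough bandwidth arithmetic behind the "PCIe 4.0 barely matters" result.
# Approximate usable throughput per lane after 128b/130b encoding overhead.
GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, per_lane in GBPS_PER_LANE.items():
    print(f"{gen} x16: ~{per_lane * 16:.1f} GB/s")
# ~15.8 GB/s vs ~31.5 GB/s -- but once textures are resident in VRAM a GPU
# only streams a fraction of that mid-frame, which is why the link generation
# shows up as a few percent at most in today's games.
```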
  25. Well congrats. And enjoy! What are you running currently? Any chance you can give us some before and after scores on whatever system you have once you get up and running? (What screen res are you using etc).