Everything posted by Sn8ke_iis

  1. Hey uh...Penetration247... Welcome to the DCS forums! You picked a relevant topic, but for a question like this it's probably best to start a new thread. You pretty much answered your own question, though. That's a decent CPU to start. You'll want to go to at least 16 GB RAM (32 is even better) and upgrade the GPU to the best you can afford, preferably an Nvidia 3060 Ti or better, or one of the AMD 6000 series. AMD chips are sold with very little overclocking headroom, so if you're interested in OCing an AMD system you'll want to focus on the RAM: get some good fast RAM with a fast XMP profile that you can OC even further and tighten the timings on. After RAM and GPU, see what kind of CPU your motherboard can handle. If you can go to 32 GB RAM, get a 3060 Ti, and throw in a 5600X, your system will be able to push graphics that look as good as the YouTube videos. Sn8ke_iis
  2. Take some of the budget you've allocated to RAM (128 GB) and put it toward an AMD 5000 series or Intel 10600 or better. Unless you're planning on doing video editing, all that RAM and data storage isn't necessary for DCS. You can ditch the second M.2 drive as well. The main limiting factor in DCS is single core CPU performance, especially in VR. You want the fastest CPU you can afford. Right now those are the AMD 5600X, 5900X, 5950X and the Intel 10600K, 10700K, and 10900K. For GPU you want an Nvidia 3070 or better, or one of the AMD 6000 series. Other than marketing material, I haven't seen any evidence that a system will be faster if you combine an AMD CPU and GPU. IIRC the new AMD CPUs don't come with a cooler, so you'll need to add a CPU cooler as well.
  3. Yep...couldn't agree more. I've never had a permanent stutter that was caused by DCS; I've always been able to trace it to something else. Try saying that on Hoggit and watch the excrement storm ensue. I wish our PCs were more like consoles for gaming purposes in terms of plug and play, but they're inherently not. Still, PCs are so user friendly these days that you can do so much without having to open up the hood, so to speak.
  4. Guys, if you want to help new players who ask this question, the correct answer is "Download and find out." It's free, guys! And even the expensive modules have free-to-play periods now. The minimum specs haven't been updated in years and are very vague for noobs considering the nomenclature of CPUs these days. I got DCS to boot on 1 core during an experiment. Wouldn't want to actually play it like that.
  5. This has been typical of Nvidia since the 980 Ti: a major release every 2 years, with a refresh (Ti, Super) in the year between.
  6. Hey Chops, The 3070 will do great for DCS in 1440p or 4K. It's roughly equivalent to a 2080 Ti. There are a lot of good 1440p monitors out there these days, but don't get your hopes up about using them in DCS above 60 FPS: TrackIR isn't compatible with G-Sync. There's a site called rtings.com that will tell you more than you ever wanted to know about your monitor or TV.
  7. As you may have noticed, the 5600 is a little scarce, and none of the major Youtubers and benchmarkers test DCS, only MSFS 2020. You think you could use some of that time and energy and play through some different scenarios? No one is expecting you to go all Gamer's Nexus with graphs and charts. Just establish a baseline first (frame rate, frametime, etc.) and calculate a simple percent change: ((new - old) / old) x 100. MSI Afterburner has a nice benchmark feature that's user friendly; that's what I and most Youtubers use. Some of the major questions we're working on right now: how does memory allocation compare to actual usage in 2D and VR? How do raw generic benchmark data and scores translate to actual performance in DCS? The major goals, in an objective sense, are being able to maintain 120 FPS in 2D for TrackIR owners and 90-120 FPS for VR owners even in complex low altitude scenarios with lots of AI and scripts. We are only going to get there by pushing Intel and AMD to improve single core performance with the proper incentive. There's only so much Vulkan and multithreading will be able to do and still stay in sync; getting a real time graphics engine to scale well with core and thread count is difficult at best. Anyway, now my post is turning into a wall of text...
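The percent-change formula mentioned above can be wrapped in a tiny helper for comparing before/after benchmark runs (a minimal sketch; the function name and sample numbers are my own illustration, not from any benchmarking tool):

```python
def percent_change(old: float, new: float) -> float:
    """Simple percent change from a baseline: ((new - old) / old) * 100."""
    return (new - old) / old * 100.0

# Hypothetical example: baseline of 84 fps, 96 fps after an overclock.
gain = percent_change(84.0, 96.0)
print(f"{gain:.1f}% improvement")  # about a 14.3% improvement
```

The same helper works for frametimes; just remember a *lower* frametime is better, so a negative percent change is the improvement in that case.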
  8. When you have a sufficient GPU to run the resolution and AA you want to play at (e.g., 2080 Ti/3090+), the GPU isn't the bottleneck; the CPU is. This has been well established in lots of other threads. You are also running at 4K60, which limits stress on the CPU since you are only pushing 60 fps. The issue most players care about right now is being able to play complex missions with lots of scripts and AI objects, especially in VR (90-120 fps, and the topic of the thread). That's what puts stress on the CPU, while higher resolutions and AA settings put stress on the GPU. Can we please stop arguing about whether DCS is CPU or GPU limited? It's getting tedious. It can even vary in the same mission (i.e., high altitude over the ocean vs. low altitude over a city). Thanks for posting single thread scores and graphs, much more useful than the walls of text. You should consider starting another thread with benchmark results. I'm hoping to get a hold of a 5950X and benchmark it. My current CPU hits 564.4 on CPU-Z stock and 616 OC'd. We're all happy AMD has competitive processors now, but let's keep things in perspective. I've seen some benchmarks where AMD wins, but it's under specific conditions like 1080p medium; so far the data says it depends on the game which is faster, how well you can tune and OC, and the silicon lottery. Thankfully we have competition in single core performance again; it's essentially been stagnant since the Intel 7000 series. Intel's Rocket Lake is supposed to have a nice double digit percentage bump from what we have now. I've also seen a good bench from Gamer's Nexus where a tuned 5600X does very well in games relative to processors double the cost.
  9. The OS scheduler will hop from core to core so fast it looks like all cores are being used. For my CPU I tested restricting cores. No difference after turning on more than 3 cores. You can use Process Lasso to assign core affinity.
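Restricting a process to a few cores, as described above, can also be done programmatically; here is a minimal sketch using Python's standard-library affinity calls (Linux-only; on Windows, Process Lasso or Task Manager's affinity dialog does the equivalent, and the three-core mask is just an example):

```python
import os

def pin_to_cores(pid: int, cores: set[int]) -> set[int]:
    """Restrict a process to the given CPU cores and return the new mask.
    pid 0 means the calling process (Linux sched_setaffinity)."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Example: pin the current process to (up to) the first three available cores.
first_three = set(sorted(os.sched_getaffinity(0))[:3])
print(sorted(pin_to_cores(0, first_three)))
```

With affinity locked like this, the remaining cores stay idle and cool, matching the behavior described in the post.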
  10. The Hyperthreading issue depends on the chip. If you have a 7700K, keep HT on; if you have an 8700K, 9900K, or 10600K you won't necessarily gain anything in performance by turning it off, but you can overclock the core to a higher GHz and still be stable. Anytime Gamer's Nexus does tests with HT or SMT off there is usually a performance gain, but it depends on the game. In general, games don't make use of hyperthreading the way Blender or Cinebench do. If your cooling system is limited it might be worth a try to turn off some cores, but if you lock affinity to 3 cores with Process Lasso the other cores stay idle and cool most of the time. I doubt there would be much difference, but you never know for sure until you test it. And regarding OCing, YMMV, but if you bought a K chip from Intel and a Z motherboard and RAM with an XMP profile, you are leaving performance on the table if you don't use them. The limiting factor for most people will be cooling. And right now, rather than spend money on cooling, you can just buy a better CPU or GPU and then cool that when you get bored.
  11. Per these benchmarks it's kind of a toss-up. Depends on the game, how comfortable/good you are with overclocking, and luck with the silicon lottery. If I had the budget for a second rig I would like to compare the two, but nobody has done a specific comparison on DCS. Now, Steve also had some good benchmarks where a stock 5950X is beating his 10900K OC'd to 5.2 GHz on some games, which is very impressive, but it depends on the game; RDR2 for instance seems to be partial to Intel. Before, I've always recommended Intel for gaming rigs for obvious reasons, but now I would advise whatever you can get the best price on for the CPU, motherboard, and RAM package. This build season, since the new AMD chips are getting hyped so much, I think you'll find better availability and prices on Intel, at least in the States. The minimum specs I would recommend if you are building a new rig are a 10600K or 5600X paired with a 3070 or better, along with 32 GB @ 3200 MHz. You can build budget rigs for less than that, but the new consoles coming out will have better performance. Keep in mind the delta between the two is negligible, and GN tests at 1080p medium settings to focus the benchmark on the CPU; at 1440p and 4K there's not much difference at all. It's a good time to buy CPUs.
  12. You are going to be very limited in your BIOS settings with that chipset. For your CPU, you want a Z490 board to get the full performance out of the chip and be able to overclock.
  13. Thanks Supmua, this is good info. Looking forward to your data.
  14. Uhh...yes? Those cards are two generations apart (4 years). The 1080 Ti has 3584 CUDA cores of an older uArch (Pascal); a 3070 has 5888 CUDA cores of the new "Ampere" architecture. The new chip has denser components that run on less power as well. Gamers wouldn't be freaking out over the new cards if they didn't improve performance; Youtube is full of gaming benchmark videos right now. The 3070 is the best price/performance card out until the new AMD cards are properly tested. Now, if you are hitting CPU bottlenecks then it won't matter. But I wouldn't pay 400 pounds for a 1080 Ti unless it was a hybrid or a Kingpin. If you can get a good price on a 2080 Ti, that's something to consider.
  15. Hey, this is actually useful information. I wish we could pin it or something. No one is expecting new 3080 or 3090 owners to go all Gamer's Nexus and make a video with graphs and charts. Just establish a simple baseline and post before and after numbers. The percent change formula is [((new - old) / old) x 100] for those that aren't aware. Have we figured out a way to properly compare VRAM allocation, quantity, quality, etc.? I.e., is the performance bump from more CUDA cores and improved uArch, or does the memory help too? And this isn't me being pedantic, guys. I was just waiting for the 3090 Kingpin to come out, but seeing all this competition is very exciting. All these new CPUs and GPUs are going to have very close performance in their respective price ranges. Since none of the reputable Youtube benchmarkers test DCS, we have to do the best we can ourselves. Just from some quick back-of-the-envelope math, it would take a $10,000+ budget to test all the new CPU/GPU combos properly. Without any definitive data, things tend to devolve into the usual speculative debates with the usual fanboyism thrown in. Me, I like data. This forum is better than a lot of PC related forums as far as fanboyism goes; I hope it stays that way. Given the supply/demand/COVID issues I would be very patient and not get your heart too set on a new build this NOV/DEC. Just get what parts you can on sale. It's looking like the 1st quarter of 2021 will be better in terms of prices and consumer choice. I'm sure Intel, AMD, and Nvidia want to ship as much as they can this build season, but getting the newest, shiniest CPU or GPU for Christmas might be difficult.
  16. I got up to about 83% scaling in my tests in 4K with high AA settings, which is actually very good relative to other games. In 1080p it's not worth it. In a nutshell: if a graphics setting is GPU bound, it will benefit from two GPUs. It won't help with CPU bound settings like Visib Range, shadows, or anything that increases the quantity and frequency of draw calls. If you have the slot and the surplus wattage and can get a second card cheap, go for it. I think a lot of 1080 Ti and 2080 Ti owners would be pleasantly surprised. By my rough math you can beat the performance of a single 3090 with two 2080 Ti's. I haven't tested this myself though; once I get a 3090 I'll be able to say for sure. Key note: this is for 2D. No one has gotten SLI to work with VR, so don't bother. I never tried testing on 1080p low settings specifically to see how high I could push the frame rate, but out over less dense areas and at high altitude I saw 160 fps+ with everything maxed out. Pretty sure that's the rough CPU limit, but I have yet to specifically test that. This is all dependent on you being willing to use Nvidia Inspector and tweak the GPU driver manually. It's not officially supported, so don't put in a help ticket if you are having problems getting it to work. You have to follow these steps word for word: https://forums.eagle.ru/forum/englis...65#post4947807 Not sure why some people keep saying SLI doesn't work in DCS; it works very well in fact, just at higher resolutions and AA settings.
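The scaling figure quoted above can be computed from a one-card vs. two-card benchmark run (a sketch; the function name and frame rates are made-up illustrative numbers, not measured data):

```python
def sli_scaling(fps_single: float, fps_dual: float) -> float:
    """Extra performance from the second GPU, as a percent of one card:
    100% would mean the second card fully doubles the frame rate."""
    return (fps_dual / fps_single - 1.0) * 100.0

# Illustrative: 60 fps on one card, 110 fps on two -> roughly 83% scaling,
# in line with the ~83% figure reported in the post.
print(round(sli_scaling(60.0, 110.0), 1))
```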
  17. Depends. You could get a 10600K that's faster; it depends on random chance and/or whether you are willing to pay for a binned chip. Gamer's Nexus has gotten really good results overclocking 10600Ks. DCS is CPU limited on a single core. It doesn't scale from 6 to 8 cores. I can run it on my 9900K with 5 cores disabled, 3 cores on, and hyperthreading off.
  18. His benchmark for the 2080 Ti was accurate enough. He only got to test the 3090 really quickly, though. Last time, for the 2080, he did some really thorough benchmarks. Even so, I bet it will be around 40% based on his report and other gaming benchmarks in general. Curious to see the frametimes for VR at different settings, especially AI objects, shadows, etc. Memory usage as well. Good to hear some of you guys actually have the card on order. I'll probably end up getting whatever is in stock for me first. I've been using the TF-51 free flight over the Caucasus, but any mission will work as long as you establish a good baseline for comparison. A free module/free map is easily replicable, though, and people don't have to find a user file to download. MSI Afterburner has a good benchmark feature. I suspect the 3080 will be sufficient for most people, but I'd be willing to shell out the extra dough for that 10-15% extra performance of the 3090. I had a Reverb but sold it; looks like preorders are shipping in Dec/Jan for the G2.
  19. It already does that though. If the application is only using 2-3 cores, those cores will hit 5.0 GHz stock, or whatever you have your overclock set for. One of the main reasons it switches around like that is to keep hot spots from forming on the chip. As long as you aren't hitting thermal limits, the cores will do 5+ GHz. The main limiting factor for most people will be their cooling solution, not some arbitrary GHz limit. All core overclocks will get you higher scores in Cinebench and the Timespy Physics test but don't actually benefit games that I've seen. For DCS you want to push a single core overclock as far as it will go. You could even shut down cores in BIOS; I saw no difference going from 3 cores to 4+ cores. Single core and <8 core overclocks draw less voltage and heat and are more stable at higher speeds. Turning hyperthreading off gives a little more headroom too. As I understand it, as long as you are on the proper power plan there will be no delay in "spinning up". Not to derail the thread; I thought most of you guys knew this already. When I have a new card in hand I'll post some benchmarks and before and after data. As far as I can tell, no one who knows how to properly benchmark DCS has their hands on one yet. No telling when that will be though...
  20. Rather than giving you the answer, you guys should ask yourselves why overclocking all cores would benefit a game that runs on three cores and is limited by single core speed. Curious what the rationale for that would be. Anyway, it's pretty easy to test this yourself, guys. Intel XTU can do per core overclocks. I'm not sure why someone would run a 9900K with 2 cores at 4.7 when the stock Turbo Boost pushes 3 cores to 5.0 GHz by default.
  21. Short answer: yes, although it's more complicated of course. Intel's Turbo Boost feature in general picks the best core and allocates demanding applications to that core. The current Windows scheduler does have a feature that hops demanding apps from core to core as a way to prevent hot spots on the chip. For day to day DCS at 60 fps in 2D you might not notice a difference in performance from this process happening, although I've never specifically tested in CPU stress/bound scenarios like lots of shadows and AI objects. I've set up my gaming overclocks to work in conjunction with a utility called Process Lasso, which makes it convenient to set up CPU power profiles for specific games/apps. You can prioritize CPU cycles and memory very easily, turn off hyperthreading, restrict cores, etc. This is all just for the CPU, btw. :) For the GPU I recommend a utility called MSI Afterburner; it's agnostic to different brands of cards and is free. I also do things like upgrade the thermal paste and TIM on my GPUs when I switch them to waterblocks. For RAM, I'm working with a kit that was binned at 3200 MHz but was able to push to 3600 MHz very easily. I still need to try to lower the latency timings.
  22. I'd give the Intel XTU utility a try. You can set the overclock per core in a much more user friendly interface than most BIOSes. You really can't go wrong with 1.35 V adaptive; in adaptive mode the chip will only draw as much voltage as necessary, and not when idling. You are correct that you want to turn off any kind of auto voltage or multicore enhancement, as these tend to overvolt by default to keep stability. And for DCS you want to push one core as far as you can. Trying to overclock all 6 of your cores will just require more voltage/current/heat. There's no benefit to gaming or DCS from an all core overclock; that really only helps get higher synthetic benchmark scores. These skills can pay dividends besides gaming as well. I've read positive reports in the forums of people undervolting their laptops and upgrading thermal paste. By using the minimum voltage necessary for stable day to day computing tasks, they minimize heat and power draw, which extends battery time.
  23. It looks like the 3090 is the only new card that supports NVLink (SLI), and Nvidia will no longer be making drivers/profiles for it; it's on the developer to support it in DX12 or Vulkan. DCS in SLI with 20 series cards works very well in 2D but not VR, unfortunately. I assume Pascal 10 series cards work too but did not have any in hand to test. https://forums.eagle.ru/showpost.php?p=4170083&postcount=21 In my tests with two 2080 Ti's, scaling was at 80%+ at 4K resolutions and high AA settings (a GPU bound scenario) but not worth it for 1080p. I did not test at maxed out AA settings at 1440p, unfortunately; best guess is scaling would be comparable to 4K. If you have the free PCIe slot and surplus wattage from your power supply, and you can pick up a second 1080 Ti/2080 for cheap on the used market, and you like to play at 1440p or higher resolution, you might be pleasantly surprised at what you can achieve. I would definitely put this in the advanced user category, as you need to feel comfortable with Nvidia Inspector, Nvidia drivers, and compatibility bits. https://forums.eagle.ru/showpost.php?p=3410823&postcount=65 https://forums.eagle.ru/showthread.php?t=201454 A big fat caveat in all this: this is a user level workaround and DCS does not support SLI in an official capacity, so don't go putting in a help desk ticket or start a complaint thread if you can't get it to work. And you still have the issue with TrackIR and 60 fps, but a second card gives you more surplus overhead to turn up GPU bound graphics settings like resolution and antialiasing. SLI works very well in DCS if you know what you are doing.
  24. Best to go right to the source; I can't remember all the specs on these chips, they're all binned a little differently each gen. https://ark.intel.com/content/www/us/en/ark/products/126684/intel-core-i7-8700k-processor-12m-cache-up-to-4-70-ghz.html From looking at that, you'd need to push 1-2 cores to 4.8 GHz or better to see any improvement over the stock Turbo Boost. The 8700K is a good chip; you bought well. You may not have gotten a golden sample for overclocking, but they're good chips, and an 8700K/1080 Ti is one of the best builds you could have done in recent memory in terms of performance/price and longevity. Money well spent... You can still squeeze a little more out of her though. If you can pick up a second 1080 Ti and like to play at 1440p or 4K, you'd see a lot of improvement with that as well.
  25. Absolutely, yes. If you have a Z motherboard and a K processor and you aren't overclocking, you are just throwing money away. Make sure the BIOS on your MB is current; you will get the most stable overclocks that way, for your CPU and RAM. What kind of cooling do you have? I've read of people getting good results with a Noctua air cooler, but ideally you want a 240mm or 360mm AIO or liquid cooling. I have a custom loop with a THICK (more surface area) EK 360mm radiator. All the copper blocks, rads, and fittings aren't cheap, but they don't really wear out and aren't subject to Moore's law. If you are worried about frying your CPU, just keep voltage @ 1.4 V (1.35 V even better). I'm a big fan of adaptive voltage; otherwise your CPU is drawing more current and creating more heat than necessary. Temps will throttle a Skylake Intel chip at 100 °C, but you want to keep it below about 85 for daily driving. Ideally, if there is a specific guide for OCing your motherboard or ASROCK BIOS, you would want to use that. This is a good general guide that covers most BIOSes; don't worry that it says 9900K, there's not really that much difference between your chip and mine except 2 more cores. https://www.tweaktown.com/guides/9225/intel-core-i9-9900k-kf-overclocking-guide/index.html Two other good sources are r/overclocking on reddit and overclock.net. Be forewarned, they're not really noob friendly, so best to read a lot and follow the guides before you ask too many questions. Look into Intel XTU; you can overclock per core. Intel chips come stock with a Turbo Boost that affects 2-3 cores depending on the chip. They designed it this way for a reason: games don't use all the cores of a modern CPU. You will get more beneficial overclocks per core with less voltage and heat than trying to OC your whole chip. That really only helps for Cinebench scores, not for gaming.
BTW, the 40% usage stat you cited is most likely for the whole CPU; you want to look at usage per core, as DCS will be limited by single core performance, which is why it responds so well to overclocking. Overall, Intel XTU is a lot more user friendly than overclocking through BIOS for noobs. Make sure the XMP profile for your RAM is enabled (a setting in BIOS) as well. Intel sells an overclock warranty for like $20 if you are really worried about it. Keep your CPU under 85 °C and 1.35 V for a daily OC and you'll be fine. People have run overclocks for years on chips at the edge of specs with no issues. There's a whole lot more; overclocking is kind of a rabbit hole, but that should get you started. You can definitely hit diminishing returns, and for most people the money would be better spent on a new CPU or better GPU, but if you already have top tier components, want to squeeze all the performance out you can, and like to tinker a little bit, overclocking is a lot of fun. And you get instant gratification in your game frame rates and graphics settings. A lot depends on how much fan noise you can tolerate if you don't have liquid cooling :D Make sure your Windows power settings are on high performance, and the same for your Nvidia driver. Everything comes with energy and heat saving features on by default; you want to turn those off for a gaming rig.
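The temperature and voltage rules of thumb above can be expressed as a quick sanity check (a sketch; the 85 °C / 1.35 V defaults are the daily-driver guidelines from the post, and the function itself is my own illustration):

```python
def oc_within_limits(temp_c: float, vcore: float,
                     max_temp_c: float = 85.0, max_vcore: float = 1.35) -> bool:
    """True if an overclock stays inside the daily-driver comfort zone
    suggested above: under about 85 degrees C and 1.35 V."""
    return temp_c <= max_temp_c and vcore <= max_vcore

print(oc_within_limits(78.0, 1.32))  # fine for daily use
print(oc_within_limits(92.0, 1.32))  # too hot; improve cooling first
```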