
Posts posted by Sn8ke_iis

  1. That's exactly what multithreading means. Hyperthreading is something else entirely, as other users have explained.

     

    Hyperthreading is just Intel's trademark name for CPU multithreading. AMD calls theirs simultaneous multithreading (SMT) which I don't believe is trademarked.

     

    If you are using the term multithreading in the software sense only, sure DCS is software multithreaded. But we already knew that because DCS uses more than one core.

     

    The reason why you are seeing no FPS difference in DCS by using hyperthreading is that the core thread singlehandedly saturates one CPU core, and that is your FPS cap. All the other threads use so little CPU that you can probably fit them all onto another core. If you were on a single-core CPU with hyperthreading, you would see a difference.

     

    If you have more than one physical core, hyperthreading does nothing for DCS because the load is spread too unevenly over its threads.

     

    I'm pretty sure DCS would just crash or not load if you actually tried to run it on a single core CPU regardless of whether HT was enabled or not. DCS uses 2 cores.
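
    If you want to see that saturation pattern for yourself, here's a minimal Python sketch (nothing DCS-specific, and it needs the third-party psutil package): one busy "render" thread pins a core while a handful of near-idle helper threads barely register, which is exactly why a second hardware thread on the same core buys you nothing.

        # One saturating thread caps throughput; the light threads fit anywhere.
        import threading
        import time
        import psutil

        def render_loop():        # stand-in for a game's main/render thread
            while True:
                pass              # busy-wait: pegs one logical CPU near 100%

        def helper_loop():        # stand-in for audio/input/etc. threads
            while True:
                time.sleep(0.05)  # mostly asleep, negligible CPU time

        threading.Thread(target=render_loop, daemon=True).start()
        for _ in range(6):
            threading.Thread(target=helper_loop, daemon=True).start()

        for _ in range(5):        # expect one core near 100%, the rest near idle
            print(psutil.cpu_percent(interval=1.0, percpu=True))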

     

    MINIMUM SYSTEM REQUIREMENTS (LOW GRAPHICS SETTINGS):

    OS 64-bit Windows 7/8/10; DirectX11; CPU: Intel Core i3 at 2.8 GHz or AMD FX; RAM: 8 GB (16 GB for heavy missions); Free hard disk space: 60 GB; Discrete video card NVIDIA GeForce GTX 760 / AMD R9 280X or better; requires internet activation.

  2. I think you are confusing terms. Multithreading and hyperthreading are two completely different things.

     

    Hyperthreading is just Intel's proprietary marketing term for CPU multithreading because some marketing guy thought it sounded cool.

     

    AMD uses the term simultaneous multithreading (SMT) for their CPUs.

     

    https://en.wikipedia.org/wiki/Simultaneous_multithreading

     

    https://stackoverflow.com/questions/5593328/software-threads-vs-hardware-threads

     

    At this point you are just being pedantic if you want to argue that more than one software thread counts as multithreading; it's not really relevant to the OP's question, or to people doing a new build or trying to decide between Intel and AMD. You should have just stated the difference between software and hardware multithreading in the first place. Now, as one poster mentioned, they are just confused, and it's not really helping anybody.

  3. What are you asking exactly, then? I'm not sure I understand your question. You can't determine whether a program uses a processor's multithreading capabilities just by counting threads in Task Manager.

     

    The only requirement for "multithreading" is for there to be more than one thread in the process.

     

     

    That's not what multithreading/hyperthreading means.
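
    To make the distinction concrete, here's a rough psutil sketch ("DCS.exe" is just an illustrative process name): Task Manager's thread count only tells you how many software threads exist, not whether they keep more than one core busy.

        import psutil

        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "DCS.exe":      # illustrative name
                print("software threads:", proc.num_threads())  # often dozens
                # can only exceed 100 if more than one core is actually busy:
                print("process CPU %:", proc.cpu_percent(interval=1.0))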

  4. With regard to hyperthreading (HT), how does one create a program to use hyperthreads?

     

    There is no API in Windows to do this.

     

    I can't answer that, I'm not a software engineer. I know Adobe Premiere Pro uses multi-threading, maybe there is a FAQ on their website. Or you can read up on the Vulkan API.
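
    That said, as far as I know he's right that there's no literal "run this on a hyperthread" call; the closest thing is steering work with affinity masks. A rough Python sketch with psutil, assuming the common Windows enumeration where logical CPUs 0 and 1 are the two hardware threads of physical core 0:

        import psutil

        p = psutil.Process()             # this process; any PID works
        print("allowed CPUs:", p.cpu_affinity())
        # Pin to logical CPUs 0 and 1, i.e. (on that assumed layout) both
        # hardware threads of the same physical core:
        p.cpu_affinity([0, 1])
        print("now pinned to:", p.cpu_affinity())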

  5. Sigh... I've already done that, thanks. No one here has said that DCS does not use more than one thread; that's not what multithreading/hyperthreading means. I'm not aware of any program or game that only uses one thread, apart from a few background processes in Windows Task Manager that show only one thread executing. If multithreading actually meant using more than one thread, then pretty much every program would be multithreaded, which is obviously not the case. As I said, DCS does not use hardware multithreading, and you are getting your definitions confused. DCS does not utilize logical cores, only physical cores.

     

    The simplest explanation I can think of is that it's like two people using the same calculator simultaneously, in parallel; e.g. if one person is adding 2 + 2 and the other is adding 3 + 3, they would enter (2 +) (3 +) (2) (3) and get the results (= 4) (= 6).

     

    Instead of being stubborn you should actually read the links I've been posting here and on other threads; they weren't for my benefit, I've already read them. If you had gone to the wiki page you would have read, below the first paragraph I already posted...

     

    https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)

     

    "Where multiprocessing systems include multiple complete processing units in one or more cores, multithreading aims to increase utilization of a single core by using thread-level parallelism, as well as instruction-level parallelism. As the two techniques are complementary, they are sometimes combined in systems with multiple multithreading CPUs and with CPUs with multiple multithreading cores."

     

    https://en.wikipedia.org/wiki/Multiprocessing

     

    https://www.techopedia.com/definition/24297/multithreading-computer-architecture

     

    "Threading can be useful in a single-processor system by allowing the main execution thread to be responsive to user input, while the additional worker thread can execute long-running tasks that do not need user intervention in the background. Threading in a multiprocessor system results in true concurrent execution of threads across multiple processors and is therefore faster. However, it requires more careful programming to avoid non-intuitive behavior such as racing conditions, deadlocks, etc."

     

    [benchmark screenshots]

     

    Hyperthreading OFF

    11-01-2020, 15:20:27 DCS.exe benchmark completed, 1252 frames rendered in 35.844 s

    Average framerate : 34.9 FPS

    Minimum framerate : 33.4 FPS

    Maximum framerate : 35.8 FPS

    1% low framerate : 32.2 FPS

    0.1% low framerate : 31.7 FPS

     

    [benchmark screenshots]

     

    Hyperthreading ON

     

    11-01-2020, 15:33:58 DCS.exe benchmark completed, 1539 frames rendered in 43.078 s

    Average framerate : 35.7 FPS

    Minimum framerate : 34.1 FPS

    Maximum framerate : 37.9 FPS

    1% low framerate : 32.9 FPS

    0.1% low framerate : 31.4 FPS

     

    As is clearly shown in the benchmarks and screenshots, there is no significant difference in DCS performance with Hyperthreading on or off. This is why the 9700K is marketed towards gamers rather than content creators; it's mostly software like rendering and video editing suites that actually benefits from hyperthreading/multithreading.
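
    Putting numbers on "no significant difference", the HT-on run is about 2% faster on average, which is within run-to-run variance for this kind of benchmark:

        # Relative HT ON vs OFF deltas from the runs above.
        ht_off = {"avg": 34.9, "min": 33.4, "max": 35.8, "1% low": 32.2, "0.1% low": 31.7}
        ht_on  = {"avg": 35.7, "min": 34.1, "max": 37.9, "1% low": 32.9, "0.1% low": 31.4}
        for k in ht_off:
            print(f"{k}: {(ht_on[k] / ht_off[k] - 1) * 100:+.1f}%")  # avg: +2.3%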

     

    Does this make more sense now?

     


  6. I'm not spreading misinformation. You are getting your definitions confused. DCS does not use hardware multithreading. I'm not sure why you keep saying this.

     

    "In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB)."

     

    https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)

     

    You can easily test this yourself by turning off Hyperthreading in BIOS or ProcessLasso. It makes no difference whatsoever in performance in DCS.

     

    You can message the developers if you need to and they will confirm this. DCS testers use the 9700K, which does not have Hyper-Threading.

     

    Video editing and rendering packages do though. If you look at the gaming benchmarks I linked to in previous posts you'll see that the 9700K does very well in various gaming benchmarks but not as well as the 9900K or AMD processors in productivity benchmarks.

  7. I'm totally hijacking the thread off topic. Hope the OP doesn't mind. I sold my second 2080 Ti, got a good price for it too. Shipped it off a few days ago. I did get some good benchmark data before I shipped it. CPU is definitely the bottleneck with 2080 Tis, even with a 9900K. I'm planning on doing a more thorough post so it's searchable on the forums, but here's the TL;DR of what I got. SLI really only helps at 1440p and up, and with SSAA and MSAA settings. At 1080p current CPUs just can't send enough draw calls with the current engine. These were all on the latest Open Beta and December's Nvidia driver, on the stock mission over Tbilisi, right over the airfields and the city with lots of trees and smoke stacks, so it's a good stress test. I wanted to do some runs for the F-18 too with all the MFDs on, but I ran out of time.

     

    DCS TF-51 Single GPU

    1080p High Preset, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    05-01-2020, 16:26:17 DCS.exe benchmark completed, 2970 frames rendered in 21.579 s

    Average framerate : 137.6 FPS

    Minimum framerate : 122.4 FPS

    Maximum framerate : 143.6 FPS

    1% low framerate : 97.7 FPS

    0.1% low framerate : 58.1 FPS

     

    DCS TF-51 Dual GPU SLI

    1080p High Preset, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    05-01-2020, 21:20:37 DCS.exe benchmark completed, 4354 frames rendered in 31.047 s

    Average framerate : 140.2 FPS

    Minimum framerate : 137.1 FPS

    Maximum framerate : 143.3 FPS

    1% low framerate : 114.1 FPS

    0.1% low framerate : 99.1 FPS

     

    DCS TF-51 Single GPU

    1440p High Preset, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    05-01-2020, 19:27:37 DCS.exe benchmark completed, 2222 frames rendered in 18.063 s

    Average framerate : 123.0 FPS

    Minimum framerate : 120.7 FPS

    Maximum framerate : 128.1 FPS

    1% low framerate : 112.7 FPS

    0.1% low framerate : 97.2 FPS

     

    DCS TF-51 Dual GPU SLI

    1440p High Preset, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    05-01-2020, 20:11:45 DCS.exe benchmark completed, 3897 frames rendered in 27.688 s

    Average framerate : 140.7 FPS

    Minimum framerate : 136.4 FPS

    Maximum framerate : 145.7 FPS

    1% low framerate : 121.0 FPS

    0.1% low framerate : 102.4 FPS

     

    DCS TF-51 Single GPU

    4096x2160p High Preset, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    05-01-2020, 23:08:58 DCS.exe benchmark completed, 2098 frames rendered in 25.547 s

    Average framerate : 82.1 FPS

    Minimum framerate : 79.9 FPS

    Maximum framerate : 84.6 FPS

    1% low framerate : 78.8 FPS

    0.1% low framerate : 58.2 FPS

     

    DCS TF-51 Dual GPU SLI

    4096x2160p High Preset, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    05-01-2020, 23:44:22 DCS.exe benchmark completed, 5352 frames rendered in 38.656 s

    Average framerate : 138.4 FPS

    Minimum framerate : 130.7 FPS

    Maximum framerate : 143.4 FPS

    1% low framerate : 106.3 FPS

    0.1% low framerate : 80.2 FPS

     

    It worked out to almost 70% scaling in SLI at 4K resolution at the High preset. I was pleasantly shocked. That's way better than most of the games Gamer's Nexus tested in SLI.
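
    For anyone checking the math, "scaling" here is just the SLI average over the single-GPU average; 1080p and 1440p barely move because the CPU caps both runs at around 140 FPS:

        # SLI uplift from the average framerates above.
        runs = {
            "1080p High": (137.6, 140.2),
            "1440p High": (123.0, 140.7),
            "4K High":    (82.1, 138.4),
        }
        for name, (single, sli) in runs.items():
            print(f"{name}: +{(sli / single - 1) * 100:.0f}%")
        # 4K High comes out to +69%, the "almost 70% scaling" above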

     

    I went ahead and went balls to the wall and cranked up all the important settings: extreme draw distance, 4x MSAA, and 2x SSAA, and the results were even better.

     

    DCS TF-51 Single GPU

    4096x2160p Custom Extreme, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    06-01-2020, 00:33:24 DCS.exe benchmark completed, 2411 frames rendered in 43.844 s

    Average framerate : 54.9 FPS

    Minimum framerate : 53.5 FPS

    Maximum framerate : 56.0 FPS

    1% low framerate : 52.3 FPS

    0.1% low framerate : 51.7 FPS

     

    DCS TF-51 Dual GPU SLI

    4096x2160p Custom, Mirrors On, Caucasus, Instant Action: Flight Over Tbilisi

     

    06-01-2020, 00:58:18 DCS.exe benchmark completed, 4455 frames rendered in 44.172 s

    Average framerate : 100.8 FPS

    Minimum framerate : 95.0 FPS

    Maximum framerate : 105.4 FPS

    1% low framerate : 63.6 FPS

    0.1% low framerate : 36.8 FPS

     

    That's 83% scaling. I saw those numbers and I was :huh: but I double-checked and did some more runs, and the math checked out again. They've definitely made some improvements to the engine over the last year, and it looks really, really beautiful in full 4K with all the eye candy cranked. Even on a 65" big screen you can't see any jaggies. I was flying along the Normandy coast around Mont St Michel and the frame rate counter hit 160+ out over the water. :)

     


     

    It's getting late here and I've got to get some sleep; hopefully people found all the data useful. Hopefully I get that rig built sometime this year for a good Intel vs AMD comparison. The 3080 Ti Ampere is definitely on my list when it's released, but I'm tapped out on disposable fun money for the next couple of months. In the meantime I've been learning Blender to do video editing for YT videos.

  8. Another great newsletter guys, keep up the outstanding work.

     

    Looking forward to the new Weather, Damage Modeling, and ATC; very exciting for single player and campaigns, where I spend the bulk of my time.

     

    And Phil, just watched your new video on the Channel map, great stuff. Love your content, you're one of the best creators for DCS right now.

     

    The community mods have more patience than I could ever hope to have. I don't know how you do it. Hoggit in general has gotten so much better lately but anytime WWII content comes up the same dozen or so people get their panties in a bunch. Real Debbie Downers which is a shame.

     

    I own almost all the modules, but I'm pretty much at my limit for what I can learn and fly time-wise, even with Chuck's Guides. But I'll buy new maps all day. More maps please. Going from 4 to 8 flyable maps is very exciting.

     

    Cheers to all the people commenting staying positive and constructive. I'm sure it's appreciated by many besides myself. Happy flyin' everybody... :pilotfly:

  9. :megalol:

     

    They have to convert 4.3 million lines of code. It might actually be a while before the new engine is out; this time next year is probably optimistic at best. There's a popular GA sim that's also doing a Vulkan conversion and it's taking a long time.

  10. If you invest in a Z390 chipset you will fall short when a card superior to the 2080 Ti arrives.

     

    The 2080 Ti is likely the last maximum-performance single card which can get by with PCIe v3.

     

    Seeing that as a fact voids any Intel option if you plan to use the rig for 3-5 years and upgrade the GPU every 1-2 years.

     

    Source: Der 8auer in one of his last YT videos.

     

    Using SLI on an Intel desktop chipset (i.e. Z370/Z390) with 2 x 2080 Ti and high fps will cut your performance, whereas those 2 cards could thrive with 2 x 8x PCIe v4.

     

    https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/

     

    The OP didn't mention anything about SLI (1 x 1070), and the link you posted doesn't test PCIe 4.0. It's not a fact yet and doesn't void Intel; you are just speculating, as the new cards would have to surpass the theoretical bandwidth threshold of PCIe 3.0, which, if they do, would be very impressive.

     

    I tried to find the video you were referring to and couldn't. I would be interested to watch it, as I respect his work and learned a lot about overclocking from him. I did find this one.

     

     

    Here's a good explanation of PCIe 3.0 vs PCIe 4.0. On its face, having a motherboard that supports 4.0 would make sense for future proofing, but the reality is more nuanced and complex.

     

    https://www.pcworld.com/article/3400176/pcie-40-everything-you-need-to-know-specs-compatibility.html

     

    "...because few games ever saturate the 32GBps of data today’s x16 PCIe 3.0 slot can carry."

     

    "Ryzen 3000 “only” can support a single-slot x16 PCIe 4.0,..."

     

    So there is actually no difference between current gaming CPUs here: Intel's 9000 series/Z390 chipset and AMD's 3000 series/X570 chipset can each still only drive a single full x16 PCIe link. They can both have multiple x16-length slots, but the CPU and chipset can still only fully utilize one x16 slot at a time.
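
    The bandwidth figures behind all this are easy to derive from the per-lane signaling rates in the PCIe specs (the "32GBps" quoted above counts both directions of a 3.0 x16 link):

        # Peak theoretical PCIe bandwidth from signaling rate and line encoding.
        gens = {
            "PCIe 3.0": (8.0, 128 / 130),    # 8 GT/s per lane, 128b/130b
            "PCIe 4.0": (16.0, 128 / 130),   # 16 GT/s per lane, 128b/130b
        }
        for gen, (gts, eff) in gens.items():
            per_lane = gts * eff / 8                  # GB/s, one direction
            x16 = per_lane * 16
            print(f"{gen} x16: {x16:.1f} GB/s per direction, "
                  f"{2 * x16:.0f} GB/s both ways")
        # PCIe 3.0 x16: ~15.8 GB/s each way (~32 GB/s total); 4.0 doubles it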

     

    To get more PCIe lanes for dual x16 you have to use the expensive HEDT X-series or Xeon processors with the X299 chipset, or AMD Threadripper. Those chips do not have better single-thread performance in synthetic benchmarks.

     

    https://www.cpubenchmark.net/singleThread.html

     

    In that benchmark and on AMD's website there are good single-thread scores for the PRO processors, but those don't seem to be available for DIY systems, only prebuilts.

     

    Here are some real-world gaming tests comparing PCIe 3.0 to 4.0 in 20+ games.

     

    https://www.techpowerup.com/review/pci-express-4-0-performance-scaling-radeon-rx-5700-xt/

     

    CONCLUSION:

     

    "Looking at the results, we can see a whole lot of nothing. PCI-Express 4.0 achieves only tiny improvements over PCI-Express 3.0—in the sub-1-percent range When averaged over all our benchmarks, we barely notice a 1% difference to PCIe Gen 3. I also included data for PCI-Express Gen 2, data which can be used interchangeably to represent PCIe 3.0 x8 (or PCIe 4.0 x4). Here, the differences are a little bit more pronounced, but with 2%, not much to write home about, either. These results align with what we found in previous PCI-Express scaling articles.

     

    That's of course a good thing as it confirms that you do not need an expensive PCI-Express 4.0 motherboard to maximize the potential of AMD's new Radeon RX 5700 XT. It also produces strong evidence that PCIe 4.0 won't be needed for even more powerful next-gen graphics cards because our three tested resolutions reveal more details.

     

    If you look closely, you'll notice that lower resolutions show bigger differences in performance when changing the PCI-Express bandwidth, which seems counter-intuitive at first. Doesn't the graphics card work harder at higher resolutions? While that may (mostly) be true, graphics card load does not increase PCI-Express bandwidth; it actually lowers it because frame rates are lower. The amount of data transferred over the PCIe bus is fairly constant—per frame. So if the graphics card can run at higher FPS rates because the resolution is lower, the PCIe bus does have more traffic moving across it.

     

    So even if next-gen graphics cards significantly increase performance, we won't see huge differences in PCIe requirements because you'd not use those graphics cards at 1080p, but rather 4K. In such a scenario, with FPS increasing on 4K, the difference in scores would be more similar to this review's data for 1440p, or even 1080p.

     

    When looking at individual game results, the effects of constrained PCIe bandwidth vary wildly. Some games, Shadow of the Tomb Raider, for example, barely show measurable differences, while titles like Rage 2 and Wolfenstein are much more dependent on PCI-Express bandwidth. I can't see a clear trend between APIs or engines as it rather looks like a dependency on how the game developer chooses to implement their game and how much data they copy from the CPU to the GPU, or even back.

     

    These results are also good news for people who consider running their graphics card at reduced link width, like x8 or even x4, to free up precious PCI-Express lanes for other devices, like storage."

     

    Hopefully we have the Vulkan API update that takes better advantage of modern processors by this time next year. I'll definitely run some comparison benchmarks when it does.

  11. I initially scrolled to your post and saw the first graph, which gives off the impression that the 9900K is far superior for gaming compared to the 3800X or 3900X, which is not true. It was true before the latest AGESA versions came out.

     

    So I posted, and a moment later edited, because I saw your second graph, which is much closer to the truth. The more you look at the latest benchmarks, or bench yourself, the more you see that there really is no difference between these processors anymore; it depends only on the application you're benching with/using in general.

     

    Which benchmarks? I always look for them. The benchmarks I posted from PC Gamer are from Dec 9, 2019. Just posting that there are different benchmarks without linking to them doesn't really help anybody. I'm not aware of any source that does better benchmarking and testing than Gamer's Nexus and Jarred Walton.

     

    Most people use their gaming rigs for gaming. Sometime in the next year I hope to build a second rig for CC and it will probably be an AMD 3900X as AMD performs really well in video editing and encoding benchmarks. Then I can run some comparison benchmarks for DCS specifically. I would love to do some benchmarks as thorough and exhaustive as GN and PCG but would need a budget of $20,000+ to buy all the chips and motherboards.

     

    DCS is limited by single-thread performance. You should read the article and watch the video. The 9900K beats the 3900X by 10-30 fps in all of the benchmarks except for the 2 that I already mentioned. The mean gap on the first slide was 10 fps.

     

    The OP says he was looking at AMD's 3700X, 3800X and Intel's 8700K and 9700K. The 9700K is currently cheaper than the 3800X in the US.

     

    I was just providing the OP with data so that he can make an informed decision rather than rely on opinions which are subject to bias.

  12. Whoa, great news! Thanks for the heads up!

     

    Question though...

     

    I currently use EVGA Precision to limit my FPS to 60...would using NVIDIA Control Panel be better for limiting the frame rate?

     

    You're welcome.

     

    And to answer your question, to be honest I have no idea.

     

    I'm mostly playing VR now, but I had been using the frametime limiter in RivaTuner. In the NVIDIA Control Panel I used adaptive sync for my 60 Hz big screen and prerendered frames set to 1.

     

    I wouldn't be sure how to test it without a way to objectively measure latency and input lag; it might be hard to tell any difference with the Mk I eyeball. I see it definitely having potential though. guru3d.com and Blur Busters get pretty hardcore about testing these things, so hopefully they have something in the works.

  13. 3900X and 3800X are about the same performance as the 9900K with the newest AGESA versions. Stop living in August of 2019 and spreading false and/or outdated information. Thank you.

     

    Edit: There you go, you can see it on the second graph you yourself posted.

     

    Beg your pardon? What misinformation am I spreading exactly? Those graphs are from Dec 9, 2019, not August. The benchmarks speak for themselves; opinions don't really matter when you have objective benchmarks. In every one of the benchmarks except for 2 use cases, Intel beats AMD in gaming performance. In Assassin's Creed the TR 3960X ($1400) has a slight edge, and in Total War AMD has the edge, because those games use multiple cores. DCS is still bound by single-thread performance, as stated in my post.

     

    The OP said he was looking at the Intel 8700K or 9700K, or AMD's 3700X or 3800X. I don't think he wants to spend $1400 for a CPU that will have no benefit in DCS. The 9700K beats those AMD processors in every gaming benchmark except for the 2 multicore games previously mentioned. The 9700K is also cheaper than the 3800X right now in the US.

     

    In various productivity software AMD has an edge. The Gamer's Nexus benchmarks also confirm and replicate the same results. If you have better benchmarks please share.

     

    On the second graph I posted the results are very clear; perhaps you should look at it again. I never said they weren't "about" the same performance. All these processors are "about" the same performance; that's why we look at objective benchmarks with a quantitative score and compare prices. I think you need to actually read the article and watch the video. Nothing I stated was false or outdated, and the evidence is clear for everyone to see.

     

    If you feel the results are in error or there is a flaw in the benchmarking methodology you need to contact Jarred or Steve directly as I did not conduct these benchmarks myself. I just posted the results.

     

    Also, as you get to 4K resolution there's almost no difference, as you are hitting GPU limits. High-framerate 1080p gaming is where you see the biggest differences between CPUs, which is why I asked the OP what resolution and budget he was looking at.

  14. New Nvidia Driver 441.87 has Native Frame Rate Limiter

     

    "Maximum Frame Rate

    This driver introduces a new Max Frame Rate setting that allows users to cap the frame rate at which a 3D game or application is rendered. This feature is helpful when trying to save power, reduce system latency or paired with your NVIDIA G-SYNC display to stay within variable refresh rate range. Access the feature from the NVIDIA Control Panel->Manage 3D Settings->Max Frame Rate."

  15. What's your budget and what resolution are you planning on playing at? At 1080p the CPU will be the limiting factor and at higher resolutions the GPU will be the limiting factor.

     

    You should take people's opinions with a grain of salt. Best to look at the objective benchmarks.

     

    Jarred Walton at PC Gamer and Steve at Gamer's Nexus have done the most thorough benchmarks I'm aware of. They don't test DCS, unfortunately, but they have tested all the current Intel and AMD CPUs on DX11 and DX12 games, and Intel is still currently beating AMD in real-world performance.

     

    https://www.pcgamer.com/best-cpu-for-gaming/

     

    For example, if you flip through the slides in that article and check Far Cry 5, which, like DCS, is based on the DX11 graphics API, Intel's 9600K and even the 7700K still beat AMD's newest processors, even the high-end 3900X, which costs almost the same as a 9900K.

     

    [chart: Far Cry 5 CPU benchmark (PC Gamer)]

     

    DCS is still currently limited by the single-thread performance of the CPU. The 9700K is as good as or better than the 9900K in most games and is essentially the same chip without Hyper-Threading. DCS does not use hardware multithreading; it's really only exploited by software like Blender and Premiere for rendering and editing.

     

    [chart: CPU gaming benchmarks (PC Gamer)]

     

    Here's Gamer's Nexus' review of the 3700X, where they compare it to all the current CPUs in their benchmarks.

     

     

    One of the testers mentioned in the forums that they use the 9700K on the machines at the DCS office. I'd go with that one as it has great performance and is much cheaper than the Intel 9900K or the AMD 3900X.

  16. I'm not talking about overclocking; I'm saying that if you or DCS do not set core affinity in a way that limits the number of cores that can be used, then the task scheduler in the Windows operating system will schedule any "ready-to-run" thread onto any available CPU core it wants to use.

    (i.e. it will not limit the “ready” threads to specific cores)

     

    That's the whole point of the core "affinity mask" setting in the Windows kernel.

     

    Nothing to do with overclocking.

     

    Intel Turbo Boost chooses the fastest cores automatically unless the user manually turns it off, and Turbo Boost is an automatic form of overclocking above the CPU's stock base clock. I don't set core affinity manually; I never said I did. DCS's rendering thread settles on logical core 15 in my Afterburner OSD automatically, with no intervention on my part. It's pretty easy to test this yourself. I recommend MSI Afterburner if you don't use it already; it's free and works with all graphics cards.

     

    https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-max-technology.html

     

    Here's a screenshot from some SLI benchmarks that I'm doing. I just have the stock overclocks enabled in BIOS and the High graphics preset for these tests as a baseline. As you can see, logical thread 15 (physical core 8) is at 97%. That's DCS's rendering thread that's sending draw calls to the GPU. The one GPU being used is also at 97%. This is how you know you are at peak performance. The CPU's limiting factor is single-thread IPC. A 9900K will slightly outperform an 8700K or 9700K (no hyperthreading), but not because of more cores or HT; it just has faster single-thread performance. AMD's newest line matches or surpasses an average 9900K in single-thread performance. The recent KS "special edition" is just a highly binned 9900K; over time they refine the lithography process and QC and get better chips than when the part was first released. That's why I buy my processors from Silicon Lottery. They test and bin processors for higher clocks.
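
    If you don't run Afterburner, here's a quick psutil sketch that prints the busiest logical CPU once a second; with DCS running it should settle on whichever core the render thread lands on (15 on my machine, but that varies):

        import psutil

        while True:
            loads = psutil.cpu_percent(interval=1.0, percpu=True)
            hot = max(range(len(loads)), key=loads.__getitem__)
            print(f"busiest logical CPU: {hot} at {loads[hot]:.0f}%")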

     

    https://www.cpubenchmark.net/singleThread.html

     

    [screenshot: Afterburner OSD, logical thread 15 and the GPU both at 97%]

  17. To limit the number of cores, the computer user or the exe process has to set the core affinity mask and limit the core usage.

     

    We know that DCS does not do this itself (you can check for yourself by looking at the DCS CPU mask while it's running).

     

    I'm not suggesting it can't be limited to only two cores, but it's not programmed to do that out of the box; it's something a user would have to do manually with a tool (like Process Lasso).

     

    Not necessarily,

     

    I use Process Lasso, but I don't set core affinity with it or any Windows utility. And if you use Process Lasso to turn HT on or off, you won't see any difference in FPS. I may see a slightly smoother line in the frametime graph with HT off, but we are talking fractions of a millisecond; it's not noticeable with your eyes. This was under test conditions with no TrackIR, Tacview, recording, streaming, etc. I think Bit mentioned all the stuff going on that might be influenced by HT, but that's just more variables to test.
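
    Under the hood that Process Lasso toggle is just an affinity mask. Here's the rough equivalent as a psutil sketch; the process name and the assumption that logical CPUs (0,1), (2,3), ... pair up on the same physical core are mine, so check your own machine's layout:

        import psutil

        def ht_off_for(proc_name="DCS.exe"):     # illustrative name
            # one logical CPU per physical core: 0, 2, 4, ...
            physical_only = list(range(0, psutil.cpu_count(), 2))
            for proc in psutil.process_iter(["name"]):
                if proc.info["name"] == proc_name:
                    proc.cpu_affinity(physical_only)
                    print(proc.pid, "->", proc.cpu_affinity())

        ht_off_for()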

     

    I set a 2-core ratio overclock of 5.2 GHz in the Asus BIOS for DCS. Trying to OC all 8 cores requires more voltage and puts out more heat, and you won't see any benefit in DCS; you can OC 2 cores higher and still stay stable versus OCing all 8. I have a custom loop and a good chip from Silicon Lottery that was rated at 5.0 on all cores, so YMMV. I think I might be able to push it to 5.3 or 5.4 GHz on 2 cores and stay at a safe voltage, but I haven't tried yet. I can stay stable playing DCS on OCs that crash Cinebench and other stress tests; DCS does not use AVX, so you can usually push it pretty high and still stay stable.

     

    If Intel Turboboost is enabled in BIOS, as it usually is by default, it will put demanding programs and games on the best core without the user doing anything manually.

     

    https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-max-technology.html

     

    This should be transparent to most users as it's usually enabled by default and you would have to manually turn it off in your BIOS which I do not recommend.

  18. That's not 100% correct.

    Heatblur planes use a different core for their radar. So that would be three cores when playing the F-14 and Viggen.

    Maybe the current ED planes use that feature, too - I don't know.

     

    Oh wow, I didn't know that, thanks for sharing.

     

    Oh boy, now I have another variable to test and benchmark LOL

     

    I had settled on just using the TF-51 on the Caucasus map because it's free, but I might run a few comparisons with Heatblur's modules. I own the Tomcat and the Viggen.

     

    This is why I love these forums and flight simming and gaming in general. You can never stop learning how to get the peak performance out of your system, there's always something new to explore. I honestly spend more time building, tinkering, and testing than I do just gaming.

     

    It's not unusual for me to start a new game, get the graphics and mods installed and tuned, and then get bored and start tuning a new game. :huh:

  19. This is correct.

     

    DCS creates many, many threads, and they are NOT hamstrung and NOT pinned to a small limited set of CPU cores, unless you do it yourself.

     

    That rumor was thoroughly debunked years ago.

     

    I'm not sure where you are getting your information from, but DCS does not use hardware multithreading or more than 2 cores. I wish that weren't the case, but we have to wait for the new engine that uses the Vulkan API.

     

    If you are referring to Afterburner, that can be deceptive, because Win 10 can switch threads between cores faster than a person or the polling software can perceive, to keep the CPU cool. It's still just 2 cores though.

     

    Every chip, no matter how many cores, will have a core that is faster; that's what Intel's Turbo Boost takes advantage of. With Turbo Boost enabled, DCS's graphics thread will settle on that core. On my machine it's core 8, or logical thread 15, but it can vary from processor to processor.

     

    If you are still skeptical of this you need to check with Big Newy or Skatezilla and they will confirm this. And they've both stated multiple times that DCS only uses 2 cores. That's where I am getting this information from.

     

    Edit: Here's the best thread I found with a quick search.

     

    https://forums.eagle.ru/showthread.php?t=201530

     

    Conclusion: DCS still only uses 2 cores at any given time and does not use multiple hardware threads within the same core. Single-threaded IPC is still the limiting factor for CPUs in DCS currently, i.e. you won't get higher FPS with more cores.
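
    One way to sanity-check that conclusion yourself is to sample per-thread CPU times over a window and count how many threads actually accumulate meaningful time; a psutil sketch (process name assumed, run it while flying):

        import time
        import psutil

        def busy_threads(proc_name="DCS.exe", window=10.0, min_secs=1.0):
            proc = next(p for p in psutil.process_iter(["name"])
                        if p.info["name"] == proc_name)
            before = {t.id: t.user_time + t.system_time for t in proc.threads()}
            time.sleep(window)
            after = {t.id: t.user_time + t.system_time for t in proc.threads()}
            busy = [tid for tid, total in after.items()
                    if total - before.get(tid, 0.0) >= min_secs]
            print(f"{len(after)} threads total, {len(busy)} kept a core busy")

        busy_threads()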

     

    And here's Skatezilla's first post on the Test: Setting CPU Affinity thread (no pun intended :music_whistling:)

     

    https://forums.eagle.ru/showpost.php?p=1959076&postcount=1

     

    And to make it even more confusing:

     

    https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html

  20. I can't answer that question; you'll have to ask Skatezilla or one of the developers why DCS doesn't use multithreading or more than two cores.

     

    Hopefully the new Vulkan-based engine comes out sooner rather than later, but there's a popular GA sim that's also doing a Vulkan conversion and it's taking a long time.

     

    On a related note, Gamer's Nexus just recently did a really cool benchmark of Red Dead Redemption 2 and saw better performance with the Vulkan API than with DX12, so it looks like ED made the right decision pursuing Vulkan.

  21. You should message Skatezilla, as he could explain it better. As I understand it, nothing prevents it from happening; rather, DCS's engine isn't programmed to utilize Hyperthreading/multithreading to the extent possible on modern CPUs.

     

    A lot of people on these boards, and in general, interchange the terms multithreading and multiprocessing or use them incorrectly.

     

    https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)

     

    DCS does use multiprocessing, i.e. 2 cores, but does not use the two logical processors within a single physical core, if that makes sense.

     

    My CPU has 8 physical cores and 16 logical cores. DCS still only uses 2 physical cores at this time. Windows might be doing some stuff under the hood with background processes that does use multithreading, but it's not DCS using those threads.
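
    The physical/logical split is easy to query directly with psutil, which is a handy sanity check when reading Task Manager:

        import psutil

        print("physical cores:", psutil.cpu_count(logical=False))  # e.g. 8
        print("logical cores:", psutil.cpu_count(logical=True))    # e.g. 16 with HT/SMT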

     

    Hopefully with the new Vulkan graphics engine the true power of our CPU cores working together in parallel will finally be unleashed. I think we need to be patient for that, though, as I'm sure it's a time- and labor-intensive process that's akin to reprogramming the game from scratch.

     

    https://superuser.com/questions/740611/what-is-the-difference-between-multithreading-and-hyperthreading

     

    Now, I'm not a graphics programmer and have never coded a graphics engine, but that's the extent of my understanding on the subject. Perhaps Skatezilla can shed more light on it than I can.

  22. This one here is a tempting offer, but I've heard they aren't reliable, and use poor quality components. Is this true? If not, I'll definitely get it. https://www.bestbuy.com/site/ibuypower-gaming-desktop-intel-core-i7-8700-16gb-memory-nvidia-geforce-gtx-1070-1tb-hdd-240gb-ssd-black/6389791.p?skuId=6389791

     

    Hey Flanker,

     

    If you are in the States, you don't have to buy from Bestbuy to get those builds. You can buy direct from their websites and you will have more choices for components and most likely get a better price.

     

    https://www.ibuypower.com/

     

    https://www.cyberpowerpc.com/

     

    I concur with Bit: get the best CPU you can afford right now and 16 GB of RAM, especially if you are running at 1080p resolution. That's the foundation. Upgrading your RAM and GPU later, when you have more fun money and a higher-resolution monitor, is easy. It's not as easy to upgrade the motherboard and CPU later without essentially rebuilding the system yourself.

     

    RAM and GPU are easy to sell online to recoup some of your investment. And once you get more comfortable swapping out components, building your own system from scratch in the future will be easy peasy.
