Multi-Threading Discussion


Canada_Moose

Hello, I really appreciate the multi-core development report pinned at the top of the forum.

I was wondering whether we can expect real scalability from this development. For instance, I'm currently running a 3700X on an X570 motherboard and am considering a 5900X or a 5800X3D (when available), as this is the last CPU series that will be available for my socket.

Twelve cores would have some other benefits for me in production work, but there will be pluses and minuses for each CPU.

Can we expect the multi-core code of DCS to scale to 12 cores and beyond, or will 8 suffice for best performance?

Maybe it's too early to know, but I thought I would ask before I purchase anything in the near future.


4 hours ago, Canada_Moose said:

or will 8 suffice for best performance?

According to the Steam Hardware Survey, about a third of people use a 4-core CPU, and another third a 6-core one. So that's likely to be the target.
This obviously doesn't mean it won't scale to 8 cores and more (it could, or it couldn't; we don't know that yet), but it does mean the new engine won't be optimised for 8+ cores, because optimising for that would exclude everyone who runs on fewer than 8...



If they are working on multicore support in 2022 and it's not easily scalable, then they are doing something wrong...



Just to be clear, so I understand correctly: currently DCS is technically multi-core, but most of it runs through a single core, apart from sound, which runs through a different core. Is that right?


3 minutes ago, Willie Nelson said:

Just to be clear, so I understand correctly: currently DCS is technically multi-core, but most of it runs through a single core, apart from sound, which runs through a different core. Is that right?

That's the general understanding.



On 2/11/2022 at 10:27 PM, 5ephir0th said:

If they are working on multicore support in 2022 and it's not easily scalable, then they are doing something wrong...

They're working on adding multicore support to an engine that got its start in 1995. Quite a bit of work has been done since Flanker 1.0, obviously, but the groundwork was laid all the way back then. Multithreaded programs are an entirely different paradigm from single-threaded ones, and unless they're going to throw out the guts and remake the engine from scratch (unlikely, to say the least), some compromises will ensue when implementing multicore in DCS. Scalability tends to be the first thing to go; if you're lucky, the engine can be split into a fixed number of parallel threads running largely independently of one another.


20 minutes ago, Dragon1-1 said:

They're working on adding multicore support to an engine that got its start in 1995. [...] Scalability tends to be the first thing to go; if you're lucky, the engine can be split into a fixed number of parallel threads running largely independently of one another.

Then it will be, as we say in Spain, bread for today, hunger for tomorrow.


On a positive note, with the dedicated server capability, they've already shown the ability to offload AI logic from the main thread by running it on the dedicated server. So, in theory, we already have something of a "quick and dirty" workaround to reduce some of the load on a single CPU core.

I'm a little surprised that they've not taken more advantage of that capability.



1 hour ago, 5ephir0th said:

Then it will be, as we say in Spain, bread for today, hunger for tomorrow.

Around here we say "a sparrow in your hand is better than a woodpecker up a tree." If done right, even with a fixed number of threads you'll see massive performance gains. Sound, AI logic, rendering (it's not only the GPU that's involved), ownship physics and missile logic are just a few examples of things that, in theory, could be given their own thread. I have no idea how it'd actually work under the hood, but the possibilities are many, particularly when it comes to AI.
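
(To make the "fixed number of threads" idea concrete, here is a minimal C++ sketch of that kind of functional split, with each subsystem ticking on its own dedicated thread. The subsystem names and rates are invented for illustration; this is not how DCS is actually structured.)

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical subsystem tick functions -- placeholders, not DCS code.
void tick_audio()   { /* mix and submit sound buffers */ }
void tick_ai()      { /* update AI decision logic */ }
void tick_physics() { /* integrate ownship and missile physics */ }

std::atomic<bool> running{true};

// Each subsystem loops on its own thread at its own fixed rate.
void run_subsystem(void (*tick)(), std::chrono::milliseconds period) {
    while (running.load()) {
        tick();
        std::this_thread::sleep_for(period);  // crude fixed-rate loop
    }
}

int main() {
    std::vector<std::thread> workers;
    workers.emplace_back(run_subsystem, tick_audio,   std::chrono::milliseconds(10));
    workers.emplace_back(run_subsystem, tick_ai,      std::chrono::milliseconds(50));
    workers.emplace_back(run_subsystem, tick_physics, std::chrono::milliseconds(16));

    std::this_thread::sleep_for(std::chrono::seconds(1));  // "run the sim"
    running = false;
    for (auto& w : workers) w.join();
    std::puts("all subsystem threads joined");
}

The catch with this layout is exactly the scalability point made above: the thread count is baked in, so a 13th core buys you nothing.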

It sure would be nice to have ED weigh in on how many cores they expect DCS to make use of.


10 hours ago, Dragon1-1 said:

Multithreaded programs are an entirely different paradigm from single-threaded ones

I stand to be corrected, but aren't multithreading and multicore two totally different technologies that are somewhat meant to achieve the same goal? The way I always understood it: multithreading is running multiple processes, or parts of a single process, on the same core at the same time, while multicore is taking advantage of a second core or more. Both aim to achieve simultaneous execution of different code, one through clever timing and use of CPU cycles (multithreading), one through actually executing the code at the same time (multicore). Multithreading has been around for ages. I don't know how long exactly, but I think it should be long enough for the first engine of DCS to take advantage of it.


8 hours ago, Cathnan said:

I stand to be corrected, but aren't multithreading and multicore two totally different technologies that are somewhat meant to achieve the same goal? [...]

You're confusing a bunch of stuff. I'll try to explain some terminology here. I'll have to simplify a lot, because some of these things are terribly complicated.

A process is an operating system thing. It has nothing to do with your physical hardware. Your OS has to manage programs and their data, so they don't get mixed up. Think of taking a program, the data that belongs to it and stuffing them both in a box. This box is your process.

Inside each box is at least one thread. Like processes, threads aren't a physical hardware thing. They, too, are provided by your OS for management reasons. You can think of a thread the way you'd think of a secretary. She takes the part of the program that needs to be executed, the data that belongs to it and hands it over to a worker. Threads, like secretaries, fill a managerial role; the actual work is done elsewhere.

That elsewhere is a processor core. That's your worker in this analogy. Unlike processes and threads, cores are a hardware thing. They're a physical part of a processor. It's here that the actual work is done.

 

Your OS can hire extra secretaries for your programs on-demand, but the number of workers is fixed (you'd need to upgrade your processor). Which worker a secretary turns to is usually handled automatically by your OS. It spreads them out intelligently so you get to make the most out of your workers.
If you have more threads (secretaries) than cores (workers), they'll have to make queues. The cores then work through their queue one by one. They usually cycle through them, doing a bit of work for each of them. That way, no thread has to wait for too long.
This is how you could run multiple programs at once back in the old days, when your CPU had only one core.

So if you write a program that has multiple threads, it's automatically spread across cores by your OS. However, you only get a performance benefit from that if your CPU has multiple cores. Without multiple cores, it can only do one thing at a time.
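
(A tiny, self-contained C++ illustration of the above; it assumes nothing about DCS. The program only creates threads, our "secretaries"; which cores, our "workers", they land on is entirely the OS scheduler's business.)

#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // Ask the hardware how many "workers" (logical cores) it has.
    unsigned cores = std::thread::hardware_concurrency();
    std::printf("hardware reports %u logical cores\n", cores);

    // Hire eight "secretaries" (threads). If there are fewer cores than
    // threads, the OS time-slices them across the cores, exactly like the
    // queueing described above.
    std::vector<std::thread> threads;
    for (int i = 0; i < 8; ++i)
        threads.emplace_back([i] {
            volatile long sink = 0;
            for (long n = 0; n < 100000000L; ++n) sink += n;  // busy work
            std::printf("thread %d done\n", i);
        });

    for (auto& t : threads) t.join();  // wait for every worker to finish
}

On a single-core CPU this takes roughly eight times as long as one thread would; on an 8-core CPU the threads genuinely run at the same time.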



13 hours ago, TheSniperFan said:

You're confusing a bunch of stuff. I'll try to explain some terminology here. [...] So if you write a program that has multiple threads, it's automatically spread across cores by your OS.

That's very helpful. Given your experience, where do you think multicore and Vulkan will likely take DCS? Do you think it's possible that this will provide similar framerate performance, but with the opportunity to run a lot more processes that may be required, for example, for the new weather engine, flight models and dynamic campaign?

I'd be interested in anyone who knows what they're talking about speculating on what multicore and Vulkan may provide going forward.


3 hours ago, Willie Nelson said:

That's very helpful. Given your experience, where do you think multicore and Vulkan will likely take DCS? [...]

I have some experience with multi-threading and OpenGL. Vulkan is still on my to-do list, but I have read some articles, so don't take anything I say about Vulkan as fact. Also, since I am not involved in the code development of DCS, I might overlook or forget important aspects. With that out of the way:

The answer regarding framerate is, as always: it depends.

One important factor is the hardware we are looking at. There can be several bottlenecks, like RAM, CPU, or GPU and the communication between them (bandwidth). All those pieces can have their own internal bottlenecks as well, and the question is which boundaries DCS is hitting on a certain hardware combination. So the total benefit might vary a lot from system to system.

Anyway, since DCS is currently more or less a single-core engine, the biggest bottleneck is probably the CPU on almost all systems. Assuming that the CPU workload distribution scales well in DCS, you could in theory get a performance gain close to the number of cores you have. BUT you will most likely run into another bottleneck first (like your GPU getting to its limits).

Speaking of the GPU: a current problem of DCS being single-core might be that the GPU stalls (does nothing) when the CPU is drowning in work, since it depends on the CPU to tell it what to do. So if you are running large missions and your frame rate drops even though you are sitting in the desert, it is because the CPU isn't able to keep the GPU busy. This is where multi-threading support will make a huge difference.

On the other hand, if you fly on an empty map with nothing going on, multi-threading will most likely not help that much, because we can assume that the CPU workload is minimal and it has plenty of time to feed the GPU *[1].

Now getting to Vulkan. To get the maximum frame rate, our GPU needs to be running 100% of the time without waiting for data. It also shouldn't spend any time on anything other than stuff related to rendering *[2]. To my knowledge, Vulkan's main benefit is that it gives you much better control over the GPU and also supports multi-threading. Older APIs like OpenGL could only be used inside a single thread. The problem here is that you might want to do a lot of preprocessing for the GPU, but if you do that in multiple threads, you have to sync them with the "OpenGL thread", which almost always results in someone waiting and doing nothing. With Vulkan, each thread can communicate its results directly to the GPU.

Regarding the control: older APIs like OpenGL do some tedious setup automatically for the programmer. The benefit is less code to write and worry about. The drawback is that the way it is implemented by OpenGL is very restrictive and most likely not optimal for your use case, especially when we are talking about high-end engines.

So Vulkan will help to keep the GPU at max and might get us some smaller performance boosts, but we are not talking about huge numbers. I would expect something between 10 and 20% based on the articles I read, and under the assumption that the current engine is already highly optimized. The real deal will be multi-threading. But one indirect consequence of Vulkan might be that ED rewrites old legacy code more efficiently during the update process, which might also yield some more FPS. As far as I know, the rewriting was the main reason for the "miracles" Vulkan did for some other games, not the API itself.
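
(To make the "everyone waits for the OpenGL thread" point concrete, here is a small C++ sketch with stand-in data instead of real API calls; the names are invented. In the OpenGL-style model, every worker would have to funnel its results through one shared, lock-guarded channel; in the Vulkan-style model sketched here, each worker records into its own command list and only the final hand-off is serialized.)

#include <cstdio>
#include <thread>
#include <vector>

constexpr int kWorkers = 4;
constexpr int kItemsPerWorker = 1000;

int main() {
    // "Vulkan-style": each thread records draw commands into its own
    // command list, so no lock is taken while recording.
    std::vector<std::vector<int>> perThreadCommands(kWorkers);
    std::vector<std::thread> workers;
    for (int w = 0; w < kWorkers; ++w)
        workers.emplace_back([&, w] {
            for (int i = 0; i < kItemsPerWorker; ++i)
                perThreadCommands[w].push_back(w * kItemsPerWorker + i);
        });
    for (auto& t : workers) t.join();

    // Only the final submission to the "GPU queue" is serialized, which
    // is cheap compared to serializing all of the recording, as a single
    // mutex-guarded "OpenGL thread" effectively would.
    std::vector<int> gpuQueue;
    for (auto& cmds : perThreadCommands)
        gpuQueue.insert(gpuQueue.end(), cmds.begin(), cmds.end());

    std::printf("submitted %zu commands\n", gpuQueue.size());
}

In real Vulkan the per-thread containers would be command buffers allocated from per-thread command pools, but the shape of the win is the same.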

 

Conclusion

If you are mainly doing sightseeing in the Huey, I would expect moderate frame rate improvements, maybe 10-30 percent.

The real boost will come if you run a full-scale war scenario with many aircraft and ground assets on dense maps like Syria. Ideally, you will have the same frame rate as on an empty map, and the only bottleneck should be your GPU.

 

*[1] This is not necessarily true in complex software like a flight sim with high-end aerodynamics and systems modelling.

*[2] It is possible that DCS does some computations on the GPU to take some workload off the single CPU.



How about hyperconverged computing? Has anybody thought of offloading the AI calculations over the network onto another node? If you think about it, AI calculations are not time-critical and can tolerate a slight increase in delay.

This means one could use a dedicated server connected to a 10 Gb network to calculate all the AI in the session.



40 minutes ago, stormrider said:

How about hyperconverged computing? Has anybody thought of offloading the AI calculations over the network onto another node? [...]

 

Sounds like an interesting concept, but I don't think this is practical for the vast majority of people. Even if you can run it on an older, slower machine, I doubt the majority of players are willing to set up a second machine, or have the hardware lying around to do so. It also adds complexity on the software side, because it's additional code to maintain for what are most likely marginal gains once multicore is released. Or do you mean the physics calculations for the AI? That could really give a performance boost if not all calculations are offloaded. But I still think it's not practical for most people.



4 hours ago, Cathnan said:

Sounds like an interesting concept, but I don't think this is practical for the vast majority of people. [...]

You can do something like this now: run the DServer on another computer on your LAN, load complex missions on that computer, and join as you would any multiplayer mission. For a simple mission with very little AI you won't see much of a difference; however, a mission with lots of AI, very dense objects and complex scripts will run much better than if you ran it in single player. Essentially, in MP the host computer does all of the AI calculations, executes the scripts and handles all the common calculations, freeing the computing power of the client computer to render your environment, flight model etc., which should give the CPU the headroom to max out the GPU.

The better your server computer, the better the performance. On a LAN the connection speed can't be beaten, except by running the DServer on the same computer. In theory you could run it in parallel and it would utilize a different core of your CPU. You would sacrifice some RAM, but setting your page file to 4x the size of your RAM would help. I have never tried this, as I already have a server in my basement.

 


6 hours ago, Wychmaster said:

 

Anyway, since DCS is currently more or less a single-core engine, the biggest bottleneck is probably the CPU on almost all systems.

 

What is particularly interesting is the VR support. That is the heaviest load on the old engine (nearly doubling the calculations needed, since it has to render a different picture for each eye), so I'll be very interested in the solution ED comes up with for Vulkan + multicore. Not to mention the old anti-aliasing techniques, which were applied to DCS with no VR on the table and now cause some artefacts in VR.

Right now, if you're GPU-bottlenecked in VR, then by sacrificing some quality you can go the FSR route to free up some GPU resources: the image is calculated at lower than native resolution and then upscaled, which is more efficient than calculating everything at native resolution. You can further help the GPU by applying motion reprojection, which uses the CPU (!) to calculate the frames in between. Right now it does NOT matter that DCS mainly runs on one core, as the other cores of my 10700K can help the GPU calculate the in-between frames for smooth head movement, even if the calculated DCS framerate is low (like 33 or 45 fps; I'm using a 90 Hz Reverb G2). Of course, if the mission places a heavy load on that one CPU core, then FSR and MR will not be able to work miracles; the fps will tank.

If the DCS engine transfers properly to multicore, it will distribute the load across more CPU cores. So be it, but then it could steal resources from the now "dedicated" motion-reprojection calculation, which is not good. That can be balanced by some (or a LOT of) optimization in the rendering method to reduce the nearly 2x GPU load from rendering the image for each eye. Anyhow, properly balancing and fine-tuning Vulkan and multicore together HAS the potential for a MUCH better VR experience in DCS, but it is surely a tremendous task. We're eagerly waiting for the fruits of this giant work.

I have a slight fear that the gains of the multicore approach will be nullified by the cost of loading more cores, at least on unbalanced systems (like using a weak GPU with a strong CPU for VR). We'll see.


71st, that's how I run my own dedicated server, i.e. just using a spare core in my CPU. It does require another 5GB of RAM (ish), but it works well at taking the load off the main core running the DCS client.

As mentioned, I'm a little surprised that ED haven't offered this as built-in functionality, as an "option" for those of us with enough spare RAM.


16 hours ago, Wychmaster said:

I have some experience with multi-threading and OpenGL. [...] The real boost will come if you run a full-scale war scenario with many aircraft and ground assets on dense maps like Syria. Ideally, you will have the same frame rate as on an empty map, and the only bottleneck should be your GPU.

Thanks very much for that; your conclusion is what I was suspecting. It makes sense, too, given that my graphics are pretty decent in low-AI scenarios now. It should be said they're not too bad even in MP when I'm up and away from the action. I cannot begin to imagine the work they're doing to rewrite such an enormous amount of code, but I'm really grateful. Exciting times ahead indeed.

All those troops on the ground moving more realistically, with greater aerodynamic and dynamic play, will be really something.
 

Thanks again. 


On 2/15/2022 at 3:50 PM, Mr_sukebe said:

71st, that's how I run my own dedicated server, i.e. just using a spare core in my CPU. [...]

How do you do this? I've got cores and RAM coming out of my ears.


38 minutes ago, scampaboy said:

How do you do this? I've got cores and RAM coming out of my ears.

- Within your missions, ensure that the aircraft you want to fly are "clients" and not "pilots"

- Download and install the dedicated server (currently around 135GB, but it'll copy the files from your client, so will be fast)

- Run the dedicated server. If you look in these forums, there's a good set of guidelines on how to do that. It's not hard, but you'll need to remember to open the appropriate ports on your router

- Add some of your missions into the dedicated server mission list

- Run the mission you want to fly using the dedicated server

 - Start your DCS client

 - Go into multiplayer and login to your own server

Job's a good 'un!


On 2/15/2022 at 7:49 AM, Willie Nelson said:

That's very helpful. Given your experience, where do you think multicore and Vulkan will likely take DCS? [...]

It's impossible to tell, because we don't know what exactly makes DCS run the way it does. It could be any number of things. What we can say with a reasonably high degree of confidence is that it's CPU-side. The CPU appears to be the bottleneck.

Multithreading is exactly how one would address that, except there are some problems.

  1. Writing an all-new engine from scratch isn't reasonable. It would be an unfathomable amount of work. Therefore some of the old, sub-optimal code is going to stay. How much of an impact is that going to have? Impossible to tell (see the first sentence of this post).
  2. There's more than one way of doing multithreading. Some of them are relatively simple (e.g. offloading long-running calculations to their own thread; see the sketch after this list), others are more sophisticated (e.g. a task graph based on a DAG). The former can be worked into existing code relatively easily, but doesn't have the same potential as the latter. However, the latter can't just be added easily: algorithms have to be programmed in an entirely different way to make use of it.
  3. The missions use Lua scripting, which isn't exactly the fastest way of doing things. It does have its benefits, but performance isn't one of them. It can be reasonably fast for some things, and maybe ED has already done a stellar job of making that part of the engine fast. We don't know. Besides, if they change how missions are made, is that going to break all existing missions? And speaking of breaking existing things...
  4. What about modules and campaigns from third-party developers? There's a limit to how much you can improve bad designs while still maintaining backwards compatibility. How is ED going to handle situations in which an improvement would make existing content/modules incompatible? Do third-party developers have a contractual obligation to update their modules in such a case? Can ED update third-party modules themselves? Will ED leave the old code in and provide the new, faster code for developers going forward? (This would mean third parties would have to update their existing modules at their own discretion for there to be performance benefits.) Or are they just going to not touch such things, leaving more opportunities for optimization on the table? We don't know.
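
(A minimal sketch of the "simple" variant from point 2, using std::async to push one long-running calculation off the main thread. The weather-recompute function is a made-up stand-in, not anything from DCS.)

#include <cstdio>
#include <future>

// Hypothetical expensive job -- a stand-in, not actual DCS code.
double recompute_weather_grid() {
    double acc = 0.0;
    for (long i = 1; i < 50000000L; ++i) acc += 1.0 / i;  // busy work
    return acc;
}

int main() {
    // Kick the calculation off on its own thread...
    std::future<double> weather =
        std::async(std::launch::async, recompute_weather_grid);

    // ...and keep the "main loop" running in the meantime.
    for (int frame = 0; frame < 3; ++frame)
        std::printf("rendering frame %d\n", frame);

    // Block only when the result is actually needed.
    std::printf("weather result: %f\n", weather.get());
}

The DAG-style task-graph approach, by contrast, breaks every frame into many small interdependent tasks and schedules them across all cores, which is why it scales better but demands restructured algorithms.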

It all comes down to the same issue: We don't know why DCS runs the way it does. Which parts are slow? Which are optimized to hell and back? Which can be improved with a reasonable amount of work? Which would open a can of worms that ED isn't willing to deal with? We don't know.

 

I won't speculate on how big the performance uplift is going to be. There are too many things we simply don't know for me to make an educated guess.


On 2/19/2022 at 2:05 PM, TheSniperFan said:
What about modules and campaigns from third-party developers? There's a limit to how much you can improve bad designs while still maintaining backwards compatibility. [...]

That one is easy enough to answer, as it has actually come up before. After the Hawk debacle, ED reserved the right to keep a copy of the source files for all modules, so they could update the code if the original module's devs aren't around any more. Campaigns are another matter, but nobody is asking them to drop Lua altogether. In fact, as long as you don't abuse it, mission logic isn't the primary driver of performance loss; AI, physics and graphics are. There, a lot of headway could be made by booting each of those to a separate thread.

That said, I'm pretty sure multicore won't use the latest and greatest techniques, just what works within the current paradigm. That should still be a leap in performance, though.


When we say "the Vulkan API", we don't actually mean the raw API code alone. Perhaps we're not all using these terms correctly, but I think everyone out there uses "API" to refer to the whole pipeline, drivers and all; so when saying "XYZ API does this and that", I'm talking about the whole environment, not just the raw API code.

I guess the proper term would be "in a Vulkan API based environment" or "in a Vulkan API based application", but that's a mouthful.

When you run a Vulkan-based application, or in Vulkan mode, it won't use the same drivers as it does in DX11 or DX12. DX11 and OpenGL drivers were heavyweight because of the API itself; they take a completely different approach, require a lot more overhead from the start, and weren't designed for multi-threading the graphics backend (the graphics code that runs on the CPU). Vulkan's approach uses a "thin" driver that does far less under the hood and under the developer's radar, and is therefore far more predictable and transparent about what it does. This is why it's harder to develop for: many responsibilities shift to the application.

https://www.intel.com/content/www/us/en/developer/videos/introduction-to-the-vulkan-graphics-api.html

Quote

With Vulkan, this additional effort can be avoided. That's why DirectX 12, Metal, and Vulkan are called thin drivers or thin APIs. Mostly, they only communicate user requests to the hardware, providing only a thin abstraction layer of the hardware itself. The driver does as little as possible for the sake of much higher performance. 

There are scenarios where you find no difference in performance between open GL and Vulkan. If someone doesn't need multi-threading or if the application isn't CPU bound, open GL Is enough and using Vulkan will not give you a performance boost. But if you want to squeeze every last bit from your graphics hardware, Vulkan is the way to go. Sooner or later, all major graphics engines will support these new low level APIs. 

 

https://www.intel.com/content/www/us/en/developer/articles/training/api-without-secrets-introduction-to-vulkan-part-1.html

Quote

The Vulkan API requires developers to create applications that strictly follow API usage rules. In case of any errors, the driver provides us with little feedback, only some severe and important errors are reported (for example, out of memory). This approach is used so the API itself can be as small (thin) and as fast as possible. But if we want to obtain more information about what we are doing wrong we have to enable debug/validation layers.

Another reason legacy high-level API drivers are so bloated and require more CPU is that they run validation at all times for all end users; with the Vulkan API, validation is optional.
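
(To illustrate: a minimal C++ sketch of how a Vulkan application opts in to validation at instance creation, which is roughly what "optional" means here; a shipping build would simply pass no layers. Error handling is pared down to the bare minimum.)

#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    // Validation is a layer you explicitly enable, not an always-on cost.
    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo app = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    ci.pApplicationInfo = &app;
    ci.enabledLayerCount = 1;        // set to 0 in a release build and
    ci.ppEnabledLayerNames = layers; // the driver skips validation entirely

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "instance creation failed (layer not installed?)\n");
        return 1;
    }
    std::puts("instance created with validation enabled");
    vkDestroyInstance(instance, nullptr);
}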

---------------------------------

A disclaimer first: I'm not a real Vulkan API (or C++) programmer. I'm more of an analyst of the developer resource circles, going over the documentation (sometimes spending hours reading it) and various presentations; I've spent many hours reading up on Vulkan over the years.

I don't completely agree with the earlier posts that a standard out-of-the-box Vulkan implementation does nothing without multi-threading. CPU-bound situations should definitely get some kind of boost even if DCS weren't getting the extensive multi-core enhancements across the board, because the thin driver immediately demands much less CPU by default, AFAIK. I can't find the slides right now, but I distinctly remember that from the early presentations (those could have been Mantle API slides, but it should still apply to Vulkan).

We would need a specific apples-to-apples comparison of an example Vulkan implementation in single-threaded mode against a DX11 (ST) implementation, comparing just the driver overheads, which is what the 3DMark "API Overhead" test might be all about. As far as games and sims go, who would bother switching to, or adding, a Vulkan-based rendering engine only to remain in single-threaded mode? So yes, combining all of these things gets you some serious results in the end, and it is very welcome that ED is taking a holistic approach, multi-threading not only the graphics backend but pretty much all the other major DCS components; this is where a lot of the CPU improvement lies. However, as far as I know, the cost of each draw call is fundamentally much smaller in a Vulkan environment in itself. So hypothetically, if you had all of this application multi-threading but kept a DX11 level of driver overhead and all of its behaviours (minus having that overhead stuck on one thread), maybe you could push much higher draw-call counts than standard DX11, but all of the CPU cores would be busier, leaving less room for the other CPU work the rest of the application could use.

-----------------------------------

However, it remains to be seen how much ED has managed to split up the various serial workloads, and how much can be multi-threaded. I hope the AI logic calculations of each AI unit could have their own thread, or even threads, though if they are expected to use very little CPU on their own in most sessions, perhaps that wouldn't be necessary, and a unit group could be its own thread, for example. That would get tricky when you have 100 units in one unit group, but I won't speculate on this right now; the approach under the hood could be totally different and take care of my concerns here: dynamically spawning threads where needed, packing smaller jobs into threads, etc.

Let's remember that many simulation-type calculations cannot be parallelized; they will remain serial workloads even after all of the DCS multicore enhancements: a stream of calculations where the next one depends on the result of the previous one. Stuff like missile tracking, ballistic trajectory calculation and physics are, I think, serial and will remain serial workloads. But I suppose they don't all have to be on the same thread, right? Each weapon is independent, and their tracking and physics simulation calculations could be too, but then there's the question of syncing with the core engine for consistency and multiplayer, so it's tricky how much of this can be independent. There would surely still need to be a kind of parent/main thread that connects all of this together, otherwise you wouldn't have a working game, but it would have far less work to do. (A rough sketch of the idea follows below.)
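
(The promised sketch, in C++ with made-up names: each missile's update is serial within itself, since every step depends on the previous one, but independent missiles can be stepped in parallel, and the main thread joining them each tick is the sync point mentioned above.)

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Made-up missile state; real tracking/ballistics would live here.
struct Missile { double pos = 0.0, vel = 300.0; };

// Serial within one missile: each substep depends on the previous one.
void step_missile(Missile& m, int substeps, double dt) {
    for (int i = 0; i < substeps; ++i) {
        m.vel -= 9.81 * dt;   // crude gravity
        m.pos += m.vel * dt;  // uses the velocity just computed
    }
}

int main() {
    std::vector<Missile> missiles(8);

    // One sim tick: independent missiles stepped in parallel, then the
    // main thread joins them all -- the consistency/sync point after
    // which state can safely be shared with the rest of the engine.
    std::vector<std::thread> tasks;
    for (auto& m : missiles)
        tasks.emplace_back(step_missile, std::ref(m), 1000, 0.001);
    for (auto& t : tasks) t.join();

    std::printf("missile 0 position after one tick: %.2f m\n", missiles[0].pos);
}

(A real engine would use a pooled job system rather than a thread per missile, but the dependency structure is the point here.)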

I would appreciate it if future reports would cover these questions as well. Would I be able to spawn 300 S-300 SAM batteries, with each unit having its own thread as standard, and each missile launched getting its own thread as well? Would that even matter in the end, or would the engine always stall on something else first? All of these small things could make a difference for really supporting those large dedicated-server use cases; with a 128-core CPU and as much threading as possible, maybe you could squeeze things out for a big mission. But then there are other things that are, or would be, parallelizable but may be too minor to make any difference in any kind of large-scale mission, which would make that setup overkill.

 

