
DCS Newsletter discussion 2nd December 2022 - DCS 2.8 Multithreading | SATAL 2023


BIGNEWY

Recommended Posts

1 hour ago, Dragon1-1 said:

Can this be elaborated on? Specifically, the bolded part, does this mean it's not true multithreaded physics and rendering? This statement implies all you did was to split off rendering and physics onto their own threads, with both systems remaining serialized and bound to a single core, just now independent of one another. This is progress, but much less than I hoped for, especially after such a long time. Now, I know even this is a big job starting out from thoroughly singlethreaded code, but DCS can't afford to stop there. Modern CPUs have 8 cores to work with, top end ones have 16 and even my ancient 4770 has four. That won't make DCS use 100% of my CPU, just 50% instead of 25%, possibly a bit more if the thread where everything else sits starts doing stuff. In 2023, it's not enough. Three threads (four if you count audio, which really isn't any kind of bottleneck) is only acceptable as a stepping stone toward a truly scalable, parallelized architecture. 

The reason is that physics dictates that, without extreme cooling solutions or a technological revolution on par with the invention of the microchip, single-core CPU performance will not significantly improve (unless water cooling becomes standard; without OC, chips likely won't breach 6 GHz). Any further visual improvements will degrade performance, as will the advance of display technology, unless advantage can be taken of a rising number of cores. So this is a must.

That's how I read it. Not multi-threaded, just separate single-threaded loops for graphics/logic. To be clear... I'll take it, but it's not what I was hoping for.

In all fairness, multi-threading is extremely difficult with games. I'm a developer by trade (ERP manufacturing and server systems, not games) and multi-threading for that is far less difficult, but still difficult, so I'm not suggesting this is easy to do - but it is what I thought they were working on.

Inevitably, splitting unrelated loops onto separate threads will always help (and is usually the path of least resistance), but it will always leave you with a single thread that dominates a single core and remains your bottleneck. Unfortunately, many processes are very difficult to split up, as they have dependencies and require specific timings that would need a very complex scheduler, which is sometimes more overhead than it's worth.

Not all tasks can be completed in parallel, unfortunately. I'm guessing this is the case when rendering frames. Business applications are a little different, as the main factor is how long it takes to complete a task, rather than a very specific 30 tasks a second that need to happen in order to make sense.
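To make the "separate loops" idea concrete, here is a minimal C++ sketch (my own illustration, not ED's code) of a fixed-rate logic thread publishing completed ticks to a render thread. Each loop still runs serially on its own core, which is exactly why the busier of the two stays the bottleneck:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    struct WorldState { double simTime = 0.0; };   // stand-in for the whole sim state

    std::mutex stateMutex;
    WorldState sharedState;               // last completed logic tick
    std::atomic<bool> running{true};

    void logicLoop() {                    // "logical" thread, fixed 60 Hz tick
        WorldState local;
        while (running) {
            local.simTime += 1.0 / 60.0;  // pretend physics / AI work happens here
            {
                std::lock_guard<std::mutex> lock(stateMutex);
                sharedState = local;      // publish the completed tick
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(16));
        }
    }

    void renderLoop() {                   // "graphical" thread, runs as fast as it can
        while (running) {
            WorldState snapshot;
            {
                std::lock_guard<std::mutex> lock(stateMutex);
                snapshot = sharedState;   // copy, then render without holding the lock
            }
            std::printf("render frame at t=%.3f\n", snapshot.simTime);
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
    }

    int main() {
        std::thread logic(logicLoop), render(renderLoop);
        std::this_thread::sleep_for(std::chrono::seconds(1));
        running = false;
        logic.join();
        render.join();
    }

The two loops overlap in time, but whichever one takes longer per iteration still caps the frame rate - it just no longer has to wait for the other.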

I don't envy the developers here, but I'm guessing this is a step (band-aid) towards Vulkan, which I'm guessing will offer them tools for this. A pressure relief valve, if you will.

I'd bet that reducing the number of draw calls the CPU has to handle (going direct to the GPU) would do more for CPU-limited cases than multi-threading the draw calls, and flight sims are among the most CPU-limited cases, I'd guess.

Either way, looking forward to any DCS releases and updates towards this end. This is one of the first times a GPU (4090) has been released where I haven't tried to acquire one immediately, as I doubt I'll see any difference in overall frame rates. I'll wait for the 79XXX3D series CPUs first, to see how they compare to 13th gen in single-core performance and whether I can bottleneck the 3090 first.

  • Like 1

AMD 7900x3D | Asus ROG Crosshair X670E Hero | 64GB DC DDR5 6400 Ram | MSI Suprim RTX 4090 Liquid X | 2 x Kingston Fury 4TB Gen4 NVME | Corsair HX1500i PSU | NZXT H7 Flow | Liquid Cooled CPU & GPU | HP Reverb G2 | LG 48" 4K OLED | Winwing HOTAS


  • ED Team
1 hour ago, USA_Recon said:

Great news .

Suggestion: another "next year" news item. ED seems to feel the need to throw out arbitrary news with arbitrary dates that are never met.

Seems by now you'd stop doing this, unless you like the constant "2 weeks" banter mocking how silly your dates are.

We've all grown tired of this. So how about you just wait until it's done and then say "next patch we will introduce multicore", and not continue to make the same mistake over and over again.

Just 2 cents from someone who spent $6000 this year to play your game 🙂

 

We really can't win: people demand news about stuff, we give them news, then they don't want news until it's ready. There is no pleasing everyone, but we try.

We will not give out dates until it's closer, so we didn't. We wanted to share progress news, so we did.

  • Like 22
  • Thanks 9

Forum Rules • My YouTube • My Discord - NineLine#0440 • How to Report a Bug



9 minutes ago, NineLine said:

We really can't win: people demand news about stuff, we give them news, then they don't want news until it's ready. There is no pleasing everyone, but we try.

We will not give out dates until it's closer, so we didn't. We wanted to share progress news, so we did.

Also don't envy you here... lol, so true.

I like the news, I get it. It's probably hard to come up with good newsletter material every single week, too. I prefer to have the insights, personally. I also recognize that development or testing is not an easy thing to anticipate completeness on. One could argue it's never complete. The most difficult thing I ever do is deciding on a point at which it is complete enough to use. If your developers are anything like me, it could always use another feature or perform a bit better, etc., etc... feature creep and perfectionism prevent many a ship date.

I digress. I personally enjoy any amount of insight into DCS and I think most would agree.

  • Like 1

AMD 7900x3D | Asus ROG Crosshair X670E Hero | 64GB DC DDR5 6400 Ram | MSI Suprim RTX 4090 Liquid X | 2 x Kingston Fury 4TB Gen4 NVME | Corsair HX1500i PSU | NZXT H7 Flow | Liquid Cooled CPU & GPU | HP Reverb G2 | LG 48" 4K OLED | Winwing HOTAS


lol, only $6000 @USA_Recon

I just did a belt tensioner mod for my motion platform: RTX 4090, new build with a 5800X3D... Varjo Aero...

Built two sets of TEDAC grips, the MDF panel to go with them, and the AH-64 collective... not to mention two sets of joysticks resembling a minigun.

I think updating the core engine is the best thing to improve.

DCS is the only application that allows me to do the sim flying that I want to do.

Thank you, ED.

Multi-threaded processing is much, much, much welcomed.


Edited by hannibal

find me on steam! username: Hannibal_A101A

http://steamcommunity.com/profiles/76561197969447179


3 hours ago, trevoC said:

I'm absolutely limited (as most would be) by the graphics thread. Yes, you are correct: in a heavy mission, the logical calculations saturate the main thread, but in a free flight scenario the upper FPS limit is bound by the draw calls to the GPU that the CPU can handle. Graphics are usually limited by the CPU making the appropriate calls to the GPU. This is why new engines allow these calls to circumvent the CPU altogether and be made directly.

This isn't a problem for high-end 4K 2D, but VR users are most likely (like myself) bound by the number of calls the single core is able to make, which is only exacerbated by a heavy mission load.

My 3090 rarely exceeds 70% utilization in VR.

That's why a 3090 or 4090 is considered overkill for gaming. They were actually enterprise cards with added DP ports, but sold to gamers out of greed. My 3080 along with my 5800X3D gives me quite a good balance between CPU and GPU. But of course, I'm still playing on my CV1 with an uplift of 1.5 SS.

  • Like 1

11 minutes ago, tomcat_driver said:

That's why a 3090 or 4090 is considered overkill for gaming. They were actually enterprise cards with added DP ports, but sold to gamers out of greed. My 3080 along with my 5800X3D gives me quite a good balance between CPU and GPU. But of course, I'm still playing on my CV1 with an uplift of 1.5 SS.

That just isn't true at all. My 24GB of VRAM is usually (close to) full. Also, enterprise cards? I wish - that would save me tens of thousands at work. The 3090 is pretty well saturated in MSFS; it just so happens that it's not the bottleneck with DCS in VR. The 3090 over the 3080 in MSFS yields 20-25% gains. (From an enterprise perspective... a six-year-old P6000 will outperform a 3090 in certain business applications.)


Edited by trevoC
  • Like 2

AMD 7900x3D | Asus ROG Crosshair X670E Hero | 64GB DC DDR5 6400 Ram | MSI Suprim RTX 4090 Liquid X | 2 x Kingston Fury 4TB Gen4 NVME | Corsair HX1500i PSU | NZXT H7 Flow | Liquid Cooled CPU & GPU | HP Reverb G2 | LG 48" 4K OLED | Winwing HOTAS


27 minutes ago, NineLine said:

We really can't win: people demand news about stuff, we give them news, then they don't want news until it's ready. There is no pleasing everyone, but we try.

We will not give out dates until it's closer, so we didn't. We wanted to share progress news, so we did.

You can’t please everyone but I am sure many (like me) are very happy to get such updates. Moving into internal testing is a significant project milestone and well worth communicating IMHO.

  • Like 8

AMD 5800X3D · MSI 4080 · Asus ROG Strix B550 Gaming  · HP Reverb Pro · 1Tb M.2 NVMe, 32Gb Corsair Vengence 3600MHz DDR4 · Windows 11 · Thrustmaster TPR Pedals · VIRPIL T-50CM3 Base, Alpha Prime R. VIRPIL VPC Rotor TCS Base. JetSeat


I drive a Reverb G2 with a 1080 Ti. Remember, it came out at a time when the best you could get, besides the Titan series, was a 2080 Ti, a card that was panned for not being enough of an improvement over the 1080 Ti. In everything but DCS, I can manage the default supersampling, though not with all graphics settings maxed out. A better CPU would likely improve it further (my 4770K is almost a decade old at this point) - not for raw power, but for all the tricks they've added in the meantime, faster memory (I'm still stuck on DDR3), and because I could actually overclock the thing, since this specific piece of silicon hates OC with a passion, no matter how well it's cooled.

The 3090, and the Titan before it, is overkill for gaming, mostly because even in 4K a regular 3080 would suffice for any game that doesn't try to render everything up to the horizon at 30,000 ft. For flight sims, the bar is higher: they need photorealistic detail and large view distances.


2 hours ago, Dragon1-1 said:

This is progress, but much less than I hoped for, especially after such a long time.

That's how I feel about it as well: it's a step in the right direction (probably a larger one than I realise), but it feels like this is just the first step in a process that won't be properly completed for a while - as in, it will be a while after 2023 before we see meaningful gains... That's what I read between the lines, anyway...

  • Like 2

Ryzen 9 5900X | 64GB G.Skill TridentZ 3600 | Gigabyte RX6900XT | ASUS ROG Strix X570-E GAMING | Samsung 990Pro 2TB + 960Pro 1TB NMVe | HP Reverb G2
Pro Flight Trainer Puma | VIRPIL MT-50CM2+3 base / CM2 x2 grip with 200 mm S-curve extension + CM3 throttle + CP2/3 + FSSB R3L + VPC Rotor TCS Plus base with SharKa-50 grip mounted on Monstertech MFC-1 | TPR rudder pedals

OpenXR | PD 1.0 | 100% render resolution | DCS "HIGH" preset

 


ED actually came out with a full paragraph of info on multithreading, explaining in some detail exactly what they are working on. Sure, no date, but still good, useful info.

Unlike some #cough# Magnitude 3 #cough#

i7 13700k @5.2ghz, GTX 3090, 64Gig ram 4800mhz DDR5, M2 drive.


12 minutes ago, Dragon1-1 said:

I drive a Reverb G2 with a 1080 Ti. Remember, it came out at a time when the best you could get, besides the Titan series, was a 2080 Ti, a card that was panned for not being enough of an improvement over the 1080 Ti. In everything but DCS, I can manage the default supersampling, though not with all graphics settings maxed out. A better CPU would likely improve it further (my 4770K is almost a decade old at this point) - not for raw power, but for all the tricks they've added in the meantime, faster memory (I'm still stuck on DDR3), and because I could actually overclock the thing, since this specific piece of silicon hates OC with a passion, no matter how well it's cooled.

The 3090, and the Titan before it, is overkill for gaming, mostly because even in 4K a regular 3080 would suffice for any game that doesn't try to render everything up to the horizon at 30,000 ft. For flight sims, the bar is higher: they need photorealistic detail and large view distances.

I had a 1070 Ti, then a 2080 Ti, and now a 3090. Each of these steps showed large improvements in VR. Yes, you don't need a 3090 to play at 1080p, or 1440p for that matter, but at 4K maxed in MSFS or 2K VR in DCS/MSFS, the 3090 is a marked improvement over the 2080 Ti. Again, it depends on your setup. If you are gaming at 1440p, then probably not so much, but at 4K or 2K+VR you will absolutely see gains. Not sure how anything for flight sims is overkill if there are gains to be had. That being said, the context here is DCS, so I'm not sure how "overkill for gaming" applies here.

  • Like 2

AMD 7900x3D | Asus ROG Crosshair X670E Hero | 64GB DC DDR5 6400 Ram | MSI Suprim RTX 4090 Liquid X | 2 x Kingston Fury 4TB Gen4 NVME | Corsair HX1500i PSU | NZXT H7 Flow | Liquid Cooled CPU & GPU | HP Reverb G2 | LG 48" 4K OLED | Winwing HOTAS


2 hours ago, NineLine said:

We really can't win: people demand news about stuff, we give them news, then they don't want news until it's ready. There is no pleasing everyone, but we try.

Thanks, some of us appreciate it. 🙂 

As a software engineer with almost (gosh!) 25 years' experience, I am sometimes critical of ED for certain things that I think should be done better (non-regression testing, for instance). However, in the case of going from single-threaded to multithreaded (even if "multi" = 2 for a start) on an old code base, I am behind them 100%.

Remember, people, you're *not* asking for *multithreading*, not quite. You are really asking for *more performance*. In particular in VR, where I don't expect much from MT. And for complex scenarios like MP, or smarter AI, or whatever will be required for a dynamic campaign, where the background computations can be taxing on the CPU.

But know that thread-safety is incredibly hard to get right, especially when not designed in from the beginning, and it's quite easy to end up with either the ugliest, nastiest bugs that computer-assisted mankind can produce, or to overprotect (either via the "mutex sprinkle of death by crawling molasses" or the "death by copy-all, no-side-effect heaviness of misery") and see performance drop through the floor (worse than single-threaded). In the latter case, you would get your multithreaded DCS but with unacceptable performance. In the former, you'd get epically hard-to-reproduce bugs that would leave the userbase up in arms and the devs baffled.
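A hypothetical C++ illustration of that "overprotection" failure mode (a sketch of the anti-pattern, not anything from DCS): if one coarse lock guards the whole world, the logic and render threads simply take turns, so wall-clock time is no better than single-threaded - plus the locking overhead:

    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex worldLock;   // one big lock around "everything"

    void simulateTick() {
        std::lock_guard<std::mutex> lock(worldLock);
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // physics, AI...
    }

    void renderFrame() {
        std::lock_guard<std::mutex> lock(worldLock);                // blocked for the whole tick
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // draw call preparation...
    }

    int main() {
        std::thread logic ([] { for (int i = 0; i < 50; ++i) simulateTick(); });
        std::thread render([] { for (int i = 0; i < 50; ++i) renderFrame(); });
        logic.join();
        render.join();   // total runtime is roughly the sum of both loops, not the max
    }

Fine-grained locking, double buffering or message passing avoid this, but each brings its own complexity - which is the point being made above.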

They *need* to get this right, from the get-go. So, patience. Waiting is.

  • Like 5

4 hours ago, Dragon1-1 said:

"To improve efficiency of CPU resources usage, we have reworked the core of our engine. First, at the architectural level, it has been divided into two main threads: graphical and logical. "

Can this be elaborated on?

It is, in the next sentence:

"To improve efficiency of CPU resources usage, we have reworked the core of our engine. First, at the architectural level, it has been divided into two main threads: graphical and logical. This opens up new possibilities for further thread parallelization of calculations in both the logical and graphical parts of the engine independently."

Taken with the paragraph before the one you quoted from (in which E.D. make all the points contained in your paragraph after the quote above), they have said:

  • CPU manufacturers have focused on multi-core as an approach to performance improvement rather than single-core clock speed / throughput.
  • To date (with the exception of some audio) DCS has run as a single-threaded process, and so could only use a single thread on a single core at any given time. Individual cores are not increasing in performance, so the only way to improve DCS performance is to pursue multi-thread parallelisation.
  • As a first step the SIM has been split into 2 main threads, one dealing with 'logic' and the other with graphics.
  • If there are 2 threads running in parallel, the game is multi-core capable (actually, 1 for graphics, 1 for logic and 1 for audio means the game would be capable of using at least 3 cores).
  • Having split the logical and graphical parts of the engine, E.D. are free to pursue further parallelisation of either one of, or both of, the 2 main processes - presumably depending on where they see the easiest performance wins coming from (a rough sketch of what that could look like follows this list).
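A hedged sketch of what that last point could look like in practice (my guess at a general technique, not ED's stated plan): once per-unit updates within a tick are independent of each other, they can be fanned out across all available cores, for example with C++17 parallel algorithms:

    #include <algorithm>
    #include <cmath>
    #include <execution>
    #include <vector>

    struct Unit { double x = 0, y = 0, heading = 0; };

    void updateUnit(Unit& u, double dt) {
        // stand-in for per-unit AI / physics work
        u.x += std::cos(u.heading) * 10.0 * dt;
        u.y += std::sin(u.heading) * 10.0 * dt;
    }

    void logicTick(std::vector<Unit>& units, double dt) {
        // Independent per-unit updates can run in parallel across cores...
        std::for_each(std::execution::par, units.begin(), units.end(),
                      [dt](Unit& u) { updateUnit(u, dt); });
        // ...but interactions (collisions, weapon impacts, shared world state)
        // still need a serial or carefully synchronised pass afterwards.
    }

    int main() {
        std::vector<Unit> units(10000);
        logicTick(units, 1.0 / 60.0);
    }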

There isn't really much news in the statement other than that

  1. the initial split will only be into two threads, and these will be logic and graphics.
  2. the change is already in internal testing.

1 might seem disappointing to those that were hoping to see all 88 threads maxed out, but expecting everything all at once is a curse placed on our times, and this is a realistic first step.

2 is good, positive news of progress!

As for the rest of the message - they've been saying they're working on multi-threading for a long time; it's more of an expectation-setting message than anything else.

I had some comments about shirt poosters, but I'll keep them to myself...


  • Like 3

Cheers.


14 hours ago, ouky1991 said:

It's great to see some progress with this; I wish it were more frequent. I'm not sure if I understand this correctly, but is there going to be any noticeable performance gain for lightly populated missions?

Unless you’re currently limited I’d imagine there won’t be a lot of difference.

  • Thanks 1

The new FPS counter in 2.8 revealed that one of the largest chunks of the total frametime is "simulation". I wonder if the multithreading means that this "simulation" part will be completely detached from the rendering frametime, thus enabling higher framerates from the start? Or can the frames simply not be rendered without fresh simulation data, no matter what?

The current CPU + GPU hardware seems to be enough for pleasingly high framerates in 2D; it's VR that seems extremely critical, because of the need for high framerates combined with very high resolutions and dual displays. But for VR the most important thing is virtually lag-free, very smooth movement for precise head tracking. Despite the nominal 90 fps requirement, this can pretty much be solved with asynchronous reprojection (ASW for Oculus or motion reprojection for WMR) at much lower framerates, which would be perfect.

The only problem is the visible artifacts of reprojection. If those can be eliminated, then even the current state of the sim would be very good for VR.

But if the rendering and logic threads are separated, this could yield high gains in the long run.
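One common way (a sketch of a general technique, not necessarily what ED is doing) to let the renderer produce frames without waiting for fresh simulation data is to keep the two most recent completed simulation states and interpolate between them, so the frame rate is not locked to the logic tick rate:

    #include <cstdio>

    struct SimState { double t; double aircraftX; };

    // Linear interpolation between two completed logic ticks.
    SimState interpolate(const SimState& a, const SimState& b, double alpha) {
        return { a.t + (b.t - a.t) * alpha,
                 a.aircraftX + (b.aircraftX - a.aircraftX) * alpha };
    }

    int main() {
        SimState prev{0.000, 100.0};   // tick N-1 (60 Hz logic)
        SimState curr{0.016, 101.5};   // tick N

        // Render at 90 Hz: several frames fall between (or just past) the same two ticks.
        for (int frame = 0; frame < 3; ++frame) {
            double renderTime = frame * (1.0 / 90.0);
            double alpha = (renderTime - prev.t) / (curr.t - prev.t); // >1 means extrapolating
            SimState view = interpolate(prev, curr, alpha);
            std::printf("frame %d: draw aircraft at x=%.3f\n", frame, view.aircraftX);
        }
    }

This is conceptually what reprojection does with head pose, applied to the world state instead; it trades a little latency and accuracy for decoupled frame rates.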

The best description of the current situation was given by @SkateZilla in another topic here (I'm rather curious why such interesting details as this explanation aren't used in the newsletter, by the way...):

 

  • Like 2
  • PC: 10700K | Gigabyte Z490 | Palit 3090 GamingPro | 32GB | Win10
  • HMD: HP Reverb G2 | OpenXR @ 120% | OpenXR Toolkit: exposure, brightness, saturation | DCS 2.9: DLAA with Sharpening 0.5 (no upscaling)
  • Controllers: VKB Gunfighter MkIII base & 200 mm curved extension center mounted + TM F16 Grip / MCG Pro Grip | TM TFRP

Eagle Dynamics should switch to Unreal Engine 5.x.
Their whole problem would be solved:
Multithreading (more than 2 cores)
Raytracing
Fluid management
DLSS 3, VFR
And future support for DirectStorage technology, etc. etc...

It is clear that the ED team will never be able to upgrade their graphics engine, so why not switch to an optimized and proven engine?

  • Like 2
 
 

i9 10900k  |  Asus ROG Strix Z490 | WD NVMe SN850 1To | DDR4 Gskill 32Go 4400MHz |

RTX 4090 FE | Reverb G2  | Quest 3 | HOTAS Warthog |


Interesting, very interesting. I'm running on a 9700K (8 cores, 8 threads, no HT for those who can't remember that far back!) that I bought because it had good single core performance at the time. I've been tempted by all the recent shiny CPUs and wondering if they would give me much more in DCS but I think this announcement gives me another reason to wait before upgrading. I can run any normal game very nicely, so it's not like there's a rush. I think I'll wait to see benchmarks for the multithread update before splooging on any more hardware to run my Reverb G2.

  • Like 1

7 minutes ago, Bigounet said:

Eagle Dynamics should switch to Unreal Engine 5.x.
Their whole problem would be solved:
Multithreading (more than 2 cores)
Raytracing
Fluid management
DLSS 3, VFR
And future support for DirectStorage technology, etc. etc...

It is clear that the ED team will never be able to upgrade their graphics engine, so why not switch to an optimized and proven engine?

So make the entirety of DCS again from scratch. 

  • Thanks 1

i7 13700k @5.2ghz, GTX 3090, 64Gig ram 4800mhz DDR5, M2 drive.


13 hours ago, Weta43 said:

might seem disappointing to those that were hoping to see all 88 threads maxed out

We already knew from previous communication that multithreading would come prior to the Vulkan implementation, and there are likely limits to what you can do within DX11.
I just wonder how they will proceed from here. Is splitting into 2 threads enough to start moving towards Vulkan, or is more work needed within DX11 before the move to Vulkan can be made?


Ryzen 9 5900X | 64GB G.Skill TridentZ 3600 | Gigabyte RX6900XT | ASUS ROG Strix X570-E GAMING | Samsung 990Pro 2TB + 960Pro 1TB NMVe | HP Reverb G2
Pro Flight Trainer Puma | VIRPIL MT-50CM2+3 base / CM2 x2 grip with 200 mm S-curve extension + CM3 throttle + CP2/3 + FSSB R3L + VPC Rotor TCS Plus base with SharKa-50 grip mounted on Monstertech MFC-1 | TPR rudder pedals

OpenXR | PD 1.0 | 100% render resolution | DCS "HIGH" preset

 


1 hour ago, Bigounet said:

Eagle Dynamics should switch to Unreal Engine 5.x.
Their whole problem would be solved:
Multithreading (more than 2 cores)
Raytracing
Fluid management
DLSS 3, VFR
And future support for DirectStorage technology, etc. etc...

It is clear that the ED team will never be able to upgrade their graphics engine, so why not switch to an optimized and proven engine?

Oh yes, the "Use the UE" again...

You guys obviously have no idea of the amount of work needed for an engine change; it's like making the game almost from zero. And please, tell me how proven the UE5 engine is - as far as I know there's only one released game with it, Fortnite, and it's not a good comparison with DCS; the UE4 engine has also been plagued by various problems...

NZXT H9 Flow Black | Intel Core i5 13600KF OCed P5.6 E4.4 | Gigabyte Z790 Aorus Elite AX | G.Skill Trident Z5 Neo DDR5-6000 32GB C30 OCed 6600 C32 | nVidia GeForce RTX 4090 Founders Edition |  Western Digital SN770 2TB | Gigabyte GP-UD1000GM PG5 ATX 3.0 1000W | SteelSeries Apex 7 | Razer Viper Mini | SteelSeries Artics Nova 7 | LG OLED42C2 | Xiaomi P1 55"

Virpil T-50 CM2 Base + Thrustmaster Warthog Stick | WinWing Orion 2 F16EX Viper Throttle  | WinWing ICP | 3 x Thrustmaster MFD | Saitek Combat Rudder Pedals | Oculus Quest 2

DCS World | Persian Gulf | Syria | Flaming Cliff 3 | P-51D Mustang | Spitfire LF Mk. IX | Fw-109 A-8 | A-10C II Tank Killer | F/A-18C Hornet | F-14B Tomcat | F-16C Viper | F-15E Strike Eagle | M2000C | Ka-50 BlackShark III | Mi-24P Hind | AH-64D Apache | SuperCarrier


13 hours ago, Weta43 said:

2 is good, positive news of progress!

One that we've known for a long time. It's been in closed beta for at least a month, if not more. Statements to that effect go back a while.

The point is, after such a long time, I expected a much more comprehensive overhaul. We've waited long enough for a change that might well turn out to be marginal, as far as performance goes. Yes, it'll be faster if they split the graphics off from whatever "simulation" is, but don't expect performance problems to be solved, because the thread will still be bottlenecked. Even if the plan is to eventually make it properly multithreaded, how long is it going to take? Especially since I wouldn't be surprised if they focused on the graphics thread (Vulkan), which will help some, but not if the physics thread keeps holding things back. 

At least I know now why they insisted the performance gain will not be super-high, when proper multicore would see it go through the roof. I wanted clarification on whether it's split into two threads (graphics + everything else) or three threads (graphics + physics + everything else), and whether we can expect further parallelization soon. Given how long it took, I expected that the result would basically amount to a new engine, just one able to use the same data. What they did merely increments on an architecture wholly unsuited to modern hardware, which IMO is the wrong approach. It produces a result faster, but takes far longer to get it working as well as a modern engine does, since to do that you have to eventually rewrite the whole thing anyway. Doing it in stages only adds needless complexity and retains technical debt that could otherwise have been gotten rid of.

I do hope the Vulkan implementation doesn't follow the same approach. There were some rumors about attempts to use DX and Vulkan at the same time, and that is a recipe for disaster. The proper way to transition to Vulkan is to chuck all the DX code and replace it with Vulkan code. Everything else will only cause problems down the line.


3 hours ago, St4rgun said:

The new FPS counter in 2.8 revealed that one of the largest chunks of the total frametime is "simulation". I wonder if the multithreading means that this "simulation" part will be completely detached from the rendering frametime, thus enabling higher framerates from the start? Or can the frames simply not be rendered without fresh simulation data, no matter what?

The current CPU + GPU hardware seems to be enough for pleasingly high framerates in 2D; it's VR that seems extremely critical, because of the need for high framerates combined with very high resolutions and dual displays. But for VR the most important thing is virtually lag-free, very smooth movement for precise head tracking. Despite the nominal 90 fps requirement, this can pretty much be solved with asynchronous reprojection (ASW for Oculus or motion reprojection for WMR) at much lower framerates, which would be perfect.

The only problem is the visible artifacts of reprojection. If those can be eliminated, then even the current state of the sim would be very good for VR.

But if the rendering and logic threads are separated, this could yield high gains in the long run.

The best description of the current situation was given by @SkateZilla in another topic here (I'm rather curious why such interesting details as this explanation aren't used in the newsletter, by the way...):

 

 

Because it's a newsletter, not a tech doc.
It still explains the situation accurately without causing confusion. Often the most accurate answer is the simplest.

  • Like 1

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


A quick question: will each thread (a native thread, or forked into a sub-process) in the DCS main process be assigned to a dedicated vCPU scheduled by the OS? Thanks.
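For general context, a minimal sketch of default Windows behaviour (not a claim about how DCS assigns its threads): the OS scheduler is free to migrate any thread between cores, and a thread only gets a "dedicated" core if the application explicitly pins it with an affinity mask:

    #include <windows.h>
    #include <thread>

    void renderThreadBody() {
        // Optional: restrict this thread to core 1 (bit 1 of the affinity mask).
        // Without this call, the OS scheduler may run it on any available core.
        SetThreadAffinityMask(GetCurrentThread(), 1ull << 1);
        // ... render work would go here ...
    }

    int main() {
        std::thread render(renderThreadBody);   // scheduled by the OS like any other thread
        render.join();
    }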


Edited by scommander2

Dell XPS 9730, i9-13900H, DDR5 64GB, Discrete GPU: NVIDIA GeForce RTX 4080, 1+2TB M.2 SSD, TM HOTAS Warthog/TPR, TKIR5/TrackClipPro, Win 11 Pro


This topic is now closed to further replies.