
Posted

I apologize in advance for what is likely a stupid question, but I've been wondering recently: why is driving VR so much more system-intensive than sending the same 4K video to a 50" LED flat-panel display? What is it about VR and its tiny, lower-resolution screens that stresses video cards and CPUs?

System HW: i9-9900K @5ghz, MSI 11GB RTX-2080-Ti Trio, G-Skill 32GB RAM, Reverb HMD, Steam VR, TM Warthog Hotas Stick & Throttle, TM F/A-18 Stick grip add-on, TM TFRP pedals. SW: 2.5.6 OB

Posted
Instead of one screen, it's two screens with two different POVs?

 

That is a simple thing to calculate. The GPU's limitation is really the per-pixel calculation load, i.e. the required throughput.

 

For example:

 

FHD is 1920 x 1080 = 2073600 = 2.1 Mpix

QFHD is 3840 x 2160 = 8294400 = 8.3 Mpix

 

So of course 4x more throughput is required, simply because there are four times as many pixels (each of which needs its three color components, R, G and B, calculated).

 

Now take a VR display like the Rift S (the lowest resolution of the current models): 2560 x 1440 = 3686400 = 3.7 Mpix in total, which is 1280 x 1440 = 1843200 = 1.84 Mpix per eye.

 

That is the throughput required per frame.
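These per-frame pixel counts are simple arithmetic; here is a quick Python sketch of the same numbers (the resolutions are the ones quoted above):

```python
# Per-frame pixel counts for the displays discussed above.
def mpix(width, height):
    """Pixel count of a width x height frame, in megapixels."""
    return width * height / 1e6

print(f"FHD     (1920x1080): {mpix(1920, 1080):.1f} Mpix")  # 2.1
print(f"QFHD    (3840x2160): {mpix(3840, 2160):.1f} Mpix")  # 8.3
print(f"Rift S  (2560x1440): {mpix(2560, 1440):.1f} Mpix")  # 3.7
print(f"per eye (1280x1440): {mpix(1280, 1440):.2f} Mpix")  # 1.84
print(f"QFHD / FHD ratio:    {mpix(3840, 2160) / mpix(1920, 1080):.0f}x")  # 4x
```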

 

Now, a game at 24 FPS can be completely playable with great rendering on a CRT display. On an LCD screen you need about 60 FPS to get it somewhat smooth, and pushing 90 FPS becomes a heavy task.

 

But VR is not the same thing as one screen or two screens. Because there are two different perspectives, the game engine has to calculate everything from both viewpoints. So while the pixel throughput is only about 1.8x that of an FHD frame, drawing everything twice is the real bottleneck.

 

All the lighting effects and so on have to be drawn again for the second eye. It is not even that straightforward, since the CPU does a lot of the geometry calculations while the GPU draws its own effects, but that duplication is what makes VR heavy.

 

So, for example, it is easy to get a game running at QFHD (8.3 Mpix) at 120 FPS, but getting the same performance in VR, at 3.7 Mpix and 90 FPS, can be a real challenge.
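Putting raw frame rate into that comparison makes the point in numbers. A rough Python sketch, counting raw pixels per second only and ignoring the doubled per-eye scene setup cost:

```python
# Raw pixel throughput: per-frame pixels times target frame rate.
qfhd_rate = 3840 * 2160 * 120  # QFHD monitor at 120 FPS
vr_rate = 2560 * 1440 * 90     # Rift S (both eyes combined) at 90 FPS

print(f"QFHD @ 120 FPS: {qfhd_rate / 1e6:.0f} Mpix/s")  # 995
print(f"VR   @  90 FPS: {vr_rate / 1e6:.0f} Mpix/s")    # 332

# The VR case pushes 3x fewer raw pixels per second, yet is often the
# harder target, because geometry, shadows and lighting are processed
# twice per frame, once for each eye's perspective.
```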

 

The one big saving in VR is that the two eye cameras sit at a fixed relative angle and distance, so a lot of the geometry calculations, lighting, etc. can simply be faked: you calculate an effect once and just shift it slightly for the other eye. That works, but only for specific effects.

i7-8700k, 32GB 2666Mhz DDR4, 2x 2080S SLI 8GB, Oculus Rift S.

i7-8700k, 16GB 2666Mhz DDR4, 1080Ti 11GB, 27" 4K, 65" HDR 4K.

Posted
Cool, thanks. Great explanation.


Posted (edited)
Just as an addition to this: I'm not sure how the Oculus stack works, but with any SteamVR game you have to apply a 1.4 multiplier to the x and y pixel counts. This is built-in supersampling that you can't really turn off (although you can lower the resolution below 100%, to around 51% for the Vive Pro I think, to simulate rendering only at your headset's native resolution).

 

So for example on a VIVE Pro:

 

Native panel resolution (per eye) = 1600x1440 = 2.3 Mpix

 

SteamVR's mandatory SS = 1600x1440 x 1.4 = 2240x2016 = 4.5 Mpix

 

Multiply this by 2 for the two eyes and you get 9 Mpix, which is more than 4K (8.3 Mpix).

 

However, most VR users also add extra supersampling in DCS; I run 1.4, for example.

 

So in this case it's 2240x2016 x 1.4 = 3136x2822 = 8.8 Mpix, x 2 for both eyes = 17.7 Mpix.
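The chain of multipliers above can be sketched in a few lines of Python. The 1.4 per-axis factors are the ones from this post, and `apply_ss` is a hypothetical helper that just rounds to whole pixels:

```python
# Render-target sizes for a Vive Pro under stacked supersampling.
def apply_ss(width, height, factor):
    """Scale both axes by a supersampling factor, rounding to whole pixels."""
    return round(width * factor), round(height * factor)

native = (1600, 1440)            # Vive Pro panel, per eye
steam = apply_ss(*native, 1.4)   # SteamVR's built-in 1.4 multiplier
dcs = apply_ss(*steam, 1.4)      # extra in-game SS on top (1.4 here)

for name, (w, h) in [("native", native), ("SteamVR", steam), ("+DCS SS", dcs)]:
    print(f"{name:8s}: {w}x{h} per eye = {2 * w * h / 1e6:.1f} Mpix both eyes")
```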

 

Which is why I only get around 45 FPS most of the time even with an overclocked 2080 Ti :D, and that's with settings turned down from max.

Edited by Thunderchief2000