remi Posted January 10, 2019 it may ease GPU load (great!) but it does nothing to fix the resolution and clarity issues. it’s still a long ways away from what a simple monitor can do You're wrong on that: foveated rendering allows the PPI of screens to be increased dramatically, but because only a fraction of the pixels are rendered at the highest resolution, performance shouldn't decrease significantly. If anything, FPS will actually increase. I think there's a 30% performance bonus with foveated rendering on versus off. So, if you increase the PPI by 30%, you should have equal performance. Not bad!
Brun Posted January 10, 2019 If foveated rendering is so beneficial, why aren't we already getting variable pixel density between the centre and edges of the image? Asus Z690 Hero | 12900K | 64GB G.Skill 6000 | 4090FE | Reverb G2 | VPC MongoosT-50CM2 + TM Grips | Winwing Orion2 Throttle | MFG Crosswind Pedals
hansangb (Author) Posted January 10, 2019 If foveated rendering is so beneficial, why aren't we already getting variable pixel density between the centre and edges of the image? For the same reason why cars didn't have seatbelts and airbags in the beginning. And why PCs shipped with cassettes and then floppies instead of hard drives. And why PCs today can be measured in GHz instead of MHz. And why SSDs are the norm today. hsb HW Spec in Spoiler --- i7-10700K Direct-To-Die/OC'ed to 5.1GHz, MSI Z490 MB, 32GB DDR4 3200MHz, EVGA 2080 Ti FTW3, NVMe+SSD, Win 10 x64 Pro, MFG, Warthog, TM MFDs, Komodo Huey set, Reverb G1
Gearbox Posted January 10, 2019 I'm sceptical of significant performance gains from foveated rendering. Even though it might lighten the rendering load, the reason for poor performance in VR (in DX11 at least) is having to 'create' the scene twice rather than render the pixels. I don't see how foveated rendering will do anything to alleviate that. Might be that DX12 and Vulkan benefit more, but that's not much use for DCS. Rendering the scene twice is a killer, yes, so we need to claw back whatever performance we can. Your eye is really only capable of high resolution in a small cone in the middle of the retina (the fovea), so as you get further from there you can render the scene with less and less detail. You can knock down or even eliminate the anti-aliasing, and probably get away with running half resolution or worse at the periphery. This frees up render power. Even if the double-rendering gets handled better, or they finally give us dual GPUs with a single GPU per eye, there are always gains to be had, especially for the crazy high resolutions of future headsets.
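The arithmetic behind that saving can be sketched in a few lines. This is a hypothetical back-of-the-envelope model (not anything from DCS or a headset SDK): a circular foveal region kept at full resolution, with the rest of the panel shaded at a quarter of the pixel density (half resolution in each axis). The 20% radius and quarter-density figures are illustrative assumptions.

```python
import math

def shaded_fraction(width, height, fovea_radius_frac, periphery_density=0.25):
    """Fraction of full-resolution shading work left after foveation."""
    total = width * height
    fovea_radius = fovea_radius_frac * min(width, height)
    fovea_pixels = math.pi * fovea_radius ** 2        # full-rate foveal circle
    periphery_pixels = total - fovea_pixels           # reduced-rate periphery
    return (fovea_pixels + periphery_pixels * periphery_density) / total

# A 1600x1440 panel with a foveal circle whose radius is ~20% of panel height:
frac = shaded_fraction(1600, 1440, 0.20)
print(f"Shading work remaining: {frac:.0%}")
```

Under these assumptions only about a third of the original shading work remains, which is why a quoted "30% GPU saving" reads as a conservative figure rather than an upper bound.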
nrosko Posted January 10, 2019 it may ease GPU load (great!) but it does nothing to fix the resolution and clarity issues. it’s still a long ways away from what a simple monitor can do I'm not sure I understand your logic. To get better clarity & resolution we need to reduce the load on the GPU. That is the whole point of it. Win 10 64//4.5g i7 Kaby Lake//gtx Titan x pascal//16gb 3200ram//Asus Maximus Hero IX//Oculus Rift//
Jyge Posted January 11, 2019 You're wrong on that: foveated rendering allows the PPI of screens to be increased dramatically, but because only a fraction of the pixels are rendered at the highest resolution, performance shouldn't decrease significantly. If anything, FPS will actually increase. So, I already asked this. Would the current resolution suffice for a better picture if only it were rendered better? If we took a still image right now and rendered it in the Rift like there's no tomorrow, would we then see the picture more clearly, with no SDE? Apparently Samsung is somewhere near enough with resolution, actually providing more vertical pixels than the Pimax, so foveated rendering would bring the boost as the necessary quality can be reached. I think the HTC Vive Pro has about the same resolution as the Samsung?
etherbattx Posted January 11, 2019 I'm not sure i understand your logic. To run better clarity & resolution we need to reduce the load on GPU. That is the whole point of it. you are saying that the bad resolution and clarity in VR is because the graphics card can’t keep up? which implies that if we are willing to accept 15fps instead of 90, the image magically gains resolution and becomes more clear? it doesn’t work that way.
remi Posted January 11, 2019 you are saying that the bad resolution and clarity in VR is because the graphics card can’t keep up? which implies that if we are willing to accept 15fps instead of 90, the image magically gains resolution and becomes more clear? it doesn’t work that way. Foveated rendering focuses graphics rendering on a portion of the display and spends far fewer resources on the periphery. Thus, higher resolution, higher clarity, and equal or higher performance. This is a no-brainer. If you own an Oculus already, then you're out of luck and will have to use outdated tech.
Mr_sukebe Posted January 11, 2019 In a “hands on” comment that I read earlier from CES, there was talk of reducing GPU loading by 30%, so most definitely worth having. The only question is whether it’s cheaper to buy a new headset or a 2080ti. 7800x3d, 5080, 64GB, PCIE5 SSD - Oculus Pro - Moza (AB9), Virpil (Alpha, CM3, CM1 and CM2), WW (TOP and CP), TM (MFDs, Pendular Rudder), Tek Creations (F18 panel), Total Controls (Apache MFD), Jetseat
remi Posted January 11, 2019 In a “hands on” comment that I read earlier from CES, there was talk of reducing GPU loading by 30%, so most definitely worth having. The only question is whether it’s cheaper to buy a new headset or a 2080ti Get both. :D
etherbattx Posted January 11, 2019 i think you're forgetting that low resolution and lack of clarity in HMDs is due to the display hardware, not the rendering pipeline or foveated algorithms. it does not get more clear if you give the gpu more compute resources or time to render.
remi Posted January 11, 2019 i think you're forgetting that low resolution and lack of clarity in HMDs is due to the display hardware, not the rendering pipeline or foveated algorithms. it does not get more clear if you give the gpu more compute resources or time to render. What are you talking about? I already said that the screen's pixel density can increase by 30% and you can have the same FPS performance as before. Higher pixel density means higher clarity, and foveated rendering is the only way to make this possible.
Bwaze Posted January 11, 2019 Isn't there a problem with foveated rendering in that it really needs high FPS - at anything under 200 FPS, won't your eyes see that they have landed on a low-resolution portion of the scene before it turns high resolution?
nrosko Posted January 11, 2019 you are saying that the bad resolution and clarity in VR is because the graphics card can’t keep up? which implies that if we are willing to accept 15fps instead of 90, the image magically gains resolution and becomes more clear? it doesn’t work that way. Nope, I'm not saying that at all. I'm saying that if it takes less power to run a higher-res screen, then there is more chance we see better screens in the next HMDs. I think the confusion here is that I'm talking generally about foveated rendering & eye tracking & how they're going to be utilised by future hardware, & you are talking about this new Vive Pro? Thing is, there is nothing confirmed for that headset other than eye tracking, which could be used for all sorts of things. Currently sharpness (PPD) is poor even in the Pimax & Odyssey compared to a monitor, like you say, but FR will help us get there. So for the sake of comparison (some approximations):

Pimax 8K: 2 × 3840 × 2160, PPI 530, FOV 200, subpixel matrix Diamond RGB, PPD 14
Pimax 5K: 2 × 2560 × 1440, PPI 530, FOV 200, subpixel matrix RGB, PPD 14
Odyssey+: 2 × 3.5" 1600 × 1440, PPI 615, FOV 110, subpixel matrix PenTile, PPD 14
Vive Pro: 2 × 3.5" 1600 × 1440, PPI 615, FOV 110, subpixel matrix PenTile, PPD 14
Rift CV1: 2 × 3.54" 1200 × 1080, PPI 456, FOV 110, subpixel matrix PenTile, PPD 12
HTC Vive: 2 × 3.62" 1200 × 1080, PPI 446, FOV 110, subpixel matrix PenTile, PPD 11
PSVR: 1 × 5.70" 1920 × 1080, PPI 386, FOV 100, subpixel matrix RGB, PPD 10

Retinal resolution (20/20 vision) = 60 PPD; average person = 80 PPD. My 31" 4K monitor is 80 PPD.

So if we wanted a headset, let's say with a FOV of 130, but with similar sharpness to a 1080p 31" monitor, you would need at least a 4K panel per eye, like the Pimax 8K. Bear in mind most people are having to drop the gfx settings and reduce FOV to run the Pimax 8K @ 60Hz. Win 10 64//4.5g i7 Kaby Lake//gtx Titan x pascal//16gb 3200ram//Asus Maximus Hero IX//Oculus Rift//
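The PPD figures above can be roughly reproduced with the simplest possible model: horizontal pixels divided by horizontal FOV. This is only a sketch; real headsets deviate because of lens distortion, binocular overlap and panel utilisation, which is why the quoted numbers are approximations.

```python
def ppd(h_pixels, h_fov_deg):
    """Naive pixels-per-degree: panel width in pixels over field of view in degrees."""
    return h_pixels / h_fov_deg

# Rift CV1: 1200 px over ~110 degrees -> roughly 11 PPD (quoted above as ~12)
print(f"Rift CV1: {ppd(1200, 110):.1f} PPD")
# Odyssey+: 1600 px over ~110 degrees -> roughly 14.5 PPD (matches the quoted 14)
print(f"Odyssey+: {ppd(1600, 110):.1f} PPD")
```

The same formula shows why the Pimax 8K's number sags despite 4K panels: spreading 3840 pixels across a 200-degree FOV costs angular density, so wide FOV and high PPD pull against each other until panel resolution climbs much further.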
SonofEil Posted January 12, 2019 (edited) If foveated rendering is so beneficial, why aren't we already getting variable pixel density between the centre and edges of the image? If you mean physical screen "pixel density", you still need full-screen high-resolution hardware. FR uses eye tracking to render the peripheral image at a significantly lower resolution than your eyes' focal center. It's the eye tracking hardware that's been missing in current-gen VR headsets. If you move your eyes (not your head) to the side, FR is supposed to move the focal center of the image to the side as well, directly in front of where your eyes are now looking. The 'old' high-resolution center of the hardware screen is now rendered with a much lower-resolution image. Because your eyeballs only 'see' a 10-20 degree circle in the center of your vision in high detail, FR should be totally invisible to the user while significantly reducing GPU loading. Edited January 12, 2019 by SonofEil i7 7700K @5.0, 1080Ti, 32GB DDR4, HMD Odyssey, TM WH, Crosswind Rudder...
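The decision described above can be sketched as a tiny gaze-driven lookup. This is purely illustrative (not any real headset SDK): the 10- and 25-degree thresholds and the scale steps are assumptions chosen to match the "10-20 degree high-detail circle" mentioned in the post.

```python
import math

def render_scale(tile_center_deg, gaze_deg, fovea_deg=10.0, mid_deg=25.0):
    """Pick a resolution scale for a screen tile from its angular distance to the gaze point."""
    dx = tile_center_deg[0] - gaze_deg[0]
    dy = tile_center_deg[1] - gaze_deg[1]
    angle = math.hypot(dx, dy)   # angular distance from where the eye is looking
    if angle <= fovea_deg:
        return 1.0               # full resolution where the eye actually looks
    elif angle <= mid_deg:
        return 0.5               # half resolution in the mid region
    return 0.25                  # quarter resolution in the far periphery

# Eye glances 15 degrees right: that tile goes full-res, the screen centre drops.
print(render_scale((15, 0), (15, 0)))   # 1.0
print(render_scale((0, 0), (15, 0)))    # 0.5
```

The key property is that the full-rate region follows the gaze, not the screen centre, which is exactly what fixed lens-based schemes cannot do.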
Brun Posted January 12, 2019 My point is that the periphery of the *display* could currently be rendered at lower resolution, regardless of where the eye is looking. Current VR headsets have (to varying extent) a sweet spot in the centre, beyond which the optics make the image increasingly blurred. It's the reason why in DCS you have to actually move your head to read instruments and displays rather than simply being able to glance at them. There's no need to render the whole image with the same quality as the centre. There's another reason why the edges should be rendered at lower res. The distortion which is applied to the rendered image means the edges are actually over-sampled in comparison to the centre of the image. See page 14 onwards in this pdf for an explanation. If there are already good reasons for variable rendering resolution - which would be applicable to existing hardware - but (as far as I'm aware) it's not being used, why do people expect eye-tracking based rendering to be significant? Asus Z690 Hero | 12900K | 64GB G.Skill 6000 | 4090FE | Reverb G2 | VPC MongoosT-50CM2 + TM Grips | Winwing Orion2 Throttle | MFG Crosswind Pedals
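The fixed variant Brun is asking about needs no eye tracking at all: just a resolution falloff tied to distance from the lens centre, where the optics blur the image and the distortion pass oversamples anyway. A minimal sketch, with thresholds that are illustrative assumptions rather than values from any shipping headset:

```python
def fixed_falloff_scale(r):
    """Render scale as a function of normalised distance r from the lens centre (0..1)."""
    if r < 0.4:
        return 1.0   # optical sweet spot: keep full resolution
    if r < 0.7:
        return 0.6   # increasingly blurred by the lens: spend less here
    return 0.3       # far edge: heavily blurred and oversampled by distortion

# Scale at the centre, mid-zone and edge of the lens:
print([fixed_falloff_scale(r) for r in (0.0, 0.5, 0.9)])
```

Because the falloff is anchored to the lens rather than the gaze, it leaves performance on the table whenever the eye looks away from centre, but it works on existing hardware with no new sensors.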
Gearbox Posted January 12, 2019 My point is that the periphery of the *display* could currently be rendered at lower resolution, regardless of where the eye is looking. Current VR headsets have (to varying extent) a sweet spot in the centre, beyond which the optics make the image increasingly blurred. It's the reason why in DCS you have to actually move your head to read instruments and displays rather than simply being able to glance at them. There's no need to render the whole image with the same quality as the centre. There's another reason why the edges should be rendered at lower res. The distortion which is applied to the rendered image means the edges are actually over-sampled in comparison to the centre of the image. See page 14 onwards in this pdf for an explanation. If there are already good reasons for variable rendering resolution - which would be applicable to existing hardware - but (as far as I'm aware) it's not being used, why do people expect eye-tracking based rendering to be significant? That's partly what this mod does: https://forums.eagle.ru/showthread.php?t=215373 Remember we are still in the infancy of VR tech and it takes time for things to be implemented. I don't know how old you are but I remember the infancy of 3D, and before that the infancy of home computers at all. It's a shame that the golden age of flight sims happened back when computers still sucked. Here's a screenshot of the first flight sim I ever played, on the Timex Sinclair 2068 boasting a 3.5MHz processor and 48 KB of RAM.
Zoomer Posted January 12, 2019 I am afraid they seem to be working a little slower on that one since the video. Apparently, they got re-prioritized to lightweight entertainment on standalone sets like the Go and Quest. To push that approach, the Quest intentionally does not feature a PC connection and has its own processing for its own games. The Rift 2 seems to be pushed back (maybe out of the way, in order not to hamper the standalone strategy); CES 2019 for Oculus is just about the Quest approach. Maybe, but the same tech will be needed in the standalone units to an even greater extent, to maximise the fidelity with the more limited hardware provided. One is not exclusive of the other.
Zoomer Posted January 12, 2019 That's partly what this mod does: https://forums.eagle.ru/showthread.php?t=215373 Remember we are still in the infancy of VR tech and it takes time for things to be implemented. I don't know how old you are but I remember the infancy of 3D, and before that the infancy of home computers at all. It's a shame that the golden age of flight sims happened back when computers still sucked. Here's a screenshot of the first flight sim I ever played, on the Timex Sinclair 2068 boasting a 3.5MHz processor and 48 KB of RAM. Yes, the good old Sinclair Spectrum 48K - I still have it, and I'm sure I played that sim. To think what a good gaming PC can do today is really amazing. Every time I fly in VR it's still a wonder of technology. Even with these first-gen HMDs, the experience of immersion is a point of no return.
nrosko Posted January 12, 2019 My point is that the periphery of the *display* could currently be rendered at lower resolution, regardless of where the eye is looking. Current VR headsets have (to varying extent) a sweet spot in the centre, beyond which the optics make the image increasingly blurred. It's the reason why in DCS you have to actually move your head to read instruments and displays rather than simply being able to glance at them. There's no need to render the whole image with the same quality as the centre. There's another reason why the edges should be rendered at lower res. The distortion which is applied to the rendered image means the edges are actually over-sampled in comparison to the centre of the image. See page 14 onwards in this pdf for an explanation. If there are already good reasons for variable rendering resolution - which would be applicable to existing hardware - but (as far as I'm aware) it's not being used, why do people expect eye-tracking based rendering to be significant? The sweet spot is due to the lenses & FOV; future headsets should improve the lens quality. Also, fixed FR is already there with the Oculus Go. https://developer.oculus.com/documentation/unreal/latest/concepts/unreal-ffr/ This imo is dirty FR, not really what you want for playing games. Win 10 64//4.5g i7 Kaby Lake//gtx Titan x pascal//16gb 3200ram//Asus Maximus Hero IX//Oculus Rift//
etherbattx Posted January 12, 2019 why do people expect eye-tracking based rendering to be significant? well it’s the new hotness and the latest hype... and they promise it will fix all the sde, performance and resolution issues of VR. similar to how DX11 and DX12 solved the perf issues of gaming on Windows.
nrosko Posted January 12, 2019 (edited) well it’s the new hotness and the latest hype... and they promise it will fix all the sde, performance and resolution issues of VR. similar to how DX11 and DX12 solved the perf issues of gaming on Windows. SDE is SDE - it's SUBPIXELS. These are the black lines between the pixels, & that can be affected by other things like panel type. If you want to see an easy demonstration of this (not hype), compare a PSVR with a Rift or Vive (the PS has a different subpixel arrangement). Significantly fewer pixels on the PSVR but less SDE. Resolution will always help with SDE because it shrinks everything, but all sorts of things can affect SDE. IT'S NOT AS SIMPLE AS + RESOLUTION. FR is good for VR; I'm struggling with the logic of being negative about it. Edited January 12, 2019 by nrosko Win 10 64//4.5g i7 Kaby Lake//gtx Titan x pascal//16gb 3200ram//Asus Maximus Hero IX//Oculus Rift//
nrosko Posted January 12, 2019 There are a few things coming the way of VR that are not hype but significantly important for the sake of running DCS. 1 is ASW without artifacts (everyone will benefit from running a game at 45 FPS without any obvious distortion). 2 is foveated rendering (imagine being able to gaze at your cockpit without zoom & read everything), so we can have 30+ PPD. Win 10 64//4.5g i7 Kaby Lake//gtx Titan x pascal//16gb 3200ram//Asus Maximus Hero IX//Oculus Rift//
Brun Posted January 12, 2019 I'm being realistic, not negative. Everyone with a Pascal or later Nvidia card - the overwhelming majority of VR users if this topic is any indication - already owns technology which is designed specifically for VR performance but isn't being used*.

No single pass stereo
No lens matched shading
No VR SLI

I just don't see any evidence that developers will jump at the opportunity to implement foveated rendering, especially seeing as it's likely to be even more complex than the above. I still think eye tracking has potential, but expect it will be used for interaction and effects (i.e. depth blur) rather than rendering performance. *Nvidia Funhouse doesn't count, sorry. Asus Z690 Hero | 12900K | 64GB G.Skill 6000 | 4090FE | Reverb G2 | VPC MongoosT-50CM2 + TM Grips | Winwing Orion2 Throttle | MFG Crosswind Pedals
Zoomer Posted January 12, 2019 I'm being realistic, not negative. Everyone with a Pascal or later Nvidia card - the overwhelming majority of VR users if this topic is any indication - already owns technology which is designed specifically for VR performance but isn't being used*. No single pass stereo No lens matched shading No VR SLI I just don't see any evidence that developers will jump at the opportunity to implement foveated rendering, especially seeing as it's likely to be even more complex than the above. I still think eye tracking has potential, but expect it will be used for interaction and effects (i.e. depth blur) rather than rendering performance. *Nvidia Funhouse doesn't count, sorry. It's critical mass. VR is new tech, and as adoption grows so will the tech and the software around it. Slower than we'd like, but I'll take what we have over a monitor any day.