ChariotOfFLAME Posted February 2
Hi folks. I'm having an issue where a portion of the center of my vision is treated by OpenXR Toolkit as part of the outer ring; see below (behind the menu, the triangular region of culled pixels). Note that menus and overlays are not impacted, only the game world. I have tried safe mode and an uninstall/reinstall, with no luck. Is there some other config file I should get rid of? I have a Reverb G2 and a 4080 Super. @mbucchia I know this project is as dead as the support for my headset, but any tips on things I could try would be super awesome. I'll understand if the answer is no.
Qcumber Posted February 2
48 minutes ago, ChariotOfFLAME said: Hi folks. I'm having an issue where a portion of the center of my vision is treated by OpenXR Toolkit as part of the outer ring [...]
Why not use QVFR instead? It is the newer and better option.
ChariotOfFLAME (Author) Posted February 3
18 hours ago, Qcumber said: Why not use QVFR instead? It is the newer and better option.
I'm running a Reverb G2; QVFR won't work for me.
sleighzy Posted February 3
2 hours ago, ChariotOfFLAME said: I'm running a Reverb G2; QVFR won't work for me.
It should. The documentation lists the G2 as a supported and tested headset for fixed foveated rendering (FFR) on non-eye-tracked headsets: Home · mbucchia/Quad-Views-Foveated Wiki
ChariotOfFLAME (Author) Posted February 6
My performance is not as good with QVFR, as I can't limit the field of view like I can with OpenXR Toolkit. Here is what I'm seeing when I use HAM mode, which exaggerates the issue. And here's my log file: XR_APILAYER_MBUCCHIA_toolkit.log
If anyone has any ideas on how to recalibrate what OpenXR Toolkit considers to be the outer ring, or where that value is stored, that would be very helpful!
mbucchia Posted February 7
This isn't a config issue or a solvable problem. Foveated rendering via VRS (what OpenXR Toolkit does) cannot be supported reliably outside of the game engine. It is **impossible** for an external tool to properly "triage" and classify render passes to do foveated rendering that works in 100% of scenarios without engine support. What OpenXR Toolkit does (the "heuristic") is extremely fragile and can be broken by something as simple as using a different aircraft or enabling a graphics setting (best guess for your situation: perhaps DLSS or another form of upscaling). The same exact thing happened in MSFS, and I fixed it a few times, but it became too much work. AFAIK the feature is now useless in MSFS. 2+ years ago I made a thread on this forum to explain how ED (and any game developer) could add 5 lines of code in their engine to resolve these problems and make "universal" foveated rendering injection a reality. These 5 lines would preface the beginning of a render pass with a "hint" that OpenXR Toolkit could detect, so it would know when and how to apply foveated rendering. Unfortunately that thread was ignored by the devs and led to many angry discussions, so I ended up deleting it. QVFR, while a better solution than VRS overall, does increase CPU load, and that is probably why it isn't working as well for you.
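To make concrete how fragile such a heuristic is, here is a minimal sketch of the kind of check involved, assuming a hooked ID3D11DeviceContext::OMSetRenderTargets. The resolution, accepted formats, and function name are illustrative assumptions, not OpenXR Toolkit's actual code:

#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Illustrative: the injector would learn the per-eye resolution from the VR runtime.
static UINT g_eyeWidth = 2208, g_eyeHeight = 2160;

// Called from a hooked ID3D11DeviceContext::OMSetRenderTargets.
// Returns true if this render target *looks like* a VR eye view.
bool LooksLikeVrEyeView(ID3D11RenderTargetView* rtv)
{
    if (!rtv) return false;

    ComPtr<ID3D11Resource> resource;
    rtv->GetResource(&resource);

    ComPtr<ID3D11Texture2D> texture;
    if (FAILED(resource.As(&texture)))
        return false; // not a 2D surface, certainly not an eye buffer

    D3D11_TEXTURE2D_DESC desc;
    texture->GetDesc(&desc);

    // The whole "heuristic": dimensions match the eye buffer and the format is
    // a typical swapchain color format. Any off-screen surface (HUD
    // render-to-texture, upscaler intermediate, mirror view) that happens to
    // share this size and format will be misclassified.
    const bool sizeMatches = desc.Width == g_eyeWidth && desc.Height == g_eyeHeight;
    const bool colorFormat = desc.Format == DXGI_FORMAT_R8G8B8A8_UNORM_SRGB ||
                             desc.Format == DXGI_FORMAT_R16G16B16A16_FLOAT;
    return sizeMatches && colorFormat;
}

A surface that passes this test but is not an eye view gets VRS applied to it by mistake, which is exactly the kind of visible culling glitch reported at the top of this thread.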
skywalker22 Posted February 7
7 hours ago, mbucchia said: This isn't a config issue or a solvable problem. [...]
Which 5 lines of code? How come you are so sure those 5 lines alone would make the difference?
PS: Maybe they will listen to you now.
lefuneste01 Posted February 7
11 hours ago, mbucchia said: This isn't a config issue or a solvable problem. Foveated rendering via VRS (what OpenXR Toolkit does) cannot be supported reliably outside of the game engine. [...]
Hello, I think it's pointless to do it for DCS, as you already provided the needed tools for Varjo and other HMDs, but I'm wondering what could be done for IL-2 GB with a ReShade addon... I'm now able to do things this way, having access to all render targets, replacing shader code, injecting constant buffers, copying textures from one pixel shader to another, ...: Reshade VR Enhancer Mod (VREM) - Utility/Program Mods for DCS World - ED Forums
But I always had in mind that it would not be feasible to force the engine to do 4 renderings instead of 2.
mbucchia Posted February 8
20 hours ago, skywalker22 said: Which 5 lines of code?
The #1 challenge **by far** in any foveated rendering injection (built outside of the game engine) is to identify at what time to inject the VRS commands during rendering. This is the issue that all 3 available solutions (OpenXR Toolkit, vrperfkit, and Pimax Magic) are struggling with. Currently, what these 3 tools do is hook into Direct3D calls, specifically ID3D11DeviceContext::OMSetRenderTargets, which is invoked whenever the engine is about to draw "something". The problem is that this "something" can be one of many things: it _can_ be the view to be rendered in your VR headset (*ding ding ding* that is the one you want to inject the VRS commands at), or it can be something else, like an off-screen surface used for render-to-texture (very common for HUDs or instruments), or a menu, or a miscellaneous surface used for a specific graphics effect (*bzzzzzt* no, you absolutely do not want to inject VRS commands for those). During rendering of a frame, OMSetRenderTargets() is called many times, for different purposes. If the injector properly detects that a call is for the VR views, then all things work fine. But if the injector mis-classifies a call as a VR view when it is in fact one of the other purposes, then you end up with issues such as the one described in this thread. These issues tend to be catastrophic, as they are very visible in the way they glitch. There is no universal solution for recognizing a VR view render pass from within an OMSetRenderTargets() call. What OpenXR Toolkit does is a relatively involved heuristic that queries some of the basic data available during OMSetRenderTargets(), such as the dimensions of the surface to render or the "format" (color type), all part of the D3D11_TEXTURE2D_DESC. Sadly this isn't enough to reliably detect that the engine is rendering the VR view. Also, fun fact: newer tech like Direct3D 12 or Vulkan does not support "introspection", which means there is no trivial way to even extract this information in constant time. Doing something like adding a visual marker and then looking for it later at the end of the frame is also not possible, for two reasons: one, it would kill performance to read back GPU memory, and two, it would be too late. And no, it isn't something that can be hard-coded somehow, because the order of the render passes in the engine changes often: it changes depending on which graphics settings you have enabled, which aircraft or scene, which segment of the game (menu, cockpit view, 3rd-person view), and it also changes between versions of the game. Also, for dynamic foveated rendering, you must be able to not only detect that a render pass is for a VR view, but also know whether it is for the left eye or the right eye. This alone adds another insane degree of complexity and makes mistakes in that detection even less forgiving. Bottom line: in order to reliably implement foveated rendering in an injector, you need to classify render passes as they happen on the GPU, which effectively requires knowledge of the future. This is not a trivial problem, and AFAICT this problem of predicting the future is not solvable today. My proposal 2+ years ago was to have the game engine programmatically add a marker to the render targets that it uses for the VR view.
Direct3D supports this via ID3D11DeviceChild::SetPrivateData, and it is very efficient to do, both in terms of effort (setting up this function call is less than 5 lines of code) and performance (there is no penalty if done properly). By providing such markers, it becomes trivial for OpenXR Toolkit (and other tools) to look for the marker when hooking OMSetRenderTargets(), and to know, without an ounce of doubt, whether the VRS commands need to be injected (a rough sketch of this follows at the end of this post).
20 hours ago, skywalker22 said: How come you are so sure those 5 lines alone would make the difference?
I am one of the 3 leading experts on this topic (the only foveated rendering injectors that work semi-universally today are OpenXR Toolkit, vrperfkit, and Pimax Magic). I have probably spent more time than anyone else on solving these problems.
20 hours ago, skywalker22 said: PS: Maybe they will listen to you now.
It's too late now. None of the three tools mentioned above are in active development. The engine needs to add the marker, and then the tools also need modifications to look for the marker, something that isn't done today, since no such standard marker was ever agreed upon with the developers.
16 hours ago, lefuneste01 said: I think it's pointless to do it for DCS, as you already provided the needed tools for Varjo and other HMDs
Quad Views is not a solution that helps in all scenarios. Both VRS and Quad Views have pros and cons; one might help in a situation where the other doesn't. Today, if you do not have significant CPU headroom, Quad Views will not help you, while VRS on the other hand is almost free in terms of CPU usage.
16 hours ago, lefuneste01 said: but I'm wondering what could be done for IL-2 GB with a ReShade addon...
IL-2 suffers the same problems as listed above, and more. None of the 3 injectors work today with IL-2, as they cause mysterious crashes. I spent significant time with a user on the IL-2 forum (firmidigli or something, sorry, I blank on their name) troubleshooting why VRS causes the IL-2 engine to crash. We came up empty after weeks of investigation. There is something specific to what the IL-2 engine does that just does not work with VRS and causes random crashes.
16 hours ago, lefuneste01 said: But I always had in mind that it would not be feasible to force the engine to do 4 renderings instead of 2.
You cannot inject quad views outside of the game engine. Quad views is not post-processing (which is how Reshade works). There are hundreds of places in every game engine where the engine assumes 2 views for rendering: in the geometry code, in the shaders, in the presentation code... I spent a significant amount of time working on quad views injection, and I could never make it work cleanly outside of basic sample code (worthless). Every game where I somehow successfully managed to inject quad views (mostly Unity games, I can't remember their names) had completely broken graphics effects, because quad views requires some precautions when implementing your engine. We brainstormed some ideas with other developers in the past (fholger, creator of vrperfkit), and the only approach that sounded remotely viable was dynamic shader recompilation or geometry shader injection; both approaches are incredibly complex, would likely represent weeks or months of work by an expert developer just to support 1 game, and would very likely still break many post-processing effects (aka wasting all this time).
One of the other approaches I came up with was inspired by Luke Ross' alternate frame rendering, and consisted of "alternate views rendering", where each frame loop would alternate between views 1-2 and views 3-4. However, this causes significant CPU overhead (unacceptably higher than what we see with DCS today, for example) and it breaks any temporal post-processing such as TAA or DLSS. I got this specific technique working in MSFS 2020, and it was absolutely unusable both performance-wise and quality-wise.
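Returning to the marker proposal above, here is a rough illustration. The GUID, function names, and call sites below are hypothetical (no such marker was ever standardized), but ID3D11DeviceChild::SetPrivateData and GetPrivateData are standard D3D11 API:

#include <windows.h>
#include <initguid.h>
#include <d3d11.h>

// Hypothetical marker GUID; a real scheme would need a value agreed upon
// between engines and injectors.
DEFINE_GUID(GUID_VrViewMarker,
    0xb7a2c3d4, 0x1e5f, 0x4a6b, 0x8c, 0x9d, 0x0e, 0x1f, 0x2a, 0x3b, 0x4c, 0x5d);

// Engine side: the "5 lines". Tag each VR eye texture once, at creation time.
void TagVrViewTexture(ID3D11Texture2D* eyeTexture, UINT eyeIndex /* 0 = left, 1 = right */)
{
    eyeTexture->SetPrivateData(GUID_VrViewMarker, sizeof(eyeIndex), &eyeIndex);
}

// Injector side: inside the hooked OMSetRenderTargets, the fragile guess
// becomes a cheap, reliable lookup.
bool IsTaggedVrView(ID3D11RenderTargetView* rtv, UINT* eyeIndexOut)
{
    ID3D11Resource* resource = nullptr;
    rtv->GetResource(&resource);
    UINT size = sizeof(*eyeIndexOut);
    const bool tagged = SUCCEEDED(
        resource->GetPrivateData(GUID_VrViewMarker, &size, eyeIndexOut));
    resource->Release();
    return tagged;
}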
mbucchia Posted February 8
Edit: turns out the thread wasn't deleted (I did not create it, though I deleted all my replies after the thread turned sour).
skywalker22 Posted February 8
@mbucchia https://drive.google.com/drive/folders/1iBj_ndlcJ6X0w0g7PIqB4oczDfPmqAfe
Do you know anything about OpenXR Toolkit v1.3.3? There are only 2 DLL files inside the zip, and their names are very strange. Is it official? The strange thing is, you don't have it on the official website.
lefuneste01 Posted February 8
4 hours ago, mbucchia said: The #1 challenge **by far** in any foveated rendering injection (built outside of the game engine) is to identify at what time to inject the VRS commands during rendering. [...]
Ok, so currently I'm able to inject a ReShade technique on one of the last pixel shader render targets, and I know whether I'm on the left eye, right eye, inner view, or outer view. It is rendered to the HMD but not to the screen, only because the mirror is done before my changes. But I'm in DirectX rendering, not OpenXR. I also have access to the depth map, and even to a stencil map. What would be feasible for VRS in this configuration?
zildac Posted February 8
5 hours ago, mbucchia said: The #1 challenge **by far** in any foveated rendering injection (built outside of the game engine) is to identify at what time to inject the VRS commands during rendering. [...]
@BIGNEWY Can you highlight this to the devs again, please?
Even if Matt is unable to implement the changes to the tools, having DCS "ready" for such an approach would likely benefit many VR users.
mbucchia Posted February 8
53 minutes ago, skywalker22 said: Do you know anything about OpenXR Toolkit v1.3.3? [...]
It's not from me, and it doesn't really make sense. The guy added support for FOV override >100%, which creates a distorted image. I don't recommend this build, since it's likely not digitally signed and will break anti-cheats.
43 minutes ago, lefuneste01 said: What would be feasible for VRS in this configuration?
As I've explained, VRS is not post-processing. Reshade (a post-processing injector) will not help you. Saving GPU cycles must happen while you are rendering. The work cannot be undone magically at the very end (post-processing).
mbucchia Posted February 8
25 minutes ago, mbucchia said: As I've explained, VRS is not post-processing. Reshade (a post-processing injector) will not help you.
Just to be clear on how drastically different this is: Reshade, being a post-processing injector, operates after the game engine has finished its work. So whatever time your GPU has spent rendering is already behind, and Reshade is never going to boost your performance. Reshade hooks into the "presentation layer", meaning the API that the game uses to deliver a frame to the device. This device can be a monitor or a VR headset. This hook is significantly easier to do, because Reshade is operating on the finished product of the game's rendering. It's all packaged and labeled: left, right, depth, etc. There is no need to do any prediction; Reshade has all the information it needs, because the rendering has already happened. Reshade doesn't have to make a guess and risk being wrong. (I'm not downplaying the complexity and how great Reshade is, but let's say that Reshade's added value is in its scripting stack and interface, not in how it hooks into the game.) For something like VRS, the magic that VRS triggers must happen **during** the rendering. At that time, the injector doesn't know what is being done. The injector just knows "something is being rendered". It's not all packaged and labeled. The injector has to guess whether it's left, right, or something completely different. That guess is extremely difficult without knowing the future, and knowing the past doesn't really help. And if that guess is incorrect, the consequences are catastrophic (bad visual glitches). The only entity that knows for sure what is happening is the game engine. Without the game engine giving the injector a hint of what it is doing, the odds that the injector is going to guess incorrectly are not close to 0. The best the injector can do is mitigate risks, finding the right balance between making a bold guess and only making safe guesses. None of that is even remotely applicable to the much simpler situation that Reshade has to deal with.
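To illustrate why the timing matters, here is a hedged sketch using Direct3D 12, where VRS is a core API (DCS itself is D3D11, where the equivalent goes through NvAPI; the function below and its arguments are only an illustration of the principle):

#include <d3d12.h>

// The point is *when* this runs: the shading-rate state must be recorded into
// the command list before the draw calls of the pass it should affect. Once
// the frame is finished and handed to a post-processor like Reshade, the GPU
// cycles are already spent and nothing can claw them back.
void ApplyFoveatedShadingForPass(ID3D12GraphicsCommandList5* cmdList,
                                 ID3D12Resource* shadingRateImage /* per-tile rates */)
{
    const D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // keep the per-draw base rate
        D3D12_SHADING_RATE_COMBINER_OVERRIDE,    // let the screen-space image win
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    // The image encodes fine shading in the center, coarse at the edges; it
    // must be in the D3D12_RESOURCE_STATE_SHADING_RATE_SOURCE state.
    cmdList->RSSetShadingRateImage(shadingRateImage);
    // ... the pass's draw calls follow, now shaded at reduced rate ...
}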
lefuneste01 Posted February 8
14 hours ago, mbucchia said: Just to be clear on how drastically different this is: Reshade, being a post-processing injector, operates after the game engine has finished its work. [...]
Reshade is now a bit more than that. With addons you can trap most of the rendering commands (e.g. init_resources, bind_pipeline, push_constants, draw_indexed, ...) and trigger functions you defined in C++. For example, I use init_pipeline to compute shader code hashes and replace shaders with modified ones that I compiled previously. I use bind_pipeline to set a flag to grab textures in the following push_descriptor. I'm not at home, but tomorrow I'll post here a log generated by trapping DirectX commands during a frame, to show you how we can interfere with the whole rendering.
lefuneste01 Posted February 9
On 2/8/2025 at 10:49 AM, mbucchia said: Just to be clear on how drastically different this is: Reshade, being a post-processing injector, operates after the game engine has finished its work. So whatever time your GPU has spent rendering is already behind, and Reshade is never going to boost your performance. [...]
To demonstrate the "addon" feature of ReShade (as explained previously), here is the log of a DCS frame that I built with a ReShade addon I made by mixing different sources: https://www.mediafire.com/file/mibat8e4r2pv2v2/shaderHunter_AH64_VR.log/file (it's 32 MB...). Small sample here:
07:33:42:123 [17112] | INFO | [Shader Hunter] draw_indexed(384, 1, 0, 0, 0)
07:33:42:123 [17112] | INFO | [Shader Hunter] push_descriptors(stage =pixel, layout.handle=00000170682001F0, param_index=1, update = { type:shader_resource_view, binding:0, count:1 })--> resource_view[0], handle = 0000000000000000 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =compute, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =vertex, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =pixel, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =geometry, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =hull, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =domain, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] bind_render_targets_and_depth_stencil(1, { 0000017142DC2338, }, 0000017142DC3A38)
07:33:42:124 [17112] | INFO | [Shader Hunter] bind_viewports(0, 1, { ... })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =compute, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =vertex, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =pixel, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =geometry, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:124 [17112] | INFO | [Shader Hunter] push_descriptors(stage =hull, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:125 [17112] | INFO | [Shader Hunter] push_descriptors(stage =domain, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_render_targets_and_depth_stencil(1, { 0000017263A14078, }, 0000000000000000)
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_viewports(0, 1, { ... })
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline(input_assembler : 0000017068CA9950, pipelineHandle: 0000000000000000)
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline_state(primitive_topology, 5)
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline_state(blend_constant, 0)
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline_state(sample_mask, 4294967295)
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline_state(front_stencil_reference_value, 0)
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline_state(back_stencil_reference_value, 0)
07:33:42:125 [17112] | INFO | [Shader Hunter] push_descriptors(stage =vertex, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:1 })
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline(vertex_shader : 000000008DB626CD, pipelineHandle: 00000171B3D26210)
07:33:42:125 [17112] | INFO | [Shader Hunter] push_descriptors(stage =pixel, layout.handle=00000170682001F0, param_index=1, update = { type:shader_resource_view, binding:0, count:1 })--> resource_view[0], handle = 0000017AECC8F040 })
07:33:42:125 [17112] | INFO | [Shader Hunter] bind_pipeline(pixel_shader : 00000000BAF1E52F, pipelineHandle: 00000171B3D260D0)
07:33:42:126 [17112] | INFO | [Shader Hunter] bind_pipeline(unknown : 0000017068CA9950, pipelineHandle: 0000000000000000)
07:33:42:126 [17112] | INFO | [Shader Hunter] draw(4, 1, 0, 0)
07:33:42:126 [17112] | INFO | [Shader Hunter] push_descriptors(stage =pixel, layout.handle=00000170682001F0, param_index=1, update = { type:shader_resource_view, binding:0, count:1 })--> resource_view[0], handle = 0000000000000000 })
07:33:42:126 [17112] | INFO | [Shader Hunter] push_descriptors(stage =compute, layout.handle=00000170682001F0, param_index=2, update = { type:constant_buffer, binding:0, count:4 })
Each line is created by C++ code, so instead of just writing a log you can do a lot of other things. This is what I did in my DCS VREM mod: for example, replacing PS code with modified versions, copying textures from one pixel shader to another to use as a mask in the modified code, injecting GUI variables into constant buffers to trigger effects in the modified PS code, and so on. In this context, what should be done for VRS?
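For readers unfamiliar with the mechanism being described: a ReShade addon is a DLL that registers callbacks for these events. A minimal skeleton might look roughly like the following, based on ReShade's published addon API (the event callback signature is my best reading of the headers and may differ between ReShade versions, so treat it as a sketch):

#include <windows.h>
#include <reshade.hpp>

extern "C" __declspec(dllexport) const char *NAME = "Shader Hunter (sketch)";
extern "C" __declspec(dllexport) const char *DESCRIPTION =
    "Counts render-target binds per frame to probe pass ordering.";

static int s_rtv_binds = 0;

// Fires on the same event that produced the bind_render_targets_and_depth_stencil
// lines in the log above. This only *observes* the binds; deciding which of them
// is a VR eye view is the classification problem discussed in this thread.
static void on_bind_rtvs(reshade::api::command_list *cmd_list, uint32_t count,
                         const reshade::api::resource_view *rtvs,
                         reshade::api::resource_view dsv)
{
    ++s_rtv_binds; // a real addon would inspect rtvs[0] here
}

BOOL APIENTRY DllMain(HMODULE hModule, DWORD reason, LPVOID)
{
    switch (reason)
    {
    case DLL_PROCESS_ATTACH:
        if (!reshade::register_addon(hModule))
            return FALSE;
        reshade::register_event<
            reshade::addon_event::bind_render_targets_and_depth_stencil>(&on_bind_rtvs);
        break;
    case DLL_PROCESS_DETACH:
        reshade::unregister_addon(hModule);
        break;
    }
    return TRUE;
}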
mbucchia Posted February 9
Thanks for the details. However, it isn't clear whether this really gives you the information that you need. It is able to hook into the right D3D calls, but have you tested whether it can **reliably** provide information that distinguishes rendering of the VR left/right view vs. offscreen rendering, etc.? Because that is the difficult part (the one that requires a heuristic to predict the future). Until you can evaluate whether you are given **reliable** access to the view information, there is no real advantage to this vs. using OpenXR Toolkit or vrperfkit as a starting point. Validating whether you can use these hooks is tricky. In OpenXR Toolkit, there is a (hidden) developer mode with a feature called "vrs_debug" that allows you to capture a frame and test the heuristic. It creates a similar log and also takes a screenshot of every screen pass along with its left/right/both classification. This is when you truly see how difficult it is to distinguish the VR view. With DCS, I recall there were approx. 80 passes (of course it depends on the version, the aircraft, the settings, the current viewpoint...), and the heuristic **must be right** 100% of the time when classifying these render passes, otherwise badness happens. You could try the following to assess how good or bad a job Reshade is doing. Assuming you can inject a stencil mask for ANY pass (which I doubt, given that stenciling and depth buffer are shared, so it probably only works for passes that already use depth, which isn't all of them), you can create a stencil for the left eye that covers, say, 25% of the screen on the left, and a stencil for the right eye that covers 25% of the screen on the right. When that stencil is applied, and if and only if the heuristic to detect left/right views is correct, the outcome will be a view cropped 25% on both sides. Any mis-classification will cause an obvious glitch where the left part of the right view (or vice versa) will be blocked. But that is only half of it. You also need to check that the heuristic doesn't accidentally reject some of the VR views, and therefore leaves some performance on the table. In OpenXR Toolkit there is a developer overlay with a value "VRS RTV", which is usually a good indicator. That value should be the total number of passes identified as VR views, e.g. the 80 I mentioned above. It is much trickier to evaluate what this number should be; ideally the developer would use a tool like RenderDoc to capture a frame and count how many passes they see, then compare this number with how many passes the heuristic classified as VR views. Assuming that heuristic is good (and to be honest, I highly doubt it is, because as mentioned this is an extremely complex problem without a solution today), you can use the hooks to inject VRS commands. I'm not gonna go into the details here. There is a project here that I never released but was meant to be a clean, standalone VRS injector (though it lacks any heuristic): https://github.com/mbucchia/VRSInjector/blob/vr/InjectorDll/vrs_d3d11.cpp
Back to your original point: if your goal is to make this work in IL-2, I would not waste my time on it. There is something very special about IL-2 that no one has figured out. You can try all 3 injectors with IL-2, and all 3 will eventually crash for inexplicable reasons. You can look up the OpenXR Toolkit source code to figure out how to "unblock" IL-2; I think enabling developer mode unblocks it.
You can also look up the PimaxMagic4All tool I wrote here, which is the closest I thought I got: https://forum.il2sturmovik.com/topic/85619-dfr-support/#findComment-1283925. However, it was ultimately proven that the game still crashed with VRS.
lefuneste01 Posted February 12
On 2/9/2025 at 7:24 PM, mbucchia said: Thanks for the details. However, it isn't clear whether this really gives you the information that you need. [...]
I had to identify left/right and inner/outer for quad views, because I set up a mask for labels. Currently I'm doing it by trapping a PS shader dedicated to global illumination. As it is called once per QV view (or eye), I just have to count the calls to know what is being drawn. I'm copying a texture from it, so I need to ensure I have the right texture for each view, otherwise things are not aligned... I also set up an option to apply effects only on the inner or outer view. I could have offered a choice per eye, but as it seems useless, I did not. If you have time to lose, you can try my mod and see how the stencil mask can be displayed for each QV target, or have a look at some videos here. Stupid question: how can I identify a "pass"? Is it by render target binding (I can get their resolutions), by draws (there are hundreds of them in a single frame), or by something else? Thanks for your link, I'll have a look and try to understand it... I'm confident about IL-2, as I have my VREM mod based on 3dmigoto working for years, and ReShade works for DCS where 3dmigoto no longer does...
mbucchia Posted February 13
12 hours ago, lefuneste01 said: I had to identify left/right and inner/outer for quad views, because I set up a mask for labels. Currently I'm doing it by trapping a PS shader dedicated to global illumination. As it is called once per QV view (or eye), I just have to count the calls to know what is being drawn.
With VRS, you need to capture (and identify) **all** the passes to gain performance. There are dozens (or two dozen, or three dozen) per eye. The number (and ordering) of the passes might depend on dozens of factors, such as game settings, which aircraft, which map... Counting is not a novel idea; it just doesn't work reliably unless you only care about supporting 1 version of 1 game with 1 set of settings on 1 map with 1 aircraft, or about creating (manually) an exponential number of "counting" heuristics for each individual combination. And for every major engine update, you will need to recalibrate all of them, since the developer might add, remove, or reorder the passes. Things like OpenXR Toolkit don't bother with counting for these reasons. Instead they look for other hints. An example is OpenXR Toolkit looking for an OpenXR swapchain image being committed (via hooking the corresponding API), since that likely happens at the end of rendering one of the views, and most engines draw the left view before the right. A similar approach is to look for the clearing of a depth buffer, since it most likely indicates the beginning of a new view.
12 hours ago, lefuneste01 said: how can I identify a "pass"? Is it by render target binding (I can get their resolutions)
RTV binding is the easiest way, but it is not sufficient. The resolution of the render target itself doesn't matter; for VRS you need the viewport. Depending on how the engine does it, setting the RTV might happen before setting the viewport, or vice versa. You need to handle both orderings.
12 hours ago, lefuneste01 said: I'm confident about IL-2, as I have my VREM mod based on 3dmigoto working for years, and ReShade
As I explained before, the issue with IL-2 is **very specific to the use of VRS**. The issue isn't related to injection or anything else. The game does something that breaks NvAPI VRS assumptions. I doubt your 3dmigoto mod exercises any of the paths relevant to that issue.
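A minimal sketch of one way to handle that ordering problem, assuming hooks on both OMSetRenderTargets and RSSetViewports (names are illustrative; a real injector would keep this state per device context and per pass):

#include <d3d11.h>

// Remember whichever call arrives first, and only act once both halves of the
// pass state are known.
struct PassState {
    ID3D11RenderTargetView* rtv = nullptr;
    D3D11_VIEWPORT viewport = {};
    bool hasRtv = false;
    bool hasViewport = false;
};
static PassState g_pass;

static void MaybeInjectVrs()
{
    if (!g_pass.hasRtv || !g_pass.hasViewport)
        return; // wait for the other half, whichever order the engine uses
    // Classification plus the NvAPI VRS setup would go here, sized from the
    // *viewport* dimensions, not the render target's.
    g_pass.hasRtv = g_pass.hasViewport = false; // consumed for this pass
}

// Called from the hooked ID3D11DeviceContext::OMSetRenderTargets.
void OnSetRenderTargets(UINT count, ID3D11RenderTargetView* const* rtvs)
{
    g_pass.rtv = count ? rtvs[0] : nullptr;
    g_pass.hasRtv = g_pass.rtv != nullptr;
    MaybeInjectVrs();
}

// Called from the hooked ID3D11DeviceContext::RSSetViewports.
void OnSetViewports(UINT count, const D3D11_VIEWPORT* viewports)
{
    if (count) {
        g_pass.viewport = viewports[0];
        g_pass.hasViewport = true;
    }
    MaybeInjectVrs();
}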
lefuneste01 Posted February 13
8 hours ago, mbucchia said: As I explained before, the issue with IL-2 is **very specific to the use of VRS**. [...]
Thanks for your explanations. So if there is no hope for IL-2, I won't spend time trying something with VRS, as quad views is available for DCS... Too bad, a 15% benefit in IL-2 would have been welcome.