Everything posted by mbucchia

  1. Just to be clear on how drastically different this is. Reshade is a post-processing injector: it operates after the game engine has finished its work. So whatever time your GPU has spent rendering, it's already behind, and Reshade is never going to boost your performance. Reshade hooks into the "presentation layer", meaning the API that the game uses to deliver a frame to the device. This device can be a monitor or a VR headset. This hook is significantly easier to do, because Reshade is operating on the finished product of the game's rendering. It's all packaged and labeled: left, right, depth, etc. There is no need to do any prediction; Reshade has all the information it needs, because the rendering has already happened. Reshade doesn't have to make a guess and risk being wrong. (I'm not downplaying the complexity and how great Reshade is, but let's say that Reshade's added value is in its scripting stack and interface, not in how it hooks into the game.) For something like VRS, the magic that VRS triggers must happen **during** the rendering. At that time, the injector doesn't know what is being done. The injector just knows "something is being rendered". It's not all packaged and labeled. The injector has to guess whether it's left, right, or something completely different. That guess is extremely difficult without knowing the future, and knowing the past doesn't really help. And if that guess is incorrect, the consequences are catastrophic (bad visual glitches). The only entity that knows for sure what is happening is the game engine. Without the game engine giving the injector a hint of what it is doing, the odds that the injector will guess incorrectly are not close to 0. The best thing the injector can do is mitigate risk, finding the right balance between making bold guesses and making only safe guesses. None of that is even remotely applicable to the much simpler situation that Reshade has to deal with.
  2. It's not from me, and it doesn't really make sense. The guy added support for FOV override >100%, which creates a distorted image. I don't recommend this build since it's likely not digitally signed and will break anti-cheats. As I've explained, VRS is not post-processing. Reshade (a post-processing injector) will not help you. Saving GPU cycles must happen while you are rendering. The work cannot be undone magically at the very end (post-processing).
  3. Edit: turns out the thread wasn't deleted (I did not create it, though I deleted all my replies after the thread turned badly)
  4. The #1 challenge **by far** in any foveated rendering injection (built outside of the game engine) is to identify at what point to inject the VRS commands during rendering. This is the issue that all 3 of the available solutions (OpenXR Toolkit, vrperfkit, and Pimax Magic) are struggling with. Currently, what these 3 tools do is hook into Direct3D calls, specifically ID3D11DeviceContext::OMSetRenderTargets, which is invoked sometime before the engine begins to draw "something". The problem is that this "something" can be one of many things: it _can_ be the view to be rendered in your VR headset (*ding ding ding* that is the one you want to inject the VRS commands for), or it can be something else, like an off-screen surface used for render-to-texture (very common for HUDs or instruments), or a menu, or a miscellaneous surface used for a specific graphics effect (*bzzzzzt* no, you absolutely do not want to inject VRS commands for those). During the rendering of a frame, OMSetRenderTargets() is called many times, for different purposes. If the injector properly detects that a call is for the VR views, then all things work fine. But if the injector accidentally classifies a call as being for a VR view when it is in fact for one of the other purposes, then you end up with issues such as the one described in this thread. These issues tend to be catastrophic, as they are very visible in the way they glitch. There is no universal solution for recognizing a VR view render pass from within an OMSetRenderTargets() call. What OpenXR Toolkit does is a relatively involved heuristic that queries some of the basic data available during OMSetRenderTargets(), such as the dimensions of the surface to render or its "format" (color type), all part of the D3D11_TEXTURE2D_DESC. Sadly this isn't enough to reliably detect that the engine is rendering the VR view.
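To make the fragility concrete, here is a minimal sketch of that kind of heuristic. The struct and values are stand-ins for illustration only (not the real D3D11_TEXTURE2D_DESC or the actual OpenXR Toolkit code): compare the texture bound in the OMSetRenderTargets() hook against the known VR swapchain properties, and guess.

```cpp
#include <cstdint>

// Hypothetical, simplified stand-in for D3D11_TEXTURE2D_DESC.
struct TextureDesc {
    uint32_t width;
    uint32_t height;
    uint32_t format;    // e.g. a DXGI_FORMAT value
    uint32_t arraySize; // 2 when using texture-array stereo rendering
};

// Guess whether this render pass is drawing a VR view by comparing against
// the swapchain. This can misfire both ways: an off-screen effect surface may
// happen to share the swapchain's size and format (false positive -> VRS is
// injected into the wrong pass -> visual glitches), or the engine may render
// the view at a scaled resolution (false negative -> no GPU savings).
bool LooksLikeVrView(const TextureDesc& candidate, const TextureDesc& swapchain) {
    return candidate.width == swapchain.width &&
           candidate.height == swapchain.height &&
           candidate.format == swapchain.format;
}
```

A full-resolution effect surface defeats this check, which is exactly the "catastrophic mis-classification" scenario described above.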
Also, fun fact: newer tech like Direct3D 12 and Vulkan does not support "introspection", which means there is no trivial way to even extract this information in constant time. Doing something like adding a visual marker and then looking for it at the end of the frame is also not possible, for two reasons: one, reading back GPU memory would kill performance, and two, it would be too late. And no, it isn't something that can be hard-coded somehow, because the order of the render passes in the engine changes often: it changes depending on which gfx settings you have enabled, which aircraft or scene, which segment of the game (menu, cockpit view, 3rd person view), and it also changes between versions of the game. Also, for dynamic foveated rendering, you must not only be able to detect that a render pass is for a VR view, you must also know whether it is for the left eye or the right eye. This alone adds another insane degree of complexity and makes mistakes in that detection even less forgiving. Bottom line: in order to reliably implement foveated rendering in an injector, you need to classify render passes as they happen on the GPU, which effectively requires knowledge of the future. This is not a trivial problem, and AFAICT today this problem of predicting the future is not solvable.
My proposal 2+ years ago was to have the game engine programmatically add a marker to the render targets that it uses for the VR view. Direct3D supports this via ID3D11DeviceChild::SetPrivateData, and it is very efficient, both in terms of effort (setting up this function call is less than 5 lines of code) and performance (there is no penalty if done properly). With such markers in place, it becomes trivial for OpenXR Toolkit (and other tools) to look for the marker when hooking OMSetRenderTargets(), and to know - without a doubt - whether the VRS commands need to be injected.
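The marker idea can be sketched like this. To keep the example self-contained, a tiny mock stands in for the D3D11 private-data store; in real code the engine would call ID3D11DeviceChild::SetPrivateData on its VR render target texture and the injector would call ID3D11DeviceChild::GetPrivateData inside its hook. The "VRViewMarker" name and the ViewTag values are made up for illustration; no such standard marker was ever agreed upon.

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Mock of ID3D11DeviceChild's private-data storage (GUID -> blob).
struct FakeDeviceChild {
    std::map<std::string, std::vector<uint8_t>> privateData;
    void SetPrivateData(const std::string& guid, const void* data, size_t size) {
        const uint8_t* p = static_cast<const uint8_t*>(data);
        privateData[guid] = std::vector<uint8_t>(p, p + size);
    }
    bool GetPrivateData(const std::string& guid, void* out, size_t size) const {
        auto it = privateData.find(guid);
        if (it == privateData.end() || it->second.size() != size) return false;
        std::memcpy(out, it->second.data(), size);
        return true;
    }
};

enum class ViewTag : uint32_t { LeftEye = 0, RightEye = 1 };

// Engine side: tag the VR render target once at creation time
// (this is essentially the "less than 5 lines of code").
void TagVrView(FakeDeviceChild& renderTarget, ViewTag tag) {
    renderTarget.SetPrivateData("VRViewMarker", &tag, sizeof(tag));
}

// Injector side: inside the OMSetRenderTargets hook, check for the marker.
// No heuristic, no guessing: the marker is either there or it isn't.
bool ShouldInjectVrs(const FakeDeviceChild& renderTarget, ViewTag& tagOut) {
    return renderTarget.GetPrivateData("VRViewMarker", &tagOut, sizeof(tagOut));
}
```

Note that the marker also solves the left-eye/right-eye problem for free, since the engine can encode which eye the target belongs to.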
I am one of the 3 leading experts on this topic (the only foveated rendering injectors that work semi-universally today are OpenXR Toolkit, vrperfkit, and Pimax Magic). I have probably spent more time than anyone else on solving these problems. It's too late now. None of the three tools mentioned above are in active development. The engine needs to add the marker, and then the tools also need modifications to look for the marker, something that isn't done today, since no such standard marker was ever agreed upon with the developers. Quad Views is not a solution that helps in all scenarios. Both VRS and Quad Views have pros and cons; one might help in a situation where the other doesn't. Today, if you do not have significant CPU headroom, Quad Views will not help you, while VRS on the other hand is almost free in terms of CPU usage. IL-2 suffers the same problems as listed above, and more. None of the 3 injectors work with IL-2 today, as they cause mysterious crashes. I spent significant time with a user on the IL-2 forum (firmidigli or something, sorry I blank on their name) troubleshooting why VRS causes the IL-2 engine to crash. We came up empty after weeks of investigation. There is something specific to what the IL-2 engine does that just does not work with VRS and causes random crashes. You cannot inject quad views from outside of the game engine. Quad views is not post-processing (which is how Reshade works). There are hundreds of places in every game engine where the engine assumes 2 views for rendering: in the geometry code, in the shaders, in the presentation code... I spent a significant amount of time working on quad views injection, and I could never make it work cleanly outside of basic sample code (worthless).
Every game where I somehow successfully managed to inject quad views (mostly Unity games, can't remember their names) had completely broken graphics effects, because quad views requires some precautions when implementing your engine. We brainstormed some ideas with other developers in the past (fholger, creator of vrperfkit), and the only approach that sounded remotely viable was dynamic shader recompilation or geometry shader injection; both approaches are incredibly complex and would likely represent weeks or months of work by an expert developer just to support 1 game, and would very likely still break many post-processing effects (aka wasting all this time). One of the other approaches I came up with was inspired by Luke Ross' alternate frame rendering, and consisted of "alternate views rendering", where each frame loop would alternate between views 1-2 and 3-4. However, this causes significant CPU overhead (unacceptably higher than what we see with DCS today, for example) and it breaks any temporal post-processing such as TAA or DLSS. I got this specific technique working in MSFS2020, and it was absolutely unusable both performance-wise and quality-wise.
  5. This isn't a config issue or a solvable problem. Foveated rendering via VRS (what OpenXR Toolkit does) cannot be supported reliably outside of the game engine. It is **impossible** for an external tool to properly "triage" and classify render passes to do foveated rendering that works in 100% of scenarios without engine support. What OpenXR Toolkit does (the "heuristic") is extremely fragile and can be broken by something as simple as using a different aircraft or enabling a gfx setting (my best guess for your situation is DLSS or another form of upscaling). The same exact thing happened in MSFS, and I fixed it a few times, but it became too much work. AFAIK the feature is now useless in MSFS. 2+ years ago I made a thread on this forum to explain how ED (and any game developer) could add 5 lines of code in their engine to resolve these problems and make "universal" foveated rendering injection a reality. These 5 lines would preface the beginning of a render pass with a "hint" that OpenXR Toolkit could detect, so it would know when/how to apply foveated rendering. Unfortunately that thread was ignored by the devs and led to many angry discussions, so I ended up deleting it. QVFR, while a better solution than VRS overall, does increase CPU usage, and that is probably why it isn't working as well for you.
  6. This is all explained here:
  7. Turbo mode is a gigantic hack around the OpenXR interface (meaning it purposely misuses OpenXR) that bypasses artificial frame timing limitations introduced by platform developers. It was originally introduced as a workaround for a bug on the now defunct WMR platform (HP Reverb), but it (accidentally) turned out to expose the same issue on nearly all platforms. Quest Link is one of the worst offenders, and Meta is purposely capping your performance. (Source: I am the guy who wrote Turbo mode.) Why is Quest purposely slowing down your game? I honestly don't know. What I know for sure is that Quest is not a platform for PCVR gaming, because Meta does not care about this scenario and will not solve such flagrant issues. It isn't the responsibility of app developers (DCS) to work around inherent deficiencies of the VR platforms that the platform vendor has refused to fix for >2 years. Tl;dr: don't count on Meta to give you a good PCVR experience.
  8. One could try today and look at the log file to confirm. The Quest user presence bug is some more Meta incompetence. I reported this specific problem to them. Again, they have no interest in PCVR.
  9. I replied in the other thread.
  10. Maybe they changed something after 2.9? It definitely did not work in 2023 and the beginning of 2024. DCS uses (used?) the assumption that if the XR_VARJO_quad_views extension was present, it should use quad views, while the proper check is to call xrEnumerateViewConfigurations(). It is impossible for an API layer to mask an extension. Here was my original report of the issue to ED: https://forum.dcs.world/topic/317990-openxr-toolkit-with-new-dcs-release-not-working/#findComment-5138959 This used to cause a significant headache for Varjo users (even before QVFR for other headsets was a thing), and a special piece of software had to be developed to mask the OpenXR extension in order to use OpenXR Toolkit.
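The difference between the two checks can be sketched as follows. To stay self-contained, strings stand in for the real OpenXR types; a real implementation would call xrEnumerateViewConfigurations() and look for XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO rather than compare strings.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Stand-in for XrViewConfigurationType.
using ViewConfigurationType = std::string;

// Wrong check (what DCS did): the XR_VARJO_quad_views extension being
// advertised only means the runtime *can* expose quad views. An API layer
// cannot mask the extension, so this test cannot be overridden externally.
bool UsesQuadViews_Wrong(const std::vector<std::string>& instanceExtensions) {
    return std::find(instanceExtensions.begin(), instanceExtensions.end(),
                     "XR_VARJO_quad_views") != instanceExtensions.end();
}

// Proper check: enumerate the view configurations the system actually
// supports (what xrEnumerateViewConfigurations() reports) and look for the
// quad views configuration there.
bool UsesQuadViews_Right(const std::vector<ViewConfigurationType>& viewConfigs) {
    return std::find(viewConfigs.begin(), viewConfigs.end(),
                     "PRIMARY_QUAD_VARJO") != viewConfigs.end();
}
```

The mismatch matters exactly when the extension is present but the system only reports a stereo view configuration, which is the Varjo-user headache described above.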
  11. None of my tools are supported. Supported means: I monitor the community for bugs and requests and take action on them. Many new apps do not work with OpenXR Toolkit, and I have no plans to fix that. But AFAICT there are no known issues with DCS at this time for either OpenXR Toolkit or QVFR. To be clear, Turbo Mode is a feature that bypasses poor frame management that nearly all vendors are victims of (except PimaxXR/VDXR, for obvious reasons). Things like Meta not delivering proper PCVR support for several years now, as they are focused exclusively on standalone and MR and do not care in the least about PCVR. I believe the Turbo Mode feature _should_ work through QVFR even if Quad Views is disabled via the DCS settings. As for the "unadvertise": it never worked with DCS, because DCS never actually followed the proper OpenXR usage for detecting quad views platform support.
  12. I'm not working on any code anymore. That project was nearly completed but will never be released.
  13. I've always been posting here and there. I'll update the documentation, but won't be working on any code.
  14. Thanks. What does the eye tracking option do?
  15. The guide is now a little out of date. Could someone do me a favor and send me a screenshot of the setting in DCS to enable QV? I will amend the wiki guide with it. Thanks!
  16. ......... No, you were not [using DCS with FR for months with your AMD 6900XT]. AMD doesn't support VRS (OpenXR Toolkit foveated rendering) with D3D11, which is what DCS uses. It never has and never will; this is a limitation of AMD's drivers. You must be confused out of your mind if you think it ever worked. Maybe you switched from Nvidia to AMD at some point? You are seeing the foveated rendering option in MSFS because you are using D3D12 in MSFS. https://mbucchia.github.io/OpenXR-Toolkit/#supported-graphics-cards
  17. There are two conditions for OpenXR Toolkit to show the FR option: - absence of the "foveated rendering killswitch" setting (which clearing the registry would have reset) - NvAPI (part of the Nvidia driver) initializing successfully and reporting that VRS is supported on your card. For the latter, I don't really know what can cause a failure. You said it still works in another game, which is even more confusing. The only other time someone had this problem, it was caused by their Nvidia driver. I think they did DDU etc. to cleanly reinstall it, but I believe they were seeing the problem in all games, not just one. Perhaps you replaced some DLLs in the sim and that interferes with NvAPI, so maybe try a clean reinstall of the sim (making sure to delete the game folder!).
  18. Many people don't do this step correctly, like forgetting to confirm the restore to defaults, or forgetting to leave safe mode. So I'd recommend trying that again. The alternative is to delete the registry keys by hand, both HKEY_LOCAL_MACHINE\Software\OpenXR_Toolkit and HKEY_CURRENT_USER\Software\OpenXR_Toolkit.
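For the manual route, the two keys can also be removed with a .reg file (the leading minus inside the brackets tells Registry Editor to delete the key); double-clicking this file has the same effect as deleting both keys by hand:

```
Windows Registry Editor Version 5.00

[-HKEY_LOCAL_MACHINE\Software\OpenXR_Toolkit]
[-HKEY_CURRENT_USER\Software\OpenXR_Toolkit]
```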
  19. This is the correct answer. The hold-up for better integration and streamlining of this tech is Meta and their OpenXR runtime, which doesn't support some of the most basic features on PC, like fovMutable or XR_EXT_eye_gaze_interaction. Meta has been slowly killing PCVR, and in 2024 they finally managed to destroy the OpenXR ecosystem and make sure that developers have no easy way to craft efficient and portable PCVR applications. Thanks Meta!
  20. BTW I was _not_ referring to your post as misinformation, but rather other posts I read elsewhere!
  21. I'm pretty sure that already exists. There is an IPD slider in Pimax Play, that should basically do the same thing as world scale. Give it a try.
  22. No display in the HMD when quad views is enabled in the runtime is almost always the sign of an incompatible API layer. If not OpenXR Toolkit, then it's another one; I think OXRMC and OBSMirror will also cause this. Double-check using the API layer tool from Fred Emmott.
  23. Yes. QVFR included built-in CAS, so there was no need for OXRTK. I believe the Pimax implementation is the same (looking at it last night, they are simply using my code). Might need to confirm with them.
  24. Here is a series of diagrams explaining things:
  25. See my explanation there as to why you are seeing the black screen: Please, don't listen to this. He doesn't understand the technical aspect of it. See my explanation above.