Posts posted by mbucchia

  1. 2 hours ago, najafsen said:

    I have RX7900 GRE and I had it working. I never had any other graphics card. I set open xr foveated setting so drastic that only small portion of the screen was high quality (very high quality) and the rest was very low quality. This way I got high frame rate and very high quality center of view. But downside was sudden transition from very high quality view to very blurry view. This transition was so visible that no one can tell me I had never had it working. Now all are gone and I can't bring it back. I don't know if DCS update, Steam VR update or AMD graphic card update caused this. My headset is Pico 3.

    There is no support for OpenXR Toolkit foveated rendering on DX11 with AMD GPUs. It simply does not exist.

    I literally wrote the OpenXR Toolkit program and I know how it works.

    To use VRS on D3D11, you must use the NVAPI library (also known as NVidia VRWorks), here is the code in OpenXR Toolkit doing that:

    https://github.com/mbucchia/OpenXR-Toolkit/blob/6b9ecb69a4b2dc714b14a86407868af315d02531/XR_APILAYER_MBUCCHIA_toolkit/vrs.cpp#L1171

    As the name indicates, NVAPI (NVidia VRWorks) is exclusive to NVidia and does not work on AMD.

    https://developer.nvidia.com/vrworks

    Variable Rate Shading (VRS) is an easy to implement rendering technique enabled by Turing GPUs.

    Turing = NVidia.

    I'm not going to have another back and forth on this. The previous user ultimately realized that they were not speaking of DCS; they had used the feature in another game, a DX12 game, with their AMD card. You're probably in the same boat.

    Or you are thinking of "Quad Views foveated rendering", which has nothing to do with OpenXR Toolkit and is a completely different feature (one that does work on AMD with DX11).

    • Like 1
    • Thanks 2
  2. 3 hours ago, TED said:

    After getting my new Quest Pro, so far really enjoying it except for the comfort issue after playing for a while. Especially noticeable on the forehead with a lot of the pressure building up there. Does anyone have any good recommendations for improving the comfort for longer sessions?

    The Quest Pro was the most uncomfortable headset I've ever owned, until I bought the Globular Cluster mod. I was having the same issue you've described, excessive head pressure which caused insta-headaches. With the Globular Cluster replacement components, that issue completely went away for me.

    • Like 1
    • Thanks 1
  3. 3 hours ago, Roosterfeet said:

    That's a shame. My Quest 3 runs DCS a lot better with Turbo mode enabled. Quad Views doesn't crash when I turn on Turbo mode with it, but I prefer not to use any foveation with my non-eye-tracking headset. Anyone have a solution to enable Turbo alone in Quad Views?

    There was an additional workaround to Meta's bug that went in the Quad Views tool. It was never back-ported to OpenXR Toolkit. There is no solution to use only Turbo in Quad Views.

  4. 5 hours ago, mrsylvestre said:

    It is not clear at this stage if this runtime is an original effort by Pico or if they borrowed code from another project

    It is 100% confirmed to be a fork of VDXR. It's very sad that they are not giving attribution to the original code (which is technically a copyright infringement per the MIT license of the original code).

     

    Edit: I should also add, it is based on a version of VDXR from 2023, so I'm not entirely sure they're even maintaining it? Otherwise why not tell people it is here? Why not pick a newer version that has more features and fixes?

  5. This error occurs when you do not have an active OpenXR runtime. Make sure you click "Set as Runtime" in Pimax Play; that should do it.

     

    Also, remove OpenXR Toolkit until you get this working without it. 

     

    If neither works, search "vcredist" on Google, then download and install the latest version.

    • Like 1
    • Thanks 1
  6. The issue with OpenXR Toolkit Turbo when removing the headset has been 100% root-caused to the Quest Link OpenXR runtime violating the OpenXR standard. In other words, it's a Meta bug. I reported it directly to their engineers over a year ago (maybe even 18 months, IIRC) and they won't bother to fix it (an extremely simple bug fix).

    • Like 1
  7. On 3/29/2025 at 1:55 PM, Qcumber said:

    Thanks. I see you are using a Varjo headset. Maybe it's a Meta issue. I guess I will stick with the regular neck and back exercises! 😉

    Check the order of the API layers with Fred's tool. I recall XRNS needed to be in a specific order.

    On 4/1/2025 at 12:41 AM, TimSell75 said:

    During the recent MRtV Event with the Crystal Super it was mentioned that @mbucchia was able to fix/erase the visible Box when using Quad Views. Can somebody confirm this? Because I still see it.

    At around Minute 28 in this video

     

     

    I have zero clue what he is talking about. ED resolved (or greatly improved) that issue; I can't remember when, but it wasn't so long ago. I have no hand in DCS's rendering or engine.

    Make sure you also disable the couple of options listed in the QVFR guide.

  8. Hey,

    I don't have a clear answer for you. You also say "using SteamVR" or "violating SteamVR API", which is ambiguous since you could mean via OpenVR or OpenXR.

    I haven't looked specifically at this problem, especially since TBH, I never use SteamVR and nearly all my solutions were dedicated to "skipping SteamVR" 🙂

    That said, based on what you are describing, I believe you are likely correct:

    On 2/25/2025 at 10:40 PM, RealDCSpilot said:

    My suspect at the moment is only the excessive use of VRAM that DCS is infamous for. It might interfere heavily with SteamVR's own ressource management for dashboard functionalities (access to Windows desktop etc).

    You are likely familiar with the concept of a "swap file", which is a way to relieve your main RAM by temporarily moving resources to your storage drive. There is a similar concept for GPU memory (VRAM), called residency. Your GPU needs resources (textures, geometry, shaders...) to be in VRAM in order to use them for rendering. But sometimes your GPU driver will move some resources out of VRAM and into your system RAM if the application is "over-committing" (using more memory than the allowable budget). Then, when the GPU needs a resource that was "demoted" to system RAM, it first has to make room for it (by demoting another resource) and then copy it back into VRAM (aka "swapping", like your "swap file" for system RAM). This process is slow. In some worst-case scenarios, a resource might "bounce" back and forth between VRAM and system RAM, and this creates an awful effect where your app is rendering at like 1 FPS 😄.

    I've seen this exact issue with No Man's Sky on PCVR, where the frame time would jump to 100-140ms because some of the VR compositor resources would get demoted to system RAM. There is a solution to such a problem, which is to mark the most critical resources as permanently resident in VRAM (explained on the page I linked), to avoid them being demoted. This was actually a mitigation I had implemented in SteamVR for WMR.

    Note that as explained on the page I linked about residency, there is something called the application VRAM budget. That is the maximum that an application should ever use. This amount of budget is NOT your entire VRAM. There's something called system reservation, which is Windows protecting some of the VRAM for things like the desktop window manager, and your monitor's backbuffers. Many people asked me this question about OpenXR Toolkit, "why is your overlay saying I have 10GB of VRAM while my GPU has 12GB!?". The answer is those 2GB are the system reservation. If the application ever goes above the application VRAM budget and into the system reservation, bad things happen as described above.
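    The budget arithmetic above can be reduced to a small sketch. The function names and numbers below are illustrative, not any real API (on Windows, the real figures would come from the driver/OS, e.g. DXGI's QueryVideoMemoryInfo):

```python
# Illustrative bookkeeping for the VRAM budget described above.
# Function names and numbers are invented; real figures come from the
# driver/OS, not from arithmetic like this.

def application_budget(total_vram_gb, system_reservation_gb):
    """What an application should never exceed: total VRAM minus the
    system reservation (desktop window manager, monitor backbuffers...)."""
    return total_vram_gb - system_reservation_gb

def is_overcommitting(app_usage_gb, total_vram_gb, system_reservation_gb):
    """Over-committing: the app has pushed into the system reservation,
    so the driver starts demoting resources to system RAM."""
    return app_usage_gb > application_budget(total_vram_gb, system_reservation_gb)

# The "10GB reported on a 12GB card" question from the text:
print(application_budget(12, 2))        # → 10
print(is_overcommitting(10.5, 12, 2))   # → True: demotion/swapping begins
```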

    Now the case of VR and memory management is actually quite complex, for a simple reason. When you develop an app that runs on a flat screen, you don't have to worry about anything other than the system reservation. Your application has its GPU resources (textures etc.) and loads them into VRAM, and the only additional overhead is what's called the DXGI swapchain, aka the surface that the application uses to present the rendered image to the monitor. That DXGI swapchain is allocated within the application's memory budget, so it's easy to account for. When you develop an app for VR, there is a lot more happening. You have VR swapchains as well, and these also come out of the application's VRAM budget, and they are much bigger (stereo and triple-buffered). But you also have the entire VR compositor's resources. These typically live in other processes, like "Oculus Service" (Quest Link) or DWM (WMR) or VRServer (SteamVR). These are not part of the application's VRAM budget (since they are allocated by another process). So it becomes much harder for an application to track these additional resources and not over-commit the VRAM. An application will usually try to maximize its use of VRAM, especially if you are using very high settings. This means some apps will try to reserve the entirety of the application's memory budget. They will usually do that when starting the game or loading a new level. For games that allow you to switch between flatscreen and VR on the fly (press of a button), this is also a very complex scenario; MSFS got it wrong for a very long time. Basically you have to "unreserve" sufficient memory back for the VR compositor. And again, as an application, you do not know how much VRAM the VR compositor needs...

    Then you have SteamVR overlays coming in. These are the absolute nightmare for a game developer conscientious about maximizing VRAM usage. These overlays can be loaded AFTER your game started and reserved memory. Or sometimes the full extent of their resources is only allocated upon first use, or a specific use (e.g. which "menu" you select within your overlay). So if a game had reserved all the VRAM, you are now over-committing VRAM, and risking the back-and-forth copying of data between system RAM and VRAM.

    Now you have a way of checking if this is what's happening, through the Task Manager. You'll see two numbers in your GPU performance pane. A "GPU Memory" and a "Shared GPU Memory". Assuming you are not on an integrated GPU (!), you will see "GPU Memory" as your VRAM, and "Shared GPU Memory" as your system RAM, which can be used when over-committing. In the ideal case, "Shared GPU Memory" should be 0, though in reality there is always a small residual. You want to get your baseline before starting your game. How much "Shared GPU Memory" is used? On my machine, it's always something like 0.1GB or 0.2GB. That's HEALTHY. Then run your game, hopefully "Shared GPU Memory" remains the same value. Then when you are hitting your overlay issue, take a look at "Shared GPU Memory" again. Is it higher than your baseline by a notable difference? For example, on my machine, I see it go above 0.3GB when overcommitting, and it's usually noticeable like 0.5GB or 1GB or more... That's over-committing, and that's what causes bad performance. 
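    The Task Manager check above boils down to comparing against a baseline. A minimal sketch (the 0.2GB slack is an invented threshold, not a hard rule):

```python
# Minimal model of the "Shared GPU Memory" check described above.
# The default slack value is illustrative only.

def shared_memory_verdict(baseline_gb, current_gb, slack_gb=0.2):
    """Healthy if 'Shared GPU Memory' stays near the pre-game baseline;
    a clear rise suggests resources were demoted to system RAM."""
    return "healthy" if current_gb <= baseline_gb + slack_gb else "over-committing"

print(shared_memory_verdict(0.1, 0.2))   # → healthy
print(shared_memory_verdict(0.1, 1.0))   # → over-committing
```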

    There is no real solution for you to this problem. It's just poor programming, but technically, it's not anybody's fault. DCS shouldn't under-use VRAM just to allow overlays that maybe nobody will use. Your overlay needs resources, and it can't do anything to evict data from DCS. There is no good solution.

    One thing your overlay developer could do is temporarily boost the residency priority of its resources while the overlay is open. That will still cause a slowdown when you first open the overlay, but at least the overlay should be more responsive. DCS in the background will probably appear less fluid, since resources used by DCS will now need to be copied back and forth between VRAM and system RAM. Then the overlay must restore the lower residency priority when it is closed. I don't think many applications do that; the API for it is not known by many programmers, and also, as noted in the API's documentation: "Changing the priorities of resources should be done carefully. The wrong eviction priorities could be a detriment to performance rather than an improvement." It's a loaded gun that shouldn't be put into the wrong developer's hands :(

    • Like 4
    • Thanks 4
  9. You are all misunderstanding what OpenXR is/means. 

    OpenXR doesn't mean "without SteamVR", and there are many headsets out there that only support OpenXR through SteamVR. Examples: Valve Index, HTC Vive (almost all of them), Bigscreen Beyond... on these headsets, you cannot run 99% of the applications outside of SteamVR.

    You haven't specified which HTC Vive headset you are using, however there is a very high chance it only supports OpenXR through SteamVR. The only 2 HTC headsets supported by HTC's "non-SteamVR OpenXR runtime" (sometimes called "native OpenXR runtime"  though it's actually terribly misleading vocabulary) are the HTC Vive Cosmos (non-Elite) and HTC Vive Focus 3 to my knowledge.

    Edit: I remembered there is one more scenario: I believe you can use the HTC Vive XR Elite with Virtual Desktop, in which case you can use VDXR (Virtual Desktop's non-SteamVR OpenXR implementation).

    • Like 3
  10. 12 hours ago, lefuneste01 said:

    I had to identify left/right inner/outer for quad views, because I set up a mask for labels. Currently I'm doing it by trapping a PS shader dedicated to global illumination. As it is called once per QV view (or eye), I just have to count the calls to know what is drawn

    With VRS, you need to capture (and identify) **all** the passes to gain performance. There are dozens (or two dozens, or three dozens) per eye. The number (and ordering) of each pass might depend on dozens of factors such as game settings, which aircraft, which map...

    It's not a novel idea to do the counting. It just doesn't work reliably unless you only care about supporting 1 version of 1 game with 1 set of settings on 1 map with 1 aircraft. Or creating (manually) an exponential number of "counting" heuristics for each individual combination. And for every major engine update, you will need to recalibrate all of them since the developer might add, remove or reorder the passes.

    Tools like OpenXR Toolkit don't bother with the counting, for these reasons. Instead they look for other hints. One example is OpenXR Toolkit looking for an OpenXR swapchain image being committed (via hooking the corresponding API), since that likely happens at the end of rendering for one of the views, and most engines draw the left view before the right view. A similar approach is to look for the clearing of a depth buffer, since that most likely indicates the beginning of a new view.
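    A toy simulation of that hint-based approach (the event names and the left-before-right assumption are mine for illustration, not OpenXR Toolkit's actual code):

```python
# Toy classifier: a depth clear likely starts a new view, a swapchain
# commit likely ends one, and we assume the left view is drawn first.
# All event names are invented.

def classify_passes(events):
    """events: list of (kind, pass_id) with kind in
    {"clear_depth", "draw", "commit_swapchain"}.
    Returns {pass_id: "left" | "right" | "other"}."""
    labels = {}
    current = None
    upcoming = "left"                     # assumption: left view first
    for kind, pass_id in events:
        if kind == "clear_depth":         # hint: beginning of a new view
            current, upcoming = upcoming, "right"
        elif kind == "commit_swapchain":  # hint: a view finished rendering
            current = None
        elif kind == "draw":
            labels[pass_id] = current or "other"
    return labels

frame = [("draw", 0),                     # off-screen pass before any view
         ("clear_depth", None), ("draw", 1), ("commit_swapchain", None),
         ("clear_depth", None), ("draw", 2), ("commit_swapchain", None)]
print(classify_passes(frame))   # → {0: 'other', 1: 'left', 2: 'right'}
```

Real engines emit dozens of passes per frame with no such clean pattern, which is exactly why this class of heuristic is fragile.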

    12 hours ago, lefuneste01 said:

    how can I identify  ’pass’? Is it by render target binding (I can have their resolution)

    The RTV binding is easiest, but it is not sufficient. The resolution of the RT itself doesn't matter; for VRS you need the viewport. Depending on how the engine does it, setting the RTV might happen before setting the viewport, or vice-versa. You need to handle both.

    12 hours ago, lefuneste01 said:

    I’m confident for IL2, as I have my VREM mod based on 3dmigoto working for years and reshade

    As I explained before, the issue with IL2 is **very specific to the use of VRS**. The issue isn't related to injection or anything else. The game does something that breaks NvAPI VRS assumptions. I doubt your 3dmigoto exercises any of the paths relevant to that issue.

  11. Thanks for the details. However, it isn't clear whether this really gives you the information that you need. It is able to hook into the right D3D calls, but have you tested whether it can **reliably** provide information that distinguishes rendering of the VR left/right views from offscreen rendering etc.? Because that is the difficult part (the one that requires a heuristic to predict the future).

    Until you can evaluate whether you are given **reliable** access to the view information, there is no real advantage to this vs using OpenXR Toolkit or vrperfkit as a starting point.

    Validating whether you can use these hooks is tricky. In OpenXR Toolkit, there is a (hidden) developer mode with a feature called "vrs_debug" that allows you to capture a frame and test the heuristic. It creates a similar log and also takes a screenshot of every screen pass along with its left/right/both classification.

    This is when you truly see how difficult it is to distinguish the VR view. With DCS, I recall there were approximately 80 passes (of course it depends on the version, the aircraft, the settings, the current viewpoint...), and the heuristic **must be right** 100% of the time when classifying these render passes, otherwise badness happens.

    You could try the following to assess how good/bad of a job Reshade is doing.

    Assuming you can inject a stencil mask for ANY pass (which I doubt given that stenciling and depth buffer are shared, so it probably only works for passes that already use depth, which isn't all of them) you can create a stencil for left eye that covers say 25% of the screen on the left and a stencil for right eye that covers 25% of the screen on the right. When that stencil is applied and if and only if the heuristic to detect left/right view is correct, then the outcome will be a cropped view 25% on both sides. Any "mis-classification" will cause an obvious glitch where the left part of the right view (or vice-versa) will be blocked.

    But that is only half of it. You also need to check that the heuristic doesn't accidentally reject some of the VR views, and therefore leave some performance on the table. In OpenXR Toolkit there is a developer overlay with a value "VRS RTV", which is usually a good indicator. That value should be the total number of passes identified as VR views, e.g. the 80 I mentioned above. It is much trickier to evaluate what this number should be; ideally the developer would use a tool like RenderDoc to capture a frame and count how many passes they see, then compare this number with how many passes the heuristic classified as VR views.

    Assuming that heuristic is good (and to be honest, I highly doubt it is, because as mentioned this is an extremely complex problem without a solution today), you can use the hooks to inject VRS commands. I'm not gonna go into the details here. There is a project here that I never released but was meant to be a clean, standalone VRS injector (though it lacks any heuristic): https://github.com/mbucchia/VRSInjector/blob/vr/InjectorDll/vrs_d3d11.cpp

     

    Back to your OG point: if your goal is to make this work in IL-2, I would not waste my time on it. There is something very special about IL-2 that no one has figured out.

    You can try all 3 injectors with IL-2, and all 3 will eventually crash for inexplicable reasons. You can look up the OpenXR Toolkit source code to figure out how to "unblock" IL-2. I think enabling developer mode unblocks it.

    You can also look up the PimaxMagic4All tool I wrote here, which is the closest I thought I got: https://forum.il2sturmovik.com/topic/85619-dfr-support/#findComment-1283925, however it was ultimately proven that the game still crashed with VRS.

  12. 25 minutes ago, mbucchia said:

    As I've explained, VRS is not post-processing. Reshade (a post-processing injector) will not help you.

    Just to be clear on how drastically different this is.

    Reshade being a post-processing injector, it operates after the game engine finished its work. So whatever time your GPU has spent rendering, it's already behind, and Reshade is never going to boost your performance. 

    Reshade hooks into the "presentation layer", meaning the API that the game uses to deliver a frame to the device. This device can be a monitor or a VR headset. This hook is significantly easier to do, because Reshade is operating on the finished product of the game's rendering. It's all packaged and labeled: left, right, depth, etc. There is no need to do any prediction; Reshade has all the information it needs, because the rendering has already happened. Reshade doesn't have to make a guess, and risk being wrong.

    (I'm not reducing the complexity and how great Reshade is, but let's say that Reshade's value added is in its scripting stack and interface, not how it hooks into the game)

    For something like VRS, the magic that VRS triggers must happen **during** the rendering. At this time, the injector doesn't know what is being done. The injector just knows "something is being rendered". It's not all packaged and labeled. The injector has to make a guess whether it's left, right, or something completely different. That guess is extremely difficult without knowing the future, and knowing the past doesn't really help. And if that guess is incorrect, the consequences are catastrophic (bad visual glitches). The only entity that knows for sure what is happening is the game engine. Without the game engine giving the injector a hint of what it is doing, the odds that the injector is going to guess incorrectly are not close to 0. The best the injector can do is mitigate risks, finding the right balance between making bold guesses and only making safe guesses. None of that is even remotely applicable to the much simpler situation that Reshade has to deal with.

    • Like 1
  13. 53 minutes ago, skywalker22 said:

    Do you know anything about OpenXR Toolkit v1.3.3? There are only 2 DLL files inside the zip, and their names are very strange.

    Is it official? Strange thing is, you don't have it on the official website.

    It's not from me, and it doesn't really make sense. The guy added support for FOV override >100%, which creates a distorted image. I don't recommend this build since it's likely not digitally signed and will break anti-cheats.

     

    43 minutes ago, lefuneste01 said:

    What should be feasible to have VRS in this config?

    As I've explained, VRS is not post-processing. Reshade (a post-processing injector) will not help you. Saving GPU cycles must happen while you are rendering. The work cannot be undone magically at the very end (post-processing).

    • Like 1
  14. 20 hours ago, skywalker22 said:

    Which 5 lines of code?

    The #1 challenge **by far** in any foveated rendering injection (built outside of game engine) is to identify at what time to inject the VRS commands during the rendering. This is the issue that all of the 3 available solutions (OpenXR Toolkit, vrperfkit, and Pimax Magic) are struggling with.

    Currently, what these 3 tools do is hook into Direct3D calls, specifically ID3D11DeviceContext::OMSetRenderTargets, which is invoked before the engine begins to draw "something". The problem is that this "something" can be one of many things: it _can_ be the view to be rendered in your VR headset (*ding ding ding* that is the one you want to inject the VRS commands for), or it can be something else, like an off-screen surface used for render-to-texture (very common for HUDs or instruments), or a menu, or a miscellaneous surface used for a specific graphics effect (*bzzzzzt* no, you absolutely do not want to inject VRS commands for those). During rendering of a frame, OMSetRenderTargets() is called many times, for different purposes. If the injector properly detects that a call is for the VR views, then all things work fine. But if the injector accidentally classifies a call as a VR view when it is in fact one of the other purposes, then you end up with issues, such as the one described in this thread. These issues tend to be catastrophic, as they are very visible in the way they glitch.

    There is no universal solution for recognizing a VR view render pass from within an OMSetRenderTargets() call. What OpenXR Toolkit does is a relatively involved heuristic that involves querying some of the base data available during OMSetRenderTargets(), such as the dimensions of the surface to render or the "format" (color type), all part of the D3D11_TEXTURE2D_DESC. Sadly this isn't enough to reliably detect that the engine is rendering the VR view. Also, fun fact: newer tech like Direct3D 12 or Vulkan does not support "introspection", which means there is no trivial way to even extract this information in constant time. Doing something like adding a visual marker and then looking for it later at the end of the frame is also not possible, for two reasons: one, it would kill performance to read back GPU memory, and two, it would be too late. And no, it isn't something that can be hard-coded somehow, because the order of the render passes in the engine changes often; it changes depending on what gfx settings you have enabled, which aircraft or scene, which segment of the game (menu, cockpit view, 3rd person view), and it also changes between versions of the game. Also, for dynamic foveated rendering, you must be able to not only detect that a render pass is for a VR view, but also know whether it is for the left eye or the right eye. This alone adds another insane degree of complexity and makes mistakes in that detection even less forgiving.

    Bottom line: in order to reliably implement foveated rendering in an injector, you need to classify render passes as they happen on the GPU, which effectively requires knowledge of the future. This is not a trivial problem, and AFAICT today this problem of predicting the future, is not solvable 😄

     

    My proposal 2+ years ago was to have the game engine programmatically add a marker to the render targets that it uses for the VR view. Direct3D supports this via ID3D11DeviceChild::SetPrivateData, and it is very efficient to do, both in terms of effort (setting up this function call is less than 5 lines of code) and performance (there is no penalty to this if done properly). By providing such markers, it is now trivial for OpenXR Toolkit (and other tools) to look for the marker when hooking OMSetRenderTargets(), and to know - without an ounce of doubt - whether the VRS commands need to be injected.
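    A toy model of that marker proposal (the marker key and helper names are invented; the real mechanism would be ID3D11DeviceChild::SetPrivateData with an agreed-upon GUID):

```python
# Toy model of the marker idea: the engine tags its VR render targets
# once, and the injector's OMSetRenderTargets hook simply looks the tag
# up. The dict stands in for SetPrivateData/GetPrivateData storage.

VR_VIEW_MARKER = "vr-view"   # stands in for an agreed-upon GUID

class RenderTarget:
    def __init__(self):
        self.private_data = {}

def engine_tag_vr_target(rt, eye):
    # The engine-side change: a handful of lines at resource creation.
    rt.private_data[VR_VIEW_MARKER] = eye

def hooked_om_set_render_targets(rt):
    """Injector side: no heuristic, just check for the marker."""
    eye = rt.private_data.get(VR_VIEW_MARKER)
    return f"inject VRS ({eye})" if eye else "leave untouched"

left, offscreen = RenderTarget(), RenderTarget()
engine_tag_vr_target(left, "left")
print(hooked_om_set_render_targets(left))       # → inject VRS (left)
print(hooked_om_set_render_targets(offscreen))  # → leave untouched
```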

     

    20 hours ago, skywalker22 said:

    How come you are so sure only those 5 lines would do the difference?

    I am one of the 3 leading experts on this topic (the only foveated rendering injectors that work semi-universally today are OpenXR Toolkit, vrperfkit, and Pimax Magic). I probably have spent more time than anyone else on solving these problems.

     

    20 hours ago, skywalker22 said:

    ps: Maybe they will listen to you now.

    It's too late now. None of the three tools mentioned above are in active development. The engine needs to add the marker, and then the tools also need modifications to look for the marker, something that isn't done today, since no such standard marker was agreed upon with the developers.

     

    16 hours ago, lefuneste01 said:

    think it's worthless to do it for DCS as you already provided the needed tools for Varjo and other HMD

    Quad Views is not a solution that helps in all scenarios. Both VRS and Quad Views have pros and cons, one might help in a situation where the other doesn't help. Today if you do not have significant CPU headroom, Quad Views will not help you, while VRS on the other hand is almost free in terms of CPU usage.

     

    16 hours ago, lefuneste01 said:

    but I'm wondering what could be done for IL2 GB with reshade addon...

    IL-2 suffers the same problems as listed above, and more. None of the 3 injectors work today with IL-2, as they cause mysterious crashes. I spent significant time with a user on the IL-2 forum (firmidigli or something, sorry, I blank on their name) to troubleshoot why VRS causes the IL-2 engine to crash. We came up empty after weeks of investigation. There is something specific to what the IL-2 engine does that is just not working with VRS and causes random crashes.

     

    16 hours ago, lefuneste01 said:

    But I always had in mind it will not be feasible to force the engine to do 4 rendering instead of 2.

    You cannot inject quad views outside of the game engine. Quad views is not post-processing (which is how Reshade works). There are hundreds of places in every game engine where the engine assumes 2 views for rendering: in the geometry code, in the shaders, in the presentation code... I spent a significant amount of time working on quad views injection, and I could never make it work cleanly outside of basic sample code (worthless). Every game where I somehow successfully managed to inject quad views (mostly Unity games, can't remember their names) had completely broken graphics effects, because quad views requires some precautions when implementing your engine.

    We brainstormed some ideas with other developers in the past (fholger, creator of vrperfkit), and the only approaches that sounded remotely viable were dynamic shader recompilation or geometry shader injection. Both are incredibly complex and would likely represent weeks/months of work by an expert developer just to support 1 game, and would very likely still break many post-processing effects (aka wasting all this time).

    One of the other approaches I came up with was inspired by Luke Ross' alternate frame rendering, and consisted of "alternate views rendering" where each frame loop would alternate between view 1-2 and 3-4. However this causes significant CPU overhead (unacceptably higher than what we see with DCS today for example) and it breaks any temporal post-processing such as TAA or DLSS. I got this specific technique working in MSFS2020, and it was absolutely unusable both performance-wise and quality-wise.

    • Like 2
    • Thanks 2
  15. This isn't a config issue or a solvable problem. Foveated rendering via VRS (what OpenXR Toolkit does) cannot be supported reliably outside of the game engine.

    It is **impossible** for an external tool to properly "triage" and classify render passes to do foveated rendering that works in 100% of the scenarios without engine support.

    What OpenXR Toolkit does (the "heuristic") is extremely fragile and can be broken by something as simple as "using a different aircraft" or "enabling a gfx setting" (best guess for your situation is perhaps DLSS or other form of upscaling). Same exact thing happened in MSFS, and I fixed it a few times, but it became too much work. AFAIK the feature is now useless in MSFS.

    2+ years ago I made a thread on this forum to explain how ED (and any game developer) could add 5 lines of code in their engine to resolve these problems and make "universal" foveated rendering injection a reality. These 5 lines would preface the beginning of a render pass with a "hint" that OpenXR Toolkit could detect and know when/how to apply foveated rendering.

    Unfortunately that thread was ignored by the devs, and led to many angry discussions so I ended up deleting it.

    QVFR, while a better solution than VRS overall, does increase CPU usage, and that is probably why it isn't working as well for you.

    • Like 5
  16. 10 hours ago, Bounti30 said:
    Question for experienced users
    I use a mid-range PC with the quest 2.
    For me OpenXR toolkit is essential with turbo mode.
    Why such a difference? how does this mode work and can't it be natively integrated into DCS?

    Turbo mode is a gigantic hack around the OpenXR interface (meaning it purposely misuses OpenXR) that bypasses artificial frame timing limitations introduced by platform developers. It was originally introduced as a workaround for a bug on the now-defunct WMR platform (HP Reverb), but it (accidentally) turned out to expose the same issue on nearly all platforms. Quest Link is one of the worst offenders, and Meta is purposely capping your performance.

    (source: I am the guy who wrote Turbo mode)
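    As a rough model of what that artificial frame timing costs (numbers invented; the real mechanics live inside each runtime's frame-wait call):

```python
# Invented numbers: a frame that renders in 9 ms but is held for an
# extra 4 ms by the runtime's frame-timing wait. Skipping that wait
# ("turbo") leaves throughput limited by render time alone.

def effective_fps(render_ms, runtime_wait_ms, turbo):
    frame_ms = render_ms + (0.0 if turbo else runtime_wait_ms)
    return 1000.0 / frame_ms

print(round(effective_fps(9.0, 4.0, turbo=False)))  # → 77
print(round(effective_fps(9.0, 4.0, turbo=True)))   # → 111
```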

     

    Why is Quest purposely slowing down your game? I honestly don't know. What I know for sure is that Quest is not a platform for PCVR gaming, because Meta does not care about this scenario and will not solve such flagrant issues.

    It isn't the responsibility of app developers (DCS) to work around inherent deficiencies of the VR platforms that the platform vendor refuses to fix for >2 years.

     

    Tl;dr: don't count on Meta to give you a good PCVR experience. 

    • Like 3
    • Thanks 4
  17. 3 hours ago, slughead said:

    I haven't used QVFR for a while now. When I did, I only used it for turbo mode, not the QVFR, because you had put a workaround in your QVFR layer to stop DCS crashing when Quest headsets were removed. You didn't push that fix (workaround) into the OpenXR Toolkit. I think I stopped using QVFR during the first half of 2023.

    One could try today and look at the log file to confirm.

    The Quest user presence bug is some more Meta incompetence. I reported this specific problem to them. Again, they have no interest in PCVR.

    • Like 1
    • Thanks 1
  18. 8 hours ago, slughead said:

    Unadvertise for DCS always worked for me. 🤷🏻‍♂️

    Maybe they changed something after 2.9? It definitely did not work in 2023 and beginning of 2024.

    DCS uses (used?) the assumption that if the XR_VARJO_quad_views extension was present, it should use quad views, while the proper check is to use xrEnumerateViewConfigurations(). It is impossible for an API layer to mask an extension.
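    The difference between the two checks can be sketched like this (the runtime data is hypothetical; the string constants mirror real OpenXR identifiers, and the real enumeration call is xrEnumerateViewConfigurations):

```python
# Hypothetical runtime data illustrating extension-presence vs. actually
# enumerating view configurations.

def quad_views_wrong(extensions):
    # The flawed check: extension advertised => assume quad views.
    return "XR_VARJO_quad_views" in extensions

def quad_views_right(view_configurations):
    # The proper check: is the quad view configuration actually enumerated?
    return "XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO" in view_configurations

# The extension is advertised (and an API layer cannot mask it), but
# only the stereo configuration is enumerated:
exts = ["XR_VARJO_quad_views", "XR_KHR_D3D11_enable"]
configs = ["XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO"]
print(quad_views_wrong(exts))     # → True (false positive)
print(quad_views_right(configs))  # → False
```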

     

    Here was my original report of the issue to ED:

    https://forum.dcs.world/topic/317990-openxr-toolkit-with-new-dcs-release-not-working/#findComment-5138959

    This used to cause a significant headache for Varjo users (even before QVFR for other headsets was a thing) and a special software had to be developed to mask the OpenXR extension in order to use OpenXR Toolkit. 

    • Like 1
  19. None of my tools are supported. Supported means I monitor the community for bugs and requests and take action on them.

    Many new apps do not work with OpenXR Toolkit and I have no plans to fix it.

    But AFAICT there are no known issues with DCS at this time for either OpenXR Toolkit or QVFR.

    To be clear, Turbo Mode is a feature that bypasses the poor frame management that nearly all vendors are guilty of (except PimaxXR/VDXR, for obvious reasons). Things like Meta not delivering proper PCVR support for several years now, as they are focused exclusively on standalone and MR and do not care in the least about PCVR.

    I believe the Turbo Mode feature _should_ work through QVFR even if Quad Views is disabled via the DCS settings. As for the "unadvertise" it never worked with DCS because DCS never actually followed the proper OpenXR usage for detecting quad views platform support.

    • Like 1
    • Thanks 2