

mbucchia
Members · 540 posts · 3 days won

Everything posted by mbucchia
-
Oasis Driver now in Steam Store for windows mixed reality
mbucchia replied to Kayos's topic in Virtual Reality
If you're on the SteamVR Beta (2.13), please opt out for now. There's an issue with it that will be resolved shortly.
PSVR2 - now works with Eye Tracking and Quad Views!
mbucchia replied to proxlamus's topic in Virtual Reality
Yes, the eye tracking and quad views foveated rendering will work fine on AMD.
Oasis Driver now in Steam Store for windows mixed reality
mbucchia replied to Kayos's topic in Virtual Reality
https://steamcommunity.com/app/3824490/discussions/0/592906686979854790/
PSVR2 - now works with Eye Tracking and Quad Views!
mbucchia replied to proxlamus's topic in Virtual Reality
This is the VR modding scene: that article was outdated barely an hour after it was published. We move really fast.
PSVR2 - now works with Eye Tracking and Quad Views!
mbucchia replied to proxlamus's topic in Virtual Reality
This isn't a real warning.
I personally used a Command hook strip for ages.
-
Dynamic Foveated Rendering - Everything in one page
mbucchia replied to mbucchia's topic in Virtual Reality
I don't follow your math. See the log file for the megapixel math (it's computed there), but that megapixel figure is irrelevant to quality anyway, since what matters is pixel density. QVFR tells the game to preserve 1:1 pixel density in the focus region (well, with your multiplier, 1.1x pixel density). We confirmed that in your log. If the game doesn't render at this density, then it's a game bug; someone would need to do a deeper investigation to confirm. Edit: can you also look at your VD overlay? It should say something like resolution = 110%
Dynamic Foveated Rendering - Everything in one page
mbucchia replied to mbucchia's topic in Virtual Reality
The log just repeats the content of your config, so there's likely a user error here. For the rest, you can do the math. With your focus size set to 0.25 on both axes, the resolution of the focus region will be 1/4 of your stereo resolution (Godlike): 3072 / 4 = 768. Because you are using a 1.1 multiplier, we also need to factor that in: 768 × 1.1 = 844, which is the resolution we see advertised to the game (in the log file). Now for the actual render resolution with DLSS: at Quality, the input factor is 0.67, so stereo is 3072 × 0.67 = 2058 and the QVFR focus region is 844 × 0.67 = 565. That's assuming the game actually follows the same math; one would need to enable DLSS logging to check that.
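The arithmetic above can be sketched in a few lines. This is a toy recomputation using the post's numbers (3072px Godlike stereo, 0.25 focus size, 1.1 multiplier, 0.67 DLSS Quality factor); the helper names are illustrative, not QVFR's actual code:

```python
# Toy recomputation of the QVFR focus-region math from the post.

def focus_resolution(stereo_px, focus_size, multiplier):
    """Resolution QVFR advertises for the focus region."""
    return int(stereo_px * focus_size * multiplier)

def dlss_input(px, dlss_factor):
    """Resolution DLSS actually renders at before upscaling."""
    return int(px * dlss_factor)

stereo = 3072
focus = focus_resolution(stereo, 0.25, 1.1)  # 3072 / 4 = 768, then * 1.1
print(focus)                                 # 844 (as seen in the log)
print(dlss_input(stereo, 0.67))              # 2058 (stereo at DLSS Quality)
print(dlss_input(focus, 0.67))               # 565 (focus at DLSS Quality)
```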
Dynamic Foveated Rendering - Everything in one page
mbucchia replied to mbucchia's topic in Virtual Reality
The math QVFR does is to make the final stereo image match the resolution that the focus region (inner view) would have if it covered the full FOV. So with the focus multiplier set to 1.0, the final stereo image is 1:1 the same resolution you have without QVFR, and the pixels in the inner view are 1:1 the same density. I would try comparing without DLSS, because I don't know what the engine does when it has to do quad DLSS instead of stereo DLSS. It's quite possible that the "1:1 resolution" is thrown away by the engine (which isn't something I have any control over, and would be an engine bug IMO). You could post the log file so we can compare the resolutions QVFR sets; this won't tell us what resolution DLSS uses, though.
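As a sketch of that invariant (illustrative names, not QVFR's code): the composed stereo image is sized as if the focus region's pixel density extended to the full FOV, so a 1.0 multiplier reproduces the non-QVFR resolution exactly:

```python
# Toy illustration: composed stereo size as a function of the focus multiplier.

def composed_stereo_width(base_width, focus_multiplier):
    # Focus density = focus_multiplier x native density, extended to full FOV.
    return int(base_width * focus_multiplier)

print(composed_stereo_width(3072, 1.0))  # 3072 -> identical to no-QVFR
print(composed_stereo_width(3072, 1.1))  # 3379 -> 10% denser everywhere
```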
There is no support for OpenXR Toolkit foveated rendering on DX11 with AMD GPUs. It simply does not exist. I literally wrote the OpenXR Toolkit program and I know how it works. To use VRS on D3D11, you must use the NVAPI library (also known as NVidia VRWorks); here is the code in OpenXR Toolkit doing that: https://github.com/mbucchia/OpenXR-Toolkit/blob/6b9ecb69a4b2dc714b14a86407868af315d02531/XR_APILAYER_MBUCCHIA_toolkit/vrs.cpp#L1171 As the name indicates, NVAPI (or NVidia VRWorks) is exclusive to NVidia and does not work on AMD. https://developer.nvidia.com/vrworks > Variable Rate Shading (VRS) is an easy to implement rendering technique enabled by Turing GPUs. Turing = NVidia. I'm not going to have another back and forth on this. The previous user ultimately realized that they were not speaking of DCS; instead they had used the feature in another game, a DX12 game, with their AMD card. You're probably in the same boat. Or you are thinking of "Quad Views foveated rendering", which has nothing to do with OpenXR Toolkit and is a completely different feature (one that does work on AMD with DX11).
-
The Quest Pro was the most uncomfortable headset I've ever owned, until I bought the Globular Cluster mod. I was having the same issue you've described: excessive head pressure, which caused insta-headaches. With the Globular Cluster replacement components, that issue completely went away for me.
-
Pico headsets: Pico Connect built-in experimental OpenXR runtime
mbucchia replied to mrsylvestre's topic in Virtual Reality
It's OK, more OpenXR is good, more adoption is good. My code is open source to allow that, but it would be really nice if companies using open source took the time to follow the directions of the license.
-
There was an additional workaround for Meta's bug that went into the Quad Views tool. It was never back-ported to OpenXR Toolkit. There is no solution for using only Turbo in Quad Views.
-
Pico headsets: Pico Connect built-in experimental OpenXR runtime
mbucchia replied to mrsylvestre's topic in Virtual Reality
It is 100% confirmed to be a fork of VDXR. It's very sad that they are not giving attribution to the original code (which is technically copyright infringement per the MIT license of the original code). Edit: I should also add, it is based on a version of VDXR from 2023, so I'm not entirely sure they're even maintaining it. Otherwise, why not tell people it is here? Why not pick a newer version that has more features and fixes?
-
DCS just doesn't want to start in VR : XRresult
mbucchia replied to Mac D's topic in Virtual Reality
This error occurs when you do not have an active OpenXR runtime. Make sure that in Pimax Play you click "Set as Runtime"; this should do it. Also, remove OpenXR Toolkit until you get this working without it. If neither works, search for "vcredist" on Google, then download and install the latest version.
-
The issue with OpenXR Toolkit Turbo when removing the headset is 100% root-caused: the Quest Link OpenXR runtime violates the OpenXR standard. In other words, it's a Meta bug. I reported it directly to their engineers over a year ago (maybe even 18 months, IIRC) and they won't bother to fix it (it's an extremely simple bug fix).
-
Dynamic Foveated Rendering - Everything in one page
mbucchia replied to mbucchia's topic in Virtual Reality
Check the order of the API layers with Fred's tool; I recall XRNS needed to be in a specific order. I have zero clue what he is talking about: ED resolved (or greatly improved) that issue. I can't remember when, but it wasn't that long ago. I have no hand in DCS rendering or the engine. Make sure you also disable the couple of options listed in the QVFR guide.
Is DCS VR somehow violating Steam VR API rules?
mbucchia replied to RealDCSpilot's topic in Virtual Reality
Hey, I don't have a clear answer for you. You also say "using SteamVR" or "violating SteamVR API", which is ambiguous since you could mean via OpenVR or OpenXR. I haven't looked specifically at this problem, especially since, TBH, I never use SteamVR and nearly all my solutions were dedicated to "skipping SteamVR". That said, based on what you are describing, I believe you are likely correct.

You are likely familiar with the concept of a "swap file", which is a way to relieve your main RAM by temporarily moving resources to your storage drive. There is a similar concept for GPU memory (VRAM), called residency. Your GPU needs resources (textures, geometry, shaders...) to be in VRAM in order to use them for rendering. But sometimes your GPU driver will move some resources out of VRAM and into your system RAM if the application is "over-committing" (using more memory than the allowable budget). Then, when the GPU needs a resource that was "demoted" to system RAM, it has to first make room for it (by demoting another resource) and then copy it back into VRAM (aka "swapping", like your "swap file" for system RAM). This process is slow. In some worst-case scenarios, a resource might "bounce" back and forth between VRAM and system RAM, and this creates an awful effect where your app renders at something like 1 FPS. I've seen this exact issue with No Man's Sky on PCVR, where the frame time would jump to 100-140ms because some of the VR compositor resources would get demoted to system RAM. There is a solution to such a problem, which is to mark the most critical resources as permanently resident in VRAM (explained on the page I linked), to avoid them being demoted. This was actually a mitigation I had implemented in SteamVR for WMR. Note that, as explained on the page I linked about residency, there is something called the application VRAM budget. That is the maximum that an application should ever use, and it is NOT your entire VRAM.

There's something called system reservation, which is Windows protecting some of the VRAM for things like the desktop window manager and your monitor's backbuffers. Many people have asked me this question about OpenXR Toolkit: "why is your overlay saying I have 10GB of VRAM while my GPU has 12GB!?" The answer is that those 2GB are the system reservation. If the application ever goes above the application VRAM budget and into the system reservation, bad things happen, as described above.

Now, the case of VR and memory management is actually quite complex, for a simple reason. When you develop an app that runs on a flat screen, you don't have to worry about anything other than the system reservation. Your application has its GPU resources (textures etc.) and loads them into VRAM, and the only additional overhead is what's called the DXGI swapchain, aka the surface that the application uses to present the rendered image to the monitor. That DXGI swapchain is allocated within the application's memory budget, so it's easy to account for. When you develop an app for VR, there is a lot more happening. You have VR swapchains as well, which also come out of the application's VRAM budget, and they are much bigger (stereo and triple-buffered). But you also have the entire VR compositor's resources. These typically live in other processes, like "Oculus Service" (Quest Link), DWM (WMR) or VRServer (SteamVR). These are not part of the application's VRAM budget (since they are allocated by another process). So it becomes much harder for an application to track these additional resources and not over-commit the VRAM. An application will usually try to maximize its use of VRAM, especially if you are using very high settings. This means some apps will try to reserve the entirety of the application's memory budget, usually when starting the game or loading a new level.

For games that allow you to switch between flatscreen and VR on the fly (at the press of a button), this is also a very complex scenario; MSFS got it wrong for a very long time. Basically, you have to "unreserve" sufficient memory for the VR compositor. And again, as an application, you do not know how much VRAM the VR compositor needs...

Then you have SteamVR overlays coming in. These are the absolute nightmare for a game developer conscious about maximizing VRAM usage. These overlays can be loaded AFTER your game started and reserved memory. Or sometimes the full extent of their resources is only allocated upon first use, or upon a specific use (eg: which "menu" you select within your overlay). So if a game has reserved all the VRAM, you are now over-committing it, and risking the back-and-forth copying of data between system RAM and VRAM.

Now, you have a way of checking whether this is what's happening, through the Task Manager. You'll see two numbers in your GPU performance pane: "GPU Memory" and "Shared GPU Memory". Assuming you are not on an integrated GPU (!), "GPU Memory" is your VRAM and "Shared GPU Memory" is your system RAM, which can be used when over-committing. In the ideal case, "Shared GPU Memory" should be 0, though in reality there is always a small residual. You want to get your baseline before starting your game: how much "Shared GPU Memory" is used? On my machine, it's always something like 0.1GB or 0.2GB. That's HEALTHY. Then run your game; hopefully "Shared GPU Memory" remains at the same value. Then, when you are hitting your overlay issue, take a look at "Shared GPU Memory" again. Is it higher than your baseline by a notable amount? For example, on my machine, I see it go above 0.3GB when over-committing, and it's usually noticeable, like 0.5GB or 1GB or more... That's over-committing, and that's what causes bad performance. There is no real solution for you to this problem.

It's just poor programming, but technically, it's not anybody's fault. DCS shouldn't under-use VRAM just to accommodate overlays that maybe nobody will use. Your overlay needs resources, and it can't do anything to evict data from DCS. There is no good solution. One thing your overlay developer could do is temporarily boost the residency priority of the overlay's resources while the overlay is open. That will still cause a slowdown when you first open the overlay, but at least the overlay should be more responsive. DCS in the background will probably appear less fluid, since resources used by DCS will now need to be copied back and forth to VRAM. Then the overlay must restore the lower residency priority when you close it. I don't think many applications do that; the API for it is not known by many programmers, and also, as noted in the API's documentation: "Changing the priorities of resources should be done carefully. The wrong eviction priorities could be a detriment to performance rather than an improvement." It's a loaded gun that shouldn't be put into the wrong developer's hands :(
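A toy model of the over-commit behavior described above (numbers and names are illustrative; real drivers use priority-based eviction, not this simple spill-over rule):

```python
# Toy model: once committed resources exceed the application VRAM budget,
# the surplus spills into shared (system) memory, which is what a climbing
# "Shared GPU Memory" figure in Task Manager indicates.

def shared_memory_used(committed_gb, vram_gb, system_reserve_gb):
    budget = vram_gb - system_reserve_gb        # application VRAM budget
    return round(max(0.0, committed_gb - budget), 2)  # spill into system RAM

# 12 GB card with a 2 GB system reservation -> 10 GB application budget.
print(shared_memory_used(9.5, 12, 2))   # 0.0  (healthy, everything resident)
print(shared_memory_used(10.8, 12, 2))  # 0.8  (over-committed: swapping risk)
```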
Moving away from Reverb G2 - Is SteamVR required?
mbucchia replied to Hoggorm's topic in Virtual Reality
You are all misunderstanding what OpenXR is/means. OpenXR doesn't mean "without SteamVR", and there are many headsets out there that only support OpenXR through SteamVR. Examples: Valve Index, HTC Vive (almost all of them), Bigscreen Beyond... On these headsets, you cannot run 99% of applications outside of SteamVR. You haven't specified which HTC Vive headset you are using, but there is a very high chance it only supports OpenXR through SteamVR. The only 2 HTC headsets supported by HTC's "non-SteamVR OpenXR runtime" (sometimes called a "native OpenXR runtime", though that's actually terribly misleading vocabulary) are the HTC Vive Cosmos (non-Elite) and the HTC Vive Focus 3, to my knowledge. Edit: I remembered there is one more scenario. I believe you can use the HTC Vive XR Elite with Virtual Desktop, in which case you can use VDXR (Virtual Desktop's non-SteamVR OpenXR implementation).
With VRS, you need to capture (and identify) **all** the passes to gain performance. There are dozens per eye, sometimes two or three dozen. The number (and ordering) of the passes can depend on dozens of factors, such as game settings, which aircraft, which map... It's not a novel idea to do the counting; it just doesn't work reliably unless you only care about supporting 1 version of 1 game with 1 set of settings on 1 map with 1 aircraft. Or creating (manually) an exponential number of "counting" heuristics for each individual combination. And for every major engine update, you will need to recalibrate all of them, since the developer might add, remove or reorder passes. Things like OpenXR Toolkit don't bother with the counting, for these reasons. Instead, they look for other hints. One example is OpenXR Toolkit looking for an OpenXR swapchain image being committed (via hooking the corresponding API), since that likely happens at the end of rendering for one of the views, and engines most likely draw the left view before the right view. A similar approach is to look for the clearing of a depth buffer, since it most likely indicates the beginning of a new view. Tracking RTV bindings is easiest, but it is not sufficient. The resolution of the render target itself doesn't matter; for VRS you need the viewport. Depending on how the engine does it, setting the RTV might happen before setting the viewport, or vice versa; you need to handle both. As I explained before, the issue with IL-2 is **very specific to the use of VRS**. The issue isn't related to injection or anything else. The game does something that breaks NvAPI VRS assumptions. I doubt your 3dmigoto exercises any of the paths relevant to that issue.
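As an illustration only, here is a toy version of that hints-based heuristic, with the D3D/OpenXR event stream reduced to a list of strings (real implementations such as OpenXR Toolkit hook the actual API calls; all names here are mine):

```python
# Toy heuristic: a depth-buffer clear likely starts a new view, engines
# typically draw left before right, and a swapchain commit ends the frame.

def classify_passes(events):
    labels, current = [], None
    for ev in events:
        if ev == "clear_depth":
            # First depth clear -> left view, second -> right view.
            current = "left" if current is None else "right"
        elif ev == "draw":
            labels.append(current or "unknown")  # draws before any clear
        elif ev == "commit_swapchain":
            current = None                       # frame done, reset
    return labels

frame = ["draw", "clear_depth", "draw", "draw", "clear_depth", "draw",
         "commit_swapchain"]
print(classify_passes(frame))  # ['unknown', 'left', 'left', 'right']
```

Even this toy shows the fragility the post describes: an offscreen pass that clears depth, or a right-first engine, silently breaks the classification.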
-
Thanks for the details. However, it isn't clear whether this really gives you the information that you need. It is able to hook into the right D3D calls, but have you tested whether it can **reliably** provide information that distinguishes rendering of the VR left/right views from offscreen rendering etc.? Because that is the difficult part (the one that requires a heuristic to predict the future). Until you can evaluate whether you are given **reliable** access to the view information, there is no real advantage to this over using OpenXR Toolkit or vrperfkit as a starting point. Validating whether you can use these hooks is tricky. In OpenXR Toolkit, there is a (hidden) developer mode with a feature called "vrs_debug" that allows you to capture a frame and test the heuristic. It creates a similar log and also takes a screenshot of every screen pass along with its left/right/both classification. This is when you truly see how difficult it is to distinguish the VR views. With DCS, I recall there were approximately 80 passes (it depends on the version, the aircraft, the settings, the current viewpoint...), and the heuristic **must be right 100%** of the time when classifying these render passes, otherwise badness happens. You could try the following to assess how good or bad a job Reshade is doing. Assuming you can inject a stencil mask for ANY pass (which I doubt, given that stenciling and the depth buffer are shared, so it probably only works for passes that already use depth, which isn't all of them), you can create a stencil for the left eye that covers, say, 25% of the screen on the left, and a stencil for the right eye that covers 25% of the screen on the right. When that stencil is applied, and if and only if the heuristic to detect left/right views is correct, the outcome will be a view cropped 25% on both sides. Any misclassification will cause an obvious glitch where the left part of the right view (or vice versa) will be blocked. But that is only half of it.

You also need to check that the heuristic doesn't accidentally reject some of the VR views, and therefore leave some performance on the table. In OpenXR Toolkit, there is a developer overlay with a value "VRS RTV", which is usually a good indicator. That value should be the total number of passes identified as VR views, eg the ~80 I mentioned above. It is much trickier to evaluate what this number should be; ideally the developer would use a tool like RenderDoc to capture a frame and count how many passes they see, then compare that number with how many passes the heuristic classified as VR views. Assuming that heuristic is good (and to be honest, I highly doubt it is, because as mentioned, this is an extremely complex problem without a solution today), you can use the hooks to inject VRS commands. I'm not gonna go into the details here. There is a project here that I never released but was meant to be a clean, standalone VRS injector (though it lacks any heuristic): https://github.com/mbucchia/VRSInjector/blob/vr/InjectorDll/vrs_d3d11.cpp Back to your OG point: if your goal is to make this work in IL-2, I would not waste my time on it. There is something very special about IL-2 that no one has figured out. You can try all 3 injectors with IL-2, and all 3 will eventually crash for inexplicable reasons. You can look up the OpenXR Toolkit source code to figure out how to "unblock" IL-2; I think enabling developer mode unblocks it. You can also look up the PimaxMagic4All tool I wrote here, which is the closest I thought I got: https://forum.il2sturmovik.com/topic/85619-dfr-support/#findComment-1283925, however it was ultimately proven that the game still crashed with VRS.
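That validation step can be sketched as comparing the heuristic's per-pass labels against a ground truth (e.g. from a RenderDoc capture). Both numbers matter: misclassifications cause glitches, while missed VR views (a low "VRS RTV"-style count) leave performance on the table. Names are mine, not OpenXR Toolkit's:

```python
# Toy evaluation of a pass-classification heuristic against ground truth.

def evaluate(heuristic, ground_truth):
    wrong = sum(h != g for h, g in zip(heuristic, ground_truth))
    vr_views = sum(h in ("left", "right") for h in heuristic)  # "VRS RTV"-like
    return wrong, vr_views

truth     = ["left", "left", "offscreen", "right", "right"]
predicted = ["left", "left", "left",      "right", "offscreen"]
print(evaluate(predicted, truth))  # (2, 4): 2 misclassified, 4 tagged as VR
```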
-
Just to be clear on how drastically different this is: Reshade being a post-processing injector, it operates after the game engine has finished its work. So whatever time your GPU has spent rendering is already behind you, and Reshade is never going to boost your performance. Reshade hooks into the "presentation layer", meaning the API that the game uses to deliver a frame to the device (a monitor or a VR headset). This hook is significantly easier to do, because Reshade is operating on the finished product of the game's rendering. It's all packaged and labeled: left, right, depth, etc. There is no need to do any prediction; Reshade has all the information it needs, because the rendering has already happened. Reshade doesn't have to make a guess and risk being wrong. (I'm not downplaying the complexity or how great Reshade is, but let's say that Reshade's added value is in its scripting stack and interface, not in how it hooks into the game.) For something like VRS, the magic must happen **during** the rendering. At that time, the injector doesn't know what is being done; it just knows "something is being rendered". It's not all packaged and labeled. The injector has to guess whether it's left, right, or something completely different. That guess is extremely difficult without knowing the future, and knowing the past doesn't really help. And if the guess is incorrect, the consequences are catastrophic (bad visual glitches). The only entity that knows for sure what is happening is the game engine. Without the game engine giving the injector a hint of what it is doing, the odds that the injector will guess incorrectly are not close to 0. The best the injector can do is mitigate risk, finding the right balance between making bold guesses and making only safe guesses. None of that is even remotely applicable to the much simpler situation that Reshade has to deal with.
-
It's not from me, and it doesn't really make sense. The guy added support for FOV overrides above 100%, which creates a distorted image. I don't recommend this build, since it's likely not digitally signed and will break anti-cheats. As I've explained, VRS is not post-processing; Reshade (a post-processing injector) will not help you. Saving GPU cycles must happen while you are rendering; the work cannot be undone magically at the very end (post-processing).