
Chamidorix

Members
  • Posts

    14
  • Joined

  • Last visited

  1. SteamVR SS scales linearly with pixel count, while in-game DCS PD is a per-axis multiplier, so its pixel cost scales quadratically. 1.5 PD = 225% Steam. 0.8 PD with 300% Steam is the same as 192% Steam, or between 1.3 and 1.4 PD. 0.5 PD with 500% Steam (which won't necessarily work, since SteamVR by default caps render targets at 4096 px, which is below 500% on the Reverb) would be the same as 125% Steam. Nevertheless, it would hardly surprise me if the recent WMR changes to its dynamic render buffer handling and viewport downsizing caused some new bug with non-SteamVR-induced render target increases.
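The conversion above can be sketched in a few lines; `pd_to_ss` and `effective_ss` are hypothetical helper names of my own, not anything from DCS or SteamVR:

```python
# DCS Pixel Density (PD) multiplies each axis, so pixel count scales with PD^2.
# SteamVR SS percentage scales pixel count linearly.

def pd_to_ss(pd: float) -> float:
    """SteamVR SS % equivalent to a given DCS PD."""
    return pd * pd * 100

def effective_ss(pd: float, steam_ss: float) -> float:
    """Combined SteamVR-equivalent SS % when both PD and Steam SS are applied."""
    return pd * pd * steam_ss

print(pd_to_ss(1.5))                      # 225.0 -> 1.5 PD == 225% Steam
print(round(effective_ss(0.8, 300), 2))   # 192.0 -> same as 192% Steam
print(round(effective_ss(0.5, 500), 2))   # 125.0 -> same as 125% Steam
```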
  2. You are going to have a fantastic time in flatscreen DCS, but please do not underestimate how brutal DCS VR is on the GPU. A 2080 Ti honestly doesn't cut it, even more so in 2.5.6 vs 2.5.5. I am a mega tweaker using every software trick imaginable on a heavily overclocked, water-cooled 2080 Ti and still can't consistently hold 45 fps to hit reprojection on my Reverb. So you can definitely try it out, and VR in general (outside of DCS) is awesome to get into right now, but be realistic: playing primarily in VR is going to be suboptimal, unless you spend extravagantly, until the Ampere/Hopper/7nm GeForce cards arrive in the fall, and even then it will probably still take a high-end card to get a good experience.
  3. You should buy whatever is the cheapest K-series flagship processor from the last 4 generations: Skylake (6700K), Kaby Lake (7700K), Coffee Lake (8700K), and Coffee Lake Refresh (9900K). They all have the exact same per-core architecture, minus a little bit of cache here and there. Point is, at the same GHz they will all have the exact same performance in DCS, since it is wholly single-thread bound. You can consistently get a 9900KS up to 5.5 GHz if you only care about gaming stability (not Prime95 24/7), and a 6700K up to 5 GHz under the same parameters. So across 4 generations you get about a 10% performance increase at most; don't feel afraid at all to buy an older CPU if you can get a deal. But the resale market for CPUs is generally bad, so it will probably just make the most sense to get a 9900K(S). Also, disable hyperthreading: it takes a non-negligible amount of resources away from the logical CPU running the bottlenecked DCS thread. You can also get about a 5% single-thread perf increase by disabling Spectre/Meltdown mitigations, either via InSpectre or Microsoft's official registry settings (InSpectre just sets those registry values).
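For reference, the registry values InSpectre toggles are the ones Microsoft documents for server mitigation control; a sketch of disabling them from an elevated prompt (apply at your own risk, reboot required, and re-enable by deleting the two values):

```shell
:: Disable Spectre v2 / Meltdown mitigations (Microsoft-documented values)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
  /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" ^
  /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f
:: Reboot for the change to take effect.
```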
  4. This is terrible advice. There are two causes of the ghosting omnipresent on the HP Reverb. #1 is unfixable: the 2160x2160 panels in the Reverb are bargain bin, and they do not consistently achieve the sub-1ms "low persistence" response time that they should. It is essential in VR for frames to be visible for only a fraction of a frame's render time, so that your eye tracking while rotating your head behaves how your brain expects; otherwise you perceive smearing, since you move your head but the image hasn't stayed in the same place. #2 is why you should leave pre-rendered frames at 1 if at all possible if you hate ghosting. WMR is always going to be late-stage rotationally reprojecting frames; this is different from the motion reprojection that drops you from 90 to 45 fps. When rotationally reprojecting, the WMR compositor has to predict where the frame should be as you rotate your head, and the more CPU frames you let queue up, the worse this prediction is going to be. Anyways, this all relates to the concept of rotational ghosting, and it's hard to explain (if you don't already understand it, I'm guessing my explanations didn't help at all). The best resource I've found was a video of a guy detailing how they fixed it in the Oculus prototype with low-persistence displays. I would greatly appreciate it if anyone who knows the video could link it, as it is extremely helpful in explaining why ghosting occurs in VR.
  5. Windows Mixed Reality creates 3 virtual 1080p monitors to handle loading non-UWP (Win32) desktop apps inside WMR applications. It's been this way since Windows 10 1903. I have complained incessantly for an option to disable this, since I use a Win32 app approximately 0.1% of the time in VR and the 3 virtual/headless monitors cause all sorts of problems with desktop compositing. Anyways, in typical Microsoft fashion, there is absolutely no way to disable it. I've messed with blocking the creation of arbitrary monitor EDIDs, but the WMR binary itself requires the monitors to be created on startup.
  6. This is the most stupid shit I have seen in months. SweViver is indeed somewhat of a Pimax shill, even setting aside his employment with them, but rolling this adversarial attitude onto a random forum dude who's just made a couple of VR-related posts is asinine, man. Regarding the video, it does contain a healthy amount of bullshit. My favorite was Subnautica, where he talks about how "surprised" he was that he could get it to run smoothly, when you can see he's literally downsampling the game below native res. Like, shit dude, it's not exactly rocket science that you can get any game to run CPU-capped and smooth if you downsample it into the ground. All of this is to say nothing of the fact that he's grabbing an incomplete mask of one eye and then expecting his viewers to believe this distortion-corrected mask, viewed on a desktop monitor, through YouTube's compression codecs and god knows what post-FX editing, is in any way representative of the perceived image quality while actually in the headset. Anyways, cool to see some actual numbers for DCS though.
  7. V-sync/triple buffering/G-Sync etc. settings are completely ignored by the WMR/OpenVR/Oculus pipeline. VR screens, in both software and hardware, function wholly differently from traditional monitors; this is why Oculus and OpenVR currently run the screens in exclusive "Direct" mode, completely segregated from the traditional desktop rendering pipeline. VR screens run at extremely low persistence (the screen flashes on for far less than 11 ms, then turns black), so the display must ALWAYS stay locked to the refresh rate for this low-persistence flicker to be perceived as uniform imagery by our brains. It is then the VR framework's job to ensure a rock-solid set of frames is delivered by dynamically reprojecting, duplicating, and dropping application-rendered frames to produce an exactly even 90 fps to the headset compositor. This leads into WMR's native frame compositor with regard to fpsVR frame times. WMR->SteamVR is an awkward situation where two separate, fully capable VR frameworks run at the same time, so a number of compositing duties are split between the two. Notably, WMR handles all distortion correction and, most importantly, all of the aforementioned frame operations to maintain a stable 90 fps. This can easily consume 1+ ms per rendered frame, and is indeed the reason fpsVR cannot be used for precise frame-time cutoff analysis. It is actually possible to get detailed frame-time information from the WMR compositor itself using https://docs.microsoft.com/en-us/windows/mixed-reality/using-the-windows-device-portal, but it is somewhat technical and completely useless in-game.
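A toy model of the pacing decision described above (the thresholds and function names are my own illustration, not WMR's actual logic):

```python
# At 90 Hz the compositor has ~11.1 ms per refresh. If the app can't finish
# inside that budget, motion reprojection halves the app rate to 45 fps and
# synthesizes every other frame; the display itself never leaves 90 Hz.

REFRESH_HZ = 90
FRAME_BUDGET_MS = 1000 / REFRESH_HZ  # ~11.11 ms

def compositor_mode(app_frame_ms: float) -> str:
    """Which regime the compositor falls into for a given app render time."""
    if app_frame_ms <= FRAME_BUDGET_MS:
        return "full rate: app renders 90 fps"
    if app_frame_ms <= 2 * FRAME_BUDGET_MS:
        return "motion reprojection: app at 45 fps, every other frame synthesized"
    return "dropping frames: app below 45 fps, compositor duplicates stale frames"

print(compositor_mode(9.5))
print(compositor_mode(14.0))
```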
  8. I'm just going to copy-paste my response from Reddit to the YouTube video a couple pages back, since I own a Reverb and I touched on driver-enabled MSAA (doesn't work) and the effect of virtual reality pre-rendered frames (fewer means less ghosting but higher minimum frame times). "I immediately doubt your credibility when you still somehow have not grasped the PD vs Steam SS difference, which LITERALLY gets repeated ad nauseam across the internet. A PD setting of 1.5 means make the resolution 1.5 times wider and 1.5 times taller, for 225% of the pixels. Steam SS 150% means 150% of the pixels. PD 2.0 is the same as Steam 400%. This gets spammed so much by people on the forums; I get the various boomers not grasping it, but I really don't get how you missed it. Anyways, you also most likely don't understand the full impact of upping pre-rendered frames, but that is more understandable. When you queue up 3 frames on the CPU, you are now making your headset software predict 3 x 1000/FPS milliseconds into the future, as opposed to 1 x (single-frame GPU render time). This is a huge difference and will make late-stage rotational reprojection look awful (tons of ghosting on fast-moving distant objects). Now, obviously this is a per-user subjective thing, as some will be bothered more by low-frame stutters than by ghosting and vice versa, so I certainly agree with trying 4 and seeing if zooming in on far-away targets or rolling over trees bothers you. I personally suggest using Nvidia Low Latency mode Ultra, as this uses average frame-time data to schedule CPU frame completion to coincide with the GPU frame render. This gives you the GPU throughput of 2 pre-rendered frames (the GPU is never waiting on the CPU, unlike with 1 pre-rendered frame) with the latency of 1 pre-rendered frame (equal to just the GPU render time of one frame). Finally, I don't think you understand the implications of claiming that you can enable driver-level MSAA in this D3D11-powered deferred-rendering game.
You flat out can't enable driver MSAA in D3D11, and you flat out can't design a driver-level multisample implementation for a deferred-rendering engine. After re-watching your video, I am confident all you are observing is transparency supersampling. While that setting does in fact work, it is pointless to enable: NVCP transparency supersampling merely ordered-grid supersamples the pixels that pass the alpha test, i.e. it costs just as much as full-scene supersampling per pixel. So by enabling it, you end up spending more GPU power per pixel on alpha textures than on the non-alpha-texture pixels, which is the opposite of efficient behavior. MSAA is generally so efficient vs full-scene supersampling in part because it culls the alpha-texture pixels from sampling rather than prioritizing them. Also, to elaborate, AA override or enhance does absolutely nothing, as in most deferred-rendering engines and ANYTHING D3D11+ or Vulkan. However, post-process FXAA works, and interestingly enough MFAA (Maxwell-gen AA interleaving) works with DCS 2x or 4x MSAA. This is the real driver trick if you require DCS MSAA; as long as you maintain decent frame rates, MFAA will interpolate 2x or 4x samples with motion vectors every other frame to essentially create 4x or 8x quality at half the cost."
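The prediction-horizon math from the quote, as a quick sketch (the function name is mine, not from any VR SDK):

```python
# With N frames queued on the CPU at a given fps, late-stage reprojection has
# to predict roughly N * (1000 / fps) ms ahead. With a 1-frame queue, the
# horizon shrinks to roughly just the GPU render time of a single frame.

def prediction_horizon_ms(queued_frames: int, fps: float) -> float:
    """Worst-case time the compositor must predict ahead of head motion."""
    return queued_frames * 1000.0 / fps

# 3 queued frames at 45 fps: predict ~66.7 ms ahead -- hence the ghosting
# on fast-moving distant objects.
print(round(prediction_horizon_ms(3, 45), 1))  # 66.7
```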
  9. EDIT: I actually utilized my reading comprehension and realized you specifically state in your list that the part must be commonly available "off the shelf". In that case, the Kingpin's bin + water-cooling-ready setup will certainly outpace any off-the-shelf Titan. I'm leaving my comment since I had fun writing it, however :) ORIGINAL: Just for fun, Aurelius, since I know you are involved in the enthusiast space: technically, money permitting, the best possible card would be a Titan RTX, no? Your list has the Kingpin 2080 Ti, and while it's true that there is no way to easily purchase a pre-binned full 102-400 die, I'm pretty sure the ~10% improved perf from the bigger die would in most cases outweigh the more marginal gains from a top-percentile 102-300 bin. (This is all assuming, of course, that you can comparably OC the Titan, via water cooling + BIOS flashing + voltage shunting etc.) Also, DCS VR on a high-res headset is actually a game that can outpace the 2080 Ti's 11 GB of VRAM pretty quickly, so there is a bit more to the value proposition with 24 GB, in addition to the otherwise outlandish ~10% improved core for ~100% increased price ;P I really wish multi-GPU had a larger market share and more support :(. I need to find my own computer science department to hack together an OpenVR/DX11/DCS patch, lol. Also, one of these days I'd love to pit one of these new Optanes against a full RAM-disk setup and do some serious frame-time analysis to determine what kind of low-frame improvement, if any, you get by sticking to a RAM disk. And then compare, of course, to a more lowly and mortal NVMe setup. Finally, I've stated this elsewhere already, but after trying the 8KX at CES, I'm fairly confident it will definitively settle the Index vs Reverb argument as the best possible DCS headset: the perfect tracking + IPD + sweet-spot size + comfort of the Index, the resolution density of the Reverb, and none of the distortion problems of previous Pimaxes while retaining the exclusive FOV.
The Rift S should remain the best option for no-configuration/non-enthusiasts, however.
  10. So, I tried the Pimax 8KX at CES and was able to exactly replicate my DCS graphics settings with NV Profile Inspector on their 9900KS + Titan rig (my home rig is a 7700K + 2080 Ti + HP Reverb). I found the experience extremely enjoyable; the big thing to note is that they pretty much completely fixed the distortion issues on the demo units. Whether this carries over to consumer units remains to be seen. Now, I said they had a Titan in their rig for fun, but please remember a Titan is only ~10% faster than a 2080 Ti when neither is overclocked. So please don't go thinking you need a $2500 card to run the 8KX :). Nvidia went all out on marketing at CES; there were plenty of VR booths with multiple NVLinked Quadro 8000 cards ($$$) provided for free by Nvidia. You can find my more in-depth notes + semi-review in the 5K vs Index thread:
  11. They are dual 2160 x 3840 panels, native. They accomplish this over a single DisplayPort 1.4 cable by capping the refresh rate at 75 Hz; plug the numbers into a bandwidth calculator and you can see they just barely have enough. You can quick-swap to a ~1440 x 2560 render via PiTool, which then upscales to the 4K panels and unlocks the refresh rate to 90 Hz. From my time at CES, it seems you can pretty seamlessly swap while a game is running to compare, but I certainly preferred less aliasing over +15 Hz, especially in DCS. Additionally, the refresh/render balance can be further finessed by discretely adjusting the FOV, also via PiTool. Overall, having never used a Pimax before, I was fairly impressed at CES by the extra parameters PiTool lets you configure. (They already have static foveated rendering that works on everything!) As far as a direct 8KX vs Reverb pixel-density comparison, that is quite tricky to answer definitively. As mentioned, the 8KX is 2 x 2160 x 3840, and the Reverb is two square lenses: 2 x 2160 x 2160. So the 8KX has substantially more horizontal pixels to compensate for the correspondingly substantial FOV increase, and intuitively you would expect them to have very similar density. It is hard to perform a hard calculation, since your eye's distance from the lens, in addition to your personal IPD's interaction with the lens arrangement, can significantly vary the actual perceived DPI. Anecdotally, I can say I was able to exactly replicate my graphics setup from home on the CES rig with Nvidia Profile Inspector (1.4 PD, 2x MSAA + MFAA enabled, -0.5 LOD bias) and, to the best of my perception, could not notice any difference in cockpit or outside-world clarity + aliasing. But, as mentioned, the clear, consistently colored screens were HUGELY noticeable, and the FOV was just such a good-feeling quality-of-life upgrade. Now that I've had a night to sleep on things, I find myself craving the 8KX hardcore while playing today on my Reverb.
I done messed up and tasted (currently) unobtainable glory and now I don't wanna go back xD Another important thing to consider is that the Pimax has hard mechanical IPD adjustment. My IPD is 61.5, and while the Reverb works fine with about half of the image at sweet-spot clarity, I had close to edge-to-edge sweet-spot clarity on the 8KX once the IPD was dialed in. The only headset I've had a better sweet spot on is the Index, since it remains the only headset with a dual-lens design + IPD AND eye-relief sliders for 100% sweet-spot attainability. I'm actually kinda bummed, because previously I was confident I could wait for Nvidia Ampere + HDMI 2.1 (maybe DP 2.0?) to bring about a new generation of headsets with increased cable bandwidth, given that the Reverb was almost maxing out current cable tech, but now I'm pretty sure I'm not going to be able to resist dropping a $1.5k load on this new hardware just because of how good the improvement felt xD But hey, I can dream that Nvidia surprise-drops Ampere next month, and Microsoft drops WMR 2.0 with 6-camera inside-out tracking, and Samsung releases a true 2x 8K-panel OLED headset copying the Index chassis adjustments for <$1000. I can dream.
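The "just barely enough" bandwidth claim above can be sanity-checked with a back-of-the-envelope calculation (blanking intervals and any link compression are ignored here, so real figures differ somewhat; the helper name is mine):

```python
# Raw active-pixel bandwidth for a dual-panel headset, in Gbit/s.
def raw_gbps(width: int, height: int, panels: int, hz: int, bpp: int = 24) -> float:
    return width * height * panels * hz * bpp / 1e9

# DisplayPort 1.4 HBR3 payload after 8b/10b encoding overhead.
DP14_PAYLOAD_GBPS = 25.92

# Dual 3840x2160 @ 75 Hz: ~29.86 Gbit/s of raw pixel data, right up against
# (nominally above) the DP 1.4 payload -- hence the 75 Hz cap and squeezed timings.
print(round(raw_gbps(3840, 2160, 2, 75), 2))

# The ~2560x1440 upscale mode @ 90 Hz: ~15.93 Gbit/s, fits with room to spare.
print(round(raw_gbps(2560, 1440, 2, 90), 2))
```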
  12. I've owned the Index, currently have the Reverb, and tried out multiple 8KXs today at CES. The Reverb is absolutely better for DCS than the Index, and honestly, the 8KX demo units they had blew the Reverb out of the water. Assuming they get the same QA as the demo units, the consumer 8KXs should absolutely be the best consumer headset on the market for ALL parameters, money not being a concern. The resolution density of the Reverb is the same as the 8KX, but the difference between having peripheral vision vs looking through a box like the Reverb is insane. Additionally, the screens were much nicer; I did not realize how annoying the mura was on the Reverb until trying a high-resolution headset without it. Add on top of that the near-perfect tracking of Lighthouse, and you have a headset that wins in all categories except price :) (BTW, the primary issues with Pimaxes, besides money, have been QA and panel distortion. The CES units had completely fixed the distortion, and I am going to wait until you can buy the 8KX on Amazon to hopefully alleviate QA/RMA issues.)
  13. Yet another bump. This tweak is essential for top-end VR graphics; immersion is so much better when you can use sun glare. I have to re-apply this edit every update.