I was hoping you could run a test for me to see if my process is worthwhile or if it’s just a placebo.
I’m running MSFS 2024 with an RTX 3090 and HP Reverb G2. While the headset’s native resolution is 2160 X 2160 per eye, the recommended OpenXR render resolution is 3184 X 3100 pixels to account for barrel distortion, etc.
I’m consistently reading dissatisfaction with DLSS because it makes cockpit digital displays blurry. I know there are recommendations for changing secondary scaling values in the config files + DLSS Tweaks utilities, etc. I’m looking to see if there are other options.
My thinking (and correct me if I’m wrong) is DLSS works best if it has enough source data before it up-scales. If the source data is lacking (e.g. not enough details in the source information), then the outcome is blurrier / less detailed.
According to MSFS 2024 FPS display, when I run DLSS in Quality mode, the actual render source resolution is 2123 X 2067 pixels which is significantly less than the native resolution of my HMD.
Through trial and error, I increased my OpenXR resolution setting to 108%. For MSFS 2024, the resulting render source was 2,216 X 2,157 pixels. At 109% I would have cleared the 2,160-pixel target (instead of falling 3 pixels short at 2,157), but I figured the difference in pixel count was small enough not to warrant the extra performance loss.
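In case anyone wants to check their own numbers, here’s a quick sketch of the arithmetic, assuming the standard per-axis DLSS factors (Quality = 2/3, Balanced = 0.58, Performance = 0.50). The game may round the reported internal resolution a pixel or two differently.

```python
import math
from fractions import Fraction

# Standard per-axis DLSS render-scale factors (exact fractions to avoid
# floating-point surprises when we round).
DLSS_SCALE = {
    "quality": Fraction(2, 3),
    "balanced": Fraction(58, 100),
    "performance": Fraction(1, 2),
}

def internal_res(target_px: int, mode: str) -> int:
    """Per-axis resolution DLSS actually renders before upscaling."""
    return round(target_px * DLSS_SCALE[mode])

def target_needed(native_px: int, mode: str) -> int:
    """Smallest per-axis target so the DLSS internal render >= native."""
    return math.ceil(native_px / DLSS_SCALE[mode])

# Reverb G2: 3184-wide OpenXR target, DLSS Quality mode
print(internal_res(3184, "quality"))   # 2123 -- matches the FPS overlay
# Target width needed so Quality mode renders at least the native 2160
print(target_needed(2160, "quality"))  # 3240
```

So to guarantee the internal render never drops below the panel’s native pixels in Quality mode, the supersampled target needs to be at least 3240 per axis.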
Everything seems clearer to me (including the digital displays). Yes, I have to tone down some settings to compensate for performance loss, but it’s not a dramatic change.
I was hoping someone else could test this out to see if it’s worthwhile on their setups / other HMD solutions. An even better DLSS Tweaks solution might be to force the final render source resolution to at least 2160 X 2160 pixels (plus the extra pixels that DLSS scales up).
Congrats, you figured out on your own what many already know and actively recommend.
DLSS in VR heavily supersampled is great. Many swear by DLSS Performance and then supersample it really high, but I like to use at least Balanced and have settled on a custom DLSS setting halfway between Balanced and Quality. I upscale that to about 5300x5xxx, as I have a 2880p HMD. Just make sure you are getting 45fps locked and you’re all set
I did more testing, and adding supersampling before DLSS gets involved does indeed make a very positive difference for me. The clarity is completely worth it.
Similar to yourself, I did try doubling up the SS before my latest result, but the performance cost was too high: my 2160 X 2160 became about 4,320 X 4,320, combined with DLSS Performance mode (not Quality mode), and that was too great a hit. Even adding SS before DLSS has a performance cost.
I’ll report back with more findings if I learn something new.
I’m still experimenting. Based on your message, it sounds like you are using DLSS Tweaks to get a custom resolution. What rendering resolution did you settle on? Or more precisely, what percentage of the HMD’s native resolution (not including barrel distortion compensation, etc.) worked best for you?
I find that all scaling has a significant performance cost so I’m curious to hear where you are making the biggest trade-offs. Thanks in advance!
I used DLSS Tweaks to set a scaling factor of about 0.63, which is halfway between Balanced and Quality.
Then I use OpenXR Toolkit to push the resolution to 5300x(whatever), a bit less than twice the native (physical) panel resolution of 2880x2880. This only works on a 4090, though!
In most scenarios I’m interested in that gives me 45fps locked. If not I drop down to Balanced.
I’m thinking from the point of view of visual fidelity: what is the minimum you need to render to avoid losing detail?
Using your example:
63% of 5,300 pixels is 3,339 pixels. The ideal target for 2,880 native pixels without any up-scaling is 4,032 pixels (+40% for barrel distortion, etc.).
So, with DLSS, you are rendering an extra 459 X 459 above native, or about 40% of the extra pixels (4,032 total render pixels minus 2,880 native display pixels = 1,152 pixels for barrel distortion, etc.) you would normally have to render through other means.
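To make this easy to re-run for other HMDs and scale factors, here’s the same pixel budget as a tiny Python sketch. Note the 40% barrel-distortion margin is the figure quoted in this thread, not an official number, and rounding may differ by a pixel or two from the figures above.

```python
native = 2880       # physical panel resolution per axis
dlss_scale = 0.63   # custom factor halfway between Balanced and Quality
target = 5300       # supersampled resolution DLSS upscales to

internal = round(target * dlss_scale)  # what the GPU actually renders
ideal = round(native * 1.40)           # native + 40% distortion margin

extra_rendered = internal - native     # pixels rendered above native
extra_budget = ideal - native          # full distortion overhead

print(internal)        # 3339
print(ideal)           # 4032
print(extra_rendered)  # 459
print(f"{extra_rendered / extra_budget:.0%}")  # 40%
```

Swapping in your own native resolution, DLSS factor, and target makes it quick to see where you land relative to the distortion budget.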
It would be interesting to see if there is a pattern or a threshold trend of pixels above or below the native display count in VR when using DLSS.
Just trying to nail down a technique to get to that magic outcome.
I discovered that OpenXR Toolkit - though deprecated - can help here. It lets you adjust the game scaling and the DLSS settings on the fly so you can make changes while playing the game. You have to get in and out of VR mode for the scaling portion, but that’s better than having to reload the game from scratch every time.
So help me out here. I’m trying to find a good test sample scenario to work with in MSFS 2024.
I’ve been trying to use Las Vegas Strip (general flight) with a Bell 407 helicopter at night, but it’s too hard to judge. The exteriors look OK, but DLSS does a terrible job with the digital cockpit readouts, so by the time you sharpen them, the usable performance is thrown out the window.
New York Discovery Flight is pretty good, but I’m just not sure.
In my experience, the biggest performance swings come from the DLSS rendering % / resolution you choose (i.e. how far below the target DLSS actually renders). Increasing the resolution of the target (the one DLSS up-scales to) costs a lot less, but the related benefits are smaller too.
This goes a little off topic, but I had an unexpected breakthrough yesterday. I was struggling with finding DLSS settings I liked; I just couldn’t break the 30 FPS barrier without losing too much quality.
In all the MSFS 2024 settings guides, LOD for Terrain and Objects is recommended to be 100 instead of 200 / 400 respectively. On a whim, I decided to push them to maximum, and I had a big jump in FPS. Using MSFS FPS indicator, I calibrated further to keep PC memory use to under 32GB. My 3090’s 24GB limit is not an issue.
Now that I have some headroom, I should be able to do a better job with DLSS scaling and not lose so much quality. I’ll report back later.