Why does native resolution not look like native resolution?

Firstly just to say I’ve been having a great time flying in VR. I’m lucky enough to get decent performance even in airliners and I’ve found it really good for my mental health to fly out and see the world during some difficult times. Thank you very much MS and Asobo!

I wondered if anyone can explain why MSFS renders like it does in VR. I run the Quest 2 at native resolution and 100 render scaling, and whilst it is a great experience it doesn’t look quite like native resolution: the further-away instruments and scenery are slightly blurred compared to other native-resolution applications. Other rendering artefacts are noticeable even before flying, on the main menu, where the edges of the floating 2D screen have jaggies and aliasing. No other native-resolution games or apps have this, and even MSFS in virtual desktop mode (the 2D screen output shown on a big virtual 2D screen inside the headset) doesn’t have these effects.

I wondered if the anti-aliasing and sharpening post-process effects might be unnecessary at native resolution and causing the loss of detail, so I experimented by turning them off. I found that the cockpit and instruments were noticeably clearer and more in line with native-resolution expectations, but everything outside the cockpit had a huge amount of shimmering, making it unplayable. Is there some problem with the rendering that is being covered up by the anti-aliasing and sharpening combo?

I am pretty confident there are no problems with my hardware setup and this is the same for everyone. I just wondered if anyone knew the reason why it is like this and if it is expected to be fixed?

Thank you !

10850k / 3090 / 32gb / quest 2 5500*2800


You might want to review the information I’ve posted in “VR Technology Explained” in this topic:

My 2070 SUPER VR settings and suggestions (Index - SteamVR) :green_circle: - Virtual Reality (VR) / Hardware & Performance - Microsoft Flight Simulator Forums

I’ve been wondering the same a little bit, but it doesn’t take long to realize FS2020 is not taking advantage of VR-specific, lens-oriented rendering. It is clearly visible in the 2D stereoscopic view, where you can see that the 20% of pixels in the centre of the image are covering 80% of what you actually see in the headset (figures are just for illustration here), whereas the remaining pixels (80% of what is rendered) display on 20% of the HMD panels in view (and even then, half of them are outside the view boundaries).

This is where increasing supersampling in the centre and decreasing sampling on the periphery might be a way to restore higher clarity where it matters the most.
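Just to illustrate why this pays off, and without pretending these are the real Quest 2 numbers: a flat perspective projection already spends more pixels per degree of view at the edges of the image than in the centre (the tan() distribution), and the lens magnification of the centre only makes the imbalance worse. A quick back-of-the-envelope sketch, with made-up FOV and resolution values:

```cpp
// Back-of-the-envelope only: for a planar perspective projection, the pixel
// position across the image is x = (W/2) * tan(theta) / tan(halfFov), so the
// pixel density in pixels-per-degree grows like 1/cos^2(theta) towards the
// edges. The FOV and width below are made up for illustration.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double W = 2700.0;          // assumed per-eye render width
    const double halfFovDeg = 45.0;   // assumed half field of view

    const double halfFov = halfFovDeg * PI / 180.0;
    for (double deg = 0.0; deg <= halfFovDeg; deg += 15.0) {
        const double theta = deg * PI / 180.0;
        // dx/dtheta, converted from pixels per radian to pixels per degree.
        const double pxPerDeg = (W / 2.0) / std::tan(halfFov)
                                / (std::cos(theta) * std::cos(theta))
                                * (PI / 180.0);
        std::printf("%4.0f deg off-centre: %5.1f px/deg\n", deg, pxPerDeg);
    }
    return 0;
}
```

With these illustrative numbers, the edge of the render target gets roughly twice the pixels per degree of the centre, which is exactly the budget you would rather spend in the middle of the view.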

For example, NVidia is offering specific technologies and libs to help with this, and even if you’re not using NVidia tech you can certainly do a lot of the same things with a little bit of GPU shader code. I’ve tried a few of their older demos showing the benefit of “Lens Matched Shading” for example, and while this didn’t help the fps much, it did help increase the perceived resolution:

VRWorks - Lens Matched Shading | NVIDIA Developer

They now have more sophisticated techniques but they might not be usable as-is in FS2020:

Turing Variable Rate Shading in VRWorks | NVIDIA Developer Blog

You can download and try some of their dev demos on your system:

Download Center | NVIDIA Developer


Should we add "Variable Rate Shading" to the VR wishlist?

Thanks Captain. I have been looking through your links and videos, it’s interesting stuff, but once it gets into shaders I fear much of it is beyond my level of understanding as a non-tech guy!

Indeed, there seems to be a much wider field of view rendered/shown on the stereoscopic monitor display than is shown in the headset. It is particularly obvious when the developer-mode FPS counter is on: the box is in the top right of the monitor picture, but only the very bottom-left corner of the box is visible in the headset.

Is this standard VR behaviour due to the c.1.5x resolution increase required for the lens-curvature adjustment? (I had naively expected this to be an increased resolution but the same field of view.) Or is it possible that MSFS is rendering a wider field of view than required at the 2700*2800 resolution (Quest 2 native after curvature adjust) and then sending just the centre part of this wide field of view to the headset, meaning we get a lower actual resolution than expected in the headset? I have no idea if this is possible… but I am making wild guesses, as I do feel that the image in the headset is not a 1:1 native-resolution image when compared to other applications, and feels more like a lower-resolution image cleaned up (quite effectively) with the TAA/sharpening.

The image is rendered to cover the entirety of the HMD panel. It is expected and normal that you don’t see all of it when looking straight forward, but it is just an illusion. You can see some if not most of it in your peripheral vision, depending on how well you adjust your HMD and how far you adjust the eye relief (if you have this, like on the Vive or Index).

The problem is that, due to the lens deformation, a better strategy for visual crispness is rendering the centre region with higher detail/resolution and the periphery with lower resolution. There is really only one way to do this: render the image distorted as the inverse of the lens distortion, so that once seen through the lens, the distortion cancels out and the image appears planar.
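Conceptually it would be something like the warp below applied to every projected vertex (the coefficients are invented, a real implementation would use the headset’s lens profile and run this in the vertex shader), and you can already see one of the catches: only the triangle corners get warped, so large triangles come out wrong.

```cpp
// Conceptual sketch only: warp a projected (NDC) position by the inverse of a
// radial lens model, so the image drawn on the panel looks undistorted once
// viewed through the lens. The k1/k2 coefficients are made up; real values
// come from the headset's lens calibration. In practice this would run in a
// vertex shader, and only vertices get warped, not the edges between them.
struct Ndc { float x, y; };

Ndc inverseLensWarp(Ndc p) {
    const float k1 = -0.22f;  // invented distortion coefficients
    const float k2 = 0.05f;
    const float r2 = p.x * p.x + p.y * p.y;
    const float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
    return { p.x * scale, p.y * scale };
}
```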

However, rendering in such a way is not complex from a pixel-shader perspective, but it introduces lots of problems for pixel-shader code relying on a standard projection. In effect, the mapping between a pixel’s (x,y) coordinate and the 3D world coordinate is no longer direct, which makes screen-space algorithms much harder to implement, for example.
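To make this concrete (my own illustration, not FS2020 code): screen-space techniques typically assume you can go from a pixel and its depth-buffer value back to a world position with a single inverse view-projection multiply, and that only holds for a standard planar projection. With a pre-distorted render there is no single matrix that undoes the warp.

```cpp
// Standard screen-space reconstruction: pixel + depth -> world position,
// via the inverse view-projection matrix. This only works because a planar
// projection keeps the pixel <-> clip-space mapping linear (projective).
// With a pre-distorted (lens-matched) render, code like this breaks.
#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>; // row-major 4x4

static Vec4 mul(const Mat4& m, const Vec4& v) {
    return {
        m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
        m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
        m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
        m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w
    };
}

// px, py: pixel coordinates; depth: value read from the depth buffer in [0,1].
Vec4 reconstructWorldPos(float px, float py, float depth,
                         float width, float height,
                         const Mat4& invViewProj) {
    // Pixel -> normalized device coordinates (D3D-style convention assumed).
    const float ndcX = (px / width) * 2.0f - 1.0f;
    const float ndcY = 1.0f - (py / height) * 2.0f;
    Vec4 clip { ndcX, ndcY, depth, 1.0f };

    Vec4 world = mul(invViewProj, clip);
    // Perspective divide back to a 3D position.
    world.x /= world.w; world.y /= world.w; world.z /= world.w; world.w = 1.0f;
    return world;
}
```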

This is why there are two approaches to solving this in an "easy" way:

  • render the image in a way which maximizes the centre region’s pixel density and minimizes the outer regions’ pixel density. Naively, this consists of dividing the image into 9 regions (3x3), with the centre one bigger and the side ones smaller. It is quite easy to implement and in practice is quite good (see the sketch just after this list).

  • render the image in a way which maximizes the centre region’s pixel legibility with supersampling, and minimizes the outer regions’ pixel quality with undersampling. This is implemented at the video card driver level, and in practice on Nvidia it is quite easy to use.
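For the first approach, the pixel budgeting is roughly like this (purely illustrative numbers and layout, not how Asobo or NVidia actually do it): the centre band of the field of view is given a bigger slice of the render target than it would get with a uniform projection, and each of the 9 regions is then rendered with its own off-centre projection so the geometry still lines up.

```cpp
// Illustrative 3x3 multi-resolution split: the centre band of the field of
// view gets a disproportionate share of the render-target pixels, the outer
// bands share the remainder. All the numbers are made up for the example.
#include <cstdio>

struct Viewport { int x, y, width, height; };

// fovCentreShare:   fraction of the FOV covered by the centre band (per axis).
// pixelCentreShare: fraction of the pixels allocated to that band (per axis).
void buildGrid(int rtWidth, int rtHeight,
               float fovCentreShare, float pixelCentreShare,
               Viewport out[3][3]) {
    const int cw = static_cast<int>(rtWidth * pixelCentreShare);
    const int ch = static_cast<int>(rtHeight * pixelCentreShare);
    const int sw = (rtWidth - cw) / 2;   // side column width
    const int sh = (rtHeight - ch) / 2;  // top/bottom row height

    const int xs[4] = { 0, sw, sw + cw, rtWidth };
    const int ys[4] = { 0, sh, sh + ch, rtHeight };

    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            out[row][col] = { xs[col], ys[row],
                              xs[col + 1] - xs[col], ys[row + 1] - ys[row] };

    // Resulting relative pixel density (pixels per unit of FOV):
    const float centreDensity = pixelCentreShare / fovCentreShare;
    const float edgeDensity   = (1.0f - pixelCentreShare) / (1.0f - fovCentreShare);
    std::printf("centre density x%.2f, periphery density x%.2f\n",
                centreDensity, edgeDensity);
}
```

For example, giving the central 50% of the FOV 66% of the pixels per axis yields about 1.3x density in the centre and about 0.7x on the sides, for the same total pixel count. A final full-screen pass then stretches the 3x3 grid back to an evenly spaced layout before the image is handed to the VR runtime, which is the extra step described below.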

The former method gives higher pixel density in the centre region without changing the overall rendering cost: you render as many pixels as before over the same total render resolution. The VR API expects a traditional undistorted render, which means that once the rendering is done there is an additional mandatory step, which doesn’t cost much either, consisting of projecting the 9 areas back onto an evenly spaced 3x3 grid.

The latter method gives higher sampling quality in the centre region without changing the overall rendering cost either: you render as many pixels as before over the same total render resolution. Here, it is the video card driver which modifies the pixel-shader handling internally, so that it effectively raises the sampling in the centre and reduces the sampling on the periphery. This doesn’t require projecting back to an undistorted view prior to submission to the VR API, because it is already undistorted.

Both methods are supposed to even out the rendering cost. In other words, with the same resolution the rendering cost is similar. The difference is where the rendering cost is allocated the most, and in both cases it is in the centre region. Pushing these techniques a little further, you can also use them to maintain the same rendering cost but with a higher render resolution. It is just a matter of "degrading" the sampling rate even more on the periphery while keeping the same sampling rate in the centre, for example.

The latter technique with Nvidia goes one step further: at the rendering-engine level, you directly tell the driver the sampling rate per 16x16-pixel block. This means you can selectively undersample regions of the view which are known not to have much detail (moving objects, moving ground or the periphery) and supersample the regions known to require more detail (static or close-by objects etc…). This is super easy to implement with the Nvidia API (only 1 function call per frame), and you can derive the sampling-rate texture (the 16x16-pixel blocks) directly from the TAA motion-vector texture, for example, as a basis.
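I can’t quote the exact NVidia/VRWorks entry points from memory, so take the sketch below as the general shape of it using the standard D3D12 variable rate shading interface instead (which Turing exposes with 16x16 tiles on NVidia hardware, via RSSetShadingRateImage). The thresholds are arbitrary, and in a real engine this would be a small compute shader rather than CPU code.

```cpp
// Sketch: derive a per-tile shading-rate buffer from TAA motion-vector
// magnitudes, for use with D3D12 variable rate shading (tier 2). The
// thresholds are arbitrary. The resulting buffer would be uploaded into a
// DXGI_FORMAT_R8_UINT texture and bound with RSSetShadingRateImage().
#include <algorithm>
#include <cstdint>
#include <vector>
#include <d3d12.h>   // D3D12_SHADING_RATE_* constants (Windows SDK)

std::vector<uint8_t> buildShadingRateTiles(
    const std::vector<float>& motionMagnitude, // one value per pixel, in pixels/frame
    int width, int height, int tileSize /* 16 on NVidia Turing */) {
    const int tilesX = (width + tileSize - 1) / tileSize;
    const int tilesY = (height + tileSize - 1) / tileSize;
    std::vector<uint8_t> rates(static_cast<size_t>(tilesX) * tilesY);

    for (int ty = 0; ty < tilesY; ++ty) {
        for (int tx = 0; tx < tilesX; ++tx) {
            // Average the motion magnitude over the tile.
            float sum = 0.0f;
            int count = 0;
            for (int y = ty * tileSize; y < std::min((ty + 1) * tileSize, height); ++y)
                for (int x = tx * tileSize; x < std::min((tx + 1) * tileSize, width); ++x) {
                    sum += motionMagnitude[static_cast<size_t>(y) * width + x];
                    ++count;
                }
            const float avg = count ? sum / count : 0.0f;

            // Fast-moving tiles get coarser shading, nearly static tiles full rate.
            uint8_t rate = D3D12_SHADING_RATE_1X1;
            if (avg > 8.0f)      rate = D3D12_SHADING_RATE_4X4; // arbitrary thresholds
            else if (avg > 2.0f) rate = D3D12_SHADING_RATE_2X2;
            rates[static_cast<size_t>(ty) * tilesX + tx] = rate;
        }
    }
    return rates;
}
```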

PS: this idea of using the TAA buffer is one I thought about when looking at how the Nvidia API works. It might be highly game-content dependent though, and it might not prove good in practice with FS2020. A couple of months ago I had the chance to discuss it with a developer from a studio that has implemented the Nvidia API. He was giving a presentation about how they implemented the Nvidia API in their game, which was quite complex, with feature and edge detection etc… They hadn’t thought about just using the TAA buffer, but he told me he found the idea worth trying.

