Concept: multi-res VR viewport for decreasing pixel count -> more FPS

I don't know whether this is a new idea or not, or whether it is possible to implement in MSFS or any other VR application, but in theory this concept would increase FPS significantly, because the number of viewport pixels to be rendered would be much lower than in "normal" single-resolution views.
The idea is to have multiple zones in the view with different resolutions: higher resolution in the center of the view (the sweet spot), and lower resolution towards the edges.

Here is a simple image illustrating the idea:
[image: multi-resolution viewport concept]

I think this would be quite an effective way to increase frame rate, especially in VR, at least in theory.
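To put a rough number on the potential saving, here is a small back-of-the-envelope sketch in Python. The per-eye resolution, the zone sizes and the per-zone render scales are all made-up example values, not figures from any real headset or from MSFS:

```python
# Back-of-the-envelope estimate of the pixel saving from a multi-resolution
# viewport. The per-eye resolution, zone sizes and render scales below are
# hypothetical example values, not figures from any real headset or from MSFS.
def rendered_pixels(width, height, zones):
    """zones: list of (fraction_of_viewport_area, render_scale) pairs.

    scale ** 2 because the render scale applies to both axes.
    """
    total = width * height
    return sum(frac * total * scale ** 2 for frac, scale in zones)

w, h = 2160, 2160                      # example per-eye panel resolution
full = w * h

# Example layout: inner 25 % of the area at full resolution,
# middle 35 % at scale 0.75, outer 40 % at scale 0.5.
zones = [(0.25, 1.0), (0.35, 0.75), (0.40, 0.5)]
multi_res = rendered_pixels(w, h, zones)

print(f"full resolution : {full:,.0f} pixels")
print(f"multi-res zones : {multi_res:,.0f} pixels ({multi_res / full:.0%} of full)")
```

With those example zones, only about 55 % of the native pixels would need to be shaded each frame.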

This is in fact a common technique in VR rendering, and I believe I heard of it being used in MSFS specifically (though I don’t have a reference offhand).

Here’s an example of how this is done by applying a mask with varying pixel density across the field of view, then interpolating the empty pixels: Tech Note: Mask-based Foveated Rendering with Unreal Engine 4 | Oculus
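As a toy illustration of that mask-based idea (a CPU sketch in NumPy with a made-up falloff curve and a stand-in image, not the actual Oculus/Unreal implementation): shade only a subset of pixels, dense in the centre and sparse towards the edges, then reconstruct the skipped ones from their rendered neighbours:

```python
import numpy as np

H, W = 256, 256
yy, xx = np.mgrid[0:H, 0:W]
# Normalised distance of each pixel from the viewport centre (0 = centre).
r = np.hypot((xx - W / 2) / (W / 2), (yy - H / 2) / (H / 2))

# Keep-probability falls off with distance: everything in the centre is shaded,
# roughly one pixel in four near the corners. The falloff curve is arbitrary.
keep_prob = np.clip(1.0 - 0.75 * r, 0.25, 1.0)
rng = np.random.default_rng(0)
mask = rng.random((H, W)) < keep_prob

# "Rendered" image: only the masked pixels are actually shaded.
scene = np.sin(xx * 0.1) * np.cos(yy * 0.1)     # stand-in for a real frame
sparse = np.where(mask, scene, np.nan)

# Reconstruction: fill each skipped pixel from the next shaded pixel to its
# right on the same row (real implementations use proper 2-D filters).
filled = sparse.copy()
for row in filled:
    idx = np.flatnonzero(~np.isnan(row))        # columns that were shaded
    src = idx[np.searchsorted(idx, np.arange(W)).clip(max=idx.size - 1)]
    row[:] = row[src]

print(f"pixels actually shaded: {mask.mean():.0%} of the viewport")
```

The saving obviously depends entirely on how aggressive the falloff curve is; the one above is just for illustration.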

Well, this kind of feature is indeed quite obvious, so it would have been a surprise if it hadn't already been implemented somewhere. However, I would have expected this kind of feature to be used, or at least advertised, more often in VR applications… I hadn't heard of any of this before, but then, VR is still very new to me.


What I would really like is a separate render scale for the cockpit (which I would like to be super sharp) and the outer world (which I'm OK with being a little bit blurry). From what I understand, some other sims do exactly that.
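A minimal sketch of what that could look like, assuming the sim rendered the cockpit and the outside world as two separate RGBA layers; the layer contents here are random stand-ins and the whole API is hypothetical, since as far as I know MSFS does not expose anything like this:

```python
import numpy as np

def upscale_nearest(img, out_h, out_w):
    """Nearest-neighbour upscale of an (h, w, 4) RGBA image."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def composite(world_rgba, cockpit_rgba):
    """Alpha-blend the full-resolution cockpit layer over the world layer."""
    a = cockpit_rgba[..., 3:4]
    return cockpit_rgba[..., :3] * a + world_rgba[..., :3] * (1.0 - a)

panel_h, panel_w = 2160, 2160          # example per-eye resolution
world_scale = 0.7                      # cheaper, slightly blurry outside view

# Random noise standing in for the two rendered layers.
rng = np.random.default_rng(0)
world = rng.random((int(panel_h * world_scale), int(panel_w * world_scale), 4))
cockpit = rng.random((panel_h, panel_w, 4))

frame = composite(upscale_nearest(world, panel_h, panel_w), cockpit)
print(frame.shape)                     # (2160, 2160, 3)
```

The interesting part is that only the world layer pays the reduced render scale; the cockpit geometry and instruments stay at native resolution.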


This would work well with eye tracking; otherwise it will be very obvious whenever you are not looking straight at the center.

As I recall, due to the distortion of the lenses in VR goggles, there is less perceived resolution at the edges than in the center even without eye tracking. Otherwise the technique wouldn't be used today, as I don't believe any VR goggles on the consumer market have eye tracking? (I could be wrong, but I haven't heard of that feature shipping yet.)

Or they could just implement DLSS.

It's called FFR, or fixed foveated rendering, and native Oculus Quest games often use it. But it can be a visible and distracting degradation of visual quality, so PC VR games tend to avoid it.

I agree that it could be distracting, especially if you move your focus away from the center of the view. But most current HMDs have quite a narrow sweet spot anyway, so I don't think this feature would make things much "worse". And if you could configure the size of the "sharp" area versus the "blurry" area, even better. As an option this would be useful, especially for people who prefer better FPS over ultra graphics.
However, I can't tell what the performance cost of the technique itself is; of course there is some extra processing to be done.
I'd point out that currently in VR you have to use a render resolution factor of around 0.7 to get a good balance between image quality and performance. How about having a render scale of 1.0 in the middle (say, 1/3rd of the pixels) and the rest at a render scale of 0.5? At least I would like to have that.
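For the sake of argument, here is what those numbers work out to, assuming a per-eye render target and reading "1/3rd of the pixels" as the central third of the viewport area (the panel resolution is just an example):

```python
# Worked numbers for the proposal above. The panel resolution is an example,
# and "1/3rd of the pixels" is read as the central third of the viewport area.
pixels = 2160 * 2160

uniform_070 = pixels * 0.7 ** 2               # current uniform compromise
centre_full = pixels * (1 / 3) * 1.0 ** 2     # inner third at scale 1.0
edges_half  = pixels * (2 / 3) * 0.5 ** 2     # outer two thirds at scale 0.5
proposed = centre_full + edges_half

print(f"uniform 0.7        : {uniform_070 / pixels:.0%} of native pixel count")
print(f"1.0 centre + 0.5   : {proposed / pixels:.0%} of native pixel count")
```

Interestingly, that split shades roughly the same number of pixels as a uniform 0.7 render scale (about 50 % versus 49 % of native), so the sweet spot would get full sharpness at roughly the same cost.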
