Will DLSS 3.0 be supported in VR MSFS?

10 real fps at max settings and max resolution on a Varjo Aero. That would be the minimum for a 4090, without DLSS 3.0.
But I think and hope it will be more.

1 Like

We seem to be forgetting that in VR, MSFS renders each eye separately. This is likely why it won’t be supported in VR initially. It depends on where in the pipeline DLSS 3.0 sits: from everything I’ve seen it’s at the end, holding frames up (but with access to motion vectors, which in most games come from earlier in the pipeline). If it compares the left and right eye and inserts a frame between them, it will be WONK.

Note that this doesn’t spell doom. They could adapt DLSS to hold either the composite frame (the combination of left and right eye that is sent to the HMD), or simply hold each eye for an additional frame, but that would double the latency, since it would effectively mean holding four frames before sending the composites. I’m not a graphics engineer and I’m not familiar with either the MSFS pipeline or the DLSS one, so please take my analysis with a big grain of salt. From what I do know, though, there will be challenges, and as VR enthusiasts I think we should temper our expectations as well as BEG for VR support for DLSS 3.0 (or some form of it, such as ASW 2.0).

Aside: there’s an often-overlooked feature of DLSS since 2.1 that permits Dynamic DLSS (DDLSS), effectively allowing the render resolution to fluctuate based on frame rate so that you can target a given FPS. That way the scenes will be more balanced: if you miss a desired frame time, the render resolution can be lowered. This is difficult because many things such as LODs and textures may need to be loaded (or pre-loaded) to accommodate the highest possible quality, but I think it will someday seem funny that we had to “pick” a DLSS level and fiddle with the adjustment. DLSS 3.0 will initially be locked to DLSS’s Performance mode (1/4 of the pixel count, i.e. half resolution per axis, so 1080p upscaled to 4K), but there’s no reason it too can’t be dynamic. So far I’ve only found one game that supports Dynamic DLSS: Manor Lords, which is only a demo and doesn’t seem to work dynamically as it should, so maybe there are downsides to DDLSS.
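For what it’s worth, here is a minimal sketch of what such a dynamic-resolution controller could look like. Everything in it (the names, the thresholds, the proportional nudge) is made up for illustration; it is not how MSFS or the DLSS SDK actually do it.

```cpp
// Toy dynamic render-scale controller targeting a fixed frame time.
// All names and constants are hypothetical, not taken from MSFS or NVIDIA.
#include <algorithm>

struct DynamicResController {
    double targetFrameMs;      // e.g. 11.1 ms for a 90 Hz target
    double renderScale = 1.0;  // fraction of output resolution per axis

    // Call once per frame with the measured GPU frame time.
    void update(double measuredFrameMs) {
        const double error = measuredFrameMs / targetFrameMs;
        if (error > 1.05)      renderScale *= 0.97;  // over budget: render smaller
        else if (error < 0.90) renderScale *= 1.02;  // headroom: render larger
        // DLSS Performance is 0.5 per axis, Quality ~0.667, so keep it in range.
        renderScale = std::clamp(renderScale, 0.5, 1.0);
    }
};
```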

Maybe they’ll call it DLSS 3.1, or for all I know 4.0, but I hope you enjoy the treatise above; it’s just fun to think about at this point.

DLSS 3.0 just sounds like the equivalent of VR’s existing “motion smoothing/reprojection”. If you ask me, VR already has this.

1 Like

Would love to hear from @CptLucky8 on this topic.

1 Like

It does, but the artifacting is terrible, as they don’t seem to have good support for using both the motion and depth buffers. The fact that MSFS is at least sharing the vectors, and that OpenXR will pass that information along, is a good start. Now we just need runtimes/HMDs that take advantage of it all; until then, DLSS 3.0 is effectively a method best left for use in 2D.

Although latency is similar, one difference is that VR usually adds pose prediction. If I understand correctly, it’s more or less asking the HMD “how is the user moving in relation to the frame being rendered,” so that when the frame is displayed it’s adjusted properly. DLSS 3 doesn’t seem to integrate that part, and that may simply be the information they need to incorporate to make it work in VR. Call that DLSSVR; we need more letters :slight_smile: Nvidia will jump at the chance to make a new acronym-based feature. :slight_smile:
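For reference, OpenXR already exposes exactly that prediction to the application. Here is a rough sketch; setup and error handling are omitted, and `session`, `appSpace` and `viewConfigType` are assumed to already exist.

```cpp
// xrWaitFrame returns the predicted display time for the upcoming frame;
// xrLocateViews then gives the eye poses the runtime predicts for that instant.
#include <openxr/openxr.h>
#include <vector>

void renderOneFrame(XrSession session, XrSpace appSpace,
                    XrViewConfigurationType viewConfigType) {
    XrFrameWaitInfo waitInfo{XR_TYPE_FRAME_WAIT_INFO};
    XrFrameState frameState{XR_TYPE_FRAME_STATE};
    xrWaitFrame(session, &waitInfo, &frameState);

    // Ask where each eye will be when the frame is displayed, not where it is now.
    XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
    locateInfo.viewConfigurationType = viewConfigType;
    locateInfo.displayTime = frameState.predictedDisplayTime;
    locateInfo.space = appSpace;

    XrViewState viewState{XR_TYPE_VIEW_STATE};
    std::vector<XrView> views(2, XrView{XR_TYPE_VIEW});
    uint32_t viewCount = 0;
    xrLocateViews(session, &locateInfo, &viewState,
                  static_cast<uint32_t>(views.size()), &viewCount, views.data());

    // ... xrBeginFrame, render each eye using views[i].pose and views[i].fov,
    //     then xrEndFrame with frameState.predictedDisplayTime ...
}
```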

1 Like

The important difference is that VR’s Motion Reprojection is processed by the CPU, while DLSS 3 is done entirely on the GPU. There is a large performance hit because of the CPU overhead of MR. Realistically speaking, we know that if MSFS runs at 30 fps with MR (most of the time), disabling MR gets ~43 fps. If DLSS 3 frame generation worked in VR without CPU overhead, that would mean that with MR off I would get ~43 x 2 = 86 fps (!) that is virtually artifact-free. No wobbles, no jitter, no MR needed in VR. From what I’ve seen so far in analyses of DLSS 3 frames, there can be artifacts, but mostly with partially hidden geometry: when there’s no data to recreate the part of an object that was hidden in one frame and revealed in the next, it will be incomplete in the generated frame in between. That should be minor, if noticeable at all, at that frame rate in the sim. So it would be a game-changer: a 2x boost without any CPU overhead and minimal or unnoticeable artifacts. Add to that the general DLSS performance boost.
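Putting those example figures into a trivial sketch (purely illustrative, using the numbers quoted above and the hypothetical assumption that frame generation doubles delivered frames at no extra cost):

```cpp
// Back-of-the-envelope comparison of the two paths described above.
#include <cstdio>

int main() {
    const double rawFps      = 43.0;          // measured with MR off (example figure)
    const double mrLockedFps = 30.0;          // what MR locks the sim to instead
    const double frameGenFps = rawFps * 2.0;  // hypothetical: DLSS 3 doubling delivered frames

    std::printf("MR: render %.0f fps | frame generation (if it worked in VR): ~%.0f fps\n",
                mrLockedFps, frameGenFps);
    return 0;
}
```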

1 Like

But does it need to? MR works (in part) by shifting the previous frame in view according to HMD motion prediction when it doesn’t get a new frame in time, while DLSS 3 inserts a frame between the last frame created and the previous one; there is no prediction involved. In essence, MR can work after DLSS 3, shifting the last frame again if no frame is ready by the time of the next refresh. Though at 70-80 fps there may not be a need to actually do that.
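One way to picture that division of labour (purely conceptual; no runtime structures things with these names, and both helper functions are hypothetical placeholders):

```cpp
// Conceptual sketch: DLSS3-style interpolation fills in a frame between the two
// most recent rendered frames, and MR-style reprojection can still step in at
// vsync if nothing new is ready.
struct Pose  { /* head orientation/position */ };
struct Frame { /* rendered image plus the pose it was rendered for */ };

// FRUC-style in-between frame from two real frames (details omitted).
Frame interpolate(const Frame& previous, const Frame& latest) { return latest; }

// MR-style shift of the last frame using the newest predicted pose (details omitted).
Frame reproject(const Frame& lastShown, const Pose& latestPose) { return lastShown; }

Frame frameForNextVsync(const Frame* readyFrame,   // real or interpolated frame, if it arrived in time
                        const Frame& lastShown,
                        const Pose& latestPose) {
    if (readyFrame)
        return *readyFrame;                        // show the new (or generated) frame
    return reproject(lastShown, latestPose);       // otherwise shift the last frame again
}
```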

2 Likes

I asked mbucchia this same question. It’s a good one, I think.

Me too. Too little for so much money.

That statement is very very not true.

There is extremely little overhead from motion reprojection (MR) on the CPU. If you look at the WMR overlay, the post CPU value will read very small. The dominant workload of MR is motion estimation, which is almost free on both CPU and GPU since it’s actually done by the video encoder block of your GPU. You still need to wait for this process to complete (which means the increased GPU frame time), but meanwhile both CPU and GPU are 100% free to process other things from the game. The rest of the MR process is post-processing and motion propagation; both operations are free in terms of CPU and extremely light on GPU (well below 1 ms).

4 Likes

That’s interesting. I too was under the impression MR was taxing on the CPU. My reasoning was that prior to SU10, with MR disabled I could easily achieve over 45 fps, but with MR enabled it would not go above 30 fps, and sometimes dropped down to 22.5.

I couldn’t lower my settings enough to make it lock at 45.

It will lock at 45 fps now in SU10 with DLSS, but I don’t like the current DLSS implementation in VR; it’s too blurry, even at Quality, so I’m back on TAA and using your OpenXR Toolkit with FSR.

Why does the frame rate drop so much with MR enabled relative to disabled? Specifically, what prevents me from easily getting 45 fps with TAA and MR enabled when I can with it disabled?

1 Like

Oh yeah, I too would like an explanation of why this is so, as I also get a huge FPS drop.

Also, it seems high-end AMD GPUs don’t handle MR as well as equivalent Nvidia cards, at least anecdotally, and I have no idea why that should be. Even with my current high-end AMD rig I cannot get MR to reliably lock at 30 fps, no matter my settings, even in rural areas with a simple GA aircraft. Without MR I get over 45, up to the low 50s, with the same settings.

1 Like

Yeah, motion reprojection uses NVENC, I think. It’s not on the CPU much, as the encoder itself is hardware on your video card.

Let me prepare a very detailed explanation with diagrams etc and I will get back to you


Meanwhile, I want you to try to think of it differently. When you use Motion Reprojection and it locks you into one of the refresh rate dividers (45, 30 or 22.5), you’re not “dropping” from X FPS to 45/30/22.5. You’re actually raising your effective frame rate from X to 90 FPS after motion reprojection, while being at the same or lower CPU/GPU utilization you had before, and evidently at a lower CPU/GPU utilization than it would take to reach 90 FPS without motion reprojection.

I know this may sound counterintuitive, but hopefully my more complete explanation will show why this happens. It will also cover where the actual and the perceived overhead come from (spoiler alert: it is not from your CPU).
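A tiny sketch of that divider arithmetic, assuming a WMR-style 90 Hz refresh; nothing here comes from the actual runtime, it just illustrates the locking described above.

```cpp
// Motion reprojection renders at 90/2, 90/3 or 90/4 fps and synthesizes the
// missing frames, so the headset still displays 90 frames per second.
#include <cstdio>

double mrLockedRenderFps(double rawFps, double refreshHz = 90.0) {
    for (int divider = 2; divider <= 4; ++divider) {
        if (rawFps >= refreshHz / divider)   // pick the highest rate the GPU can hold
            return refreshHz / divider;
    }
    return refreshHz / 4.0;                  // below 22.5 fps, MR still targets 22.5
}

int main() {
    const double raw = 43.0;
    std::printf("raw %.0f fps -> render locked at %.1f fps, displayed at 90 fps\n",
                raw, mrLockedRenderFps(raw));  // locks at 30.0
    return 0;
}
```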

4 Likes

Here is my very very long post that I am keeping separate (since not related to DLSS topic):

6 Likes

Thanks for a thorough explanation.

You’re a legend, mate, thanks for all you’re doing for this community. I did post a question in there about the value of upgrading to the new hardware, and I really appreciate your answers. It’s nice to have a reliable, accredited source to ask questions of, as when posting generally in the forums you never know whether the person replying has any knowledge or not.

So for you to take the time to write real explanatory posts like this is gold, even if it probably ends up with you getting bombarded with more questions!

Here’s an interesting article that talks about the Frame Rate Up Conversion (FRUC) of the new Optical Flow hardware and SDK, which I bet is the tech behind DLSS3.

So I guess my question is: is DLSS 3 equivalent to the supersampling of DLSS 2 (hopefully with some quality improvements in the process too? I hear people aren’t super stoked about quality at the moment), followed by FRUC to increase the frame rate? Or perhaps the other way around, since I feel the supersampling part is likely cheaper than FRUC, so it would be better to do FRUC on smaller images and then upscale them?

But with FRUC being a form of “backward” frame prediction (“take two consecutive frames and return an interpolated frame in between them”), I have a hard time understanding how this can fit any low-latency scenario like VR. All forms of reprojection (see my other post) dramatically drop in quality with latency, so assuming DLSS 3 increases latency by many milliseconds, it’s also going to make reprojection much worse at its job.
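As a very rough first stab at those numbers, here is a back-of-the-envelope sketch; the 90 Hz output rate and the few milliseconds for the interpolation pass are made-up assumptions, not measured values.

```cpp
// Rough guesstimate of the extra latency frame interpolation adds in VR.
// The generated frame sits between two real frames, so the newest real frame
// has to be held back roughly one output-frame interval plus the cost of the
// interpolation pass before it can be shown. All numbers are illustrative.
#include <cstdio>

int main() {
    const double outputHz         = 90.0;               // headset refresh
    const double outputIntervalMs = 1000.0 / outputHz;   // ~11.1 ms between displayed frames
    const double interpolationMs  = 3.0;                 // assumed cost of the in-between frame

    const double addedLatencyMs = outputIntervalMs + interpolationMs;
    std::printf("ballpark added latency: ~%.1f ms on top of the normal pipeline\n",
                addedLatencyMs);
    return 0;
}
```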

Edit: I’m going to try to draw a diagram of this so we can reason about latency and guesstimate some numbers.

2 Likes