Do we know when DLSS 3 will be available in MSFS?
Right now in SU11 beta:
https://forums.flightsimulator.com/t/sim-update-11-beta-release-notes-1-29-22-0/548906
Wow, cool!
Any idea if the issue with blurry glass cockpits with DLSS enabled has been fixed?
When is SU11 scheduled for release?
It has not been so far. Temporal ghosting in motion and resolution-based blur on Garmin screens remain just as bad as under SU10.
Ok, let's keep our fingers crossed they will eventually be able to get it sorted.
I'm not fully sure whether it's mainly on Asobo's or Nvidia's plate to get it fixed.
Here is my current take. I'm trying to reason about how DLSS 3.0 frame rate upscaling could work in VR, what kind of latency you would be getting, and why it would be very challenging to "make it right".
DISCLAIMER: I have no insider knowledge of DLSS 3.0, and there is no technical documentation available AFAIK, except for the FRUC article I posted above. Nothing I'm writing here is backed by technical data from Nvidia; everything below is based on a quick analysis after drawing these few diagrams.
First, we start by looking at latency in the ideal case: the app can render at full frame rate (i.e., hit the refresh rate of the displays). No frame rate upscaling of any sort is done.
We hit the minimal latency here. At V-sync, we get a predicted display time, from which we can ask the tracking prediction algorithm to give us a camera pose to use for rendering. Just before display, we have an opportunity to perform spatial reprojection to correct the image (as much as we can; see my other post about reprojection) using the latest tracking information that we have.
Note that the timeline between this "Reproj" step and the actual display time isn't to scale: scan-out (the beginning of the display on the actual panel) may happen more than 1 frame later, due to transmission latency.
In this example, I use typical WMR end-to-end latency (40 ms), which I read is also the range for other headsets. Everything is awesome…
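For reference, here is roughly what that ideal-case loop looks like in OpenXR terms. This is a heavily stripped-down sketch (instance/session/swapchain setup, the actual rendering and the layer submission are all omitted, so it is not a complete program); the point is just that the pose used for rendering is the one predicted for predictedDisplayTime, and the runtime does its own late-stage reprojection right before scan-out:

```cpp
#include <openxr/openxr.h>

// One iteration of the ideal-case frame loop. Heavily simplified: session/space/
// swapchain creation, rendering and layer submission are omitted.
void renderOneFrame(XrSession session, XrSpace appSpace) {
    // Block until the runtime tells us when the frame we are about to render
    // is predicted to hit the display.
    XrFrameState frameState{XR_TYPE_FRAME_STATE};
    xrWaitFrame(session, nullptr, &frameState);
    xrBeginFrame(session, nullptr);

    // Ask for the head/eye poses predicted for that display time — this is the
    // "camera pose to use for rendering" in the diagram.
    XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
    locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    locateInfo.displayTime = frameState.predictedDisplayTime;
    locateInfo.space = appSpace;

    XrViewState viewState{XR_TYPE_VIEW_STATE};
    XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
    uint32_t viewCount = 0;
    xrLocateViews(session, &locateInfo, &viewState, 2, &viewCount, views);

    // ... render both eyes into the swapchain using those poses ...

    // Hand the frame back; the runtime performs the late-stage spatial reprojection
    // against the newest tracking data just before scan-out.
    XrFrameEndInfo endInfo{XR_TYPE_FRAME_END_INFO};
    endInfo.displayTime = frameState.predictedDisplayTime;
    endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
    endInfo.layerCount = 0;   // layer submission omitted in this sketch
    xrEndFrame(session, &endInfo);
}
```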
Now we look at a more realistic case: the app cannot render at full frame rate. We assume here that the app renders at exactly half rate, and that this half rate is perfectly in phase with the display's refresh rate (meaning we start rendering a frame exactly at V-sync):
We get the same predicted display time as before, because the platform optimistically assumes that we will hit the full frame rate. But when we don't (see that "missed latching": we did not finish rendering the frame on time), we increase our end-to-end latency by 1 frame. In other words, the predicted camera pose that we use for rendering will be 1 frame "older" than we would wish. Whatever additional headset motion happens in that period of time will increase the tracking error. We still do spatial reprojection close to when the image is about to be displayed, but that reprojection may have to correct that 1 extra frame of error, which reduces smoothness (because spatial reprojection can only do so much, and it's not very good at correcting depth perspective). But from a latency perspective, 50 ms is still acceptable.
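Putting numbers on that (same assumptions as the diagrams: 90 Hz panel, ~40 ms ideal-case latency; these are my back-of-the-envelope figures, not anything measured by Nvidia):

```cpp
// One missed latch = one extra display frame of pose prediction error.
constexpr double kFrameMs     = 1000.0 / 90.0;                // ~11.1 ms per frame at 90 Hz
constexpr double kIdealCaseMs = 40.0;                         // full-frame-rate end-to-end latency
constexpr double kHalfRateMs  = kIdealCaseMs + 1 * kFrameMs;  // ~51.1 ms: still acceptable
```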
Let's take a look now at your typical motion reprojection. Again, you can read more about it in my other post:
The "forward reprojection" technique means you always start from the most recent image that you have, and you create a new frame from there by propagating motion (moving pixels in the direction you guess they are moving). This scenario does not increase latency. The most recent frame from the app is displayed as soon as possible. A new frame is generated next, and since it starts from the most recent frame, the end-to-end latency after propagation remains the same. We continue to use late-stage spatial reprojection to further cancel prediction error. From a latency perspective, the situation is identical to the one described previously. The combo of motion reprojection + spatial reprojection gives you a smoother experience than spatial reprojection alone (here again, see my other post), but it may cost quality (motion reprojection artifacts).
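To make the "no added latency" point explicit, here is a toy schedule for the half-rate case (this is just the scheduling idea from my diagram, not WMR's actual implementation):

```cpp
#include <cstdio>

// Toy schedule for forward motion reprojection at half rate (app 45 fps, display 90 Hz).
// Every synthetic frame is derived from the newest real frame, so real frames are
// never held back and end-to-end latency does not grow.
int main() {
    int newestReal = -1;                    // index of the newest real app frame
    for (int refresh = 0; refresh < 8; ++refresh) {
        if (refresh % 2 == 0) {
            newestReal = refresh / 2;       // a real app frame just finished rendering
            std::printf("refresh %d: present real frame %d\n", refresh, newestReal);
        } else {
            std::printf("refresh %d: present synthetic frame propagated from real frame %d\n",
                        refresh, newestReal);
        }
        // (late-stage spatial reprojection would still run right before each scan-out)
    }
}
```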
Finally, we look at what "backward frame generation" could mean. This is what DLSS 3.0 seems to be doing, but given the lack of technical documentation out there, this remains a guess.
Because backward frame interpolation needs the 2 most recent frames in order to generate a frame to go in between them, we must wait until both frames are fully rendered before beginning the interpolation process. We submit the older frame for display in parallel with the interpolation. That frame, in the case of the app running at half frame rate, now has 2 extra frames of latency in terms of camera/tracking prediction, on top of the 1-frame penalty from the missed latching (see the earlier diagram). This takes us to quite a higher latency (e.g., from 51.1 ms to 73.3 ms). We are now beyond the threshold of what is acceptable. We still rely on spatial reprojection to attempt to correct the camera pose at the last moment, however due to the extra wait for the 2nd app frame, that late-stage reprojection now needs to correct an error potentially 3x higher than before (and will do a much worse job of it than before…). Actually, due to the extra frames being inserted between the 1st and 2nd app frames, the 2nd app frame has an even higher time-to-late-stage-reprojection: in the case above, it is 4 full frames. At 90 Hz that is 44.4 ms of possible error. Any acceleration, deceleration, or sudden motion in that time period will cause noticeable lag.
So yeah, great, you are getting a higher frame rate. You can see your FPS counter show those crazy numbers (like 90), but your latency doubled. This is assuming you can hit 45 FPS before even enabling DLSS at all. If you can only hit 30 FPS, then take all those numbers and add 2 more frames of latency (e.g., from 62.2 ms to 95.5 ms) and 4 more frames to the reprojection error (88.8 ms of prediction error to correct for). This means that every time the game renders an image, it does so nearly 1/10th of a second ahead of the image actually being displayed, without any way to predict with certainty the actual headset position at the time of display.
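Same back-of-the-envelope arithmetic as above, extended to the backward-interpolation cases (again, my own estimates based on the diagrams, not measured numbers):

```cpp
#include <cstdio>

int main() {
    const double frameMs = 1000.0 / 90.0;   // ~11.1 ms per display frame at 90 Hz
    const double idealMs = 40.0;            // full-frame-rate end-to-end latency

    // App at 45 fps (half rate): 1 frame lost to the missed latch, then 2 more
    // frames spent waiting for the 2nd real frame before interpolation can run.
    const double noFrameGen45 = idealMs + 1 * frameMs;        // ~51.1 ms
    const double frameGen45   = noFrameGen45 + 2 * frameMs;   // ~73.3 ms
    const double lateError45  = 4 * frameMs;                  // ~44.4 ms to correct late

    // App at 30 fps (third rate): 2 frames lost to the missed latch, 3 more frames
    // waiting for the 2nd real frame, and twice the late-reprojection error window.
    const double noFrameGen30 = idealMs + 2 * frameMs;        // ~62.2 ms
    const double frameGen30   = noFrameGen30 + 3 * frameMs;   // ~95.5 ms
    const double lateError30  = 8 * frameMs;                  // ~88.8 ms to correct late

    std::printf("45 fps app: %.1f -> %.1f ms, late correction window %.1f ms\n",
                noFrameGen45, frameGen45, lateError45);
    std::printf("30 fps app: %.1f -> %.1f ms, late correction window %.1f ms\n",
                noFrameGen30, frameGen30, lateError30);
}
```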
This is why I suspect frame generation is better for 60 fps → 120 (which I don't care about) than it is for 20 fps → 40, which is where folks with MSFS at heavy airports sometimes end up… artifacts in the generated frames will be more visible at lower fps, and input lag will literally get worse than running at the original 20 fps. ;_;
Been doing some reading on the very little we know about DLSS 3 and frame generation… I checked all the Nvidia developer websites and there is no trace of any SDK yet. But I kept digging a little bit.
So apparently "DLSSG" is sort of the terminology used to refer to DLSS with frame generation, and you can see 2 sets of DLSS DLLs (lol) in the MSFS game folder for SU11. One DLL is the plain DLSS SDK (nvngx_dlss.dll) and the other one is the Streamline SDK wrapper (sl.dlss.dll). They both have a "G" variant, nvngx_dlssg.dll and sl.dlss_g.dll, which would be the frame generation variants.
Of course it's not a simple "replace the DLL" scenario: the Streamline DLL shows a bunch of extra logic for querying and presenting the interpolated frames (things named sl.dlssg.present.{real|interpolated} and all kinds of other cute strings pointing at additional functions and/or signals), which the game would obviously need to implement.
But that takes me to the interesting part: the UserCfg.opt file has a DLSSG on/off value that has already been established to be for the 2D monitor, but it also has a DLSSGVR on/off value, presumably for VR.
Has anybody with a 4090 tried setting this DLSSGVR to 1 to see if things blow up in VR? I'm surprised, given the number of tweakers on this forum, that it hasn't been brought up yet.
(To be honest, I highly doubt this setting would actually work, but if it's there… who knows?)
Edit: I forgot to mention, but it looks like Frame generation might be DX12 only? So be sure to try while DX12 is selected!
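For anyone wanting to try: I assume the two entries are just plain on/off lines in that file, something like the below (only the key names come from the file; the exact formatting and the section they live in are a guess on my part):

```
DLSSG 1
DLSSGVR 1
```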
More technical mumbling for those interested
Based on some more analysis of the DLLs and some disassembly, this is how I'm guessing it works:
- Streamline SDK works by exposing properties to set things up, and then the app has to "tag" resources with specific markers ("hey, this is the image from the game", "hey, this is a depth buffer", "hey, this is where you should place the supersampled image output"). This is information you can find in the Streamline documentation (but none of the DLSSG-specific bits yet, of course).
- Strings analysis shows some new properties and markers related to interpolation:
  - Strings DLSSG.* correspond to input properties to the DLSS API. Strings sl.dlss_g.* correspond to Streamline properties and markers for the application to use. These are the ones we care about.
- The app must create a texture to receive the result of the frame interpolation, and tag it with sl.dlss_g.interpolated.buffer.
- The app also provides all the traditional inputs for regular DLSS (sl.dlss_g.depth.buffer, sl.dlss_g.mvec.buffer, and all the other input parameters such as the sub-pixel jitter for the frame, etc.), and also tags a destination texture to receive the supersampled (anti-aliased) non-generated image (sl.dlss_g.real.buffer).
- Upon calling the traditional DLSS "Evaluate" function, DLSS will now produce two outputs: the sl.dlss_g.real.buffer supersampled image for the input, and the sl.dlss_g.interpolated.buffer image to fit between the previous output and the current output.
- It is then the responsibility of the game to asynchronously present the two frames (interpolated then real) at the appropriate time.
That last bit is quite important, because it requires the game engine to understand the concept of an interpolated frame and to have extra logic to display that frame at the right time. If that logic is not there in the game engine, it cannot just be hacked in from the outside (in something like OpenXR Toolkit, for example).
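To make that flow concrete, here is a rough engine-side sketch of the sequence guessed above. Important: the function names (tagResource, evaluateDlssG, presentAt) are made-up stand-ins, NOT the real Streamline entry points; only the sl.dlss_g.* tag strings come from the DLLs.

```cpp
struct Texture;   // whatever the engine's GPU resource handle is

// Hypothetical stand-ins for whatever the SDK actually exposes — not real calls.
void tagResource(const char* tag, Texture* resource);
void evaluateDlssG(Texture* sceneColor);
void presentAt(Texture* frame, double displayTimeMs);

// One app frame, the way the strings suggest the flow works:
void renderOneAppFrame(Texture* sceneColor, Texture* depth, Texture* motionVectors,
                       Texture* realOut, Texture* interpolatedOut,
                       double nextVsyncMs, double displayPeriodMs)
{
    // Tag the usual DLSS inputs plus the two output textures the app created.
    tagResource("sl.dlss_g.depth.buffer",        depth);
    tagResource("sl.dlss_g.mvec.buffer",         motionVectors);
    tagResource("sl.dlss_g.real.buffer",         realOut);
    tagResource("sl.dlss_g.interpolated.buffer", interpolatedOut);

    // A single "Evaluate" now fills BOTH outputs:
    //   realOut         = supersampled version of sceneColor
    //   interpolatedOut = frame meant to sit between the previous output and this one
    evaluateDlssG(sceneColor);

    // The engine itself must pace presentation: interpolated first, real one period later.
    presentAt(interpolatedOut, nextVsyncMs);
    presentAt(realOut,         nextVsyncMs + displayPeriodMs);
}
```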
But none of this helps with solving the latency issue mentioned in my earlier post, which is still the biggest blocker in even considering frame generation for VR…
Hi Matt,
I made a post about some of this here. I was playing around with these settings, saw that DLSSGVR entry in the UserCfg.opt file, and tried it. It didn't do anything.
Oh, and yes, you are correct: it requires DX12 to be enabled, and I believe HAGS (Hardware-Accelerated GPU Scheduling) has to be on too.
haaa, I searched for "DLSSGVR" and did not really find anything.
As I said, I did not expect it to work, but it was worth the 2 minutes to try.
Yeah, I switched back to SU10. But I remember it didn't crash anything, and it didn't appear to do anything at all. (Though Frame Generation works on the primary/monitor display.) However, I also had Motion Reprojection enabled. Would this matter? If you want, I can try it out again and rejoin SU11 (and I know which folders to back up this time LOL).
@iBeej
Could you please share which folders should be backed up? Thanks!
As I said, I did not expect it to work, but it was worth the 2 minutes to try.
Yup, I just jumped back in and tried it out again. Doesn't do anything. I find it interesting that it's in the UserCfg.opt though!! I wonder if there will be an attempt to make it work. I just don't know how well it's going to go with the latency.
Yup, you want to back up the fs-base and fs-base-ui-pages folders BEFORE you opt in to SU11. If you revert back to SU10, you will need to opt out and then put those backed-up folders back, and everything will work peachy.
I wonder if it would be possible for Nvidia to rewrite the frame generation pipeline so that, in VR only, it uses forward-in-time extrapolation, the same as motion reprojection?
If so, it could possibly replace MR with a faster solution and better AI frame rendering.
Nothing knows better than the VR drivers (WMR, Oculus, etc.) what the next frame could be, thanks to the headset sensors. I will not hold my breath on Frame Generation replacing reprojection… The only thing possible is maybe interfacing the AI with the existing reprojection process for an even better predicted frame. Seems complex, IMHO.
Yeah, when we use the term "motion reprojection", what we are actually referring to is extrapolation, whereas the new frame generation feature on the 4090 is interpolation.
With extrapolation, the algorithm infers from previous frame information and draws a new frame (into the future). This comes with an unavoidable risk: some of the pixel data could be wrong when the next real frame arrives. This is especially true on the edges of geometry, which is why you see "wobbles" or shimmering on the edges of the render window, where the aircraft airframe meets the window, or on the trailing edge of the wing, etc. But the benefit of doing it this way is lower latency, which is desperately needed in VR.
With the 4090 frame generation, my understanding is that it is interpolating, which means inferring information from two real rendered frames and inserting a fake one between them. This comes with a huge downside, obviously, as the GPU needs to process 2 frames first in the frame buffer, insert the fake frame between them, and THEN draw to the display. This takes more time, thus latency, and could induce motion sickness, particularly if this is tied to head motion. The benefit here is that you get a lot more consistency and fewer artifacts or "wobble", because you have two real frames to work with.
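A toy way to picture the per-pixel difference (ignoring occlusion, disocclusion and all the genuinely hard parts; this is just the shape of the two operations, not either vendor's actual algorithm):

```cpp
struct Vec2 { float x, y; };

// Extrapolation (motion reprojection): take the pixel's position in the newest real
// frame and push it forward along its estimated motion for one display period.
// Needs only one real frame -> no added latency, but the guess can be wrong.
Vec2 extrapolatePosition(Vec2 posInLastFrame, Vec2 motionPerFrame) {
    return { posInLastFrame.x + motionPerFrame.x,
             posInLastFrame.y + motionPerFrame.y };
}

// Interpolation (frame generation): place the pixel between its positions in two
// real frames. A much safer guess, but frame N+1 must already be rendered, so frame N
// sits in a buffer waiting -> the extra latency discussed above.
Vec2 interpolatePosition(Vec2 posInFrameN, Vec2 posInFrameN1, float t /* 0..1 */) {
    return { posInFrameN.x + t * (posInFrameN1.x - posInFrameN.x),
             posInFrameN.y + t * (posInFrameN1.y - posInFrameN.y) };
}
```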
Do with that what you will, but it sounds like there is a real technical hurdle here as it pertains to latency.
Looks like the new tech is motion reprojection of motion-reprojection frames, not of the real ones ;).
I bought an RTX 4090 because I thought you could use DLSS 3 in VR. What a letdown.
I'm also wondering: are there any REAL flight-simmers out there who use a monitor while playing this game?