OpenXR Toolkit (upscaling, world scale, hand tracking...) - Release thread

It’s been a while since I’ve shared an update on one of my random topics, so here is one now.

Back in October last year, I started thinking about DLSS Frame Generation (or DLSS FG; please call it that and not DLSS 3, because that name is confusing!) and how to use it in VR.

I wrote a post here explaining the potential challenges with latency. Back then, Nvidia had not released any developer interface for DLSS FG, so there wasn’t much to do. But I never take “it shouldn’t work” for an answer, so I had always planned to revisit all of this at a later time.

Fast forward to March of this year: Nvidia finally releases a public SDK for DLSS FG. However, the disappointment is immediate: the interface only supports flatscreen, so there is no way to even experiment with it in VR. At that time my plate is pretty full with DCS foveated rendering anyway, and it stayed full until 3 weeks ago.

Now to the beginning of August. I finally have a little more time, and after many months of on-and-off thinking about how to build an “asynchronous compositor”, which would be a building block for any experimental motion reprojection effort, I am able to focus on it. Nvidia still has not produced a proper SDK for DLSS FG, in spite of 3 requests from me, so I decided to go the really hacky route and reverse-engineer how MSFS does DLSS FG for flatscreen.
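
To make the idea concrete, here is a minimal sketch of what I mean by an “asynchronous compositor”: the game renders at whatever pace it can, while a dedicated thread latches the most recent complete frame and submits to the headset at a fixed cadence. All of the types and the submit call below are stand-ins for illustration, not the actual OpenXR Toolkit code:

```cpp
// Sketch only: the app renders at its own pace, while a dedicated thread
// latches the most recent frame and submits at the headset's cadence.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <thread>

struct Frame {
    uint64_t id = 0;  // placeholder for color/depth/motion textures
};

std::mutex g_mutex;
Frame g_latest;                  // last complete frame from the app
std::atomic<bool> g_running{true};

// Stand-in for handing a frame (real or generated) to the VR runtime.
void submitToRuntime(const Frame& frame, bool generated) {
    std::printf("submit frame %llu%s\n",
                (unsigned long long)frame.id, generated ? " (generated)" : "");
}

void appRenderThread() {
    for (uint64_t id = 1; id <= 10; id++) {
        // Simulate a game that only manages ~45 fps.
        std::this_thread::sleep_for(std::chrono::milliseconds(22));
        std::lock_guard<std::mutex> lock(g_mutex);
        g_latest = Frame{id};
    }
    g_running = false;
}

void compositorThread() {
    uint64_t lastSeen = 0;
    while (g_running) {
        // Run at the headset's ~90 Hz cadence, independently of the app.
        std::this_thread::sleep_for(std::chrono::milliseconds(11));
        Frame frame;
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            frame = g_latest;
        }
        if (frame.id == 0) continue;
        // If the app did not produce a new frame in time, this is where
        // frame generation (or motion reprojection) would synthesize one.
        submitToRuntime(frame, /*generated=*/frame.id == lastSeen);
        lastSeen = frame.id;
    }
}

int main() {
    std::thread app(appRenderThread);
    std::thread compositor(compositorThread);
    app.join();
    compositor.join();
}
```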

After a few days (and nights) of intense work, I’m pretty much there: I have an asynchronous compositor, I have a way to run DLSS FG through it for both eyes (pretty important for VR!), and I am able to leverage the mysterious “Depth and motion” option in the MSFS VR settings to feed everything that DLSS Frame Generation needs. But don’t get excited until you read what’s next.
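
For the curious, this is roughly the shape of the per-eye plumbing. Since Nvidia has not published a frame generation interface, every type and the evaluateFrameGeneration() call below are made-up stand-ins, not the real API:

```cpp
// Illustration only: all types and evaluateFrameGeneration() are
// hypothetical, since Nvidia has not published the real interface.
#include <cstdio>

struct Texture {};  // placeholder for a D3D texture
struct Matrix {};   // placeholder for a 4x4 transform

struct EyeInputs {
    Texture* color;          // rendered eye image from the game
    Texture* depth;          // from the MSFS "Depth and motion" option
    Texture* motionVectors;  // ditto
    Matrix cameraMotion;     // per-eye camera motion between frames
};

// Hypothetical entry point standing in for the DLSS FG evaluation.
Texture* evaluateFrameGeneration(const EyeInputs& in) {
    (void)in;
    static Texture out;
    return &out;
}

int main() {
    Texture color[2], depth[2], motion[2];
    EyeInputs inputs[2] = {
        {&color[0], &depth[0], &motion[0], Matrix{}},
        {&color[1], &depth[1], &motion[1], Matrix{}},
    };

    // Frame generation has no notion of stereo: it simply runs twice,
    // once per eye, which is also why the cost doubles in VR.
    for (int eye = 0; eye < 2; eye++) {
        Texture* generated = evaluateFrameGeneration(inputs[eye]);
        std::printf("eye %d: generated frame at %p\n", eye, (void*)generated);
    }
}
```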

After a few days of experimenting with this, my findings aren’t very encouraging.

Let’s start with the good news: latency does not seem to be that big of an issue, or at least it is not breaking the experience right now.

Now the bad news: the performance of the DLSS FG algorithm itself is quite disappointing. With the numbers I am measuring so far, it is not usable above roughly 2K per eye if you are hoping to achieve 90 Hz, where the entire frame budget is only about 11.1 ms (so forget about your G2, Aero or Crystal…).

That last point is obviously a big blow. The whole point of this tech would be to give your high-end headset a chance at the frame rate it needs. But when you have 2 eyes to process, running the frame generation pass once per eye is just too much.
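
Some back-of-the-envelope numbers behind that statement. The per-eye resolutions below are the headsets’ native panel values; the point is simply how quickly the pixel count (and therefore the cost of an FG pass that runs once per eye) outgrows the 90 Hz budget:

```cpp
// Back-of-the-envelope math for the paragraph above. Resolutions are the
// headsets' native per-eye panel values; no DLSS FG timings are included
// here because Nvidia publishes none for this scenario.
#include <cstdio>

int main() {
    const double budgetMs = 1000.0 / 90.0;  // ~11.1 ms per frame at 90 Hz
    std::printf("frame budget at 90 Hz: %.1f ms\n", budgetMs);

    struct Headset { const char* name; int w, h; };
    const Headset headsets[] = {
        {"~2K per eye",   2048, 2048},  // roughly where FG stops being usable
        {"Reverb G2",     2160, 2160},
        {"Varjo Aero",    2880, 2720},
        {"Pimax Crystal", 2880, 2880},
    };

    for (const Headset& hs : headsets) {
        const double mp = hs.w * (double)hs.h / 1e6;
        // The FG pass runs once per eye, so its cost at this resolution
        // has to fit twice in the budget, alongside the game's rendering.
        std::printf("%-14s %.1f MP/eye (x2 eyes = %.1f MP per frame)\n",
                    hs.name, mp, 2 * mp);
    }
}
```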

So at this point, I’m not quite sure anything very positive will come out of this. There are a few other issues I have not solved, such as scheduling interference and some blurriness of the image, but I’m not even convinced they are worth looking into unless I have an epiphany on the performance issue above. But we will see.

One could say an option would be to re-implement Super Resolution after Frame Generation. Yes, that would be possible, but the amount of effort needed is tremendous, and it would produce worse image quality than Super Resolution as implemented in MSFS itself, which from what I can tell already doesn’t make everyone happy today.

Nvidia has been of no help to me through this process. They have refused to assist with anything I’ve asked them so far. Their “developer support” has provided absolutely 0 support.

With FSR Fluid Motion announced earlier, I will probably still revisit these findings later (but who knows when it will be available to developers; look at Nvidia’s example: it’s been nearly a year and they still have not delivered…).
