Nvidia Image Scaling (NIS) and VR

Going from 25 to 28 FPS is still a 12% gain ^^ a lot of people would take 12% more FPS ^^
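For reference, the arithmetic behind that figure, as a trivial Python check:

```python
def fps_gain_percent(before: float, after: float) -> float:
    """Relative FPS increase, in percent."""
    return (after - before) / before * 100

print(fps_gain_percent(25, 28))  # prints 12.0
```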

2D monitor, you mean without VR but with NIS enabled in the driver? The performance impact will certainly be different, since VR requires two passes of NIS (left eye, then right eye), so there is more overhead.

When you press Ctrl+F1, you are switching between a bilinear (cheap, blurry) scaler and the NIS (sharp) scaler. It makes sense that “the sharpest gives the lowest FPS”, since that mode is using NIS, the more expensive of the two.

This Ctrl + F1 shortcut is only meant for testing purposes so you can compare whether NIS gives better visual quality for the same scale. If the quality is not better with NIS, then you should consider not using it (but it doesn’t look to be your case).

I’m still trying to find a sweet spot for quality/perf and to decide whether I go with lower resolution and motion reprojection, or no reprojection and higher res (the overhead kills my PC).

NIS looks really bad on glass cockpits, but I found that with 65% internal scaling, 200% WMR, and these settings, both interior and exterior look almost as good as without NIS, and I gain 15 FPS in the 99th percentile range.
scaling=0.77
sharpness=0.55
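To put the scaling value in perspective, here is a small Python sketch of what a 0.77 per-dimension scale does to the render target (the 2160×2160 per-eye resolution here is an assumption, roughly a Reverb G2 panel):

```python
def scaled_target(width: int, height: int, scale: float) -> tuple[int, int, float]:
    """Return the lower render resolution NIS upscales from, plus the pixel fraction."""
    w, h = round(width * scale), round(height * scale)
    return w, h, (w * h) / (width * height)

w, h, frac = scaled_target(2160, 2160, 0.77)
# a 0.77 per-dimension scale means the game shades only ~59% of the pixels
```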

3 Likes

OK, I understand it now. The sharper image is definitely better. But I still have to test what the framerate is without the NIS API layer. I will report back.

edit:
In normal mode (without the API layer) my framerate in VR is ~18.
(In my first reply I thought Ctrl+F1 turned the API layer on/off…)

With the API layer and 0.7 scaling: 25 FPS.
With the API layer and 0.6 scaling: 33 FPS (but the image is unclear and visibly low resolution).
Sharpness 0.4 with 0.7 scaling gives me the best result and a great FPS boost.

Good work. THANKS

1 Like

There are some best practices, as with any post-processing effect, so only direct integration into the game would produce the best result. This “NIS enabled for all apps” approach, whether it’s the driver setting or my OpenXR layer, is super convenient, but it can only be a best-effort external integration.

Section 3.1 of the NIS SDK documentation talks about this, but that optimal placement is only possible when done inside the game. Same for proper HDR support.

Thanks for sharing your results!

Thank you so much for your work @mbucchia ! Really awesome to see some progress with VR :slight_smile: It works right now and soon the testing will start :smile:

Would it be difficult to make the same for DLSS, too? I’d think that the “flow” would be the same just with different API calls? Just thinking out loud here…

Thanks once again, looking forward to seeing it progress!
Stephan

Thanks for working on this! Looking forward to trying it on the Quest 2; I like to run via SteamVR and Virtual Desktop.

I guess we should first disable the built-in sharpening in the MSFS UserCfg? Although I am unsure this actually does anything!

Man, you’re unstoppable! Leap Motion project, and now this!

I will check it out, but from your information I can make a couple of suggestions. I’m using a similar API layer for OpenVR (not OpenXR) in Elite Dangerous (a space sim), called OpenVR FSR. It applies either FSR (from AMD) or NIS (from Nvidia) on any GPU; which one is set in a config file.

What’s quite smart about it is the RADIUS setting: inside the central area it defines, the layer uses FSR/NIS, but toward the edges it falls back to low-cost bilinear scaling. I find this setting the best, because all current HMDs (maybe with the exception of the niche Varjo) have lenses that are not sharp around the edges, so the resource-intensive NIS is wasted there. I found that it works better with FSR, because with NIS I could see the circle boundary due to the bad AA in Elite Dangerous; it’s much sharper inside the circle. FSR somehow looks much more uniform. But with MSFS, which has TAA, it may work better with NIS while saving resources, so you can get by with a higher-quality central area in the end.

Here’s a quote from its readme file with the relevant info. You may want to try this approach if you continue with this project; the round-area option may be a good idea. Also, I suggest people try the following values for quality as good starting points. I use 0.77 in ED.

//per-dimension render scale. If <1 will lower the game's render resolution

// accordingly and afterwards upscale to the "native" resolution set in SteamVR.
// If >1, the game will render at its "native" resolution, and afterwards the
// image is upscaled to a higher resolution as per the given value.
// If =1, effectively disables upsampling, but you'll still get the sharpening stage.
// AMD presets:
//   Ultra Quality => 0.77
//   Quality       => 0.67
//   Balanced      => 0.59
//   Performance   => 0.50
"renderScale": 0.77,

// tune sharpness, values range from 0 to 1
"sharpness": 0.9,

// Only apply FSR/NIS to the given radius around the center of the image.
// Anything outside this radius is upscaled by simple bilinear filtering,
// which is cheaper and thus saves a bit of performance. Due to the design
// of current HMD lenses, you can experiment with fairly small radii and may
// still not see a noticeable difference.
// Sensible values probably lie somewhere between [0.2, 1.0]. However, note
// that, since the image is not spheric, even a value of 1.0 technically still
// skips some pixels in the corner of the image, so if you want to completely
// disable this optimization, you can choose a value of 2.
// IMPORTANT: if you face issues like the view appearing offset or mismatched
// between the eyes, turn this optimization off by setting the value to 2.0
"radius": 0.4,

// if enabled, applies a negative LOD bias to texture MIP levels
// should theoretically improve texture detail in the upscaled image
// IMPORTANT: if you experience issues with rendering like disappearing
// textures or strange patterns in the rendering, try turning this off
// by setting the value to false.
"applyMIPBias": true,'

Thanks for asking.

Short answer:

It’s possible to support DLSS using a similar concept (OpenXR API layer).
It’s significantly more difficult to support DLSS.
It’s impossible for me to support DLSS.

Long answer:

Let me give you the backstory on this NIS integration project.

It all started when someone on this very forum asked “can we have the same thing as the OpenXR custom render scale, but for FOV?” I had two thoughts about this:

  1. I’m not sure this is going to make a real difference. But if I’m wrong we could get a small-medium improvement out of it. Risk/reward = Medium.
  2. The implementation to do that will take me less than 3 hours. Difficulty = Very easy.

So I ended up doing it, and a few people tested it, and we figured out that it wasn’t worth it. So I spent 3 hours doing something we’re not going to use, but it was not wasted, because I learned the basics of making an OpenXR layer.

Then someone (same person, actually!) asked about FSR support. I looked at the FSR SDK for about 10 minutes, and my conclusions were:

  1. I’m not sure I can make it work. But if it does it could be awesome. Risk/reward = Very high.
  2. Based on the example code, the implementation isn’t completely trivial, it would probably take me a few days to get it right. Difficulty = Medium.

I’m a busy guy, so I decided that I would not have time to do it now. So I basically tabled it.

Then I saw this thread and the excitement. I looked at the NIS SDK for about 10 minutes and concluded:

  1. I’m equally not sure I can make it work. And it could be awesome. Risk/reward = Very high.
  2. Based on the example code, the integration would be super easy. It’s stateless, and the NVIDIA devs provided awesome DX11 sample code that I think I can integrate as-is. It would probably take me a few hours to get it right. Difficulty = Easy.

So I decided to go for it given that it would only be a small time investment and if it failed, I would not have spent too much time on this.

Now for DLSS, I also looked at the SDK and told myself:

  1. I’m equally not sure I can make it work. And it could be awesome. Risk/reward = Very high.
  2. Phew, there’s a lot of stuff to do and code to write. This is stateful software (meaning you keep data around from one frame to the next). This will probably take a few weeks. Difficulty = Hard.
  3. Oh wait, I don’t even have a compatible GPU to develop this! Showstopper

So yes, I think it is totally possible to support DLSS with an OpenXR API layer, but it’s a much more complex effort that I don’t have time for, and I would also not be able to develop it without RTX hardware.
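To illustrate the stateless vs. stateful distinction, here is a conceptual Python sketch (not actual SDK code; the class and function names are made up):

```python
# Stateless (NIS-style): each frame is processed independently, so an
# external layer can simply intercept the image and filter it in place.
def spatial_upscale(frame: list[float]) -> list[float]:
    return [p * 1.0 for p in frame]  # stand-in for the NIS filter pass (identity here)

# Stateful (DLSS-style): the output depends on history (previous frames,
# motion vectors), which the layer must create, track, and keep valid
# from one frame to the next.
class TemporalUpscaler:
    def __init__(self) -> None:
        self.history: list[float] | None = None

    def upscale(self, frame: list[float]) -> list[float]:
        if self.history is None:
            self.history = frame
        # stand-in: blend the new frame with the accumulated history
        out = [0.8 * f + 0.2 * h for f, h in zip(frame, self.history)]
        self.history = out
        return out
```

The second shape is what makes an external integration so much harder: the layer has to own resources that outlive a single frame.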

1 Like

I actually started this NIS effort a day after Leap Motion. But then I hit some issues with NIS and paused :slight_smile: Nonetheless, what I learned from working on NIS at the time was that I could do graphics rendering on top of the app, which I ended up using for the Leap Motion support to draw the hands in the game.

Then with a bit more experience from the hands rendering, I got back to this NIS project and actually got it to work. It’s all connected my friend!

I definitely looked at openvr_fsr, and it certainly reinforced my belief that “it’s possible to support FSR/NIS”.

This RADIUS feature sounds quite interesting. It would take a non-trivial amount of work to make it happen. But even before that, we’d need to evaluate where the time is spent: does the NIS shader really take long enough to execute that this optimization is worth it? To be answered.

For now I’m going to just see based on the feedback where this is going to go, and not think about other improvements quite yet!

1 Like

As mentioned earlier, there’s currently an issue with SteamVR and it won’t work unless you use WMR. Going to look into it in a few hours and hopefully I can patch it this weekend (no promises).

2 Likes

Sure, that’s what I meant. Obviously it’s not worth spending time on that radius until the entire concept is proven to work. If NIS is confirmed to give a solid performance increase with negligible visual quality loss, then adding a radius setting will only improve on that.

1 Like

Thanks for your detailed answer :pray: I understand the points you explained and man, I really appreciate your work and time. Your layer works awesome!

1 Like

I haven’t quite gotten this to work yet (it’s not going into VR, even though the log file suggests it should be); more testing to be done…

However, regarding the install script: it works fine (on Win11 at least) as long as the registry key tree exists. On my machine, I did not have any keys in the tree below “khronos/openxr/1/”… After creating the “ApiLayer/Implicit” structure, the install/uninstall scripts appear to work.

Not sure if there is a way to force the key creation in the PowerShell scripts.

What headset are you using? So far it seems to only work with WMR but I am starting to look into SteamVR as we speak.

Thank you this is awesome feedback!!
I had this script written differently before and it did not need to create the intermediate key. So I broke it along the way and need to fix that.

Excellent work, Matthieu!

I’ve tested your layer on two different rigs with the following results:

  1. Machine: 11900K, 3090 OC, 32GB CL16 3600, NVMe, G2
  • Before: everything (!) on ultra/maxed, 100% OXR, 80% TAA → 30 FPS (stable)
  • After: everything on ultra/maxed, 100% OXR, 80% TAA → 45 FPS (stable) - WOW, what an increase!
  • After: everything on ultra/maxed, 100% OXR, 100% TAA → 35 FPS (stable) - more than I have dreamed of!

I’ve tried different combinations of scaling/sharpness in the layer and found a sweet spot somewhere around 80/30. With these settings I can see absolutely no decrease in optical quality; it even feels a bit “clearer” (maybe sharper?).

Best experience I’ve had so far!

  2. Machine: 10700F, 3060Ti, 16GB, NVMe, G2
  • Before: most settings on medium, some high, few ultra, LOD 300/200, 100% OXR, 70% TAA → 28 FPS (stable)
  • After: same settings → 36 FPS (stable) - wonderful!

The layer’s sweet spot seems to be 70/30 here, again with no decrease in quality.
You made my father very happy as well :wink:

Thank you very much for a great piece of software!

2 Likes

Forgot to mention:
On machine 1 I had no problems installing the script, whereas I had to follow @tykey6’s and @aeg9748’s advice on machine 2 to get it to run.

If I recall, you had played with the FOV modifier stuff I published earlier, so per the theory in the comment above, you already had the parent registry keys created by my earlier script.

G2 w/ WMR (windows store version)

When I try to enter VR, I get a quick flash of the screen, but it doesn’t enter VR (though it looks like it calculated the scaling, per the log file). After that, hitting Ctrl+Tab (or my shortcut combo on my Alpha yoke) doesn’t appear to do anything (even if I uninstall the API layer). I need to restart the sim to then go into VR (without the API layer running).

Are you sure your OpenXR runtime is set to WMR and not SteamVR? Can you run the OpenXR Developer Tools for WMR to check?