SU14 and Reprojection Mode: What's really going on?

I’m a Reverb G2 (revised) owner with a 3080 Ti, both of which I’ve had for over two years. One VR feature that was in the limelight a year or so ago was Motion Reprojection (MR), which was originally only available in the OXRTK. At that time I tried MR, but the jiggling visual artifacts it produced were unacceptable, so I set MR aside and sort of forgot about it. Later MSFS added a VR graphics option to enable “Reprojection Mode”. IIRC I tried it at the time and experienced the same jiggling artifacts, so again I ‘forgot about it’. But yesterday I decided to give Reprojection Mode another try with SU14 and discovered that the artifacts are gone. It also appears to be injecting generated frames, but when I reread https://forums.flightsimulator.com/t/motion-reprojection-explained/548659, it seems that what MSFS calls Reprojection Mode (Depth) is not exactly a full Motion Reprojection implementation, since the FPS isn’t being throttled to 30/45 by this setting. If I enable MR exclusively in the OXRTK, the throttling occurs and the artifacts are no longer present. Though my base FPS in VR is 55-60, MSFS is always throttled to 30 FPS by the OXRTK’s MR. The bottom line of all this:

  1. Reprojection Mode/MR no longer produces jiggling artifacts
  2. Is Reprojection Mode (depth) really doing any sort of frame injection? Obviously it behaves differently than OXRTK’s MR.

I’d consider it a rather cruel hoax for MSFS to add that option if it wasn’t tied to actual reprojection functionality at the time the option appeared. MSFS does include a note that only Depth may be implemented by the HMD vendor. Any thoughts on any of this? Thanks for reading.

You can manage the MR settings in OpenXR Tools for Windows Mixed Reality (not to be confused with OpenXR Toolkit). There you can enable an info box, visible in VR, that tells you whether you are actually in Motion Reprojection mode.
AFAIK there is only one Motion Reprojection mode in WMR/G2, enabled in OpenXR Tools for Windows Mixed Reality. There is no such thing as “OpenXR Toolkit Motion Reprojection”.
The MR settings in MSFS can only potentially improve the behavior of Motion Reprojection, and whether that actually happens with WMR I don’t know. See the details in the post by Matthieu Bucchianeri which you linked in your post.
Quote:
The “Depth”-only setting enables the game to pass depth information which is used on WMR for better spatial reprojection (when motion reprojection is off - this setting has no effect when motion reprojection is used). I can’t speak for other vendors, but I suspect they can also do better spatial reprojection with this setting.
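
For context on what “passing depth information” means mechanically: in OpenXR the application chains a depth sub-image onto each projection layer view when it submits the frame, using the standard XR_KHR_composition_layer_depth extension. Here is a minimal sketch (the helper function and its parameters are just for illustration, not MSFS or runtime code), assuming the extension was enabled at instance creation:

```c
#include <openxr/openxr.h>

/* Hypothetical helper: attach depth info to one projection view before xrEndFrame.
 * Requires XR_KHR_composition_layer_depth to have been enabled on the instance. */
void attach_depth(XrCompositionLayerProjectionView* view,
                  XrSwapchain depthSwapchain,
                  int32_t width, int32_t height,
                  float nearZ, float farZ)
{
    /* static so it stays valid until xrEndFrame; a real app would keep one per view. */
    static XrCompositionLayerDepthInfoKHR depthInfo;
    depthInfo.type = XR_TYPE_COMPOSITION_LAYER_DEPTH_INFO_KHR;
    depthInfo.next = NULL;
    depthInfo.subImage.swapchain = depthSwapchain;           /* where the depth lives */
    depthInfo.subImage.imageRect.offset.x = 0;
    depthInfo.subImage.imageRect.offset.y = 0;
    depthInfo.subImage.imageRect.extent.width = width;
    depthInfo.subImage.imageRect.extent.height = height;
    depthInfo.subImage.imageArrayIndex = 0;
    depthInfo.minDepth = 0.0f;   /* depth range as stored in the swapchain */
    depthInfo.maxDepth = 1.0f;
    depthInfo.nearZ = nearZ;     /* near/far planes used by the projection */
    depthInfo.farZ = farZ;
    /* The runtime only sees the depth if it is chained onto the projection view. */
    view->next = &depthInfo;
}
```

Whether the runtime then uses that depth for better spatial reprojection is up to the vendor, which is exactly the caveat in the quote above.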

I would love to see your settings if you reach 55-60 FPS with RTX 3080 Ti and Reverb G2.
I’m far from those numbers.
Care to share?

1 Like

Well, I’m sort of cheating to get 55-60 fps. One setting gets me there: DLSS Ultra Performance. Usually I use DLSS Balanced, but for testing OXRTK’s MR I wanted to see if I could get it to throttle MSFS at 45. With DLSS Balanced I get 45-50 fps with the settings mix I use, which is otherwise identical to what I use with Ultra Performance.

There is no such thing as “OXRTK Motion Reprojection”. See my post above. What exactly do you have in mind?

OXRTK is shorthand for “OpenXR Toolkit”. On its System tab (available only in its VR overlay menu), Motion Reprojection can be enabled, and it works. It seems you didn’t know what I was referring to.

I found that if the OXRTK is active (i.e. not disabled), it prevents the MS toolkit from enabling MR, regardless of whether the OXRTK has MR enabled or disabled. When MR is enabled via the MS toolkit, it produces the jiggly artifacts, as does MR enabled via OXRTK (didn’t notice that earlier). Still rather baffled by MSFS’s Reprojection Mode (Depth). It’s clearly doing something, as evidenced by the jiggling edges of 2D panels, yet the frame rate is not being throttled. I can’t find an external FPS tool that works with MSFS; having that info would be the ‘smoking gun’ I’m looking for.

OXRTK doesn’t introduce a new kind of MR. It just lets you control the WMR MR modes in an easier way, without having to set them each time in OpenXR Tools for Windows Mixed Reality.
It is extremely easy to tell if MR is working - you can immediately spot the difference when frames are injected up to the full 90 Hz; everything becomes fluid, especially when you pan your head. And you may see the “jelly” artifacts, especially through the spinning prop arc.
If MR is working, the FPS is also the headset refresh rate divided by (in the WMR case) 2, 3 or 4 (for 90 Hz that means 45, 30 or 22.5 FPS). In OXRTK you can lock it to a given fraction, avoiding disruption when the system decides the scene is simple enough to step up to the next FPS level or complex enough to force it down; you can deliberately stay at the lower level to avoid such jumps.
If you want to know for sure which MR mode you are in (and whether it is actually injecting frames), enable the colorful info box in OpenXR Tools for Windows Mixed Reality (check Display Frame Timing Overlay).

As for what depth mode in MSFS is doing - see my post above with the quote from Matthieu Bucchianeri’s post. It’s not another, independent MR mode. It’s a function that allows MSFS to pass additional information to the VR runtime (WMR in the G2’s case) to improve the MR processing. If MR is not enabled, “depth mode” does nothing, and there is no reason to look for any FPS limiter.

See the manual:

Quote:
System tab:
(…)

  • Motion reprojection (only with Windows Mixed Reality): Enable overriding the Motion Reprojection mode. Default means to use the system settings (from the OpenXR Tools for Windows Mixed Reality).
    • Lock motion reprojection (only with Windows Mixed Reality, when Motion Reprojection is forced to On): Disable automatic motion reprojection adjustment, and lock the frame rate to the desired fraction of the refresh rate.

stekusteku already gave you all the important references.

  • OpenXR Toolkit Motion Reprojection settings are just a shortcut to OpenXR for Windows Mixed Reality Motion Reprojection settings

  • “Reprojection mode” in MSFS currently does not relate to Motion Reprojection at all and submitting depth has no impact on Motion Reprojection with Windows Mixed Reality (and probably all other vendors too)

  • The other form of reprojection, which I called Spatial Reprojection in my other post, is always on regardless of any setting. That form of reprojection is only useful for compensating for movements of the headset, and it does not account for motion in the scene (which is what Motion Reprojection does)

  • Your headset is always generating frames when you don’t hit frame rate, either through the use of Spatial Reprojection alone, or through a combination of Spatial and Motion Reprojection together (when enabled)

  • Without Motion Reprojection, your head movements will still feel smooth at a lower frame rate thanks to the always-on Spatial Reprojection, but the overall scene still looks juddery. With Motion Reprojection, both head movements and scene motion are propagated into the future, creating a truly smooth experience.

  • The “Reprojection mode” in MSFS only provides additional information useful for Spatial Reprojection, in order to correct the perspective of the reprojected frames even more.

  • Motion Reprojection always comes at the cost of quality, since Motion Reprojection is about predicting the future, a science that is very difficult to nail down.

  • There are many factors that affect the quality of Motion Reprojection, such as your GPU and overall system performance, but also, greatly, the actual content (some content is much harder to predict than others, in particular fast motion, or things with transparency, or UI). There’s not much you can do about that.

5 Likes

Thanks for the clarification; I now understand what’s going on. Your feedback is much appreciated. There’s no doubt some of this is a bit over my head.

1 Like

I find this behavior very interesting in that it could noticeably affect the smoothness of the VR experience in MSFS. The following is 100% conjecture, since I know nothing about the inner workings of an HMD system, so please bear with me.

Considering that “always generating frames when you don’t hit frame rate” implies that the VR system attempts to enforce a constant 11.111 ms screen refresh cycle for a 90 Hz HMD (e.g. Reverb G2), an application that delivers a new frame every 20 ms (50 fps) probably causes judder (for lack of a better word) in the animated display. Here are some scenarios:

“best effort”
0 ms - first frame from app displayed
11.111 - generated frame displayed
20 ms - app frame displayed
31.111 ms (or 33.333?) - generated frame displayed
40 ms - app frame displayed
51.111 (or 55.556?) - generated frame displayed

80 ms - app frame displayed
91.111 ms (or 100, rounded up, which resyncs and uses the next app frame) - generated frame displayed (?)

100 ms - app frame effectively resyncs the cycle
111.111 ms - generated frame

If “best effort” is happening, either the HMD is effectively running at greater than 90 fps (because each generated frame’s display time is 8.889 ms rather than 11.111 ms), or the time an app frame stays on screen grows longer and longer while the display time of the generated frame shrinks until the resync occurs. Both of these cases, I imagine, result in judder, the second more so.

“enforce 90 fps”
0 ms - first frame from app
11.111 - generated frame displayed
20 ms - frame received from app
22.222 - app frame displayed (2.222 ms latency)
33.333 - generated frame displayed
40 ms - frame received from app
44.444 - app frame displayed (4.444 ms latency)
55.555 - generated frame displayed

80 ms - frame received from app
88.889 - app frame displayed (8.889 ms latency)
100 ms - frame received and displayed from app, resyncing the system
111.111 - generated frame displayed

If “enforce 90 fps” is happening, the app’s frame display latency grows and grows until the resync. I think this certainly results in judder.

These are the only two (really three) cases I can think of. Both are no-win in that they’ll cause judder due to the skewing of the display timing of the app frames and the generated frames. If any of these cases is indeed happening, a remedy for this VR judder problem is easy: either run at 90 fps (dream on), or at a whole fraction thereof, e.g. 45 or 30 (15… no way!). Obviously higher is better, so via the OXRTK I now frame-limit (throttle) MSFS to 45 fps, and at least to me the judder seems lesser or nonexistent. BTW, I lower a lot of settings in IL-2 Great Battles to get 90 fps. IMHO VR in that combat sim becomes a mess at <90, so it’s definitely worth the lowered image quality (as it also is in MSFS to get a solid 45 fps in VR).

Any thoughts on this?

Thanks for your attention.

This is how displays work. They have a scheduled scan-out at a regular interval programmed in the hardware; there is no way around it.

No, this isn’t how it works. You can only display at 11.111, 22.222, 33.333 ms… So your frame at 20.0 ms is “latched” for later display. Then at 22.222 ms the frame is spatially reprojected with the latest headset pose to reduce the juddering.

Yes, this is more like it. Except that the latency typically won’t accumulate as you’re saying. This is because the VR stack will possibly make you “sleep”/wait in order to reduce the latency and avoid accumulation.

Unfortunately, the strategies to avoid this accumulation may be more or less well implemented. You are on WMR, so you might be familiar with “Turbo mode” or “Prefer framerate over latency”. What these options do is literally skip any sleep/wait and pump frames as quickly as possible.

With what I told you above, you can now go and compare with/without Turbo mode. Tell me which one you prefer. Turbo on WMR has historically helped increase framerates dramatically, sometimes by double digits, at the cost of this added/accumulated latency. IMO it works a lot better when you don’t worry about that accumulation.

You got it. This is why locking at 45 FPS feels better than running at 50-55 FPS. You are making the frame pace and the amount of spatial reprojection consistent for every frame.
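
To make the latching behavior concrete, here is a toy model (my own sketch, not anything from the WMR runtime), assuming a 90 Hz panel, an app frame being latched at the next scan-out slot after it finishes, and no sleeping or throttling from the stack:

```c
#include <math.h>
#include <stdio.h>

/* Print when each app frame becomes ready, when it is latched for scan-out,
 * and the resulting latency, for a given app frame interval. */
static void simulate(const char* label, double frame_interval_ms)
{
    const double scanout_ms = 1000.0 / 90.0;   /* 11.111 ms display slots */
    for (int i = 0; i < 6; i++) {
        double ready = i * frame_interval_ms;  /* app finishes rendering here */
        /* Latch at the next slot; the small epsilon guards against floating-point
         * rounding when a frame lands exactly on a slot boundary. */
        double latch = ceil(ready / scanout_ms - 1e-9) * scanout_ms;
        printf("%s frame %d: ready %6.2f ms, displayed %6.2f ms, latency %5.2f ms\n",
               label, i, ready, latch, latch - ready);
    }
}

int main(void)
{
    simulate("50 fps", 1000.0 / 50.0);  /* latency drifts: 0.0, 2.2, 4.4, 6.7, 8.9, 0.0 */
    simulate("45 fps", 1000.0 / 45.0);  /* every frame lands on a slot: latency stays 0 */
    return 0;
}
```

At a 20 ms app frame time the latency drifts up and then snaps back, which is the inconsistent pacing being described; at exactly 22.2 ms (45 fps) every frame lands on a slot and the pace stays constant.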

EDIT: Here is another diagram I made a while ago about spatial reprojection (LSR):

  • The application is not able to keep up with the headset’s panel frame rate
  • For each missed frame latching, the late-stage reprojection (LSR) reuses the last submitted frame
  • The LSR queries the most recent tracking information
  • The LSR performs a simple depth reprojection (when the application submits depth information) or an auto-planar reprojection (when the application does not provide depth information) - a rough single-pixel sketch of the depth case follows this list
    • Almost no PCVR application provides depth information to the runtime
  • The reprojected frame is used for scan-out
  • When the application is finally able to submit a new frame, it has to wait for the next latching opportunity, therefore adding latency
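
Here is a rough single-pixel sketch of that depth reprojection step (my own simplification of the idea, not the actual LSR code): take the depth the application rendered, reconstruct where that pixel sat in space, and project it again with the newest head pose.

```c
/* Toy depth reprojection of one pixel (illustrative only).
 * Matrices are column-major 4x4; mat4_mul_vec4 is a local helper, not a runtime API. */
typedef struct { float x, y, z, w; } Vec4;
typedef struct { float m[16]; } Mat4;

static Vec4 mat4_mul_vec4(const Mat4* a, Vec4 v)
{
    Vec4 r;
    r.x = a->m[0] * v.x + a->m[4] * v.y + a->m[8]  * v.z + a->m[12] * v.w;
    r.y = a->m[1] * v.x + a->m[5] * v.y + a->m[9]  * v.z + a->m[13] * v.w;
    r.z = a->m[2] * v.x + a->m[6] * v.y + a->m[10] * v.z + a->m[14] * v.w;
    r.w = a->m[3] * v.x + a->m[7] * v.y + a->m[11] * v.z + a->m[15] * v.w;
    return r;
}

/* (u, v, depth) is the pixel in the old frame's normalized device coordinates.
 * invOldViewProj is the inverse of the view-projection the frame was rendered with;
 * newViewProj uses the most recent tracking pose. */
static Vec4 reproject_pixel(float u, float v, float depth,
                            const Mat4* invOldViewProj, const Mat4* newViewProj)
{
    /* Reconstruct the point this pixel saw when the frame was rendered. */
    Vec4 ndc = { u, v, depth, 1.0f };
    Vec4 world = mat4_mul_vec4(invOldViewProj, ndc);
    world.x /= world.w; world.y /= world.w; world.z /= world.w; world.w = 1.0f;

    /* Re-project it with the latest head pose: this is where the pixel should land
     * on the synthesized frame. Without real depth, the runtime has to assume a
     * plane here instead, which is the auto-planar fallback. */
    Vec4 clip = mat4_mul_vec4(newViewProj, world);
    clip.x /= clip.w; clip.y /= clip.w; clip.z /= clip.w;
    return clip;
}
```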
3 Likes

Wow, that was a quick response! And yet again, incredibly helpful info… much thanks! I’ve always had a gut feeling about a 45 fps lock in MSFS VR, but knowing that there is solid technical reasoning behind it makes it a complete must-do. I’ll give Turbo mode a try, but as any serious MS flight simmer knows, it’s all about smoothness and immersion, not about frame rate. Thanks yet again!

1 Like

While we’re on this topic, and because you seem interested, let me visually explain the jiggling, and why it’s a hard problem. This is applicable to Motion Reprojection only.

I originally made these captures for an internal presentation, but I see no reason to not share them since they contain no proprietary information (or I will blur anything that does).

The reason Motion Reprojection causes the “jello” effect comes down to how it works.

As explained in the other post, it uses the 2 most recent images to evaluate the motion in the scene. Here is an example of “Motion Vectors” generated from two consecutive stereo images (older images at the top, newer at the bottom):

In that scene, the helmet is rapidly becoming visible from behind the arch, moving from right to left and slightly down.

You can see the Motion Vectors. Here the helmet shows up in red, which is an arbitrary color, so I’ve drawn on top of it what this red means (vectors going from right to left and slightly down).

This process of estimating motion between two images is extremely complex and costly. It’s also very error-prone.
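
To give a feel for why it is costly and error-prone, here is a toy block-matching estimator (purely illustrative; real engines such as the Nvidia Optical Flow mentioned below are far more sophisticated): for each 16x16 block of the newer image, exhaustively search a window of the older image for the best match, and take the offset to that match as the block’s motion vector.

```c
#include <limits.h>
#include <stdlib.h>

#define BLOCK  16   /* block size, matching the "1 vector per 16x16 block" case */
#define SEARCH 24   /* search radius in pixels; an arbitrary choice for the sketch */

typedef struct { int dx, dy; } MotionVector;

/* Sum of absolute differences between a block of the newer frame at (bx, by)
 * and a block of the older frame displaced by (dx, dy). */
static long sad(const unsigned char* older, const unsigned char* newer,
                int stride, int bx, int by, int dx, int dy)
{
    long sum = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sum += labs((long)newer[(by + y) * stride + (bx + x)]
                      - (long)older[(by + y + dy) * stride + (bx + x + dx)]);
    return sum;
}

/* The caller must keep the search window inside both images. The returned offset
 * points to where this block best matches in the older frame. */
static MotionVector estimate_block(const unsigned char* older,
                                   const unsigned char* newer,
                                   int stride, int bx, int by)
{
    MotionVector best = { 0, 0 };
    long best_cost = LONG_MAX;
    for (int dy = -SEARCH; dy <= SEARCH; dy++)
        for (int dx = -SEARCH; dx <= SEARCH; dx++) {
            long cost = sad(older, newer, stride, bx, by, dx, dy);
            if (cost < best_cost) { best_cost = cost; best.dx = dx; best.dy = dy; }
        }
    return best;
}
```

Even this naive version evaluates (2*24+1)^2 = 2401 candidate offsets per block, each a 256-pixel comparison; do that over a high-resolution stereo pair every frame and you can see both the cost and why ambiguous areas produce noisy vectors.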

The next step is “cleaning up” those motion vectors. You see the blue/yellow in the motion vector image: this is noise, this is very small motion that isn’t real, mostly caused by instability and estimation error, especially around rough edges.

So we apply some post-processing to clean it up:

It’s not 100% perfect, but it removed quite a lot of noise. It also smoothed the edges of the moving objects.

Now that we have motion vectors, we can generate our frame. We do this by covering the real frame with what is called a grid mesh. It looks like this:

Then, for each of these “tiles” in the image, we are going to look at the corresponding motion vector(s) at their position, and we are going to use the amplitude and direction of the vectors to “move” the corners of these tiles (aka “propagate” the pixels in the direction of the motion). We have to adjust the amplitude specifically for the timestamp of the image being synthesized. So, for example, if we computed vectors between 2 images at 45 Hz (22.2 ms apart), and we want to generate an image 11.1 ms in the future (to achieve 90 Hz), we divide the amplitude of the vectors by two.
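
As a tiny illustration of that last step (my own sketch, not the actual WMR code), scaling a motion vector to the synthesized timestamp and moving a grid corner looks roughly like this:

```c
typedef struct { float x, y; } Vec2;

/* The motion vector was measured between the two most recent real frames,
 * which are source_interval_ms apart; we want a frame predict_ms in the future. */
static Vec2 displace_corner(Vec2 corner, Vec2 motion_vector,
                            float source_interval_ms, float predict_ms)
{
    /* Scale the motion to the synthesized timestamp, e.g. 11.1 ms ahead with
     * vectors measured over 22.2 ms -> scale = 0.5 (the divide-by-two above). */
    float scale = predict_ms / source_interval_ms;
    Vec2 moved = { corner.x + motion_vector.x * scale,
                   corner.y + motion_vector.y * scale };
    return moved;
}
```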

This produces a result like this:

Now, if we also show the next (real) frame, you can see exactly the differences between the propagated image, and what the real image should have looked like:

And here it is. Here is the jello.

As you can see, because we applied a relatively coarse grid onto the initial image, moving the corners of the tiles creates some distortion (circled in red). And when de-occluding details that we did not see in the initial image (circled in yellow), those details cannot be reconstructed (we literally do not have these pixels).

And this is what creates the jello effect. From one real frame to the next, when adding a fake frame in between, the fake frame is imperfect in ways that create those artifacts.

There are only a few things that can be improved here.

  • Quality of the motion vectors. As explained, this is a very complex and slow process, and improving it is difficult. Using newer engines like Nvidia Optical Flow helps. Using temporal hints to help the estimator is great. Adding additional post-processing to clean up imperfections also helps (this was one of the changes in WMR 113, with temporal rejection added).

  • Higher-density motion vectors. The reason the grid for our motion propagation is coarse is that it needs to match the availability of motion vectors. A typical motion vector engine can only generate 1 motion vector per 16x16 block. On top of that, generating 1 motion vector for each block at full resolution is too slow, so there is further down-sampling happening. Using newer engines like Nvidia Optical Flow helps here too, since NVOF can do 4x4 blocks (see the quick arithmetic after this list).

  • Perfect motion vectors. This is actually something the game already computes. Yes, when MSFS renders its frames, it computes a perfect set of motion vectors, which is needed for TAA or DLSS. All it would take is for MSFS to give those motion vectors to the VR platform, and for the Motion Reprojection to use them. This is effectively what the Reprojection Mode: Depth and Motion option does. Unfortunately, no VR platform ever supported it. Even Meta, who introduced the idea, never supported it on PC. Only Quest standalone applications can use it.
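
To put quick numbers on the block sizes mentioned above (using the Reverb G2’s roughly 2160x2160 per-eye resolution purely as an example):

```c
#include <stdio.h>

int main(void)
{
    const int width = 2160, height = 2160;   /* per-eye example resolution */
    const int block_sizes[] = { 16, 4 };     /* typical engine vs. NVOF */
    for (int i = 0; i < 2; i++) {
        int b = block_sizes[i];
        printf("%2dx%-2d blocks: %d x %d = %d motion vectors per eye\n",
               b, b, width / b, height / b, (width / b) * (height / b));
    }
    return 0;
}
```

That is roughly 18 thousand vectors per eye at 16x16 versus about 290 thousand at 4x4, which is why a faster engine is needed before the propagation grid can get meaningfully finer.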

Even with the ideal solution to all the problems above, the issue of de-occlusion around edges is not solvable. You cannot guess what pixels are behind other pixels; that information was either 1) lost when another pixel covered it or 2) never computed due to Z-buffer testing.

This is why solutions such as DLSS Frame Generation or AMD FSR Fluid Motion do not use forward prediction, but instead perform interpolation between two well-known images. Then you only need to guess the location of the pixels you propagate, and not their color.

9 Likes

Obviously a master is at work here. Thanks for this background info, again much appreciated.

BTW, before retiring recently I worked in aerospace, developing realtime hardware-in-the-loop and associated data acquisition/reduction/analysis software for guided missile systems testing labs. I got to rub elbows with actual rocket scientists there; it sort of feels that way here. Thanks again.

2 Likes

We are all amazed by mbucchia! So, from the last mention of DLSS FG: if we were willing to accept an increase of one frame (11 ms at 90 Hz) of latency, could we have better motion reprojection image quality?

Nice, HITL is fun, both building hardware and all the test software :slight_smile:

1 Like

No, it’s way more than that, unfortunately.

I wrote an analysis here; it’s 4 times that number.

Regardless of latency, what’s truly the blocker for DLSS Frame Generation in VR is the poor performance of the interpolation, which I described here:

(this is from writing a working prototype of it with MSFS)

2 Likes

Sort of a sidebar here, but in the world of HITL (aka HWIL) these PXI systems really rock. We used them for 1553 avionics bus and RF/IQ data communications. I did some programming for them using their dataflow language LabVIEW.

I did some testing using Turbo mode and saw the FPS boost; I even got my FPS up to 70 in a trivial case with DLSS Ultra Performance. In the end, the 45 fps limit seems the best overall compromise. I spent a fair bit of time carefully reading your posts on MR and DLSS FG. It still makes my head hurt taking it all in. I don’t yet fully understand what SR is doing in terms of trading latency vs. FPS, but that’s OK; I’ve no need for any further explanation.

One thing I’ve learned through all this is that the MSFS VR situation is sort of like fitting a square peg into a round hole. It’s never going to fit well unless we get 90 FPS native, perhaps with MSFS 2024.

The dedication you’ve displayed in pushing the VR envelope is amazing and beyond reproach. Thanks for your openness in all this and putting up with my naivety.

2 Likes