Something I noticed recently:
Increasing the DLSS quality level from Ultra Performance to Quality has a HUGE impact on CPU frame latency, i.e., the “mainthread” stat in the developer overlay FPS counter.
Exhibit A: DLSS Ultra Performance setting:
Exhibit B: DLSS Quality setting:
How is this possible? Common wisdom is that DLSS only really hits the GPU, not the CPU. Happens with both the G2 and the Pico 4.
For the Pico 4 it’s actually quite annoying, as I can hit 72 FPS using Ultra Performance, but if I go up to the next level, Performance, I’m in the 60s, and ironically the CPU IS NOW THE LIMITING FACTOR, because the mainthread time goes above ~13.9 ms (the frame-time budget required for 72 FPS). The GPU would be able to push 72 FPS in Performance mode just fine, but the increased CPU load that DLSS is clearly causing is holding it back.
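For reference, the frame-time budget is just 1000 / FPS, so 72 FPS needs the main thread to finish in roughly 13.9 ms; a quick sanity check (plain Python, nothing MSFS-specific):

```python
# Frame-time budget for a given FPS target: budget_ms = 1000 / fps
for fps in (60, 72, 90):
    print(f"{fps} FPS -> {1000 / fps:.1f} ms per frame")
# 72 FPS -> 13.9 ms, so a mainthread time above ~13.9 ms can't sustain 72 FPS
```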
Anyone have any idea why this is happening?
EDIT: Just had a look in 2D with TAA and it’s the exact same thing:
Renderscale 100% => mainthread = ca. 10ms
Renderscale 200% => mainthread = ca. 20ms
So it seems to just be general MSFS weirdness. I’m beginning to wonder whether the main thread has to wait for the GPU before it’s done?? But that would make no sense.
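To make that last thought a bit more concrete: if the engine only allows a fixed number of frames in flight, the thread submitting work will eventually block in its present/submit call once the GPU falls behind, and that wait would get counted as mainthread time. Purely an illustration of the general pattern (plain Python with made-up timings, not anything MSFS actually does):

```python
import queue, threading, time

# Made-up timings: CPU sim/submit work = 10 ms, GPU render = 20 ms,
# and at most 2 frames allowed in flight (a typical swap-chain depth).
CPU_MS, GPU_MS, MAX_IN_FLIGHT = 10, 20, 2
in_flight = queue.Queue(maxsize=MAX_IN_FLIGHT)

def fake_gpu():
    while True:
        frame = in_flight.get()       # pick up the next submitted frame
        if frame is None:
            return
        time.sleep(GPU_MS / 1000)     # pretend to render it

threading.Thread(target=fake_gpu, daemon=True).start()

for frame in range(8):
    start = time.perf_counter()
    time.sleep(CPU_MS / 1000)         # the "real" CPU work for this frame
    in_flight.put(frame)              # blocks (like Present) once the queue is full
    ms = (time.perf_counter() - start) * 1000
    print(f"frame {frame}: mainthread {ms:.0f} ms")  # creeps up toward the GPU's 20 ms

in_flight.put(None)                   # let the fake GPU thread exit
```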
I imagine it is because the different quality settings direct the game engine to render at different internal resolutions, which the GPU then upscales using DLSS magic. The higher resolution rendered by the game in, say, Quality mode therefore uses more CPU time than Ultra Performance mode.
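For a rough idea of how much less work the lower modes do, here are NVIDIA’s published per-axis DLSS scale factors applied to a hypothetical per-eye output resolution (the 3100x3100 figure is just an example, not a real headset value):

```python
# NVIDIA's published per-axis DLSS scale factors:
# Quality 66.7%, Balanced 58%, Performance 50%, Ultra Performance 33.3%.
DLSS_SCALE = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}
out_w, out_h = 3100, 3100  # hypothetical per-eye output resolution, example only

for mode, s in DLSS_SCALE.items():
    w, h = int(out_w * s), int(out_h * s)
    print(f"{mode:<17} renders {w}x{h}  ({w * h / (out_w * out_h):.0%} of output pixels)")
```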
Seems unusual… usually (in other games), CPU load is hardly affected by cranking up the resolution, right?
If you’re right, CPU load should also increase a lot when changing resolutions/render scaling in TAA mode. Is this the case? I haven’t tested this but probably will.
But most other games aren’t CPU bound in the first place. That’s the very reason you don’t get the big performance boost in MSFS when moving down through the resolutions, like you do in most other games.
I believe the level of detail for objects and terrain is selected relative to resolution, and that it’s picked based on the internal render resolution rather than the final output resolution. This would result in more objects/polygons being processed on the CPU before they reach the GPU at higher render resolutions.
If this is correct, you should be able to test for it by carefully comparing screenshots and looking at objects in the distance, and it probably behaves the same for TAA with render scaling as well.
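Purely as an illustration of the idea (made-up thresholds, not Asobo’s actual code): if the LOD is picked from how many render-target pixels an object covers, raising the render resolution pushes objects into higher-detail LODs, which means more objects/polygons prepared on the CPU.

```python
# Illustrative only: pick an LOD from projected size in render-target pixels.
# Thresholds and the projection model are invented for the example; the point is
# just that pixel coverage (and thus the chosen LOD) scales with render resolution.
LOD_THRESHOLDS_PX = [(250, 0), (100, 1), (20, 2)]  # min pixel coverage -> LOD (0 = most detailed)

def pick_lod(object_size_m, distance_m, render_height_px):
    # Crude projection: fraction of the vertical view the object covers,
    # times the render-target height in pixels (FOV factor omitted for brevity).
    coverage_px = object_size_m / distance_m * render_height_px
    for min_px, lod in LOD_THRESHOLDS_PX:
        if coverage_px >= min_px:
            return lod
    return 3  # lowest detail

for height_px in (720, 1440, 2160):  # e.g. low DLSS mode vs. mid vs. near-native
    lods = [pick_lod(10, d, height_px) for d in (50, 200, 800)]
    print(f"render height {height_px}px -> LOD at 50/200/800 m: {lods}")
```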
DLSS reduces load on the GPU and therefore puts more onus on the CPU. With VR and essentially two outputs, the differences will be compounded. And don’t forget such technologies are created on PCs, so they will always be ahead of VR when it comes to refinement.
Yeah, something like this sounds plausible; I had a hunch it might be something along these lines.
Although it does seem strange, as this implies there are really two sets of LOD scales: the two controlled via the UI and two more controlled indirectly by resolution.
This would also mean the common adage of “max out GPU load to reduce CPU stuttering” doesn’t hold true in this game, or at least not as much as is usually the case.
Annoying! Hopefully the X3D will improve things a lot.
A (Ultra Perf) final resolution in the headset is nearly half of B (Quality), so the FPS difference looks in line with this. Looks like you may have a 4090/3090, so the VRAM is not impacted as much.
Also are you on DX11 or 12?
I have been testing Quality vs Balanced and pushing OpenXR to 150-160% to find a visually appealing experience with high framerates, or using OpenXR Toolkit to override the final resolution to around 4200x4100 and changing MSFS settings to find that balance.
Yes, of course it’s related to the resolution. The unexpected thing is that the higher resolution is impacting CPU load so much, when usually it impacts mainly GPU load.
Running my G2 with a 3080 ti I see no appreciable difference in the MSFS main thread execution times whether I use DLSS Quality or Ultra Performance. Both vary between about 9 to 12 ms, with OpenXR scaling at 100%.
Running an i7 10700K at a 3.9 GHz OC with 2x16 GB DDR4 4000 CAS 16. Times posted were observed flying over Los Angeles in daytime live weather in the new AN-2 at about 4,000 ft, with a driver FPS limit of 45 FPS. The only time I run into being CPU limited is when landing at a complex airport without lowering my LOD settings to about 100/75 (usually 180/90). I mostly use DLSS Balanced without problems @45, though if clouds are very light or nonexistent I can run Quality. Without the FPS limit, Balanced will yield 50-56 FPS, GPU limited. Clouds, SSAO and Buildings on High; most everything else at Medium, except all shadow settings (other than ground) are fully maxed for maximal VR cockpit immersion. One or two other settings are at Ultra for things that don’t directly affect the rendering/image quality.
Update: my original frame times were from the OpenXR Toolkit’s overlay. Using the dev mode debug FPS panel gave 2x the main thread times (?) and showed I was CPU limited with a green panel. And as observed earlier, it didn’t matter for the timings which DLSS mode was being used. I have no need to pursue this further, as I’m getting exactly what I want out of MSFS at this time, but I was curious about what you observed. Hope you get the situation with your main thread times figured out.
Researching why my Dev Mode mainthread times seem so wonky compared to what the OpenXR Toolkit reports, I came across this post which you may find helpful. BTW before seeing this I was tending to think the Toolkit’s times made more sense, at least in my case.
Hmm, was your framerate capped when you checked the mainthread frametimes? I’ve noticed that the effect (higher mainthread frame times with higher DLSS quality, or just higher resolution in general) only occurs when my FPS is uncapped.
Only the OP, in the sense that it’s claiming that the Dev Mode FPS panel’s mainthread reporting is erroneous. The details given in it took considerable effort and provide info that I thought useful. At the end of the post, its author reports that the issue was corrected by reinstalling both Windows and MSFS, definitely something I won’t be doing in my situation.
Update: flew my test area in the sim without the driver frame limit. Still no appreciable differences in the mainthread times for the different DLSS modes, though the Dev Mode FPS panel was now reporting reasonable mainthread values, which were around 200 ms more than the OpenXR Toolkit overlay’s CPU frame time, and the limited state was apparently being correctly reported. BTW, Quality FPS 42-47, Ultra Performance FPS 50-63 (but definitely not worth the blur); both reported as constantly changing between RdrThread and GPU limited.