Does anybody know if FSR4 provides any improvement to the ghosting we see on glass cockpit displays? Is it any better than DLSS?
No, FSR shows ghosting on glass cockpit displays similar to DLSS. Asobo needs to work on this, but it has not been a priority.
Asobo cannot do much about that. The reason glass instruments ghost is that they are technically animated 2D textures without any detectable motion vectors.
NVIDIA has basically acknowledged that animated textures are hard to track and said that detection improved in DLSS 4 - they even previewed this in CP2077 using a random billboard in the background.
For AMD FSR it will be the same deal: as long as no 3D motion vector is involved there will be some ghosting, because in that case the motion is basically guessed by the AI, with some added latency.
A magical “exclusion mask” in 3D space simply does not exist to my knowledge. (I looked this up for DLSS - you can only mark menus as 2D menu layers to be excluded, and since those are rendered separately on top of the 3D scene, that’s easy to do.)
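To illustrate why missing motion vectors cause trails: a temporal upscaler blends each frame with reprojected history. This is a minimal hypothetical sketch (not any real upscaler) of a 1-D strip of pixels where a single bright dot scrolls one pixel per frame, like a moving element on a glass display. With zero motion (what a static 2D texture reports), the history cannot be reprojected and the dot smears into a trail; with correct per-pixel motion it stays sharp.

```python
# Hypothetical temporal accumulation over a 1-D strip of pixels.
# motion_per_frame = 0 simulates an animated 2D texture with no motion
# vectors; motion_per_frame = 1 simulates correct per-pixel motion.

def accumulate(frames, motion_per_frame, alpha=0.1):
    """Blend each frame into a history buffer after reprojecting it."""
    history = frames[0][:]
    for frame in frames[1:]:
        # Reproject the history by shifting it along the motion, then blend.
        shifted = [0.0] * len(history)
        for i, v in enumerate(history):
            j = i + motion_per_frame
            if 0 <= j < len(history):
                shifted[j] = v
        history = [alpha * f + (1 - alpha) * h for f, h in zip(frame, shifted)]
    return history

width, steps = 16, 8
frames = []
for t in range(steps):
    f = [0.0] * width
    f[t] = 1.0          # the dot moves one pixel to the right each frame
    frames.append(f)

ghosted = accumulate(frames, motion_per_frame=0)  # no motion vectors
clean   = accumulate(frames, motion_per_frame=1)  # correct reprojection

trail = sum(1 for v in ghosted if v > 0.01)
print(trail)                                      # many pixels still lit
print(sum(1 for v in clean if v > 0.01))          # only the current dot
```

With the correct reprojection the blended history lands exactly on the dot's new position every frame, so no trail accumulates; without it, the decayed copies of every previous position remain visible.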
Do you have the same issue if you follow the OP’s steps to reproduce it?
Yep
Provide extra information to complete the original description of the issue:
Blurry and smudged screens
Are you using DX12?
Yep
Are you using DLSS?
Yep
If relevant, provide additional screenshots/video:
They can MASK the screens of the aircraft - Farming Simulator has masked them so that using DLSS won’t affect the screens. This is possible.
Of course it’s possible.
I don’t know why there are two threads about the same topic, but in the other one I posted an example from DCS where it’s already solved. NVIDIA’s DLSS programming guide also talks about masking problematic elements like animated textures (= displays), particles, etc.
Question is if Asobo can do the same in MSFS.
BTW if you want to vote on the issue, the linked thread is the one that’s listed on the feedback snapshot, not this one here.
Asobo pointed out that they’ve already marked glass screens and water textures as reactive surfaces. The problem is that the DLSS algorithm isn’t making those surfaces completely reactive. If it did, they would appear jittery (a problem that was visible in the FSR 2.1 implementation from FS2020).
Asobo’s TAAU avoids these issues by masking the screens both from the temporal algorithm and the camera jitter. However, this results in blurry/pixelated screens when the render resolution is lower than the output.
The best solution would be to render them at the output resolution after the anti-aliasing/upscaling pass, which is probably what DCS is doing. However it might not be possible in the MSFS engine, or the performance cost would be too high. Only Asobo know.
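For reference, the "reactive surface" idea described above can be sketched as a per-pixel history weight: a mask value of 1 discards the history entirely for that pixel (no ghosting, but the pixel shows raw jitter every frame - the FSR 2.1 problem mentioned), while 0 keeps normal temporal accumulation. This is a hypothetical illustration of the concept, not Asobo's, AMD's, or NVIDIA's actual code:

```python
# Hypothetical per-pixel reactive blend, in the spirit of a reactive
# mask (not the real FSR/DLSS implementation).

def resolve(current, history, reactive, alpha=0.1):
    """Blend the current frame with history, per pixel.

    reactive[i] = 1.0 -> ignore history entirely for pixel i
                         (no ghosting, but raw jitter/aliasing shows);
    reactive[i] = 0.0 -> normal temporal accumulation.
    """
    out = []
    for c, h, r in zip(current, history, reactive):
        a = alpha + (1.0 - alpha) * r   # lerp(alpha, 1.0, r)
        out.append(a * c + (1.0 - a) * h)
    return out

current  = [1.0, 1.0]
history  = [0.0, 0.0]
reactive = [0.0, 1.0]   # pixel 1 marked as a glass display

out = resolve(current, history, reactive)
print(out)  # pixel 0 keeps mostly history, pixel 1 snaps to the current frame
```

The trade-off in the posts above falls directly out of this: pushing the mask toward 1 removes ghosting but also removes the temporal filtering that hides the camera jitter, which is why a "completely reactive" screen looks jittery.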
Nope, not the way you think - as described in their guidelines (page 65 f.):
9.4 Future DLSS Parameters
DLSS is a rolling suite of algorithms and neural networks that are under constant research and development by various groups at NVIDIA. As part of this research, NVIDIA is examining ways to use different data generated by the rendering engine to improve the overall image quality and performance of DLSS.
The following is a list of rendering engine resources that the DLSS library can optionally accept, and which may be used in future DLSS algorithms. If the developer can include some, or all, of these parameters, it can assist NVIDIA’s ongoing research and may allow future improved algorithms to be used without game or engine code changes. For details on how to pass these resources, please see the DLSS header files and if needed, discuss with your NVIDIA technical contact. Please also check that there is no undue performance impact when preparing and providing these resources to the DLSS library.
1. G-Buffer:
   a. Albedo (supported format – 8-bit integer)
   b. Roughness (supported format – 8-bit integer)
   c. Metallic (supported format – 8-bit integer)
   d. Specular (supported format – 8-bit integer)
   e. Subsurface (supported format – 8-bit integer)
   f. Normals (supported format – RGB10A2, Engine-dependent)
   g. Shading Model ID / Material ID: unique identifier for drawn object / material, essentially a segmentation mask for objects - common use case is to not accumulate if the warped nearest material identifier is different from the current (supported format – 8-bit or 16-bit integer, Engine-dependent)
2. HDR Tonemapper type: String, Reinhard, OneOverLuma or ACES
3. 3D motion vectors - (supported format – 16-bit or 32-bit floating-point)
4. Is-particle mask: to identify which pixels contains particles, essentially that are not drawn as part of base pass (supported format – 8-bit integer)
5. Animated texture mask: A binary mask covering pixels occupied by animated textures (supported format – 8-bit integer)
6. High Resolution depth: (supported format – D24S8)
7. View-space position: (supported format – 16-bit or 32-bit floating-point)
8. Frame time delta (in milliseconds): helps in determining the amount to denoise or anti-alias based on the speed of the object from motion vector magnitudes and fps as determined by this delta
9. Ray tracing hit distance: For Each effect - good approximation to the amount of noise in a raytraced color (supported format – 16-bit or 32-bit floating-point)
10. Motion vector for reflections: motion vectors of reflected objects like for mirrored surfaces (supported format – 16-bit or 32-bit floating-point)
In short, these resources (like the “animated texture mask”) are not used by any NVIDIA DLSS algorithm released to the public before August 2025. Current public DLSS versions only accept them so NVIDIA can collect data and research ways to cope with these cases - beyond that, DLSS does nothing with them. Also, not even CP2077, NVIDIA’s go-to showcase game, has a solution implemented to get rid of ghosting on animated billboards in the distance - that was only reduced by DLSS 4.
Most likely the reason the DCS devs don’t talk about “HUD_MFD_after_DLSS = true”, and the official DCS changelog doesn’t mention it either (only the general move to DLSS 4 is mentioned there), is that they use a highly experimental, non-standard approach - maybe some workaround that makes HUDs and MFDs render like menus. If it were a fully supported solution, they wouldn’t keep it this quiet.
Because of this, Asobo most likely won’t rush to add anything here, even if it’s possible on paper. They likely also know that NVIDIA is working toward a solution through further AI training.
It working in a different game satisfies the definition of the word “possible”. That’s all I care about. Whether Asobo can figure out a solution is up to them, experimental or not.
They recently said they are working on reproducing the issue and masking the screens.
Really? Where did they say that? It would be fantastic to make progress here
Can you tell us where?
It’s such a shame that five years in, this issue still persists in both 2020 and 2024. I’m using TAA and it’s beautifully crisp.
They say it’s under investigation in the latest bug fixes - scroll down to where it says wishlists.
Unfortunately they’ve said that for years. I don’t think the investigation has led anywhere.
This needs to be combined with:
It just says it’s under investigation on the roadmap… keep upvoting, people!