It seems that DLSS 3.0 will bring much more performance than 2.1. It would be good to know in case someone is buying an Nvidia 4000-series card.
Nvidia mentioned specifically in their launch webcast that MSFS will have DLSS 3.0 support on 4000-series cards.
DLSS 3.0 is not supported in VR.
Can you give me the link instead of the screenshot? Unfortunately I can’t find the information.
I got this from the Microsoft Flight Simulator 2020 Virtual Reality Facebook group.
The quote in the image seems to be the same as what Wikipedia says, but Wikipedia cites no source at all for the “At release, DLSS 3.0 does not work for VR displays” part, so it is far from definitive. I guess we’ll know for sure eventually. VR already uses somewhat similar techniques (ASW, i.e. Asynchronous Spacewarp, and the like).
Does not work for VR displays… What’s the ■■■■■■■ point then lol. Hope they fix this.
That’s what I was afraid of. Nothing I read mentioned VR.
That’s a step backwards for NVIDIA. I hope they can fix it.
Then the performance improvement for the 4000 series in MSFS will be less than advertised.
Note that Wikipedia isn’t exactly well-vetted, definitive information about the technical capabilities of new computing products… I’d suggest someone get a question into MS/Asobo for the next dev Q&A.
I’d say “at release…” sounds like it probably will be supported later in development.
edit: in fact, I distinctly remember reading on two VR fan sites (UploadVR and Road To VR) that it will have huge benefits for VR - but of course they don’t name sources apart from what they have from nVidia.
edit 2: Just found this, which makes for interesting reading:
The newly announced DLSS 3, on the other hand, is designed to take AI rendering to the next level. The new iteration calculates complete frames without burdening the actual graphics pipeline. Up to seven-eighths of the displayed pixels can thus be reconstructed with AI. This can increase the frame rate fourfold compared to an image without DLSS 3.
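Out of curiosity, the “seven-eighths” figure in that quote checks out with some back-of-the-envelope arithmetic. This is my own calculation, not anything from Nvidia: it assumes Performance-mode upscaling (rendering at half resolution per axis, so a quarter of the output pixels) combined with frame generation producing one AI frame per rendered frame.

```python
# Back-of-the-envelope check of the "seven-eighths" claim (my own
# arithmetic, not Nvidia's; assumes Performance-mode upscaling plus
# one AI-generated frame per rendered frame).

rendered_pixel_fraction = 1 / 4   # upscaling: 1/4 of output pixels actually rendered
rendered_frame_fraction = 1 / 2   # frame generation: every other frame is AI-generated

fraction_rendered = rendered_pixel_fraction * rendered_frame_fraction
fraction_ai = 1 - fraction_rendered

print(fraction_rendered)  # 0.125 -> only 1/8 of displayed pixels are rendered
print(fraction_ai)        # 0.875 -> 7/8 reconstructed by AI, matching the quote
```

Under those assumptions, only 1/8 of displayed pixels come from the traditional pipeline, which is where the “up to fourfold” frame-rate claim would come from.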
No way!!! We need DLSS 3.0 in VR; that’s where we’re most limited. Unbelievable, just like the price.
Strangely, I just went back to those VR sites to see what they report about this - and both sites have removed their articles on the 4000 series cards.
Wikipedia is often quite well vetted, but you should always check the sources for any claim it makes, as all Wikipedia information is supposed to have cited references with links.
That is a $2200 CAD question (plus tax). I would very much like to know if it will work with MSFS in VR. I’m thinking about placing a preorder, but maybe not yet, because the price is painful, and if it’s not a game changer but a modest upgrade it’s not worth it for me…
DLSS 3.0 increases fps and perceived smoothness by rendering 2 frames and then using AI to interpolate a 3rd frame that is displayed in-between them. By its nature, this technique will add latency, which is not great for VR experience.
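To put that latency point in concrete terms, here is a toy calculation (my own illustration with made-up numbers, not Nvidia’s actual pipeline): the interpolated frame between rendered frames N and N+1 can only be displayed once N+1 exists, so frame N has to be held back, adding up to roughly one rendered-frame interval of delay.

```python
# Toy illustration of why frame interpolation adds latency
# (hypothetical numbers, not a real DLSS pipeline).

frame_time = 1 / 60  # rendering at 60 fps -> displaying at 120 fps with interpolation

# Without interpolation: frame N is shown as soon as it finishes rendering.
# With interpolation: frame N must wait for frame N+1 before the in-between
# frame can be computed, so display is delayed by up to one render interval.
added_latency_s = frame_time  # worst case: one full rendered-frame interval

print(f"Added latency: {added_latency_s * 1000:.1f} ms")  # 16.7 ms at 60 fps render rate
```

In flat-screen games Nvidia leans on Reflex to claw some of that delay back; whether that approach translates to a VR compositor is exactly the open question here.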
Yes, me too. I would suggest waiting for the reviews before placing an order. DLSS 3.0 will not be supported in VR at launch, so certainly not 2x the performance in VR. But I hope for at least 10 more fps in VR in MSFS.
From what I read, that added latency is actually less than the latency of rendering at full native 4K resolution, if I recall correctly, because DLSS also upscales from a lower-resolution frame. So disabling it and rendering high-resolution frames natively will add more latency than rendering low-res and upscaling plus frame-generating with DLSS 3. So in principle there isn’t any reason I know of that it shouldn’t work with VR, but we just don’t know yet…
Edit: found it - “But what about latency? Adding extra frames to a game is going to naturally increase latency, right? Well, NVIDIA Reflex largely deals with latency. For instance, in Portal RTX, DLSS 3 latency is only slightly higher at 56ms than DLSS 2 at 53ms, and much better than native latency at 95ms. Things are a bit further apart on Cyberpunk, with DLSS 3 delivering 54ms of latency to DLSS 2’s 31ms, but it’s still below native latency at 62ms. So, yes, DLSS 3 isn’t quite as responsive as DLSS 2, but it’s still better than native and shouldn’t be a big issue overall.”
I’m struggling to understand how DLSS 3 would work for VR in the first place. The AI generates missing frames, but with current VR tech it would need to generate those frames for each eye. That means an additional layer of working out how each generated frame should look per eye, so the view doesn’t warp in 3D space if the frames become too disjointed between the left and right eye. There would also have to be some kind of 3D persistence, to make sure the generated left- and right-eye images still combine into a coherent 3D image and don’t differ enough to be noticeable and break the 3D effect.
On the other hand, it’s pretty much the same problem with current upscaling: the upscaler also has to deal with two separate eyes and two separate images that it needs to upscale while still maintaining a coherent, combined 3D picture, and that seems to work just fine. So Nvidia has probably figured all of this out for DLSS 3’s newly generated frames as well…
Only 10 more fake fps? For so much money? All yours, my friend.
I suspect actually that running upscaling, including temporal upscaling, on the full-frame image (with both eyes combined) should work fine. The most likely place for accidental overlap between the eye images is at the centerline (extreme right of left eye, extreme left of right eye) which tends to be blocked by the nose bridge anyway. And I think that could only happen in cases of very fast movement.
It’s also possible it already scales up each eye image separately and composites them together; I’m not familiar with the DLSS SDK or how it works, I’m just a random programmer speculating.