One other thing to consider with the upcoming release of next gen nVidia and AMD GPUs, especially for VR, comes from mbucchia’s recent post on Motion Reprojection and specifically why AMD GPUs struggle with MR. I quote:
" Q: Why is motion reprojection bad on my AMD GPU?
A: This is because of the motion estimation phase of the algorithm. It relies on the video encoder block of the GPU. Pre-RDNA GPUs (like RX 5000 series) do not support motion estimation on large blocks, and therefore we must fallback to small blocks, which means we must downscale the input images dramatically (think 5x smaller than headset’s resolution) and this loses a lot of details. RDNA GPUs (RX 6000 series) support large blocks, however they are 2-3 times slower than Nvidia GPUs for motion estimation, which leads to missed latching in the LSR thread, and results in image warping or unstable framerates."
I can’t imagine that the 4090 will regress in this department, but I haven’t a clue if AMD’s 7000 series will improve on that “2-3 times slower” video encoder process. Something to look out for, yet not something that usually crops up in spec sheets, benchmarks and reviews.
I certainly know that my 6800XT is poor at MR to the point where I can’t use it (although it’s great in 2D at 1440p) which is one reason I’m going back to nVidia this time round with the 4090.
Really appreciate the effort; it’s rare to see FCAT results anywhere (BabelTechReviews is pretty much the only outlet I ever see them from, and they don’t usually cover FS2020). Eager to see the comparison to your 4090 results.
Really looking forward to seeing VR benchmarks with that 4090 GPU. I hope DLSS 3 will also work in VR soon; then I’m sure I’ll buy it. A total game changer in performance.
There is one thing you could test: what difference does decreasing the power target to 350 W and 300 W make?
I would guess that, because MSFS is mostly CPU limited, there would be no change in FPS.
We are talking about VR, so MSFS is much less likely to be CPU bound here. Certainly with the Vive Pro 2 headset the OP has, you can easily find completely reasonable render resolution settings that may well be too much even for the 4090 before any CPU bottleneck is hit.
The tests being done in DX11 is relevant. I’ve seen several reports that DX12 alleviates the CPU bottleneck somewhat, and since it’s still in beta it’s reasonable to expect further improvement.
I agree, DX12 for MSFS is still in its infancy and we all hope it can make significant performance gains. Even now in 2D with an all-AMD rig, I seem to be getting much better performance than many nVidia owners using DX12, possibly because Asobo have optimised DX12 better for current AMD GPUs, given that the Xbox is also an all-AMD thingy. How this will pan out with the next-gen AMD and nVidia cards, together with newer drivers, remains to be seen. All good fun though.
Re performance in VR (e.g. HP Reverb G2) vs on a 4K display: doesn’t the scene have to be calculated twice in VR, making VR much more CPU dependent than the same number of pixels on a single screen?
You’re correct that there is likely to be more demand on the CPU compared to an identical number of pixels in a single viewport, but because the pixel count in VR is just so much higher, the GPU tends to be the limiting factor anyway. Flat-screen benchmarks at 4K are therefore more indicative of the performance you can expect in VR than the flat-screen benchmarks that end up CPU bound.
So far you can also almost always get a benefit from increasing the render resolution with VR headsets, but you aren’t going to be getting any more eyes: the view count, and thus the per-view CPU cost, stays fixed at two.
Consider that the Vive Pro 2’s native panel resolution is already about 1.4x that of a 4K screen, and the typical default render setting is 140%, so you end up with roughly twice the pixel count of 4K, and you could still increase the resolution further for additional visual quality. The Quest 2 is less extreme, but you still get about 1.2x the pixels of a 4K screen if you want to render at 140%.
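The ratios above are easy to sanity-check from the published panel specs (2448×2448 per eye for the Vive Pro 2, 1832×1920 per eye for the Quest 2). A quick sketch, assuming the 140% render-scale slider applies to total pixel count (as SteamVR’s area-based slider does), and ignoring the extra headroom headsets typically add for lens distortion:

```python
# Rough pixel-count comparison of VR headsets vs a 4K flat screen.
UHD_4K = 3840 * 2160  # 8,294,400 pixels

# Native panel pixel counts, both eyes combined (published specs).
headsets = {
    "Vive Pro 2": 2448 * 2448 * 2,  # 11,985,408 pixels
    "Quest 2": 1832 * 1920 * 2,     #  7,034,880 pixels
}

for name, native in headsets.items():
    # Assumption: 140% render scale multiplies the pixel count, not each axis.
    rendered = native * 1.40
    print(f"{name}: native {native / UHD_4K:.2f}x 4K, "
          f"at 140% scale {rendered / UHD_4K:.2f}x 4K")
```

This prints roughly 1.44x native / 2.02x rendered for the Vive Pro 2 and 0.85x native / 1.19x rendered for the Quest 2, which lines up with the “twice 4K” and “1.2x 4K” figures quoted above.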
So much has been said about the power consumption of the 4090, when in the end it consumes 25 W less at full load than an RTX 3090 Ti. If you could pay the bill before, you can continue to do so. If you can’t afford it, look for another hobby, or you’ll always have the consolation of playing MSFS at 1080p without virtual reality. Because to play VR decently with a good headset you’ll need a 3080 Ti at minimum, and even then you’ll have to lower the quality of many things so you don’t get dizzy.
No need to exaggerate either; we poor users who can’t afford such an extravagant and crazy expensive GPU can still play other stuff than Tic-Tac-Toe.
I’m pretty happy with my RTX 3070, as I can play MSFS at 1440p with nearly everything at ultra/high, and can play VR at 45 fps.
I’m targeting a (used or new) RTX 3090 or RTX 3080 Ti as my next upgrade, no more. I know that will be a night-and-day difference from my current configuration.
I am a 90% VR user and intend to build a new system soon, so I would be interested in opinions on this. I understand an RTX 4090 will still outperform a 3090 using TAA, but I am not sure the gap would warrant the €1000 price difference.
I’m not holding my breath for VR and DLSS 3. VR headsets have long had this kind of technique implemented at the driver level: reprojection or ASW. They would need to manage it differently for each headset IMHO, so I don’t think it will work. But I’ll be happy to be proven wrong, obviously.
Doing it at the GPU level, they may have more information about the scene (what is moving in which direction, etc.), so maybe it will work much better than reprojection (just my guess; I have no idea how these technologies work).
Interesting performance chart above; I didn’t know the 3090 Ti already had such an advantage over the 3090.