I played around too today and my findings are these:
Don't lower the power target, as suggested in many places. This gave me problems, and it took a while to find out why. I had set my power limit to 70%, and the mystery began: stutters, freezes, fps loss, it felt like pre-SU10 again. I tried various settings in the graphics options, but that didn't help. Only when I went back to a 100% power limit was everything fine again. Surprisingly, power consumption stays around 300 W in my 45 fps locked setup anyway.
DX12 is worse than DX11 on my system. I flew some circuits over Frisco at 1,500-2,000 feet, and while DX11 stays solid at 45 fps, DX12 dropped to 34-37 fps over the city. That still feels good, but not as good as 45 fps.
My best experience is when I set the graphics settings high enough for a slightly GPU-bound setup. Any CPU-bound scenario causes stutters again. Right now I have CPU frame times around 15 ms and GPU frame times around 20 ms in my 45 fps locked setup. OXR Toolkit mostly shows me around 5% CPU headroom, with very few target-missed or CPU-bound notices.
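To illustrate the balance I'm describing (this is just my own back-of-the-envelope sketch, not anything from OXR Toolkit or MSFS; `effective_fps` is a made-up name): because the CPU and GPU stages overlap in a pipeline, the frame rate is limited by whichever frame time is longer, so you want the GPU time slightly above the CPU time and the resulting fps at or above your cap.

```python
def effective_fps(cpu_ms, gpu_ms, fps_cap=None):
    """Approximate achievable fps given per-frame CPU and GPU times in ms.

    The longer of the two stages is the bottleneck; an optional fps lock
    (like my 45 fps setup) caps the result.
    """
    bottleneck_ms = max(cpu_ms, gpu_ms)
    fps = 1000.0 / bottleneck_ms
    if fps_cap is not None:
        fps = min(fps, fps_cap)
    return fps

# The numbers from my setup: ~15 ms CPU, ~20 ms GPU, 45 fps lock.
print(effective_fps(15.0, 20.0, fps_cap=45.0))  # GPU-bound: 1000/20 = 50, capped to 45.0
```

With those numbers the GPU would deliver 50 fps on its own, so the 45 fps lock leaves a little headroom on both sides, which matches the smooth result I'm seeing.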
I think a well-balanced setup with a little more weight on the GPU side is a good recipe for a smooth experience in MSFS.
11900k, 32GB, Reverb G2, Palit 4090 OC, Win10, DX11, TAA100, OXR100, TLOD250, more ultra than medium.
I’m not sure there’s any difference from DLSS 2. I may be mistaken, but DLSS 3 is basically DLSS 2 plus frame generation. There’s a new DLAA mode too, but I didn’t find that it does anything interesting.
One discovery for me was that the Traffic settings cause a huge and unpredictable fps hit on my system. AI traffic set fairly low (40%) was capping my fps at around 20 in my test setup at KJFK, and I was going crazy trying to fix that. I set it to “Off” while trying different things, and fps shot back to 30 with MR and 35% headroom. Live traffic seems to cost only a few fps, but AI traffic was crushing my fps even when I tried lower settings. I don’t know why… Worth remembering in case anyone sees stutters, and better to compare settings with traffic off.
Those scenery stutters are the one thing I really want to eliminate. From what I’ve heard the AMD 3D processors should have the best shot at making it as smooth as possible. My target setup will be 7800x3D + 4090 for this reason.
Even now with an 8700K + 3080 I get mostly 45 fps, but the scenery stutters come and go… sometimes more, sometimes less. It seems to be a CPU issue.
Quick note on going from the 3080 10GB to the 4090 in MSFS is that, in CPU-limited dense city areas, there’s barely any improvement in base performance. But turning on DLSS frame-doubling literally doubles frame rates with zero artifacting that I can see. (I mean, if I was trying to get hits on a YouTube video, I could probably record video and freeze-frame and find some artifacting, but there was nothing visible to my naked eye.)
Frame rates at 5,120x1440 were as low as 47 fps around dense scenery in Seattle in a plane with a G1000, but with the DLSS Frame Doubling turned off the same area was just 21-23 fps on the 3080 and 4090 alike.
TL;DR: CPU speed still matters; DLSS frame doubling is a great thing but I betcha that the 4080 and 4070 are gonna do that just as well, and that’s why Nvidia’s waiting to release those.
Well, it literally does double the frame rate in 2D. But yeah, not much here for VR folks to celebrate.
As far as MSFS goes, it’s so CPU-bound that I’m pretty sure that we’re going to find that the 4080 with frame-doubling enabled is just as good as the more expensive 4090. Who knows, the 4070 might even perform on par.
I needed a new system now, but from what I’m seeing, with so much of the 4090’s performance coming from the DLSS frame doubling due to MSFS being so CPU-bound in cities, folks would be wise to wait and see what people report from the less insanely priced cards. I’m willing to bet the upper mid-range 40x0 cards are gonna be darn close to the 4090 in MSFS performance.
I crank up the GPU-related settings, such as clouds, ambient occlusion, terrain shadows, and trees and grass, until I get my balance. I don’t touch my TLOD of 250, although this is the setting with the highest impact on CPU load. OLOD (100 here) also has a huge impact, as does texture synthesis on Ultra, AFAIK.
I have had all AI and live traffic off so far. Will check it now.
Edit: I have set AI traffic (both sliders) to 50%; no impact on fps. I see some Cessnas blinking below my 4,000 ft and hear a lot of ATC, that’s all.
Take it with a mountain of salt, but these are the 4K figures provided by nVidia, as reported in Tom’s Hardware. Note the “4080” 12 GB version has now been “unlaunched” and existing stock will be repackaged, probably as a 4070 and possibly with a lower price. That means a lot of packaging going to recycling by board partners. No wonder EVGA jumped ship and parted ways with nVidia.
What I found swapping in the 4090 for my 3080 10GB on my 10900K system was, with that CPU, in 2D, you don’t get the “base” performance increase (shown as 45.3/3080, 74.7/4090 in the chart) in areas of high detail like NYC/Seattle/London. Performance was nearly identical in NYC, within 1-2 fps (45 on the ground, 42 in the air), on both cards with the DLSS 3.0 frame doubling turned off. (This is on a 49-inch 5,120x1440 monitor, so essentially 4k equivalent pixel-wise.)
Now, the 4090 is well worth getting, because that DLSS frame doubling is awesome! But the 4080, and 4075/whatever, will also have that, and probably also get you into the region of “wonderful” frame rates for less money. But that’s all 2D.
In VR, I did see a hefty performance increase on my 4090, which was surprising because I saw posts here from some who said they didn’t. I was seeing 72 fps on my Reverb G2 with DLSS Quality (which does look sharp on the G2), compared to about 40 fps with the 3080. So even without DLSS 3.0 support, the 4090 is significantly smoother there.
It’ll be interesting to see how much my 13900K affects this when it arrives, given how CPU-bound it is in 2D.
DLSS 3 is a frame-generation technique that creates an interpolated frame between two real computed frames:
1/ Frame x is computed and displayed
2/ Frame x+1 is computed but not displayed yet
3/ DLSS 3 creates an interpolated frame x+0.5 (between x and x+1) and displays it
4/ Frame x+1 is finally displayed (this is where the latency comes from: time has passed since step 2)
5/ Jump to step 2
So on screen we see x, x+0.5, x+1, etc…
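The steps above can be sketched as a toy simulation (my own illustration; `interpolated_stream` is a made-up name, and frames are modeled as plain numbers so the midpoint stands in for the generated image):

```python
def interpolated_stream(real_frames):
    """Displayed sequence for DLSS 3-style interpolation.

    Each real frame x+1 is held back until the generated x+0.5 has been
    shown, which is where the extra latency comes from.
    """
    out = [real_frames[0]]              # step 1: frame x displayed
    for prev, nxt in zip(real_frames, real_frames[1:]):
        out.append((prev + nxt) / 2)    # step 3: generated x+0.5 shown
        out.append(nxt)                 # step 4: real x+1 shown only now
    return out

print(interpolated_stream([0, 1, 2]))   # [0, 0.5, 1, 1.5, 2]
```

Note that the display rate doubles, but every real frame after the first arrives on screen late, because it had to wait for the generated frame in front of it.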
In VR, DLSS 3 will conflict with the existing technology known as ASW, reprojection, or motion reprojection.
In VR, latency is king and must be minimized, so the generated frame is predictive, not interpolated between two real computed frames:
1/ Frame x is computed and displayed
2/ Analyzing the current frame x and the current motion of the headset, a predicted frame x+0.5 is computed and displayed
3/ Frame x+1 is computed and displayed from real motion
4/ Jump to 2/
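For contrast, the predictive scheme above can be sketched the same way (again just my own illustration with a made-up name; the “motion” is simply the difference between the last two real frames, standing in for the headset-motion estimate):

```python
def extrapolated_stream(real_frames):
    """Displayed sequence for VR-style predictive reprojection.

    Each real frame is shown immediately, then a half-step frame predicted
    from current motion fills the gap, so nothing is held back and no
    latency is added.
    """
    out, prev = [], None
    for frame in real_frames:
        out.append(frame)                               # real frame, shown at once
        motion = (frame - prev) if prev is not None else 0
        out.append(frame + motion / 2)                  # predicted x+0.5
        prev = frame
    return out

print(extrapolated_stream([0, 1, 2]))  # [0, 0.0, 1, 1.5, 2, 2.5]
```

The first predicted frame just repeats frame 0 (no motion history yet), and every later one runs ahead of the real frames instead of waiting for them; that is exactly the latency trade-off described above.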
I don’t see how DLSS 3 could be integrated with the reprojection tech. The VR tech works at the headset-driver level using predicted motion and doesn’t add any latency, and replacing that VR tech with DLSS 3 can’t give a good result due to the added latency.
I could be wrong, but for me DLSS 3 will never work in VR (AFAIK, IMHO).