I don’t think core frequency is really the issue, just the memory clock. That includes factory OC cards, which usually increase both. Fortunately I have few issues, but every now and then a new driver can be finicky. I first try reducing my VRAM OC a notch or two, and if that gets to be too much for my liking I just roll back the driver instead… One thing I strongly suggest is increasing your Windows virtual memory and fixing it at 15 or 16 GB; mine is on C: only (my packages are on D:).
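If you’d rather script that than click through System Properties, a rough sketch along these lines should do it (run it from an elevated prompt; wmic is deprecated but still shipped with Windows 10, and 16384 MB is just the 16 GB figure above):

```python
# Sketch: pin the Windows page file to a fixed 16 GB on C: (run elevated).
# Assumes a Windows build that still ships the (deprecated) wmic tool.
import subprocess

def run(cmd):
    print(">", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Turn off "Automatically manage paging file size for all drives"
run('wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False')

# Fix the page file on C: to 16 GB (initial == maximum, so it never resizes)
run(r'wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=16384')

print("Reboot for the new page file size to take effect.")
```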
I did try underclocking my VRAM (down to -500 in MSI Afterburner), but I was still getting random crashes. At least the core underclock is much more stable… And no issues with Windows memory: I have 128GB and am far from using all of it.
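(For reference, the offsets above were set in Afterburner. If you want to take Afterburner out of the equation while testing, a rough alternative is nvidia-smi’s clock-locking switches - a sketch only, assuming an elevated prompt and a GPU/driver recent enough to support them; the clock values below are purely illustrative, not recommendations:)

```python
# Sketch: clamp GPU core and memory clocks for a stability test, then restore.
# Assumes admin rights and a card/driver that supports nvidia-smi clock locking.
import subprocess

def smi(*args):
    subprocess.run(["nvidia-smi", *args], check=True)

# Lock the graphics (core) clock into a conservative range, in MHz (illustrative values)
smi("--lock-gpu-clocks=210,1800")

# Lock the memory clock too (illustrative value; list your card's supported clocks
# first with "nvidia-smi -q -d SUPPORTED_CLOCKS")
smi("--lock-memory-clocks=9001,9001")

input("Clocks locked - run your test, then press Enter to restore defaults...")

# Remove both locks so the card boosts normally again
smi("--reset-gpu-clocks")
smi("--reset-memory-clocks")
```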
It’s not 1995 anymore: Windows virtual memory also pages things like DX instructions into VRAM and makes room by paging other data out of it.
With 24GB of VRAM this should not be the case, and MSI Afterburner does not report any problem with VRAM consumption - but this is an interesting path I’m definitely going to explore; anyway, I don’t have many other paths left (I’ve tried Nvidia DLL updates, etc.).
BTW, -500 on the memory clock is a “no go”: with the new driver I can’t even reach the menu screen anymore.
I don’t think it particularly matters how much VRAM you actually have; it uses at least some virtual memory as part of a fixed process. Try disabling your page file and I think you will see double or triple the workload on your memory controllers as Windows starts moving data around in its own suboptimal way.
Seems the latest 536.23 driver fixes the issue, at least for me (GTX 1660 Super, DX12).
I had hopes, but unfortunately adding 32GB of swap didn’t solve the issue.
Then my next suspect would be power… check your leads. I had a tricky issue where tightening a cable tie popped a modular connection that was hidden behind other cables at the PSU end. Everything still worked until high demand was called for, and then either the app crashed or my PC would restart. Worse still, there was still some connection there, so it seemed entirely random and it took me quite a while to notice.
Even if the connections are good, becoming stable with a reduced clock (and therefore lower power draw) might still mean something. Test by unplugging all USB devices except keyboard and mouse; if things improve, it’s a sign that your PSU is no longer up to the task.
This may be obvious… but beyond video drivers, also make sure your graphics card VBIOS is current with what is available on the manufacturer’s website.
In the same vein, also be sure to update your motherboard BIOS.
Tbh that really should be obvious… I personally would be embarrassed to file a bug report if I hadn’t tried all the recommended fixes first.
That’s another lead. I just monitored the graphics card’s power consumption, and it cannot go higher than 307W, although it should be able to reach 530W according to its BIOS… (CP2077 crashed with default clock speeds, but I could launch it with -500 on the core clock, at a GPU power draw of 307W.) I double-checked the cables and everything seems fine, however I’m not convinced that my 12VHPWR connector is properly fed from my 1000W PSU. I just ordered a new ATX 3.0 1300W PSU and will keep you posted - I don’t intend to blame Asobo/MSFS2020 or Nvidia if it’s on my side.
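If it helps anyone chasing the same thing, this is roughly how I logged the card’s power draw while reproducing the crash - just a sketch that polls nvidia-smi once a second (the query fields are standard ones; the output file name is made up):

```python
# Sketch: log GPU power draw, power limit, clocks and temperature once a second
# while reproducing the crash in MSFS/CP2077. Stop with Ctrl+C.
import subprocess, time, datetime

FIELDS = "power.draw,power.limit,clocks.sm,clocks.mem,temperature.gpu"

with open("gpu_power_log.csv", "w") as log:  # hypothetical file name
    log.write("timestamp," + FIELDS + "\n")
    while True:
        out = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        log.write(f"{datetime.datetime.now().isoformat()},{out}\n")
        log.flush()
        time.sleep(1)
```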
That could depend on how you tested it, as MSFS might not fully stress a 4090… I imagine an AIDA64 or OCCT test is what you need.
It didn’t occur to me to try these, so thanks for the help.
AIDA64 pushed it to 417W and OCCT to 471W with default clocks, so my PSU is already able to deliver more than needed… So not only does MSFS crash but also CP2077, ■■■■■■ Nvidia.
Could be a duff card; it’s pretty rare, but it can happen.
My first 3090 blew up my motherboard. That prompted an upgrade to a new CPU + motherboard, along with an RMA’d 3090.
It passes every graphics test/benchmark successfully at factory OC clocks, so I doubt it is a faulty card - and I would then be the only one getting this infamous error message in MSFS, which is unfortunately not the case…
That’s fair enough, but fixing it by underclocking only the base clock seems pretty unusual… I figured out long ago that video memory clock speed was the main culprit, and that was even before Asobo put out the warnings.
Run FurMark if you want to see the maximum power consumption. Setting the power limit to 530W doesn’t mean it’ll draw 530W, since voltage and current control power consumption, while temperature and frequency control boosting behavior.
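If you want to sanity-check which limit the driver is actually enforcing versus the board’s default and maximum, something like this should do it (a minimal sketch; the fields are standard nvidia-smi query fields):

```python
# Sketch: compare the power limit the driver enforces with the board's
# default and maximum limits as reported by nvidia-smi.
import subprocess

fields = "power.default_limit,enforced.power.limit,power.max_limit"
out = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

default_limit, enforced_limit, max_limit = [s.strip() for s in out.split(",")]
print("Default limit :", default_limit)
print("Enforced limit:", enforced_limit)  # what the card is actually capped at
print("Maximum limit :", max_limit)       # the ceiling a 110% slider can raise it to
```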
FurMark did run just fine and pushed the card up to 475W (up to 523W with the power limit set to 110%). I will try longer tests, but so far the card seems to deliver the expected performance…