If the 15.9GB is not yet allocated and the overall used memory is 26.4GB, then it is impossible to allocate that extra 15.9GB since I only have 32GB! I assume the shared GPU memory is physical memory and not virtual memory since that would decrease performance even more.
If the GPU wanted to use more regular RAM I think Windows would need to allocate physical pages for that, and consequently swap some other RAM usage out to the pagefile.
Someone recommended clearing the DNS Cache. Actually it helped. My VRAM Usage went from 99% to 85% which made an almost stutter free takeoff.
At landing it dropped back to 5 FPS.
That sounds wild!
DNS Cache is about remembering domain names and records. It has nothing to do with textures or your GPU RAM load.
I agree, and here is a wildly theoretical reason for such behavior: if MS has changed the IP address of one of their servers and you still have the old address in your local cache, maybe that could trigger some kind of retry loop that eats memory and thus causes stutter?
Guys, I am still around since I find this interesting from a programming point of view. When I was programming this stuff we used a function called malloc, but things have gotten more complex. Here is what Google AI says about the basic vocabulary:
malloc is a C function used for dynamic memory allocation, but it doesn’t directly interact with GPU memory on its own. It allocates memory on the host (CPU) system’s heap, not the GPU’s memory. [1, 2, 3, 4]
How GPU Memory Works: [5, 6]
- GPUs have their own dedicated memory, often called VRAM or device memory, separate from the system’s RAM. [5, 6, 7, 8, 9, 10]
- To use GPU memory, you need to explicitly allocate and manage it through the GPU’s API (like CUDA or OpenCL). [11, 12, 13, 14, 15]
- GPU memory is typically accessed and managed through functions provided by the GPU’s programming environment, not malloc. [12, 16]
How GPU Memory Allocation Differs from malloc: [1, 16]
malloc is used for general-purpose memory allocation on the host system’s heap, while GPU memory allocation requires specific functions provided by the GPU’s API. [1, 3, 16, 17]
GPU memory allocation is tied to the specific GPU and its API, requiring knowledge of the GPU’s memory architecture and how to transfer data between the host and the GPU. [12, 16, 18, 19, 20, 21, 22, 23, 24]
GPU memory management is often more complex than malloc due to the need to synchronize data transfer between the CPU and GPU. [12, 16, 25]
In essence, malloc is a standard C function for general-purpose memory allocation on the CPU side, while GPU memory allocation is managed through specific functions and libraries provided by the GPU’s programming environment (like CUDA or OpenCL). [1, 16, 26, 27, 28]
Generative AI is experimental.
[1] https://en.wikipedia.org/wiki/C_dynamic_memory_allocation
[2] https://www.freecodecamp.org/news/malloc-in-c-dynamic-memory-allocation-in-c-explained/
[3] CS 240: Introduction to Computer Systems (Spring 2021)
[5] https://acecloud.ai/resources/blog/why-gpu-memory-matters-more-than-you-think/
[6] https://www.lenovo.com/us/en/glossary/what-is-video-memory/
[7] GPU memory — ROCm Documentation
[8] https://rafay.co/the-kubernetes-current/gpu-metrics-memory-utilization/
[9] https://acecloud.ai/resources/blog/why-gpu-memory-matters-more-than-you-think/
[10] Requesting GPUs - MSU HPCC User Documentation
[11] CUDA Runtime API :: CUDA Toolkit Documentation
[13] https://www.computer.org/csdl/journal/td/2021/05/09286775/1pork1v4xlS
[14] https://medium.com/accredian/harnessing-parallelism-how-gpus-revolutionize-computing-597f3479d955
[16] Simplifying GPU Application Development with Heterogeneous Memory Management | NVIDIA Technical Blog
[19] https://www.mathworks.com/help/gpucoder/ug/gpu-memory-allocation-and-minimization.html
[20] https://www.informit.com/articles/article.aspx?p=2756465&seqNum=3
[21] https://pure.hw.ac.uk/ws/portalfiles/portal/50620386/3462172.3462199.pdf
[22] Introduction to GPU programming models — GPU programming: why, when and how? documentation
[23] https://www.cs.siue.edu/~marmcke/docs/cybergis/cuda.html
[24] https://sangho2.github.io/papers/lee:gpu.pdf
[25] https://vngcloud.vn/blog/decoding-the-enigma-cpu-vs-gpu-what-is-the-best-choice-for-your-workload
[26] https://dl.acm.org/doi/fullHtml/10.1145/3453417.3453439
[27] https://www.cherryservers.com/blog/introduction-to-gpu-programming-with-cuda-and-python
No idea but it helped. I usually spawn at LOWI in the Fenix 320 and my VRAM is always maxed out and takeoff is an absolute stutter fest (sometimes 5 FPS or lower).
I can't look around properly and it's almost unplayable; I have to guess when to rotate, and I have to remember to never open the menu again, or rather never press Esc while flying, because of that epic camera animation which crashes my game.
After clearing my DNS cache my VRAM never maxed out and I could smoothly fly to EDDB, where the problem started again. Drops to 5 FPS moments before landing, and I have to guess when to flare while I'm stuttering through the world.
Clearing DNS Cache made takeoff at least enjoyable.
RTX 3080 10GB
AMD Ryzen 5 5600X
32GB DDR4
A number of the posts in this thread reminded me of the old adage: “Correlation is not causation.” Just because two things happened at the same time does NOT necessarily mean that one caused the other. I think it is important to bear this concept in mind.
The issue I’m detailing in that thread is different to VRAM issues. It’s a separate, non-VRAM-related issue that appears to be caused when the GPU is under 100% load and FG is enabled.
Do you have texture resolution set to ultra? I have it set to high, running 4K DLSS Quality, FG, AIG traffic + BeyondATC, Aerosoft Frankfurt, Fenix A320. No issues with VRAM. Also restarted a couple of times to make sure. So there must be something else going on there. I have frames locked to 63 with VRR and it’s very smooth.
Upgraded now to 64GB "CPU" RAM (previously 32GB).
No improvement…
Ok, once more: 4K has been a pretty standard resolution for flight sims for several years now. So YES, if a GPU cannot handle 4K (and thus you cannot expect usable VR quality from it either), it must be marked as not 4K-capable before being sold.
Which GPUs would you consider 4K-capable, and why? Especially seeing that 4090s also have issues, your statement implies there are no 4K-capable GPUs on the market.
The FPS meter in DEV mode clearly shows VRAM overcommitment. Easy as that: something like 12.3 GB / 10 GB.
The “shared GPU memory” in Task Manager doesn’t show VRAM overcommitment. It sits around 3 GB right when I start the sim up, even before reaching 70-80% of VRAM capacity.
My point is that it’s not possible. If you check the issue reports, it happens to many people even with the current top GPUs and CPUs and lots of RAM.
Furthermore, if it were a performance issue in the first place, you could solve it, or at least make it less frequent, by reducing the sim's quality settings. My experience is that even after reducing the quality settings the issue remains in pretty much the same shape and form.
I just stumbled upon one setting that seems to greatly reduce the frequency and duration of this issue: turning ray tracing off. It doesn’t visibly affect GPU parameters (like GPU compute or VRAM, or others), but the periods where GPU latency goes up to unusable levels while all GPU parameters are in the green are less frequent.
One more question: if it is simply a performance issue, how is it possible that sometimes NO GPU parameter shows ANY overloading (not even VRAM) and still GPU latency goes up to several hundred ms?
Yeah, it's pretty clear to me that for marketing purposes MS is trying to cram too many bells and whistles into a limited amount of VRAM. This should simply be self-limiting so that it never runs out of memory: if the user has 16GB, the maximum used is 14; with 12GB it's 10; with 8GB it's 6… guaranteed less detail than what the marketing people want to put in their 4K promotional videos, BUT no stutters, and ultimately happier users.
I re-listened to their last developer’s brief and it seems like they just saw memory management as something out of their control. Or they are marching under orders from the marketing side of the house. I suspect it is the latter.
I was once told that if it looks like someone is doing something very stupid in an organization, you probably just do not understand the organization.
Do you have the same issue if you follow the OP’s steps to reproduce it?
Yes. It happens on large airports usually.
Provide extra information to complete the original description of the issue:
• Config: AMD 5800X3D, RAM: 64 GB, GPU: RTX 3080, sim installed on NVMe.
Sometimes DEV mode shows VRAM overcommitment, sometimes it happens at 80% of VRAM capacity used. Sometimes DEV mode shows 10 GB as maximum VRAM capacity, sometimes it shows only around 8 GB, but not always overcommitted.
Reducing quality made the issue only insignificantly less frequent; it was still unusable at large airports. The symptom usually hit during landing, a few feet above the runway: 30 FPS one second, then 0-3, sometimes 5, depending on the airport. Good luck with the landing.
Only one thing seems to help (or at least has helped the most so far): disabling ray tracing.
The symptoms are always the same: while GPU utilization shows 100%, the temps and power consumption are more typical of 30-40% GPU load. DEV mode shows relatively fast GPU compute latency while at the same time showing extremely slow GPU latency, causing 0-5 FPS. Hard to say exactly which function is the issue, as there are no more clues in the DEV mode FPS meter, and even fewer in other tools.
If relevant, provide additional screenshots/video:
• Special case when VRAM is not overcommitted AND it doesn’t even show an accurate VRAM value for maximum capacity.
Thank you so much for sharing this, I was about to try the same thing to see if it did anything.
I’ve just upgraded too. It only helped a bit, nothing really significant. Altogether it was a good move: especially if you’re flying large airliners and have several tools running (Navigraph Charts, for example, can be extremely performance/RAM hungry), you can easily reach 40 GB or more of RAM demand. So while 32 GB of RAM was always enough for me in MSFS 2020, for 2024 64 GB is advised, though it's still usable with 32 GB. Just expect some performance decrease, but that's a few FPS, not the unusability this topic is about.
Am not an expert, but what anti-aliasing are you using? When I use DLSS with my 3840-wide monitor it comes out at 1920, not 3072… it basically upscales an image rendered at 1/4 of the screen size.
So how can I try to replicate this? I haven’t had this yet.
Load the A320v2 on the runway at JFK and start rolling.
If your FPS don't drop to absurdly low numbers, then you are probably good.
How much VRAM do you have?
It did happen when I loaded the flight at JFK. But once loaded and the numbers settled, it worked well: full FPS and not using full VRAM. Both with the A320v2 and the Fenix A320. I have a 4080, so 16 GB of VRAM.

