I am using automatic translation because my English is not good.
The AMD 5800X3D has a large L3 cache and is widely considered especially effective in MSFS. Average-FPS improvements have in fact been reported in several places.
So we compared how the two processors differ over time.
(I didn't want to know what percentage faster it is, but in which areas, at what object densities, and under what conditions the cache helps.)
In Tokyo, the minimum frame rate of the 5800X3D is probably about the same as the maximum frame rate of the 5950X. Amazing.
The L3 cache improvement is probably more effective in MSFS 2020 than in any other game I have seen. A huge L3 cache hides memory access latency and can feed the ALUs with high bandwidth. In an object-dense application like MSFS, the CPU can effectively stall while waiting for data from memory; the X3D's large cache reduces how often data has to come from slow main memory, which cuts latency whenever memory is re-referenced and contributes to higher FPS. While a cache hit rate of around 95% is commonly assumed for traditional applications, MSFS will (presumably) have a lower hit rate because of the large number of objects. This kind of workload is typically found in circuit simulators, numerical computation, and high-performance computing; like those simulators, MSFS 2020 tends to overflow the L3 cache because of its object count and data volume, so the larger L3 cache is (probably) highly effective at improving the frame rate.
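To make the hit-rate argument above concrete, here is a minimal sketch of the standard average-memory-access-time (AMAT) model. All latencies and hit rates below are assumed round numbers for illustration, not measurements of these CPUs:

```python
# Illustrative AMAT model: AMAT = hit time + miss rate * miss penalty.
# Latencies and hit rates are assumed values, not measured ones.

def amat(hit_time_ns: float, hit_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time in nanoseconds."""
    return hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns

L3_HIT_NS = 10.0   # assumed L3 hit latency
DRAM_NS = 80.0     # assumed extra penalty for going out to main memory

# A cache-friendly game vs. an object-heavy scene with a lower hit rate.
typical = amat(L3_HIT_NS, 0.95, DRAM_NS)       # 95% hit rate
object_heavy = amat(L3_HIT_NS, 0.80, DRAM_NS)  # hypothetical 80% hit rate

# A larger L3 (as on the X3D) raises the hit rate in the heavy scene.
x3d_heavy = amat(L3_HIT_NS, 0.92, DRAM_NS)     # hypothetical 92% hit rate

print(f"typical scene:      {typical:.1f} ns")       # 14.0 ns
print(f"object-heavy scene: {object_heavy:.1f} ns")  # 26.0 ns
print(f"X3D, heavy scene:   {x3d_heavy:.1f} ns")     # 16.4 ns
```

The point is that a modest hit-rate recovery pays off disproportionately, because every avoided miss saves the full main-memory penalty.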
Impressively low power consumption: the X3D's main memory accesses are reduced by cache hits, which cuts the drive current needed for memory controller accesses through the I/O die (IOD), probably contributing to the lower power draw.
Frame rate and power consumption both worsen on both processors, especially in the Shibuya to Shinjuku area where object density is high, but the X3D recovers its frame rate faster.
Which processor to choose is not the subject of this thread; we benchmarked MSFS (especially in urban areas) because its load profile differs somewhat from that of traditional PC games, so we thought a general discussion would be difficult.
The results were impressive and I will share them with the community. Sincerely.
You may find this talk interesting, AMD seems to want to change … everything and with Intel stepping into the GPU market this may be a sign of the times … will nVidia make CPU’s soon or will they become a dinosaur?
Heterogeneous architectures sound like our PCs will become more console-like.
Nvidia already makes CPUs; most cloud platforms (AWS/Azure etc.) are already using them and have been for years.
It's Intel and AMD who are playing catch-up. The world is moving on, and the x86 architecture is the dinosaur. This is why MS is so heavily invested in Xbox cloud. The future (rightly or wrongly) will not have "gaming PCs" with Nvidia or AMD graphics cards.
All of this testing was done with DX11 and SMT=ON.
The reason is that those are the standard processor settings.
With SMT on, the instruction decoder keeps instructions from both threads in flight, so the CPU clock is less likely to boost as high as with SMT off. Therefore, in the majority of cases, SMT off will give better performance in gaming applications.
We will run a separate test with the same criteria comparing DX11/DX12 and SMT on/off on the 5800X3D.
Please wait a bit; as for the 5950X, we have already replaced that processor with the 5800X3D, so it will probably be difficult to test it again.
How would it compare to DX12? What if the graphics options were changed, with or without VSYNC?
To answer these questions, we needed a standard that would let us run the same test repeatably without relying on human perception, so we created the Haneda 34L method.