AMD 5800X3D performance

I think the clock speeds were reduced slightly because the added V-Cache sits on top of the core die, and this limits the core’s ability to transfer heat up to the integrated heat spreader/lid.

1 Like

Yep, I believe you are right. It will be interesting to see how they manage the 3D cache on future Ryzens. The 5800X3D has always been more of a prototype in my mind - a production test of the V-cache concept.

1 Like

The principle is based on the practice of delidding the CPU, which brings the die into direct contact with the cooler … dangerous territory and not for the faint-hearted.

I would be interested in a comparison

Hey folks! Do you think this X3D is still relevant with SU10?

I don’t have one, but I am wondering about this myself. Going forward, as PC enthusiasts upgrade to the new platform, these chips should become much more available.

With all due respect…

If you expect a sim update to make a faster processor irrelevant… maybe you are putting too much faith in that update…

I don’t have a lot of expectations for DX12, to be honest. If they don’t break too many things, I will be happy. And DLSS won’t make a difference in CPU-bound scenarios.

Don’t get me wrong, I would be very happy if SU10 were so great that I got a performance improvement large enough to make my 5800X3D sit idle in my system… but I’m not counting on that…

It’s not a faster processor; it has more L3 cache, which makes it optimal for MSFS. The 5950X is literally a faster processor, but it does not do as well in MSFS because of its smaller L3 cache.

Of course! The software doesn’t obsolete a faster piece of hardware. :slight_smile: Mine runs very nicely.

Note that for folks who run a high enough resolution that DLSS doesn’t cause too much blurriness, it’s a great way to get more oomph out of a GPU-limited system, which puts more pressure back on the CPU, and for that you want something that runs MSFS’s main thread fast. :wink:
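To make that bottleneck shift concrete, here’s a toy frame-time model (all of the millisecond figures below are invented for illustration, not measurements): a frame can’t finish faster than its slowest stage, so cutting GPU time with DLSS can leave the CPU as the new cap.

```cpp
// Toy frame-time model: a frame can't finish faster than its slowest
// stage. All numbers below are hypothetical, for illustration only.
#include <algorithm>
#include <cstdio>

int main() {
    double cpu_ms        = 12.0;  // hypothetical main-thread time per frame
    double gpu_native_ms = 20.0;  // hypothetical GPU time at native 4K
    double gpu_dlss_ms   = 11.0;  // hypothetical GPU time with DLSS upscaling

    // Without DLSS the GPU is the bottleneck...
    printf("native: %.1f fps (GPU-bound)\n", 1000.0 / std::max(cpu_ms, gpu_native_ms));
    // ...with DLSS the GPU outpaces the CPU, so the CPU now caps the
    // frame rate, which is why a fast main thread matters.
    printf("DLSS:   %.1f fps (CPU-bound)\n", 1000.0 / std::max(cpu_ms, gpu_dlss_ms));
}
```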

I have a 5800X3D/6900 XT, and prior to SU10 I still had some problems (i.e. fps in the 20s) in some very dense areas, but SU10 has completely fixed that and I’m close to my 60 fps cap almost everywhere, with occasional dips into the high 40s. I’ve tested DX11 vs DX12 and fps was similar, but I sometimes see slight stutters with DX11, while with DX12 I get no stutters and am almost always GPU-limited. I run 2K/widescreen.

2 Likes

I own a Strix 3090 and run 4K. I’ve owned a 12900K since release, and it’s been running in an Asus Z690 Apex with 32 GB of DDR5-6400 C32.

I bought a 5800X3D and an MSI X570 Carbon EK, and paired it with 32 GB of binned Samsung B-die DDR4 running at 3800 MHz C16.

I ran the 12900K in two configurations. The first was all cores enabled, all P-cores at 5.1 GHz with all E-cores at 3.7 GHz and the ring bus/cache at 3.7 GHz. The second was with the E-cores disabled, all the P-cores at 5.3 GHz, and the ring bus/cache at 5 GHz.

All I can say is that this 5800X3D body-slams the 12900K, even when the 12900K is heavily overclocked. It’s cheap, efficient, and it smokes Alder Lake. By comparison, Intel’s chip is expensive, hot, and inefficient. The difference in frame times isn’t even close. The 5800X3D is just so buttery smooth that you can’t help but notice it.

If I were running an old AM4 system I would run, not walk, to get my hands on one. It’s been a long, long, long time since the CPU world has seen that caliber of an upgrade for an existing socket.

Honestly, I was growing tired of Intel’s “bump up the clock speeds and heat to compete” course of action, and I have more respect for AMD’s “engineer a better product and release it on a 5-year-old socket” direction. I’ve owned and built my own systems since the early 90s, and this thing is the real deal. I will definitely hold off for the next CPU with 3D cache; I’m a believer now.

TL;DR: The 12900K had me CPU-limited with a 3090 at 4K, and the 5800X3D has me GPU-limited in the same situations with drastically higher average and 1% low fps. The 5800X3D gets the most out of my 3090 by far, and it’s not a close race.

8 Likes

I’d say 2K widescreen is by far not enough for a PC that should easily be 8K-capable; you need a higher-resolution monitor, or a second monitor of at least 2K would also do.

Edit: bumping the render scale or resolution probably doesn’t help, as very little in MSFS is rendered above 4K; maybe some textures on third-party aircraft, but nothing else.

The only thing they did for the 3D was triple the L3 cache; it was an experiment that went wonderfully well for this game.

It’s a niche CPU for MSFS.

They did a lot more than that; its structure is equivalent to delidding. I plan to get one when the nerds start shedding their boards for the new AMD platform … but I’ll need a 3080 to take proper advantage.

Maybe the software optimisation in SU10 makes the L3 “less” important than it used to be?

It’s a good point, but IMO not too difficult to code in without causing problems for others. So far it’s SU10 (and only in beta); who knows what SU11 and SU12 will bring?

Using the SU10 beta, and I can unequivocally say YES.
I’m not seeing SU10 help at all with CPU usage/efficiency.

1 Like

Thanks for your perspective. I actually haven’t seen anyone go from a 12900K to a 5800X3D, but glad to know they’re out there.

On a side note, one real advantage of the 12900K for MSFS was its high single-core speed, given the main-thread limitation of DX11. It will be interesting to see how the 5800X3D competes with the 12900K under DX12, which should theoretically close that “gap” in single-core speed.

Again, it’s not about core speed. The 5800X3D’s main party piece is the L3 cache; that’s the only difference between it and the vanilla 5800X (plus ~$175 USD): the 5800X has 32 MB, the X3D has 96 MB.

The best way to think about it: it’s like a Vegas card dealer. Normally he has two hands, but the X3D dealer has six. He’s still only working with one deck of cards, but he can keep the backups in his hands.
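If you want to see that “deck in hand” effect on your own machine, here is a minimal pointer-chasing sketch (sizes and iteration count are arbitrary choices of mine, not from this thread): it times one dependent load per step as the working set grows, and you should see a latency cliff once the buffer overflows L3 (32 MB on a 5800X, 96 MB on a 5800X3D).

```cpp
// Minimal cache-latency probe: chase a randomly shuffled chain of indices
// so every load depends on the previous one and the prefetcher can't help.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    for (size_t mb : {8, 16, 32, 64, 96, 128}) {
        size_t n = mb * 1024 * 1024 / sizeof(size_t);
        std::vector<size_t> next(n);
        std::iota(next.begin(), next.end(), size_t{0});
        std::shuffle(next.begin(), next.end(), rng);  // random order defeats prefetch

        size_t i = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int step = 0; step < 10000000; ++step)
            i = next[i];                              // each load depends on the last
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / 1e7;
        printf("%4zu MB working set: %6.1f ns per load (sink=%zu)\n", mb, ns, i);
    }
}
```

Compile with optimisations on (e.g. -O2); the absolute numbers will differ per machine, but the jump past each cache level should be obvious.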

1 Like

The thing is, to draw every frame you have to know the position of every object. Then you have to recalculate the new position of every object and draw the next frame. That goes for AI as well.

MSFS has a lot of AI: road vehicles, boats, airport vehicles, ramp workers, aircraft, fauna, etc. With the 5800X3D, it can calculate all of that position data and keep much if not all of it resident in its 96 MB of L3 cache.
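As a hedged back-of-envelope (MSFS’s actual data layout isn’t public, so the struct below is purely a hypothetical stand-in for “per-entity sim state”), here is roughly how many such records each cache size could hold:

```cpp
// Back-of-envelope only: the record below is an invented stand-in for
// whatever MSFS actually stores per AI object.
#include <cstdio>

struct EntityState {           // hypothetical per-object record
    double pos[3];             // world position
    double vel[3];             // velocity
    float  heading, pitch;     // orientation
    int    aiState;            // current behaviour
    int    pad;                // alignment padding
};                             // 64 bytes: one cache line per entity

int main() {
    printf("sizeof(EntityState) = %zu bytes\n", sizeof(EntityState));
    for (long cacheMB : {32L, 96L}) {
        long entities = cacheMB * 1024 * 1024 / sizeof(EntityState);
        printf("%ld MB of L3 holds ~%ld such entities\n", cacheMB, entities);
    }
    // L3 also holds code, scenery, and everything else; the point is only
    // that 3x the cache keeps 3x more hot data resident.
}
```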

Alder Lake does have around a 1 GHz clock-speed advantage, but the more you ask it to do, the more it has to swap things out of its 30 MB L3 cache into system RAM and back. The 5800X3D has lower clock speeds, but it seldom has to wait around for something to calculate, because the data is resident in cache rather than being transferred back and forth to system RAM.
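One way to put rough numbers on that trade is the classic average-memory-access-time formula, AMAT = hit latency + miss rate × miss penalty. All of the figures below are invented, not measurements, but they show how a lower miss rate can outweigh a ~1 GHz clock advantage:

```cpp
// Toy AMAT comparison; every latency and miss rate here is hypothetical.
#include <cstdio>

int main() {
    struct Cpu { const char* name; double ghz, l3_hit_ns, miss_rate, dram_ns; };
    Cpu cpus[] = {
        {"fast clock, 30 MB L3",   5.2,  9.0, 0.20, 80.0},  // invented figures
        {"slower clock, 96 MB L3", 4.4, 10.0, 0.07, 80.0},  // invented figures
    };
    for (const Cpu& c : cpus) {
        // AMAT = hit latency + miss rate * DRAM penalty
        double amat_ns = c.l3_hit_ns + c.miss_rate * c.dram_ns;
        printf("%-24s %.2f GHz, AMAT ~%.1f ns\n", c.name, c.ghz, amat_ns);
    }
    // With these made-up numbers the big-cache part averages ~15.6 ns per
    // access vs ~25 ns, despite a ~15% lower clock: cache-miss-heavy
    // workloads like MSFS reward exactly this trade.
}
```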

It’s like working on a loading dock where the supervisor has a great memory. He never has to spend time writing notes down on paper or reading them; he just knows where everything is, where it’s going, and who’s going to move it. The other shift has a super who can’t remember anything and has to spend half his time looking things up when you ask a question. He might be younger, faster, more agile, but he doesn’t get as much work done in a lot of situations. And neither do the people who work for him. In this context, it’s the GPU that often ends up waiting around for data from the supervisor.

6 Likes

Core speed matters significantly when you’re limited by the main-thread pipeline in DX11. The L3 cache is another component of this, as the larger L3 cache should improve performance when main-thread-limited in a DX11 scenario. Hence why I said it will be interesting to see if the performance gains are as dramatic under DX12, where main-thread bottlenecking is less pronounced.
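A toy model of that shift (all numbers invented, and the thread split is an assumption about how DX12 renderers typically work, not a statement about Asobo’s implementation): in DX11 the main thread carries the sim work plus nearly all draw-call submission, while DX12 can spread submission across worker threads, shrinking the serial main-thread portion and with it the payoff from raw per-core speed.

```cpp
// Toy DX11-vs-DX12 frame model; all millisecond figures are hypothetical.
#include <cstdio>

int main() {
    double sim_ms    = 8.0;  // hypothetical per-frame sim/AI work (stays serial)
    double submit_ms = 6.0;  // hypothetical draw-call submission work
    int    workers   = 4;    // assumed DX12 submission threads

    double dx11 = sim_ms + submit_ms;            // one core does everything
    double dx12 = sim_ms + submit_ms / workers;  // submission parallelised
    printf("DX11-style: %.1f ms/frame (~%.0f fps)\n", dx11, 1000 / dx11);
    printf("DX12-style: %.1f ms/frame (~%.0f fps)\n", dx12, 1000 / dx12);
    // The serial sim/AI share remains, so a big L3 that accelerates that
    // work should keep paying off even after the submission bottleneck eases.
}
```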