AMD 5800X3D performance

This time around, Raptor Lake will officially go into Alder Lake mobos. And it will support DDR4.

3 Likes

That’s a shocker, usually not the case.

I stand corrected.

Also, there are rumors of a 5900X3D that may have over 200 MB of L3, so we may both be wrong. :smiley:

Right now is not the best time for a new build. AM4 is indeed not going to provide an upgrade path, both Zen 4 and Raptor Lake are coming in the next few months, and the RTX 3090 Ti, besides being way overpriced right now, will lose its value once the next generation is released. GPU prices are slowly improving, and if the Ethereum merge takes place a couple of months from now, we could see another wave of used GPUs flooding the market.

If you really need a new build right now, then you should go for the 5800X3D if you want the absolute best performance at this time, or Alder Lake if you would like to upgrade to Raptor Lake later (riskier because outside of a few engineering sample leaks, we don’t know how Raptor Lake performs yet, and you will have spent more in the end).

As for the GPU, look into the mid-range segment or the used market, where you can find some decent deals if you can’t get by with the one you have right now. Again, if you want the very best at this time, including lots of VRAM for DirectX 12, the RX 6900 XT is consistently selling below MSRP now and is a much better deal than the RTX 3090 series. You will miss out on DLSS and better ray-tracing performance; however, FSR 2.0 is coming soon as well, and once ray tracing is released you’ll probably want to upgrade to something even better anyway.

Hi, Banzonho.

Without question I prefer the X3D. My gut instinct is that you’ll get more performance per dollar with it. You can pair it with an excellent X570S board in the $200 range like the MSI Tomahawk. The X3D doesn’t come with a cooler, but neither does the 12900K. I’d pair the X3D with a cheap AIO water cooler to get the most out of the boost clocks, or a good high-end heatsink.

When it’s time to upgrade I usually sell my components to friends; IMO the X3D will have better-than-average resale value.

Right now it’s 92 degrees outside my window. The 12900K heated up the room noticeably, way more than the X3D does. The X3D is easier on the electric bill too. I don’t really like Intel’s power-hungry, hot-running nature. With the 12900K I didn’t need to open the room’s heater vents in the winter, and I’d have to run the AC constantly to keep the room cool in the summer. I too have a 3090, and having a 450-watt video card is already bad enough. Less heat, less power, less fan noise, happier universe, cooler room. (You can solder with a 35-watt iron.)

One good thing is that the X3D doesn’t seem to be very picky about needing fast DRAM.

Edit:
Forgot to mention that right now Nvidia has lowered the prices of its cards substantially. They have a large backlog of 3xxx cards on the market which they need to burn through before they can launch the imminent 4xxx line. 4xxx cards should drop any time now.

Like Salem stated, there are rumors of a 5900X3D that may have over 200 MB of L3. Rumors state that it might be on AM4. But you know how that is: in the PC world the next thing is always right around the corner. You have to decide where to jump in.

Oh, wow.

I wasn’t expecting such a big difference.

Disabling SMT now.

Thanks for your amazing work!

2 Likes

I just upgraded tonight from a 3700X. Very interested to see what kind of improvements I get in VR.

Paired with a 3080.

I came from a 3800X, and the difference in FPS was about 2x: sometimes better, sometimes less, all depending on graphics settings, etc. I have a 3080 as well.

2 Likes

The difference between the 3800X and the 5800X3D (and perhaps between CPUs in general) shrinks the more raw computing power is required.
It is difficult to generalize, because different people have different demands on their simulators and different areas where they want them to be comfortable.
However, if you already have a GPU like the GeForce 3080, it is not difficult to get over 40 FPS overall with the 5800X3D.
If it stays that high in most areas, people will probably find it comfortable.

1 Like

I believe it is effective for VR. (I have every Oculus HMD from the DK2 onwards.)
I have not yet done any quantitative verification of MSFS 2020 in VR, although I have played with it. I assume that large L3 caches are strong in scenarios with large amounts of parallel access to the same memory space, so I would guess that they work well for VR.

A side note below.
The permanent factor that makes frame rates worse in VR is that a single scene has to be rendered with independent view angles for the left and right eyes.
Nvidia has provided hardware support for cases like this (VR in general) with a solution that renders in a single pass when multiple viewpoints are used for a single scene, but it didn’t work out very well.
This is because screen-space techniques such as SSAO, which have since become commonly used, defer their calculations to a screen that has already been rendered, so they cannot support different left and right viewpoints. (The occlusion shadows should normally differ between the left and right views, but a naive application of SSAO can only generate one shadow shape. This creates artifacts.)

Developers may not like this because it does not produce the intended screen effect.
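
To make the structure concrete, here is a minimal C++ sketch of a stereo frame. Every type and function in it is a stub I invented for illustration (this is not any real engine’s or Nvidia’s API); the point is only that the geometry pass can cover both eyes at once, while a screen-space pass like SSAO consumes per-eye buffers and therefore has to either run twice or share one wrong result:

#include <array>

// All types and functions below are stand-ins invented for illustration;
// no real graphics API is being quoted here.
struct Texture2D {};                                    // a GPU render target
struct Scene {};                                        // the scene to draw
struct EyeBuffers { Texture2D depth, normals, color; };

// One geometry pass that projects the scene to both eyes at once;
// this is the step that multi-projection hardware can accelerate.
void drawGeometrySinglePass(const Scene&, std::array<EyeBuffers*, 2>) {}

// SSAO reads the finished depth/normal buffers, which already differ
// per eye, so there is no single-pass shortcut for this stage.
Texture2D computeSSAO(const Texture2D&, const Texture2D&) { return {}; }

void compositeEye(EyeBuffers&, const Texture2D&) {}

void renderStereoFrame(const Scene& scene, EyeBuffers& left, EyeBuffers& right) {
    drawGeometrySinglePass(scene, {&left, &right});

    // Correct option: pay for the screen-space pass once per eye.
    Texture2D aoLeft  = computeSSAO(left.depth,  left.normals);
    Texture2D aoRight = computeSSAO(right.depth, right.normals);

    // Cheap option (disabled): reuse one eye's occlusion for both views.
    // The shadow shape would then be identical in both eyes even though
    // the viewpoints differ, which is exactly the artifact described above.
    // Texture2D aoShared = computeSSAO(left.depth, left.normals);

    compositeEye(left,  aoLeft);
    compositeEye(right, aoRight);
}

int main() {
    Scene scene;
    EyeBuffers left, right;
    renderStereoFrame(scene, left, right);
}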

1 Like

(I am not trying to be hostile, just a trend analysis)

Presumably the 7950X3D will have a 2-CCD + 1-IOD configuration, similar to the 5950X. This is my guess, based on the fact that the 5800X3D is a small configuration similar to EPYC Milan-X (Zen 3) and EPYC Genoa-X (Zen 4).

The minimum configuration would be 1 CCD + V-Cache and 1 IOD. This is the 5800X3D.
(Currently the V-Cache die is only 64 MB, and there is no solution for doubling it to 128 MB with through-silicon vias, nor are there any micrographs of such; the V-Cache die already covers most of the CCD, so there is no space.)
V-Cache will be limited to 64 MB for a while. (This is also the case when it is used for EPYC: Milan-X has up to 768 MB of L3, but the cache that any one CCD can access quickly is its own 32 + 64 MB; the rest is accessed via other CCDs, which is slower in comparison.)

The 7950X3D will operate with a topology very similar to the 5800X3D’s as far as fast L3 connections go: the cache will not appear as an integrated ~200 MB L3, but as 2 × 96 MB.
In other words, in MSFS 2020 I don’t think the speed will change significantly; the gains will come only from the IPC increase and the clock improvement.
Since it will be a power-saving TSMC 7 nm IOD, the room for clock improvement may be larger than in the current situation (the current Ryzen IOD is on a GF 14nm++ process).
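
If you want to see the topology effect for yourself, here is a minimal pointer-chasing sketch in C++ (my own illustration; the size sweep and iteration count are arbitrary numbers I picked, not anything from AMD). Pin it to one CCD (e.g. with taskset on Linux or start /affinity on Windows) and the average access latency should jump once the working set outgrows that CCD’s L3:

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Average latency of one dependent load over a working set of `bytes`,
// measured with a random pointer chase so the prefetcher cannot help.
static double chase_ns(std::size_t bytes) {
    const std::size_t n = bytes / sizeof(std::size_t);
    std::vector<std::size_t> next(n);
    std::iota(next.begin(), next.end(), std::size_t{0});

    // Sattolo's algorithm builds a single random cycle, so the chase
    // visits every slot instead of getting stuck in a short loop.
    std::mt19937_64 rng{42};
    for (std::size_t i = n - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }

    const std::size_t steps = 20'000'000;
    std::size_t idx = 0;
    const auto t0 = std::chrono::steady_clock::now();
    for (std::size_t s = 0; s < steps; ++s) idx = next[idx];
    const auto t1 = std::chrono::steady_clock::now();

    volatile std::size_t sink = idx;  // keep the chase from being optimized out
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
}

int main() {
    // Sweep from well inside a 96 MB L3 to well past it.
    for (std::size_t mib : {16, 32, 64, 96, 128, 192, 256})
        std::printf("%4zu MiB: %6.1f ns/access\n", mib, chase_ns(mib << 20));
}

If the two CCDs shared a unified 192 MB pool, the latency cliff in a test like this would move out accordingly; with 2 × 96 MB, I would expect it to stay at one CCD’s capacity.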

Hi there,
I’m planning to switch from a 3700X to a 5800X3D. I’m at 1080p for now and will remain at this resolution until the 4000 series becomes available at a reasonable price (let’s see if… :smile:). Anyway, would you say that this replacement will give me a nice boost while using an RTX 2070 Super?

Not sure whether I should go for it, but I’m planning to get a top-tier AM4 setup and last on it for the next few years until Zen 5 appears…

You’ll definitely be able to crank up LOD and traffic settings, but FPS is basically limited by your GPU. Sure, there will be an improvement because of the big L3 cache, but I expect you’d see a much bigger leap with a 3070 or above.

How did it go, friend?
I just bought a 3080 and I’m on the fence about this CPU.

I have an old R5 2600, and between the GPU upgrade (had a 1060) and this, I think it will be marvelous.

It’s been a pretty nice upgrade so far.

I would say, more than anything, it’s just much smoother overall. Frame times seem MUCH better. The overall average framerate is definitely higher too in a lot of areas. I still see drops into the mid-30s over some (not all) of the heavy photogrammetry areas I’ve tested. But again, even with the framerate in the 30s, it is for sure much smoother, due to what I assume are improved frame times.

This is at 100/100 LOD, flying the Milviz 310 (which seems like a heavier FPS hitter than most other GA aircraft for some reason). If I go to 200/200 LODs I see drops into the high 20s.

I have not tested in 2D yet. I’m wondering if going to 200/200 LOD in 2D will have the same type of hit as it does in VR.

Coming from your “older” hardware, you will see absolutely massive improvements with this chip and a 3080.

1 Like

Well, all I want to get rid of for now is those annoying stutters, because overall my FPS is not that bad. The main problem is when moving around or on the ground, where these stutters appear; in dev mode I show as CPU main-thread limited.

It sounds like you are underdriving your GPU. What monitor do you have?

I have an AOC 24G2U at 144 Hz; G-Sync compatibility is on.

And the resolution you are using? And TAA or DLSS, etc.?

Edit: Your setup is not too dissimilar to my own in terms of power and balance, so I can safely advise you to abandon 1080p.

Set up custom resolutions in NVCP for both 1440p and 4K at 60 Hz (turn G-Sync and HDR off for now). I suggest you apply the 1440p one to start with and then turn on image scaling with a low sharpening setting; this should fix any blurriness. You can also enable V-sync to stop your FPS running away in the menus.
Then use these settings in UserCfg.opt (for a Steam install it usually lives at %APPDATA%\Microsoft Flight Simulator\UserCfg.opt, and for a Microsoft Store install in the package’s LocalCache folder; edit it with the sim closed):

Monitor 0
Windowed 1
FullscreenBorderless 1
WindowActive 1
Resolution 2560 1440
FullScreenResolution 2560 1440
PosX 0
PosY 0
AntiAliasing TAA
DLSSMode QUALITY
PrimaryScaling 0.850000
SecondaryScaling 1.000000
SharpenAmount 0.500000
ReprojectionMode 0
WorldScalePercentVR 0
AntiAliasingVR TAA
DLSSModeVR PERFORMANCE
PrimaryScalingVR 0.800000
SecondaryScalingVR 1.000000
SharpenAmountVR 1.000000
VSync 0
HDR10 0
Raytracing 0
PreferD3D12 0

Keep traffic settings no higher than default.
Experiment with these settings in 4K DLSS too (change where appropriate).

1080p, TAA, as I am not on the beta.

See my edit above