AMD 5800X3D performance

Thanks for the reply.
Yeah definitely. I ordered it yesterday as it dropped $70 and was in ATL.
I was committed to waiting for the 4000 series, but being on AM4 and with the current price drops I couldn't resist upgrading both the GPU and the CPU.

I have never spent so much money on a PC, but I am now at that age where the scarcest resource is time, lol. I don't think I'll regret it :).


It's a shame there is not going to be an i7 13700K X3D… The L3 of the upcoming 13700K is said to be only 30 MB, which is not enough for MSFS.

Why not enough? I have a 12700K and it works perfectly.

And 12MB of L2 cache, for a total of 42MB (68MB total for the 13900K), and that does not include architectural and process node improvements. Leaked benchmarks of early engineering samples show a big improvement in minimum frame rates over Alder Lake (similar to going from Vermeer to Vermeer-X), so it’s too early to predict whether it will be weak for MSFS or not. Personally I think Raptor Lake will take the lead until Zen 4 with 3D V-Cache is released.

That is what I meant. Raptor will not have sufficient L3 cache compared to the then-outdated 5800X3D. The latter already has 96 MB and owns the MSFS crown. I would like to see a 13700K X3D with at least the amount of L3 that AMD already has.

Perfect is relative. I just ordered a 12700K along with a new CPU cooler and a Strix motherboard. Right now I am still running a 9900K and I am not pleased at all.
The 12700K will be sold as soon as the 13xxx series arrives on the market. I had a hard time not ordering a 5800X3D instead, but it is at the end of its lifecycle. So I just wish Intel would introduce more L3 cache, as this is the holy grail for MSFS.

L3 cache is not everything. By that logic, there would be no difference between Zen 2 and Zen 3 in MSFS, since they have the same amount of L3 cache (32 MB), and that would extend to Zen 4. L2 cache is also very important for CPU performance, and it is faster because it sits closer to the core. The 13700K will be better in this respect than a single Zen 3 CCD (4 MB of L2) or Zen 4 CCD (8 MB).

Also, there is no way Intel could produce a 3D V-Cache variant of Raptor Lake anyway, because their chips are not designed to support 3D stacking, let alone an MCM architecture.

Of course it is not only L3 cache. Otherwise a 386 with 1 GB… But it has a massive impact even if the CPU is not top notch, as clearly shown by the 5800X3D, which performs way better than the i9-12900KS.
If an even better CPU had sufficient L3 cache and were not limited by cooling, such a product would outperform everything in regard to MSFS.

Intel is probably at least a few years away from having a comparable 3D cache option, as Intel does most of their own fabrication. The benefit of AMD using TSMC is that they can let TSMC do the heavy lifting on the fabrication side. That's not to say Intel can't use TSMC for that technology.

I'm not sure I get your point here. L3 cache makes a huge performance difference in CPU-limited scenarios, like MSFS. Comparing Zen 2 and Zen 3 performance, even with the same L3 cache, is inherently flawed, as there is a significant IPC increase between generations, particularly from Zen 2 to Zen 3. The performance increase in MSFS between the 5800X and the 5800X3D is proof enough that L3 cache is a significant factor in MSFS performance. The 5800X3D even has a slightly reduced base clock compared to the 5800X, further highlighting the performance benefits of L3 cache.


That's precisely my point: it is way too early to judge the performance of a CPU based on L3 cache amount alone. I have already mentioned the L2 cache increase in Raptor Lake and potential architectural/clocking improvements, which could also include cache management.

That's too broad a statement; there are several, mostly older, CPU-limited games that benefit either very little or not at all from the extra cache. That includes Prepar3D and X-Plane.

I understand what you mean. What I've seen so far on Raptor Lake is a rumored increase to 2 MB and 4 MB of L2 cache per P core and per E-core cluster, respectively. It would be interesting to see if this increases performance out of the box. At first glance, I would assume this L2 cache is too small to deliver a significant performance uplift for gaming. Anything bigger than the per-core L2 would spill to the L3 cache instead. Once again, this highlights the benefit of having a very large shared L3 cache.
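To make that spill-over reasoning concrete, here's a toy sketch: a per-core working set is served by the smallest cache level large enough to hold it. The sizes are illustrative assumptions taken from the rumors above (2 MB per-core L2, 30 MB shared L3), not confirmed specs.

```python
# Toy model of the spill-over reasoning above: a per-core working set is
# served by the smallest cache level large enough to hold it. Sizes are
# illustrative assumptions (rumored 2 MB P-core L2, 30 MB shared L3).

MB = 1024 * 1024

def serving_level(working_set_bytes, l2_bytes=2 * MB, l3_bytes=30 * MB):
    """Which level would mostly serve a working set of this size?"""
    if working_set_bytes <= l2_bytes:
        return "L2"
    if working_set_bytes <= l3_bytes:
        return "L3"
    return "DRAM"

# Anything bigger than the per-core L2 lands in the shared L3 -- and once
# it outgrows that, it falls to DRAM, which is where a 96 MB L3 part like
# the 5800X3D keeps winning.
print(serving_level(1 * MB))    # L2
print(serving_level(8 * MB))    # L3
print(serving_level(200 * MB))  # DRAM
```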

I'm sure this could open the door for developers to take advantage of a larger L2 cache. I don't recall seeing any modern benchmarks comparing L2 to L3 cache performance; the L3 cache has been the focus lately.

Any VR users (specifically the G2): what are you guys seeing for temps? Mine is peaking around 87-89, which seems hot? I'm on a 240mm AIO for cooling.

Doing some googling, people seem to say anything up to 90 is fine, and even then the CPU will do its best to maintain turbo speed and not throttle.
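That matches the chip's 90 °C limit. As a toy sketch of that behavior (a simplified stand-in, not AMD's actual Precision Boost algorithm; the clock figures and the 50 MHz-per-degree slope are made-up illustration values):

```python
# Toy sketch: hold full boost until the 90 C limit, then trim frequency
# gradually instead of hard-throttling. Simplified stand-in for AMD's
# boost behavior; the clock numbers and slope are made up.

T_LIMIT_C = 90.0

def boost_clock_mhz(temp_c, max_boost=4500, base_clock=3400):
    """Effective clock for a given core temperature (toy model)."""
    if temp_c < T_LIMIT_C:
        return max_boost                                  # full boost below the limit
    over = temp_c - T_LIMIT_C
    return max(base_clock, max_boost - int(over * 50))    # trim, floored at base clock

print(boost_clock_mhz(87))   # 4500 -- high 80s still holds full boost
print(boost_clock_mhz(92))   # 4400 -- gentle trim just over the limit
```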


Just upgraded a 5600X to a 5800X3D, and it's paired with a 6800 XT @ 2650 MHz / 3600 RAM / Quest 2 + Link cable. After 6 hours messing with it in Flight Sim 2020 / Automobilista 2 / Project Cars 3: long story short, it's beyond expectations in VR and a bigger upgrade for 4K gaming than is being let on. But part of that could be thinking the 5600X was good enough at 4K, when it clearly isn't after using a chip like this.

Flight sim wise, the biggest highlight is that there are no more Oculus Link issues (shimmering/stutters); it's clear and stays clear even if I jerk the plane from side to side. It laughed at New York with scattered clouds in the morning, with all the airport/car/boat/fauna traffic at 100. At 80 Hz / 1.2 res / 90 in-game render scale and a mix of medium-high settings, I flew around it for 45 minutes; I couldn't believe it. I'm also on the Update 10 beta.

One thing I noticed in Flight Sim is that this chip works well on DX11 and not so much on DX12. On the 5600X I was getting better (if still hobbling-along) performance on DX12.

Also, they are $399 at Micro Center this week; that's what lured me in.

Same, but I'm on a tower cooler; it seems getting up to 90 is the norm for this thing. Interesting that an AIO isn't faring much better!


The thing does run on the hot side. :wink:

I’m also seeing mid to high 80s under heavy load with a 280mm radiator on an enclosed liquid cooler (though the case layout is not ideal, with a “pull” config on the radiator fans instead of “push” which may lead to slightly higher ambient temps). I’m hitting 4.35 GHz at those temps though, as reported by Ryzen Master, so it’s not excessively throttling down.

(Not VR, but running my RTX 2070S full blast for 3440x1440 :D)


Yeah I’m going to have to take a closer look at my boost clocks and average temps in Ryzen Master tonight.

I simply noticed that my max temp last night after a ~2 hour flight was 89.3. I wasn't actually paying attention to my temps the entire time; it was just something I noticed afterwards that caught my attention.

But yeah, most everything I’m reading is saying high temps on these is normal and “by design”. Still a little unnerving to me though.


Hey there, @CasualSniper854, and a warm welcome to the forum community. Interesting input on your improved performance. Part of the joy of this hobby is the constant upgrade pressure to find that perfect sweet spot for performance.

Now that you are here, suggest you check out the Forum Guide where you can read about all the features available here to help you use and navigate the forums.

Again, welcome to the community!


After a closer look at my temps last night, I can safely say that spikes into the mid-to-high 80s are rare. Average temps are much better, in the low-to-mid 70s.

Download PBO2 Tuner. It will allow you to set a negative offset on your X3D's voltage/frequency curve. It's super easy to do, and it will lower your temps.

PBO2 Tuner is a standalone Windows utility that doesn't need to be installed and can just run from a folder; it's very light.

From my observations, most X3Ds can handle -20 to -25 with ease, producing less heat and achieving higher boost clocks in certain situations.

Nothing to be afraid of: super easy, and non-permanent unless you make it so.

Just start off with -10 -10 -10 -10 -10 -10 -10 -10
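For a rough sense of what those offsets buy you, here's a back-of-the-envelope sketch. Two assumptions, neither official: each offset step is commonly cited as roughly 3-5 mV, and dynamic CPU power scales with voltage squared at a fixed clock.

```python
# Back-of-the-envelope for a negative curve offset. Assumptions, not
# official figures: ~5 mV per offset step (upper end of the commonly
# cited 3-5 mV range), and dynamic power ~ V^2 at a fixed clock.

MV_PER_STEP = 5

def offset_mv(steps):
    """Approximate voltage change for a given offset, e.g. steps=-20."""
    return steps * MV_PER_STEP

def dynamic_power_scale(v_nominal_mv, delta_mv):
    """Relative dynamic power at the same frequency after a voltage change."""
    return ((v_nominal_mv + delta_mv) / v_nominal_mv) ** 2

delta = offset_mv(-20)                    # about -100 mV
scale = dynamic_power_scale(1350, delta)  # ~0.86: roughly 14% less dynamic power
print(delta, round(scale, 2))
```

Less power at the same clock means less heat, which is why a modest offset can drop temps noticeably and even free up thermal headroom for higher boost.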


The 5800X3D does not allow its boost clocks to be unlocked by any method other than BCLK overclocking on certain MSI motherboards (MEG X570 GODLIKE/ACE/Unify only), so even in Ryzen Master there is no OC option like there is for other processors. And if your processor's temperature is too high, it will not sustain its boost clock anyway; high temperatures likely mean it is not being cooled well enough.

The reason for the OC limitation seems to be more than just heat. The main reason is that the stacked L3 cache does not operate stably when the voltage goes above about 1.35 V, so the voltage cannot be raised for overclocking.

(It's hot everywhere in the world right now, and PCs are too, but humans should be careful of heat stroke as well.)
