As a piece of general advice, don’t ever buy AMD GPUs if you are using or considering VR. You might get lucky, but chances are it will not end well… They might work fine for 2D.
Again, unless you do VR. I have a 4090 and I’m severely CPU-limited. Extra single-core performance is definitely going to help a lot. The 5800X3D is the best there is right now, and it’s cheap; that’s what I’d get on a budget. The 7000-series X3D will definitely be better than anything Intel can offer, for MSFS.
AMD just announced the Ryzen 9 7950X3D with 16 cores and 144 MB of total cache. It boosts to 5.7 GHz (the same max clock as the existing Ryzen 9 7950X), while the TDP has been cut by 50 W to just 120 W.
Ryzen 9 7900X3D: 12 cores, 5.6 GHz boost and 140 MB of total cache.
Ryzen 7 7800X3D: 8 cores, 104 MB of cache, 5.0 GHz boost clock and a TDP of 120 W.
7950X3D vs. Core i9-13900K: 9% to 24% faster in gaming.
Available “in February”, with no firm date… Shut up and take my money…
Lower base frequency shouldn’t matter for gaming, as the turbo frequency is usually attainable provided there’s thermal headroom. However, it might matter for other applications that don’t need a huge L3 cache (like Blender), and I suspect there might be some performance loss versus the non-3D versions.
I thought the same as well. I suspect the performance gains due to the L3 cache of the 7900/7950X3D over the 7800X3D will be minimal due to the inherent limitations of two core complexes, but their higher boost frequencies might help offset that.
I wonder how base clocks will impact 0.1% / 1% lows, and thus smoothness. AMD has not released the 7800X3D’s base clock; could it be higher than the other two?
Maybe. Will higher clocks and 3D V-Cache be attainable on the same CCX?
How about OS scheduling on Windows?
Looking forward to real life tests.
In any case this announcement is good news for the MSFS community.
On a personal note, I am planning to play with virtualization and CPU core pinning on Linux, so it will be interesting to see how it plays out in terms of choosing the proper CPU for maximum performance/smoothness of MSFS in a virtual machine. I suspect the 7800X3D is the sweet spot on Windows bare metal, but for my setup it could be a different story.
Pricing has not been made public.
The rumor mill is placing the 7800X3D around a US$510 MSRP at launch.
The idea being that the introduction price was planned to be around 60 dollars more than its predecessor’s due to inflation, etc.
Of course this is to be taken with a grain of salt, pepper, even garlic, given the volatile context.
That by itself doesn’t say anything about performance. It only means that gamers mostly care about single-core performance and use 2-3 cores at most, while creators run video and 3D rendering workloads that use all cores, so they need strong multi-core performance. Just like I do. The 7800X3D won’t work for me as my workstation CPU - not enough cores, a downgrade from my 12-core 5900X. But the 7900X3D or 7950X3D will work just fine!
The only problem is I’m used to 64 GB of RAM, and 64 GB of 6000 MHz DDR5 is eye-wateringly expensive. Ouch! The AM5 motherboard too…
For Canadian simmers: I will try the strategy that worked for me with my 3080, 5900X and 4090 purchases: an in-store, full-price, prepaid preorder at Canada Computers. In every case I got the item within a week of the release date. Not sure when it will start, but not before the release date and price are official. You have to actually go to the store (impossible to do online), register and pay full price (refundable if you change your mind), and get in line. Then wait for an email to pick it up… Much less stressful than hunting online stock, and minimal wasted time… We’ll see if it works this time. Other stores may have similar arrangements…
I am not sure we are speaking about the same thing.
The way the 3D cache is distributed across the CCDs will indeed have performance implications in some use cases.
For reference, here is the output of lstopo for the 5950X, showing how the cache is distributed:
Here you can see that the 5950X is a dual-CCD architecture with 16 threads in each, unlike the 5800X3D, which is a single CCD.
The 7800X3D is also a single CCD.
According to AMD it will come with 104 MB of total cache (L2 + L3), which works out to 1 MB of L2 cache per core plus 96 MB of L3 cache (8 × 1 MB + 96 MB = 104 MB).
In their announcement, the dual-CCD 7900X3D and 7950X3D were shown with 140 MB and 144 MB of total cache, whereas they would have had 204 MB and 208 MB if the extra cache were distributed evenly across both CCDs as in the 7800X3D (e.g. for the 7950X3D: 16 MB L2 + 96 MB + 32 MB = 144 MB, versus 16 MB L2 + 2 × 96 MB = 208 MB).
Therefore, on the 7900X3D and 7950X3D the additional 3D V-Cache is attached to one CCD only. It is an asymmetric cache setup, and different CCDs do not share L3.
That’s why 7800X3D is likely the sweet spot for MSFS, and some other games as well…
For workstation usage it is typically another story.
I for one am planning to use virtualization extensively, so in my case the higher-end CPUs could be better suited as well.
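By the way, if you want a quick sanity check of what Windows itself reports for your own chip, something like the sketch below works from a command prompt. It is only a rough counterpart to the lstopo output above: WMI reports totals only and will not show the per-CCD split, and wmic is deprecated on newer Windows builds although still present.

```bat
@echo off
rem Print core/thread counts and total L2/L3 cache sizes (in KB) as Windows reports them.
rem Totals only - the per-CCD cache split is not visible here.
wmic cpu get Name,NumberOfCores,NumberOfLogicalProcessors,L2CacheSize,L3CacheSize /format:list
```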
I was just talking about the sentence pitching the top gaming chip vs. the top gaming+production chip; it’s just marketing. But thanks for the explanation, I don’t dispute what you’re saying. I think they may be trying to offset it by giving the two higher-end CPUs a higher boost clock… I’m just hoping the 7950X3D won’t be much worse than the 7800X3D. I guess benchmarks will show. I don’t need virtualization, but heavy 4K video editing and rendering, Photoshop and other graphics work call for 12+ cores as well.
The 7950X and 7900X are better binned (i.e., built from higher-quality silicon) than the 7800X, so their max boost frequency is inherently higher because of lower voltage requirements. This doesn’t have anything to do with trying to offset the lower base clocks; binning is the deliberate reason the 7950X has the highest boost frequency in the Ryzen lineup.
I don’t think the 7950X3D will be worse than the 7800X3D; it just might not be significantly better even though it has more L3 cache, because of how the L3 is attached to each CCD.
As pointed out, the L3 cache on the 7950X3D is asymmetric: 32 MB + 64 MB (CCD1) and 32 MB (CCD2).
Also, given that the 7800X3D’s maximum clock is 5.0 GHz, the 7950X3D probably has an asymmetric clock configuration as well: CCD1 (with V-Cache) at around 5.0 GHz and CCD2 at 5.7 GHz.
The image is of the 7950X3D. You can see that the cache is stacked on only one of the two CCDs.
Since the maximum clock of the 5800X3D was 4.4 GHz (the upper limit imposed by the L3 V-Cache), clock speed inevitably had to be sacrificed even though the ALUs need to run fast. So we think AMD tried to satisfy both needs by pairing one CCD with V-Cache and one without V-Cache that can clock higher.
However, we feel this will require explicit user control, such as manually setting processor affinity.
(This is a common approach for processors with 64 threads or more, such as Threadripper Pro; Windows cannot treat more than 64 logical processors equally within a single processor group.)
The reason is that, unlike Intel, AMD has not so far provided a thread-scheduling library for these asymmetric processors, and creating one in the OS for this product alone would be impractical given the cost.
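As a rough illustration of what that manual control looks like with the built-in start /affinity switch: this assumes the V-Cache CCD ends up as the first eight cores (logical processors 0-15 with SMT), and the mask and path are placeholders, not confirmed details.

```bat
@echo off
rem Hypothetical example: launch a game pinned to the first CCD only.
rem FFFF (hex) = logical processors 0-15, i.e. cores 0-7 with SMT.
rem Adjust the mask for your CPU layout; the path below is a placeholder.
start "" /affinity FFFF "C:\Games\SomeGame\SomeGame.exe"
```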
In the image, which part is the cache? What exactly can I see stacked?
Why hasn’t AMD provided a way for Windows to utilise these cores efficiently? Surely that is required, or are they only meant to work properly on other OSes? If so, which?
How would setting affinity (for MSFS) work? I mean, you can do that now with any CPU, right? But you have to do it EVERY time you boot the game. Surely there should be a way to automate the best config?
At the end of the day, I/we non-technicians I think just want to know what the best option is and how that compares to the other options pros/cons. Why are things so complicated!
Without disassembling the processor and looking at it under an electron microscope, it is impossible to tell for certain. However, it is natural to assume from the distinct color change that only one side carries the stacked cache.
That stacked cache works out to 64 MB, which is obvious considering the capacity of the 7800X3D.
The reason AMD has not offered such a scheduler until now is that asymmetric CCDs have never existed before. The only comparable asymmetric x64 processors currently available are Intel’s 12th/13th-gen hybrid parts.
Affinity settings can be automated with Process Lasso and other tools.
The best option cannot be answered as everyone has their own needs and it depends on each individual.
Generally speaking, I would say the 7800X3D is hassle-free, because it is as easy to use as the 5800X3D.
Clearly there are a few applications that use more than 8C16T, so I think the 7950X3D is better suited (assuming affinity is configured by hand in MSFS).
I am also planning to go 5950X -> 5800X3D (current) -> 7950X3D because of some software/game development work.
However, with the information we have now, we cannot yet tell whether the 7950X3D will actually be faster than the 7800X3D.
This is because we cannot be sure the second CCD will have no thermal effect at all, and we do not know whether the AGESA firmware will work properly.
(AMD traditionally tends to take longer than Intel to deliver proper UEFI improvements.)
I’m doing it now every time I start MSFS. I have an elaborate .bat file that asks me (by literally speaking with TTS) what extra software I want to launch (i.e. I can run or skip Addon Linker, ATC, hand tracking, BushTalk Radio etc.). It then launches MSFS, launches everything else with the affinity I want (the last 4 cores for everything but MSFS), sets affinity on all the processes that need it, and kills every app and process that can be killed. I have a lot of extra software running for my motion rig, controls, ATC, etc. I figured Process Lasso is just another process to run; why use it when you can use Windows-native commands to set things up without running extra software?
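For anyone curious, here is a rough sketch of that kind of .bat (not my actual file). The executable names, paths and masks are placeholders, and the mask assumes a 12-core/24-thread CPU where the last 4 cores are logical processors 16-23.

```bat
@echo off
rem Placeholder paths and names throughout - adapt to your own setup.
rem FF0000 (hex) = logical processors 16-23 = the last 4 cores on a 12C/24T CPU.

rem 1. Launch MSFS normally, with no affinity (everything but MSFS gets pinned).
start "" "C:\MSFS\FlightSimulator.exe"

rem 2. Launch helper apps pinned to the last 4 cores.
start "" /affinity FF0000 "C:\Tools\AddonLinker\AddonLinker.exe"
start "" /affinity FF0000 "C:\Tools\ATC\ATCClient.exe"

rem 3. Re-pin an already-running background process onto the same cores
rem    (cmd cannot change a running process, so call PowerShell for that part).
powershell -NoProfile -Command "Get-Process 'SomeBackgroundApp' -ErrorAction SilentlyContinue | ForEach-Object { $_.ProcessorAffinity = 0xFF0000 }"

rem 4. Kill whatever is safe to kill while flying.
taskkill /IM OneDrive.exe /F >nul 2>&1
```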
Where did you find it?
Interesting. BTW the problem with MSFS itself is that it crashes the moment you try to set it to any affinity, or try to launch it with specific affinity, somehow it doesn’t tolerate it, while other software is just fine. I wonder what would be the right cores to try and bind it to… Well, we’ll see soon.
■■■■■■ - I was hoping for a clear-cut choice once the new X3D CPUs were announced, but now I’m still very much hesitating about what to do. My only concern is maximising VR gaming performance, in particular flight sims (incl. MSFS, DCS, IL-2 etc.) and demanding flat-to-VR mods (e.g. Luke Ross mods such as RDR2).
It doesn’t look like the new X3D CPUs will be way ahead of what Intel is currently offering, but on the other hand I now have a slight concern about potential scheduling hiccups with the asymmetric design of the 7950X3D and 7900X3D; it sounds like the 7800X3D might perform similarly for VR with fewer risks than the asymmetric parts. But is the 7800X3D going to be the right choice over, say, the i9-13900KF? That choice isn’t clear to me yet, although I guess waiting (some more; I’ve been delaying my CPU upgrade for so long now) for the reviews and benchmarks will help. Then again, it’s not even that easy to judge most tests, since few of them cover the games that matter to us in VR or measure smoothness rather than just higher fps…