If you REALLY want to force a CPU bottleneck, run at the maximum resolution. Look at how many systems can’t push out 4K without a CPU upgrade. Or if you want to test raw CPU-only performance, don’t use a game that can benefit from GPU acceleration; play 4K or 8K movies straight to a 4K or 8K display. Or do transcoding, where the same benchmarks show you’re still better off with an i9-13xxx. These “benchmarks” have nothing to do with the real world, and nobody should base a purchasing decision on benchmarks anyway.
Here’s the thing: if you’re happy with 1080 or 1440, you don’t need these CPUs. If you want 4K/8K or greater, these CPUs are still behind the i9-13900. So who are these CPUs for anyway? Simmers doing a new build? They can do better, or they’re running at 1080 or 1440, in which case they simply don’t need them.
“Future-proofing?” You’re better off saving your money; then, when the future actually gets here, you’ll be able to buy better, cheaper. That’s how it’s always been.
It’s sad, because I used to always buy AMD. But this round, Intel is still the champ. And if they wanted to rework their 13xxx series and get even more return on their investment, they could do a process feature shrink, get rid of the E cores, and stuff 12 P cores on the die. I’d pay a 50% premium for that. The advantages are (a) more deterministic code execution, and (b) raw performance. Of course they won’t; it would compete too well against their Xeons in labs, where raw power means saving time, which means saving money.
Has anyone tried to run MSFS on a dual-CPU Xeon with a half-terabyte of RAM? It seems it would cost less than what some of us already spend building out cockpits. The only poster here running a Xeon has just 32 GB of RAM, so that’s certainly not representative of the product category. The motherboards I’m looking at for a new build in 2 years support dual CPUs, 16 RAM slots, and 7 video card slots. Why you would buy a Xeon motherboard and then not USE it to a reasonable level of its capacity, IDK.
The 7 slots can only accommodate 4 dual-width cards, but hey … even that would be awesome. And with software bloat being what it is, and the ability to run data models that you simply can’t run in a reasonable time on a desktop machine … something to keep in mind. Because I have a data analysis project I’m probably going to run in 3 years, when it will be cheaper to build fractional-petabyte storage machines (I estimate I need 1/3 PB, and in 3 years that will be “affordable”, at least compared to, say, last year).
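To put a rough number on that “affordable,” here’s a quick back-of-envelope sketch. The $/TB figure and the yearly price decline are my own assumptions for illustration, not anything from this thread — plug in your own numbers:

```python
# Back-of-envelope cost for ~1/3 PB of raw HDD storage.
# ASSUMPTIONS (mine, not from the post): ~$15/TB for bulk HDDs today,
# and a simple 20%-per-year price decline. Ignores chassis, controllers,
# and any RAID/redundancy overhead.
target_tb = 1000 / 3           # ~333 TB (1/3 petabyte, decimal units)
price_per_tb_now = 15.0        # assumed USD per TB today
yearly_factor = 0.80           # assumed: 20% cheaper each year

cost_now = target_tb * price_per_tb_now
cost_in_3y = cost_now * yearly_factor ** 3   # three years of decline

print(f"~{target_tb:.0f} TB: ${cost_now:,.0f} now, ${cost_in_3y:,.0f} in 3 years")
# → ~333 TB: $5,000 now, $2,560 in 3 years
```

Even under these rough assumptions, the raw drives are in used-car territory, not data-center territory — which is the whole point about this becoming “affordable.”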