Which 12/13th Gen Intel CPU For 4080/4090 @ 1440p?

Currently using an i5-12400F + RTX 3050 for 1440p and upgrading to a 4090 (or 4080 at minimum) soon.

What CPU should I get that pairs best with it for MSFS? The i9-13900K, or is there something cheaper that is just as good and maybe runs cooler? I don't want to risk a major performance loss for that, though.

I have some Best Buy rewards I need to redeem before they expire so shopping time.


For MSFS, I think the i5-13600K could work. With MSFS being largely single-thread limited, all the extra cores on the i9 don't matter as much.

If budget is not a constraint, I would get the 13700K. It’s 90% the performance of the 13900K but nearly $200 less. The 13600K is also a good choice if you want to save some money, and can be overclocked to match the 13700K in performance.

Why limit the graphics to 1440p?

With that level of hardware, I would go for
4K - 3840 x 2160 minimum,
or even
5K - 5120 x 2880
8K - 7680 x 4320.

Quick pixel math below for comparison.
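Here's the comparison as a minimal sketch, just arithmetic in plain C (the 2560 x 1440 baseline is the resolution from the original post):

```c
#include <stdio.h>

/* Pixel-count comparison: how much more work per frame each target
 * resolution is versus the 2560x1440 the original poster runs now. */
int main(void) {
    struct { const char *name; long w, h; } res[] = {
        {"1440p", 2560, 1440},
        {"4K",    3840, 2160},
        {"5K",    5120, 2880},
        {"8K",    7680, 4320},
    };
    double base = 2560.0 * 1440.0;
    for (int i = 0; i < 4; i++) {
        double px = (double)res[i].w * res[i].h;
        printf("%-6s %4ldx%-4ld %6.1f MP  %4.2fx 1440p\n",
               res[i].name, res[i].w, res[i].h, px / 1e6, px / base);
    }
    return 0;
}
```

4K is 2.25x the pixels of 1440p, 5K is 4x, and 8K is a full 9x, which is why even a 4090 has to work for it.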

Not so fast on the 8k display. Tried it this weekend on my video wall setup (4 x 4096 x 2160 50" displays in a 2 x 2 grid, giving a 100" TV, dual Arc 770LEs). The video cards have no problem whatsoever with doing it.

Started out with my standard 8192 x 2160 window, everything OK. Stretched the window down to cover half of the lower row of screens as well, giving 8192 x 3240, still no problem. Went for the full 8192 x 4320 and ran into a "sanity check" that is in place to force the sim to run at 4k with CPU-only rendering. I figure it's there to prevent Xbox users from having problems if they ever plug their Xbox into an 8k display, since they will NEVER be able to do 8k on an Xbox Series X. And CPU-only rendering is useless at that resolution.

Until they fix it to allow the PC version to run 8k AT 8k, the only way to run at 8k will be to lower your video resolution, which kind of defeats the purpose of having 8k in the first place.
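Purely as illustration, not actual sim code: here is a speculative sketch of what a sanity check like that might look like. The 30 MP cutoff and the forced 4k / CPU-only fallback are my guesses, fitted to the fact that 8192 x 3240 passed and 8192 x 4320 did not:

```c
#include <stdbool.h>
#include <stdio.h>

/* Speculative reconstruction of the "sanity check" described above.
 * The 30 MP cutoff and the 4k + CPU-only fallback are assumptions
 * that fit my test results; the real check is not documented. */
struct render_mode { int w, h; bool cpu_only; };

static struct render_mode pick_mode(int win_w, int win_h) {
    struct render_mode m = { win_w, win_h, false };
    long long pixels = (long long)win_w * win_h;
    if (pixels > 30000000LL) {      /* somewhere past ~30 megapixels... */
        m.w = 3840; m.h = 2160;     /* ...the sim falls back to 4k */
        m.cpu_only = true;          /* with CPU-only rendering */
    }
    return m;
}

int main(void) {
    int tests[][2] = { {8192, 2160}, {8192, 3240}, {8192, 4320} };
    for (int i = 0; i < 3; i++) {
        struct render_mode m = pick_mode(tests[i][0], tests[i][1]);
        printf("%d x %d -> renders at %d x %d%s\n", tests[i][0],
               tests[i][1], m.w, m.h, m.cpu_only ? " (CPU-only)" : "");
    }
    return 0;
}
```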

Bug report filed earlier today. I doubt many other people have hit this particular bug, since they seem to run at less than 8k x anything.

In the meantime, 8k x 2k is still gorgeous - it’s a wide-screen view that covers twice the normal viewing angle horizontally. Can’t wait to try 16k x 2k when my two 65" side screens are shipped to my supplier, hopefully next week. Then I’ll post screenshots comparing the two - if the sim is capable of 16k output.

Sell the RTX and get an Arc 770LE. With twice the RAM and twice the bus width, it's your video card, not the CPU, that's the bottleneck.

Real-life stats: installed 2 Arc 770LEs on last year's i5-12400, and CPU utilization dropped to 16% while flying in an 8192 x 2160 window stretched across two 50" screens. And it will only get better when they add XeSS support.

I fully expect that CPU to be able to support 16k x 2k when my two 65" side screens finally arrive. The choke point is no longer the CPU (which, before I upgraded the video cards, was often maxing out), and at the current resolution, supplying 4 screens, I'm barely using half the GPU capacity, so a 3rd screen on each card is not unreasonable.

How can Arc 770LEs do that when even the 4090 struggles at 8k?

I’ve already given some of the reasons, but here goes:

  1. The NVidia 4xxx series is seriously at the end of the line. The last die shrink yielded almost zero performance increase; they had to go to serious power draws to get any real benefits. It's why, unlike AMD and Intel, which have both released roadmaps for their next two generations, NVidia has nothing. This happens with every tech at some point; now it's NVidia's turn.

  2. It takes overhead to do between-frame interpolation, serious in-card computing. And to feed that you need twice the in-card VRAM and twice the in-card bandwidth. So even the RTX 4090, with a 384-bit data bus and 24 GB of RAM, is seriously behind 2 Arcs with a combined data bus of 512 bits and 32 GB of RAM (rough bandwidth math after this list).

  3. Contrary to uninformed opinion, Windows (and FreeBSD and Linux) use all the memory you can throw at them. It doesn't show up in memory usage, but the memory allocator in all these systems keeps a hidden cache of recently-used-and-freed memory. When a program calls for more RAM, the allocator first checks that cache to see if it still has a valid-but-marked-free pointer of the right size; if it does, it returns that. If it doesn't, it next tries to allocate from the memory arena, the pool of memory available for allocation, and returns that instead. Only if there is no free RAM at all does it use an algorithm to pick which part of the cache to evict, invalidate that entry, and take the memory from there. This is why more RAM is good. In a game it's not that big a deal (unless you're going on a week-long voyage with others taking turns at the controls, and never shut down), but on servers the performance increase is huge because you're mostly running the same code over and over and over. Anyone who says 64 GB or 128 GB is a waste hasn't done system development (and even most system devs aren't aware of this, since they never interact directly with the memory arena manager). So 128 GB of DDR4-3600 doesn't need overclocking, and in conjunction with 2 16 GB cards, it rocks. (A minimal demo of the caching follows this list.)
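On point 2, here's the rough bandwidth math as a minimal sketch. The memory clocks are assumptions (21 GT/s effective for the 4090's GDDR6X, 17.5 GT/s for the A770 LE's GDDR6), and the combined Arc figure only applies if the workload actually splits across the two cards:

```c
#include <stdio.h>

/* Theoretical memory bandwidth: (bus width in bits / 8 bits per byte)
 * * effective transfer rate in GT/s = GB/s. The clock figures below
 * are assumptions; the combined Arc number presumes the rendering
 * workload really does split across both cards. */
static double bandwidth_gbs(int bus_bits, double gtps) {
    return bus_bits / 8.0 * gtps;
}

int main(void) {
    double rtx4090 = bandwidth_gbs(384, 21.0);  /* ~1008 GB/s */
    double one_arc = bandwidth_gbs(256, 17.5);  /* ~560 GB/s  */
    printf("RTX 4090:       %4.0f GB/s, 24 GB VRAM\n", rtx4090);
    printf("One Arc 770LE:  %4.0f GB/s, 16 GB VRAM\n", one_arc);
    printf("Two Arc 770LEs: %4.0f GB/s, 32 GB VRAM combined\n",
           2 * one_arc);
    return 0;
}
```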
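And on point 3, a minimal demo of the allocator caching described above. Behavior varies by allocator; glibc malloc commonly hands the just-freed block back for a same-size request, but nothing guarantees it:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Demonstrates the allocator recycling recently-freed memory: with
 * glibc malloc, a same-size allocation right after a free is usually
 * served from the allocator's cache of freed blocks, so the same
 * address comes back instead of fresh memory from the arena. */
int main(void) {
    void *a = malloc(4096);
    uintptr_t first = (uintptr_t)a;    /* remember the address */
    printf("first allocation:  %p\n", a);

    free(a);    /* block goes to the allocator's free cache,
                   not necessarily back to the OS */

    void *b = malloc(4096);            /* same-size request */
    printf("second allocation: %p%s\n", b,
           (uintptr_t)b == first ? "  (recycled from the cache)" : "");
    free(b);
    return 0;
}
```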

Funny thing is, I kind of expected most of this from the upgrade to dual Arcs; what I didn't expect was the profound drop in CPU utilization. But I'm not going to lose any sleep over it. :)

Now that word’s getting out, retailers are trying to ramp up their stock. My local branch got 2 in stock Friday, gone the same day, one to me and one to another client.

And for a bit of schadenfreude:

NVidia's head threw on a HUGE price increase the day before pricing was revealed to the public. Until then, it was supposed to be only about a 25% price premium over the 3090, but the head honcho thought he could go double-or-nothing. The hubris is pretty satisfying to watch. I was going to buy a pair of 3060s based on all the disinformation from the NVidia crowd, but finally said to myself, "the people crapping on the card are doing so to avoid buyer's remorse. They don't actually own one, or two." And took the plunge.

Once the two 65" screens are in, I'll post pics of running MSFS in 16k x 2k-3k "super panoramic mode", same as I've done with 8192 x 2160. (Posted pics of the 100" video wall elsewhere, don't remember if I posted them here, but they'll be out of date in a week or two anyway.) They will include CPU utilization, card performance, etc.

As Steve Jobs said - Artists Ship. He wasn’t referring to artists, but to people who DO the work and then SHOW their work in public. Someone has to try out dual Arcs - if everyone waits for everyone else, we’ll keep making sub-optimal decisions based on outdated knowledge.

Rather verbose explanation, but I think it answers your question and provides some context on the whole mess the GPU market has become, with NVidia and AMD both getting greedy. The latest AMD pricing is equally lame.

If you're going to buy a 4090, go with the 13900K (since you asked which Intel CPU).

I am CPU-limited by my 5900X with my 4090 at 4k in some games. Some people call this a bottleneck.