I have a 4-screen setup (two center screens, two side screens), but experimental mode didn’t let me create a single view that spanned both center screens. Here’s my fix:
- In experimental mode, create a second window.
- Leave the horizontal axis offset at 0.
- Stretch the window across both displays.
- Fly for a minute.
(The stretched display will cover the original window.)
- Exit.
- Restart the simulator.
- Exit experimental mode.
- Restart the sim again.
- Press Alt-Enter to get into windowed mode.
- Enjoy your main window now being 8192x2160.
(The original screenshot is 8192x2160.)
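For anyone double-checking the numbers, here’s a quick sketch (it assumes two 4096x2160 center panels, which is what I’m running; adjust the figures for your own monitors):

```python
# Hypothetical per-panel resolution for my two center screens; change for your setup.
CENTER_PANELS = 2
PANEL_WIDTH, PANEL_HEIGHT = 4096, 2160

# Stretching one window across both center panels adds the widths;
# the height stays the same.
combined_width = CENTER_PANELS * PANEL_WIDTH
combined_height = PANEL_HEIGHT

print(f"Stretched main window: {combined_width}x{combined_height}")  # 8192x2160
```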
Nice find
Another fix: select the two center screens and set them to Surround mode in the NVIDIA Control Panel (nvcpl).
That way they will be seen as one display.
Together with the two side screens, everything will then be seen as three screens, and it might even give some extra fps.

- A lot of people have problems getting NVidia Surround to merge the two displays. It doesn’t work for me.
- I just ordered an Arc 770 this afternoon; I should have it next week. The NVidia control panel only works with NVidia cards.
- For less than half the price of a 4090, I can get two Arc 770s. I compared the specs:
Single NVidia 4090:
- 24 GB GDDR6X RAM
- 384-bit bus
- crazy power consumption

Dual Arc 770:
- 2 x 16 GB GDDR6 RAM = 32 GB
- 2 x 256-bit bus = 512-bit combined
- not quite so crazy power consumption
When you can get 4 Arc 770s for less than 1 NVidia 4090, it’s obviously time to consider alternatives.
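Laying the raw totals out as a quick sketch (Python just for the arithmetic; treating two separate cards as one combined pool is my working assumption here, not something any game guarantees):

```python
# Raw spec totals being compared. Treating two cards as one combined pool is an
# assumption, not something the software automatically does.
arc_770  = {"vram_gb": 16, "bus_bits": 256}
rtx_4090 = {"vram_gb": 24, "bus_bits": 384}

dual_arc = {key: 2 * value for key, value in arc_770.items()}
print("Dual Arc 770:", dual_arc)    # {'vram_gb': 32, 'bus_bits': 512}
print("Single 4090 :", rtx_4090)    # {'vram_gb': 24, 'bus_bits': 384}
```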
You can’t just sum two individual cards together; it doesn’t work like that. And they are nowhere near comparable to a 4090.
Actually, you CAN if they’re driving different screens.
Not really. But if you’re happy with that, go you.
Yes, you can, and there are commercial video wall products that do exactly that.
In my case, each screen will be driven by 1 gpu, same as it is right now.
Actually, that’s how the 4 GPUs in my box work. Each one only handles the data fed to it. So 2 x Arc 770 with 16 GB RAM each is 1/3 more memory than an NVidia 4090, and 1/3 more combined bus width.
If I were stupid enough to pay crypto-miner prices for a 4090, that 24 GB of RAM would be divided between 2 screens, so 12 GB of RAM per screen instead of 16. Ditto data bus width allocation: 384 bits allocated between 2 screens is 192 bits per screen, so each Arc has 33% more data bus width per screen.
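The back-of-the-envelope math as a sketch (it assumes memory and bus width really do split evenly per screen, which is my working assumption here):

```python
# Per-screen allocation, assuming resources split evenly across the two center
# screens. That even split is the assumption; the hardware doesn't enforce it.
SCREENS = 2

# Single RTX 4090 driving both center screens
vram_4090_gb, bus_4090_bits = 24, 384
print("4090 per screen   :", vram_4090_gb // SCREENS, "GB,", bus_4090_bits // SCREENS, "bit")
# -> 12 GB, 192 bit

# One Arc 770 per center screen
vram_a770_gb, bus_a770_bits = 16, 256
print("Arc 770 per screen:", vram_a770_gb, "GB,", bus_a770_bits, "bit")
# -> 16 GB, 256 bit
```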
To the GPUs, it’s just data. It all comes from the same CPU, and the CPU has no problem dividing the data flow between 2 screens on 2 GPUs - otherwise my current setup wouldn’t work using 2 GPUs to drive 2 screens in an 8192x2160 stretched center screen.
And if I find that I’m then CPU-constrained, I’ll just throw in an i9-13400k. But I think I’ll be okay with last summer’s CPU.
As for the idea that you need to run just 1 GPU: anyone running 4 screens is using more than 1 GPU, or else they’re limited in RAM and data bus width per screen by their single card.
You can run multiple GPUs to run more than 4 screens. This is pretty standard stuff. This isn’t SLI. This is just adding more display outputs. I do it in my setup (8 monitors). I have a 3070Ti as my main GPU, and an old Quadro P620 I use just to drive my cockpit’s touch screens. Works like a charm.
That said, it doesn’t work like you think it does.
This is incorrect. Only 1 card is going to be used by MSFS for its acceleration needs. The others are just passive display cards. The sim will not render across multiple GPUs or combine VRAM. All the heavy lifting of rendering is done by the primary GPU; it’s what does all the compute and provides the VRAM for MSFS. None of the other cards are being used at all except as passive outputs. The main GPU renders everything and passes the rendered frames off to the other GPUs to display on their respective displays.
You’re basically wasting money and power driving 4 GPUs to run 4 monitors.
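To illustrate the model I’m describing, here’s a purely conceptual sketch (not actual MSFS or driver code; the card names are just examples from my own setup):

```python
# Purely conceptual model, not real MSFS or driver code: one primary GPU does all
# the rendering, the other adapters only present the finished frames.
from dataclasses import dataclass


@dataclass
class Gpu:
    name: str
    vram_gb: int


def render_frame(primary: Gpu, scene: str) -> str:
    # All compute and VRAM usage happens on the primary adapter.
    return f"frame of '{scene}' rendered entirely on {primary.name}"


def present(frame: str, output_gpu: Gpu, display: str) -> None:
    # Secondary adapters just scan out pixels that were rendered elsewhere.
    print(f"{output_gpu.name} -> {display}: {frame}")


primary = Gpu("GPU0 (3070 Ti)", 8)
secondary = Gpu("GPU1 (Quadro P620)", 2)

frame = render_frame(primary, "cockpit + scenery")
present(frame, primary, "main monitors")
present(frame, secondary, "cockpit touch screens")
```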
That is absolutely false - we’re not talking about acceleration; that’s handled by the GPU drivers, not the game. The “rendering” is done by the GPU, not the CPU. The ONLY time Windows provides VRAM is if you have a really cheap “shared memory” video card, in which case you really need to get a real GPU.
It’s absolutely true. As noted above, your assumptions are still incorrect. There is one rendering pipeline; it’s not being divided between the cards but handled by the primary GPU, and it is therefore limited by its bus width, VRAM and raw power. The specs of the other cards are essentially irrelevant.
As Crunchmeister71 notes, the other GPUs are just acting as display adaptors.
Last post here. If you’re happy, great.
What you two are proposing hasn’t been true for almost 2 decades. Most of the “graphics pipeline” operations are now implemented in hardware on the GPU(s), not the CPU.
It’s why you have ray-tracers, shaders, etc., all implemented in the video card hardware and not the CPU.
In the case of the Arc 770LE, the hardware includes 4096 shading units, 256 texture mapping units, 128 ROPs, and 32 raytracing acceleration cores.
Those ROPs are a good example:
In computer graphics, the render output unit (ROP) or raster operations pipeline is a hardware component in modern graphics processing units (GPUs) and one of the final steps in the rendering process of modern graphics cards
Source: https://en.wikipedia.org/wiki/Render_output_unit#:~:text=In%20computer%20graphics%2C%20the%20render,process%20of%20modern%20graphics%20cards.
Rasterization hasn’t been CPU-based for a LOOOONG time, except in cheap shared-memory adapters that are just a bare-bones interface to the output display.
So what about the rest of the pipeline? Again, it’s mostly GPU, not CPU.
The model of the graphics pipeline is usually used in real-time rendering. Often, most of the pipeline steps are implemented in hardware, which allows for special optimizations. The term "pipeline" is used in a similar sense to the pipeline in processors: the individual steps of the pipeline run in parallel as long as any given step has what it needs.
Source: Graphics pipeline - Wikipedia
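To spell that out, here’s a simplified sketch of which pipeline stages typically run where on current hardware (generic; not specific to the Arc 770 or to MSFS):

```python
# Simplified view of a modern real-time graphics pipeline and where each stage
# typically executes. Generic sketch; not specific to any one GPU or to MSFS.
pipeline_stages = [
    ("scene / draw-call submission", "CPU (application + driver)"),
    ("vertex shading",               "GPU (shader cores)"),
    ("rasterization",                "GPU (fixed-function hardware)"),
    ("fragment/pixel shading",       "GPU (shader cores)"),
    ("texture sampling",             "GPU (texture mapping units)"),
    ("raster output / blending",     "GPU (ROPs)"),
    ("scan-out to the display",      "GPU (display engine)"),
]

for stage, where in pipeline_stages:
    print(f"{stage:<32} -> {where}")
```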
Gotta keep up with the times. It's why the Arc GPU contains more than 20.7 billion transistors. While the transistor count of the i9-13900 isn't known, the i9-12900k is a measly 2.95 billion transistors.
You can quote Wikipedia articles as long as you want. This has nothing to do with graphics pipelines; we are talking about the rendering engine of FS2020 and its output.
The GPUs do nothing on their own; they need the software to tell them what to do. FS2020 has a single render pipeline for whatever it displays, across however many outputs.
Your 2nd and 3rd outputs are not magically being split off from everything else and left to each GPU in turn to render. FS is not splitting different views off, sending each GPU only the objects and data it needs, and hoping they all render nicely and in sync with each other.
It’s pushing all that’s called for out to the primary GPU.
It’s why SLI, and its modern implementation NVLink, was created: to share rendering and VRAM between multiple GPUs rather than just taking advantage of individual cards. But you need nVidia for that.
There’s a reason SLI is dead. CPUs are fast enough that they can send the data to 2 or more video cards, along with viewport information, so they can render, for example, 2 side-by-side views with an X offset for the right view that is the width of the left view.
Before, CPUs weren’t fast enough to do this. And neither were GPUs.
Most of the render pipeline nowadays IS in the GPUs. No, MSFS isn’t splitting off the data - but the underlying Windows graphics engine IS doing exactly that: feeding slightly different info to each GPU, because THAT part of Windows knows the physical dimensions and placement of each window to be rendered. The actual rendering is ALMOST ALL done in the GPUs.
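A quick sketch of the side-by-side viewport idea (just the geometry, assuming two equal-width views; this isn’t code from MSFS or from Windows):

```python
# Geometry only: two side-by-side views, where the right view's X offset equals
# the width of the left view. Not MSFS or Windows code; resolutions are examples.
VIEW_WIDTH, VIEW_HEIGHT = 4096, 2160

left_view  = {"x": 0,          "y": 0, "w": VIEW_WIDTH, "h": VIEW_HEIGHT}
right_view = {"x": VIEW_WIDTH, "y": 0, "w": VIEW_WIDTH, "h": VIEW_HEIGHT}

for name, view in (("left", left_view), ("right", right_view)):
    print(f"{name:>5}: offset=({view['x']}, {view['y']}), size={view['w']}x{view['h']}")
```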
Anyway, anyone who wants to can find out why SLI is pretty much dead nowadays. Nobody wants it with the much improved hardware - welcome to the 2020s.
The reason SLI is not needed anymore is NVLink, plus a PCIe bus that is fast enough and has enough bandwidth not to require direct GPU links via SLI.
The reason NVLink is a ‘thing’ - as SLI was - is that it can share GPU load over multiple cards without the application having to be written specifically to take advantage of multiple GPUs.
Without it, or something similar, discrete GPUs are just that, and software has to be written to take advantage of them. That is very difficult for ‘lots of reasons’, and FS2020 is not written that way.
Getting tiresome now. You can keep believing what you want, doesn’t change the facts. Bye.
Getting tiresome now. You can keep believing what you want, doesn’t change the facts. Bye.
Funny, I've been thinking the same thing.
NVLink has NOTHING to do with this. I never said the GPUs would communicate with each other. I'm not stupid - just more up to date on tech than you seem to be. First you claimed the CPU did most of the rendering - totally false except on the cheapest video cards. Then you brought up an obsolete technology that enthusiasts abandoned years ago. And then you brought up NVLink. Totally irrelevant. But it shows you're not up to date with the technology.
Go look up the OpenGL functions - they’re executed by the video cards, not the CPU (except in those shared-memory garbage video cards), and they are responsible for 90% of the entire rendering pipeline if you have the right card.
But the rendering pipeline doesn’t even stop at the video card. Plenty of smart TVs have chips to further enhance the video. We’ve come a long way, baby!
No, NVlink has obviously nothing to do with this since you are not using nVidia cards.
Up to date and not stupid? I haven’t suggested anything of the sort, nor did I mention anywhere that the CPU does the rendering. I think the mere fact you think I did somehow says it all.
…as does you bringing OpenGL into this. Nothing to do with this at all.
Now you’re onto smart TV’s and rendering… Umm, okay.
Someone who has a newer smart TV that can enhance images, but doesn’t yet have a video card capable of doing 4K, is going to appreciate that they can run their card at a lower resolution and get “almost the same” visuals.
It’s truly amazing how the smart TV upscalers work - I regularly stream old TV shows directly off the net (no computer involved; it can be turned off) and those old shows look better than they ever did “back in the day.” Don’t knock anything that has been proven to work.
Same as the Intel Arc 770 LE cards are truly amazing. Next month I’ll be installing a second one - just have to finish building my video display rack so I can get the screens off my work desk (and add 2 more screens above the 4 I have now, to display pop-outs of the multi-function displays).
But go ahead and be a sucker for NVidia’s propaganda - it’s your loss. It’s significant that their 4xxx cards aren’t really novel, just “the next generation of our old and now overpriced stuff.”