I’ve already given some of the reasons, but here goes:
- The NVidia 40xx series is seriously at the end of the line. The last die shrink yielded almost zero performance increase; they had to resort to serious power draws to get any real gains. It's why, unlike AMD and Intel, which have both published roadmaps for their next two generations, NVidia has nothing. This happens to every tech at some point; now it's NVidia's turn.
- It takes overhead to do between-frame interpolation: serious in-card computing. And to feed it you need twice the in-card VRAM and twice the in-card bandwidth (rough numbers in the first sketch after this list). So even the RTX 4090, with its 384-bit data bus and 24 GB of RAM, is seriously behind two Arcs with a combined 512-bit data bus and 32 GB of RAM.
- Contrary to uninformed opinion, Windows (and FreeBSD and Linux) will use all the memory you can throw at them. It doesn't show up in memory-usage numbers, but the memory allocator in all these systems keeps a hidden cache of recently-used-and-freed memory. When a program asks for more RAM, the allocator first checks that cache to see if it still holds a valid-but-marked-free pointer of the right size; if it does, it returns that instead. If it doesn't, it next tries to allocate from the memory arena, the pool of memory available for allocation. If no free RAM is available there either, it uses an eviction algorithm to pick which part of the cache to really remove from the caching scheme, invalidates that entry, and takes the memory from there. This is why more RAM is good. In a game it's not that big a deal (unless you're going on a week-long voyage with others taking turns at the controls and never shutting down), but on servers the performance increase is huge, because you're mostly running the same code over and over and over. Anyone who says 64 GB or 128 GB is a waste hasn't done systems development (and even most systems devs aren't aware of this, since they never interact directly with the arena manager). So 128 GB of DDR4-3600 doesn't need overclocking, and in conjunction with two 16 GB cards, it rocks. (A minimal sketch of the free-list idea follows this list.)
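To put rough numbers on the interpolation point: the sketch below is back-of-envelope only, and the resolution, bytes-per-pixel, and buffer counts are my own illustrative assumptions, not vendor figures.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions only: a 4K framebuffer at 4 bytes per
       pixel, and an interpolator that keeps the previous rendered frame
       and the next rendered frame resident while it writes the
       generated in-between frame. */
    const double width = 3840.0, height = 2160.0, bytes_per_pixel = 4.0;
    const double frame_mb = width * height * bytes_per_pixel / (1024.0 * 1024.0);

    const double plain_buffers  = 1.0; /* plain rendering: one target   */
    const double interp_buffers = 3.0; /* prev + next + generated frame */

    printf("one 4K frame:             %6.1f MB\n", frame_mb);
    printf("plain rendering:          %6.1f MB per frame\n", frame_mb * plain_buffers);
    printf("with frame interpolation: %6.1f MB per frame (%.0fx)\n",
           frame_mb * interp_buffers, interp_buffers / plain_buffers);
    return 0;
}
```

The same multiplier applies to bandwidth, since every extra resident frame has to be read or written each cycle. Depending on how you count buffers it lands somewhere between 2x and 3x, which is where the "twice the VRAM, twice the bandwidth" rule of thumb comes from.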
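And the promised sketch of the allocator-cache idea, in C. To be clear, this is my own toy illustration of the principle, not how Windows, Linux, or FreeBSD actually implement it; real allocators add size classes, per-thread caches, and arena bookkeeping far beyond this.

```c
#include <stdlib.h>

/* Toy free-list wrapper: recently freed blocks are kept "valid but
   marked free" in a small cache and handed back on the next matching
   allocation instead of going to the system arena. */

#define CACHE_SLOTS 64

typedef struct {
    void  *ptr;
    size_t size;
} cached_block;

static cached_block cache[CACHE_SLOTS];
static int cache_count = 0;

void *cached_alloc(size_t size)
{
    /* First look for a freed block of a matching size in the cache. */
    for (int i = 0; i < cache_count; i++) {
        if (cache[i].size == size) {
            void *p = cache[i].ptr;
            cache[i] = cache[--cache_count]; /* compact the cache      */
            return p;                        /* reuse, no arena trip   */
        }
    }
    /* Cache miss: fall back to the real arena allocator. */
    return malloc(size);
}

void cached_free(void *ptr, size_t size)
{
    if (cache_count < CACHE_SLOTS) {
        /* Keep the block around for quick reuse. */
        cache[cache_count].ptr  = ptr;
        cache[cache_count].size = size;
        cache_count++;
    } else {
        /* Cache full: actually return the memory to the system. */
        free(ptr);
    }
}
```

The point is the one made above: a cached_free followed by a matching cached_alloc never touches the arena at all, and the more spare RAM the system has to dedicate to caches like this, the more often that fast path hits.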
Funny thing is, I kind of expected most of this from the upgrade to dual Arcs; what I didn't expect was the profound drop in CPU utilization. But I'm not going to lose any sleep over it.
Now that word's getting out, retailers are trying to ramp up their stock. My local branch got two in on Friday; they were gone the same day, one to me and one to another client.
And for a bit of schadenfreude:
NVidia's head threw on a HUGE price increase the day before pricing was revealed to the public. Until then, it was supposed to carry only about a 25% price premium over the 3090, but the head honcho thought he could go double-or-nothing. The hubris is pretty satisfying to watch, because I was going to buy a pair of 3060s based on all the disinformation from the NVidia crowd, but finally said to myself, "The people crapping on the card are doing so to avoid buyer's remorse. They don't actually own one, or two." And took the plunge.
Once the two 65" screens are in, I'll post pics of MSFS running in 16k x 2k-3k "super panoramic mode," same as I've done at 8192 x 2160. (I posted pics of the 100" video wall elsewhere; I don't remember if I posted them here, but they'll be out of date in a week or two anyway.) They'll include CPU utilization, card performance, etc.
As Steve Jobs said, "Real artists ship." He wasn't referring only to artists, but to people who DO the work and then SHOW it in public. Someone has to try out dual Arcs; if everyone waits for everyone else, we'll keep making sub-optimal decisions based on outdated knowledge.
Rather verbose explanation, but I think it answers your question and provides some context on the whole mess the GPU market has become, with NVidia and AMD both getting greedy. The latest AMD pricing is equally lame.