So yesterday I had the same issues as many others (being stuck on 97%, crash at character creation, etc).
Today I managed to get into the main menu without any problems. However, looking at “Free Flight” and how the entire product seems to have been built, I think the root cause of many of the issues people are seeing is the lack of network infrastructure that can handle the number of people trying to play, combined with an overt dependency on streaming.
It suddenly makes sense why the download size was relatively small, and why I’ve also been experiencing issues under “Free Flight” where, if the sim can’t load in the details of an airport, it won’t let you fly.
It also suddenly makes sense why people have been saying that their visual fidelity has been subpar:
It all seems to come down to streaming - not just whether the servers can handle it, but whether people’s individual connections can.
I’d like to think I understand why the sim was built this way given the massive amount of data involved, but I think you also inadvertently created a single point of failure: if there are network / server issues, the whole thing seems to collapse.
Or, if people have a subpar connection, their experience is going to reflect that.
Same questions - I’ve wondered about the viability of this architecture but thought I must be missing something somewhere. There were enough (more than enough) red flags for me to decide to stay with 2020 for the time being and see how it worked in practice… but I really hoped I was just being overcautious…
I was also a bit worried about the install size on Xbox. It seemed very small for a game that was supposed to be graphically better, perform better, and have more content than 2020, when the 2020 base game sits at over 120 GB.
If Microsoft really thought that streaming hundreds of GB of data to an untold number of users across the world was viable, then they simply have no business making flight sims anymore.
I would hazard a guess that it’s not the network infrastructure that’s the problem per se (the Azure cloud is surely pretty robust), but a problem with the system design somewhere. There was clearly some sort of bottleneck causing a cascading effect that’s hard to reverse, as everybody just kept retrying the connection.

From yesterday’s video it appears that the problem was a saturated cache. I’m guessing what this means in practice is that lots and lots of connections were trying to pull data through the cache and the cache filled up, so whilst connection A was trying to pull file A, connection B was trying to pull file B from the database, and the cache couldn’t make way for file B - effectively a deadlock. If this happens across all your caches, the system grinds to a halt. In that case the network infrastructure would be fine; the problem lies in the poor scalability of the system.
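To make the “everybody just kept retrying” part of that theory concrete, here’s a toy Python sketch (purely illustrative - the capacity figures, the congestion curve and the client behaviour are all assumptions I’ve made up, nothing to do with the sim’s real backend). It compares clients that hammer an overloaded service every tick against clients that back off exponentially with some jitter:

```python
import random

# Toy numbers purely for illustration - nothing here reflects the sim's
# actual backend, capacities, or client behaviour.
CAPACITY = 1000    # requests the service can usefully complete per tick
CLIENTS = 5000     # clients all trying to connect at launch
TICKS = 30
MAX_DELAY = 8      # cap on the backoff delay (in ticks)

def completions(load: int) -> int:
    """Crude congestion-collapse curve: up to CAPACITY the service keeps up;
    past that, useful completions fall off because requests queue up, time
    out and tie up resources, hitting zero at twice the capacity."""
    if load <= CAPACITY:
        return load
    return max(2 * CAPACITY - load, 0)

def simulate(use_backoff: bool) -> int:
    """Return how many clients are still stuck after TICKS, comparing
    'retry immediately' against exponential backoff with jitter."""
    next_attempt = {c: 0 for c in range(CLIENTS)}   # tick of next try
    delay = {c: 1 for c in range(CLIENTS)}          # current backoff delay
    pending = set(range(CLIENTS))                   # not yet connected

    for tick in range(TICKS):
        attempts = [c for c in pending if next_attempt[c] <= tick]
        random.shuffle(attempts)
        served = attempts[:completions(len(attempts))]
        rejected = attempts[len(served):]

        pending.difference_update(served)
        for c in rejected:
            if use_backoff:
                # double the delay and add jitter so retries spread out
                delay[c] = min(delay[c] * 2, MAX_DELAY)
                next_attempt[c] = tick + random.randint(1, delay[c])
            else:
                # naive behaviour: hammer the service again next tick
                next_attempt[c] = tick + 1
    return len(pending)

random.seed(42)
print("naive retries, clients still stuck:", simulate(False))
print("backoff + jitter, clients still stuck:", simulate(True))
```

In this toy model the naive clients keep the service pinned past its collapse point indefinitely, while the backed-off clients spread their retries out far enough for it to recover and drain the backlog - which is the shape of “hard to reverse” I had in mind. Obviously the real system is vastly more complicated, but it’s why retry behaviour on the client side matters almost as much as server capacity.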
Needless to say, I have no idea what actually happened; I’m just speculating about what could be going on in a situation like this. The post-mortem will hopefully explain what really went wrong. (And boy do they have a lot of explaining to do to earn back trust, given that the entire product depends on cloud connectivity.)