This is a horizon-scanning kind of question. What do you think will be the main changes in how we experience flight simulation 15 years from now?
To give you an idea, there have been several ongoing improvements (which will continue forever), such as the flight model, graphics, etc. Here I am talking about big changes; for example, since 2005 we have:
VR
satellite photogrammetry
AI applications to scenery and traffic
force feedback yokes
online ATC and multiplayer
consumer 3DOF/6DOF motion platforms starting to appear
What would, in your opinion, be the items on that list in 2035, when we look back at 2020?
Well, I wouldn't say we have force feedback yokes in any meaningful way right now, but going forward I certainly hope force feedback returns, and preferably doesn't take 15 years to do it, since we don't really have it even in flight sticks except for boutique stuff.
I think (hope) hand tracking and AR features will become widespread in conjunction with VR. Other VR-related technologies like haptics will improve and that will have an impact on sims as well as the hardware for it will become more widespread.
We can expect simulation to continue to benefit from improvements in computer and communications technology: so much of what we get rides on developments in the wider technology world. I would guess much-improved AI-driven 3D figures at airports (service staff and passengers) as standard. A more realistic GA experience and control tower interaction, with better reflection of local traffic rules. Animated co-pilots carrying out automated tasks in "assistance mode", and much-improved interaction between the aircraft model and the world model (friction, airflow, etc.).
On the downside, some things will get worse… we will have lost many of the super-experienced pilots of the '60s and '70s who contribute so much to the development of classic aircraft in the sim. Will there be any left except the "children of the magenta line" by 2040? And it may be that the whole development ecosystem becomes SO complex that the large number of hobby contributors are no longer able to create for the sim (note how few freeware third-party aircraft have been created for MSFS compared with FS9 and FSX; yes, I know it's relatively early days, but I worry that the entry point for developers is now so high that only aircraft that can attract enough downloads to be a serious financial proposition will ever get developed).
More realistic planes and more diversity. Better, more believable weather. An AI ATC that can interact with spoken words from the pilot flying the plane. Better support for projectors and multiple screens. The ability to have either an "all glass cockpit" or "conventional instruments" on every plane. The cost of motion simulation comes down to the point where more people can experience it.
It would be nice if somehow the programmers could condense the code so we don't have to have petabyte drives just for the program and planes.
I think it will improve, but not by as much; it's like CPU speeds, it is flattening off. If you look at the early Sublogic Flight Simulators, allowing for graphics and detail, they were a lot simpler in what they simulated, with much smaller areas to fly in. This leapt massively to FS2004 and FSX, roughly 20 years on.
Without decrying MSFS or X-Plane or Aerofly at all, the changes since then are more cosmetic. Much better detail, which is more about machines being able to handle the data; you can uprate FSX quite significantly. Better graphics and sound. Modelling of flight has improved, but not hugely.
There's been a big explosion in other aircraft since then; less so for MSFS airliners, but that will come, and it can only go so far. There aren't that many new aircraft, so once you've done the obvious you're looking at high accuracy (which is there in many cases) and obscurity. Few people want absurd accuracy.
VR will grow hugely, and we'll probably have some sort of projection system rather than a headset, which will probably make me fall over.
Expansion of things like VATSIM is limited by the difficulty of producing plausible AI ATC and players; there aren't going to be that many people wanting to do flight sims.
Possibly more competitive and mission-based stuff; people often ask "what do I actually do" with flight sims. One downgrade from MSFS is the tuition and missions stuff.
I'm old. Old enough to have programmed Windows 2.0. This doesn't happen. Improvements in CPU speed, GFX clout, SSD speed and size, and the like are largely wasted IMO. Developing on a 386DX/20 with 4 MB RAM, a 32 MB hard drive, and VGA was possible, and wasn't significantly slower.
Given the ridiculous increases in CPU speed, memory, storage size and speed, and supporting graphics hardware, it's not very impressive.
Unreal Engine 5's Matrix demo (look for it on YouTube if you haven't seen it) has shown everyone what a potential flight sim could look like if it began its development today. For an even more on-the-nose example, see its "Superman" variant.
We will CERTAINLY have a second and probably a third Fenix release. Aww, I wish it were 2035 already, with more Fenixes to choose from.
Everything else is… whatever. Graphics are good enough for me, the world is good enough for me; everything else ("I want this a little bit better, and that a little bit better, and ATC and whatnot better", blah blah blah) will probably be done anyway in the next sim or in upcoming updates.
But the most important thing is:
MORE triple-A, absolute study-level, superb-quality airplanes. And redo realism and failures: for example, it should not be possible to just land a full airliner at the excessive airspeed needed for touchdown without bursting the tires, and it should not be possible to brake an airliner that hard without bursting the tires or making the brakes glow.
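To illustrate the kind of failure modelling meant here, a minimal sketch in Python. All limits and names are invented illustrative numbers, not real certification data or any sim's actual API:

```python
# Toy touchdown-failure check, as the post suggests.
# TIRE_LIMIT_SPEED_KT and BRAKE_ENERGY_LIMIT_MJ are made-up
# illustrative thresholds, not real aircraft data.

TIRE_LIMIT_SPEED_KT = 195.0     # hypothetical rated tire speed (knots)
BRAKE_ENERGY_LIMIT_MJ = 150.0   # hypothetical max absorbable brake energy

def touchdown_outcome(ground_speed_kt: float, mass_kg: float) -> list[str]:
    """Return a list of simulated failures for a given touchdown."""
    failures = []
    if ground_speed_kt > TIRE_LIMIT_SPEED_KT:
        failures.append("tire burst (overspeed)")
    # Kinetic energy the brakes must absorb: E = 1/2 * m * v^2
    v_ms = ground_speed_kt * 0.514444          # knots -> m/s
    energy_mj = 0.5 * mass_kg * v_ms ** 2 / 1e6
    if energy_mj > BRAKE_ENERGY_LIMIT_MJ:
        failures.append("brake overheat / fire risk")
    return failures
```

A normal landing (say 130 kt at 60 t) passes both checks, while a heavy, fast touchdown trips both, which is exactly the behaviour the post is asking sims to model.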
Mixed reality will become standard for home cockpit builders: Sitting in an affordable, modular and fully customizable cockpit with physical switches while the windows are green screens. Bringing together haptics and visuals, the immersion will be near perfect.
AR is a big thing in my opinion too. It becomes more and more important if we end up in physical cockpits. Haptics are also a massive thing for me, changing everything in my opinion. The question will be their interaction with force feedback.
I think some of the developments in AI art are incredible and relevant here: I'm talking about DALL-E, Midjourney, Stable Diffusion, etc. This (and similar approaches) will have a massive impact on the procedural generation of landscape meshes and textures. There are already extensions which can generate, e.g., infinitely tiled textures on any imaginable theme. The technically incredible thing about, e.g., Stable Diffusion is that it works off a 4 GB downloadable model and can generate an image of anything in any style you like. There are limitations, but things like "give me a 0.1 m mesh and textures for a Swiss-looking alpine hillside meadow that matches up with this 10 m mesh I have and this blurry satellite imagery" will be no problem, very soon if not already.
I think this is another point of value. I am convinced that AI will fill in scenarios with pseudo-data more and more! While satellites may not go down to 0.1 m and lidar will not cover the entire globe, the actual mesh, photogrammetry, and autogen will be AI-based (even more than today), leading to sceneries filled with credible, although not strictly realistic, representations of reality where data at that resolution are not available. A sort of AI-fuelled autogen.
In order to simulate the real world, the natural move is towards photoreal scenery and either AI-managed live data (not just injecting it as-is, but processing it so AI entities behave more realistically) or a complete AI system based on machine learning: for instance, AI really handling vehicles autonomously, rather than bots following a predefined A-to-B path with basic manoeuvring at each waypoint. Something similar is already used by drone formations, where the drones interact in real time to adapt to the changes created by the others and keep the formation perfect no matter how complex the movements. A clear application for this is the handling of ground traffic at an airport, where each AI vehicle adapts to the current status of its vicinity with more logic than "stop if the path is occupied, continue if clear".
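The difference between the two approaches can be sketched in a few lines of Python. This is purely illustrative (not from any sim SDK): each ground vehicle re-plans every tick from the current positions of its neighbours, slowing smoothly as others get close, instead of replaying a fixed path with a binary stop/go rule:

```python
import math
from dataclasses import dataclass

# Illustrative sketch: reactive ground vehicles that steer towards a goal
# but modulate speed continuously based on the nearest other vehicle,
# rather than following a predefined A-to-B path with stop-if-occupied logic.

@dataclass
class Vehicle:
    x: float
    y: float
    goal: tuple[float, float]
    speed: float = 5.0  # m/s

def step(vehicles: list[Vehicle], dt: float = 0.1, safe_dist: float = 10.0) -> None:
    """Advance every vehicle one tick, reacting to current neighbour positions."""
    for v in vehicles:
        dx, dy = v.goal[0] - v.x, v.goal[1] - v.y
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            continue  # already at goal
        # Distance to the nearest other vehicle right now (not a reserved path).
        nearest = min(
            (math.hypot(o.x - v.x, o.y - v.y) for o in vehicles if o is not v),
            default=float("inf"),
        )
        # Scale speed down smoothly as neighbours get closer than safe_dist.
        factor = min(1.0, nearest / safe_dist)
        v.x += dx / dist * v.speed * factor * dt
        v.y += dy / dist * v.speed * factor * dt
```

With two vehicles far apart, both move at full speed; put a second vehicle 5 m away and the first automatically slows to half speed, with no scripted waypoint logic involved.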
Cloud computing is most likely another effect we will soon see, because industry (not only gaming) is moving fast in that direction. This would allow moving most of the computing tasks to a server, while your client is basically just a front end with your graphics card and joystick/mouse/keyboard. So all those things running now on your PC's main thread could run on a server, with the resulting calculations simply sent to your client to be drawn on your screen.
A very interesting branch of this is the clients forming a computing network themselves (each one runs a small percentage of the overall computation), generating a lot of capacity with little effort on each single machine. That has already been used for some big-data astronomy calculations, with users adding their own home computers to the network. In MSFS this could be used to manage more complex live AI traffic (including roads, ships and trains) or very precise weather, as those entities are global for all players. Weather prediction is indeed one of the most demanding features where this could be used, even if it is not really needed, as MSFS uses real data instead.
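The "each client computes a small share" idea boils down to partitioning a global workload, computing the slices independently, and merging. A toy sketch (all function names and the stand-in "computation" are invented for illustration):

```python
# Illustrative sketch of distributing a global workload (e.g. weather cells
# or AI traffic regions) across connected clients. The per-cell "simulation"
# is a stand-in; in a real system each slice would run on a different machine.

def partition(cells: list[int], n_clients: int) -> list[list[int]]:
    """Deal cells round-robin across clients so each gets a similar share."""
    return [cells[i::n_clients] for i in range(n_clients)]

def client_compute(my_cells: list[int]) -> dict[int, float]:
    """Stand-in for an expensive per-cell simulation step run on one client."""
    return {c: c * 0.5 for c in my_cells}

def server_merge(partials: list[dict[int, float]]) -> dict[int, float]:
    """Combine every client's partial results into the global state."""
    merged: dict[int, float] = {}
    for p in partials:
        merged.update(p)
    return merged

# Example: 10 "weather cells" split across 3 clients, then merged.
cells = list(range(10))
global_state = server_merge([client_compute(p) for p in partition(cells, 3)])
```

The interesting engineering problems (clients joining and leaving mid-frame, verifying untrusted results, latency) are exactly what projects like the astronomy networks mentioned above had to solve.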
It kind of makes me chuckle at how weāre coming full circle. Back in the day before PCs, that was the norm. You would connect to a mainframe via a dumb terminal. All the processing would be done on the server (in the cloud) and all you would have on the user end is the video feed and inputs. Over the decades, the world moved away from that and went to fully integrated desktop computers.
Now, decades after the PC revolution, the whole computing landscape seems to be moving back to the client-server model of the last century. A far more capable model, of course, but still a return to its roots. Even gaming, one of the biggest drivers of standalone PCs (and consoles), is moving in this direction. And I find that kind of hilarious.
The answer is easy, I think: a game running on a server means the cost of the server is added to the development cost, so it lands on the seller's shoulders. If the user already has a PC, you don't really need a server unless the game is online, so you just sell the software and that's it. In industry it's a bit different: the controlling devices installed in the field are normally very expensive, so for the seller it's cheaper to have just a front end in the field with a network interface to send and receive data (like diagnostics), plus a single server running the whole logic of all the field devices at once. Unless you need PLCs or things that physically act on moving parts in the field, cloud computing can reduce your development costs a lot and also ease maintenance, as the server is next to you rather than thousands of kilometres away, and you only have one device to maintain instead of 100 smaller machines spread across the whole field.
FSX to MSFS is 15 years. I would expect more enhanced textures and more detailed models and terrain, but overall framerates will still be roughly as they are today, as they'll be pushing more detail.
VR/AR support should be consolidated and more mainstream, hopefully with much lighter and cheaper headsets/glasses.
Perhaps weāll be sharing data peer to peer directly, rather than via a central server. The idea above about using the combined computing power to solve complex AI algorithms would be a boon to us all.