ChatGPT's Report on MSFS 2020 - State of the Sim in Nov 2023

Report on Microsoft Flight Simulator Technical Issues

Overview:

Microsoft Flight Simulator (MSFS) is a visually stunning and immersive flight simulation platform. However, users have reported various technical issues that impact the overall experience. This report aims to highlight some of these issues and suggest areas for improvement.

Graphics and Performance:

  1. Frame Rate Variability:
  • Users often encounter inconsistent frame rates, leading to stuttering and reduced smoothness during gameplay.
  2. VR Performance:
  • Virtual Reality (VR) performance may be suboptimal, impacting the experience for users with VR headsets.
  3. Graphics Optimization:
  • While MSFS is visually impressive, there is room for optimization to enhance performance on a broader range of hardware configurations.

Stability and Crashes:

  1. Crash to Desktop (CTD):
  • Some users experience frequent crashes to the desktop without clear error messages, making it challenging to diagnose and address the issues.
  2. Stability during Multiplayer:
  • Multiplayer sessions may suffer from instability issues, including desynced aircraft positions and connection problems.

User Interface and Accessibility:

  1. Menu Responsiveness:
  • Users have reported occasional delays and unresponsiveness in the menu interface.
  2. Accessibility Features:
  • Improvements in accessibility features, such as customizable controls and better support for assistive technologies, could enhance the user experience for a broader audience.

Aircraft Systems and Realism:

  1. Autopilot Behavior:
  • Inconsistencies in autopilot behavior and functionality, particularly in specific aircraft, have been reported.
  2. Weather System Accuracy:
  • Users have noted discrepancies between the in-game weather and real-world conditions, affecting the realism of flights.

Community Feedback:

  1. Communication and Transparency:
  • Improved communication from Asobo Studio about ongoing issues, updates, and the status of reported bugs would help manage user expectations.

Recommendations:

  1. Regular Updates:
  • Consistent and timely updates addressing reported issues and introducing new features would enhance the overall user experience.
  2. Transparency and Communication:
  • Improved communication about ongoing efforts, known issues, and the development roadmap would foster a stronger relationship with the user community.
  3. Community Feedback Integration:
  • Actively incorporating feedback from the MSFS community in the development process can lead to more informed decisions and a better-tailored simulator.

Conclusion:

Microsoft Flight Simulator has the potential to be a groundbreaking simulator, but addressing the reported technical issues and implementing user feedback will be crucial for ensuring a positive and immersive experience for all users.

10 Likes

What is the point of this post? Everyone knows the current state. I am not sure why you are copying the feedback snapshot and blogs with much less detail; they are much more precise.

14 Likes

A lot of these issues people already know about. It also fails to highlight all of the great accomplishments in the sim :slight_smile:

Anyone can just take a look at the most voted bugs in Bug Reporting Hub and find everything listed here.

Communication-wise, I have found that stuff many users talk about gets answered within 24 hours. Communication from the CMs is awesome at the moment.

And a lot of the time, if there is no answer, it is likely that the team doesn't have an answer or can't comment at the time. It's better than many other companies.

1 Like

Because this evaluation was based on data collected from numerous sources all over the Internet, not just the MSFS forum’s snapshots and blogs, and hopefully is not biased by personal agendas and pet wants.

If it is similar to the Forum Snapshots & Blogs, then that is great; it confirms that those snapshots represent a good interpretation of all the data available, without the negativity that so often accompanies forum posts & blogs.

6 Likes

@N6722C Love the post. Microsoft’s Chat AI reporting the current state of Microsoft’s Flight Simulator :rofl: :+1:

2 Likes

Interesting to see ChatGPT’s take on it, but I guess there is nothing in the report that we didn’t know. Thanks for posting it though.

1 Like

Almost as funny as asking ChatGPT to write a “Press Release” for MSFS 2024 !!!

(won't post it – it might just be too confusing !!)

What really impressed me was ChatGPT's ability to write detailed SimConnect CODE for MSFS
(so much faster than I can !!)
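
For anyone curious, the kind of thing it produces looks roughly like the sketch below: a minimal, untested SimConnect client (my own illustration, not ChatGPT's actual output) that connects to the sim and prints the user aircraft's altitude and indicated airspeed once per second. The names and console-app structure are just for the example.

```cpp
// Minimal SimConnect sketch: connect, request two sim vars once per second,
// print them as they arrive. Builds against the MSFS SDK's SimConnect.lib.
#include <windows.h>
#include <iostream>
#include "SimConnect.h"

enum DATA_DEFINE_ID { DEF_USER_STATE };
enum DATA_REQUEST_ID { REQ_USER_STATE };

struct UserState {
    double altitudeFt;   // "PLANE ALTITUDE" in feet
    double airspeedKts;  // "AIRSPEED INDICATED" in knots
};

void CALLBACK OnDispatch(SIMCONNECT_RECV* pData, DWORD /*cbData*/, void* /*pContext*/) {
    if (pData->dwID == SIMCONNECT_RECV_ID_SIMOBJECT_DATA) {
        auto* pObj = reinterpret_cast<SIMCONNECT_RECV_SIMOBJECT_DATA*>(pData);
        if (pObj->dwRequestID == REQ_USER_STATE) {
            auto* state = reinterpret_cast<UserState*>(&pObj->dwData);
            std::cout << "Alt: " << state->altitudeFt
                      << " ft, IAS: " << state->airspeedKts << " kt\n";
        }
    }
}

int main() {
    HANDLE hSimConnect = nullptr;
    if (FAILED(SimConnect_Open(&hSimConnect, "AltitudeMonitor", nullptr, 0, nullptr, 0))) {
        std::cerr << "Could not connect to MSFS.\n";
        return 1;
    }

    // Describe the block of data we want (order must match the struct above).
    SimConnect_AddToDataDefinition(hSimConnect, DEF_USER_STATE, "PLANE ALTITUDE", "feet");
    SimConnect_AddToDataDefinition(hSimConnect, DEF_USER_STATE, "AIRSPEED INDICATED", "knots");

    // Ask for that block on the user aircraft once per simulated second.
    SimConnect_RequestDataOnSimObject(hSimConnect, REQ_USER_STATE, DEF_USER_STATE,
                                      SIMCONNECT_OBJECT_ID_USER, SIMCONNECT_PERIOD_SECOND);

    // Simple polling loop for about a minute; a real app would wait on an event handle.
    for (int i = 0; i < 600; ++i) {
        SimConnect_CallDispatch(hSimConnect, OnDispatch, nullptr);
        Sleep(100);
    }

    SimConnect_Close(hSimConnect);
    return 0;
}
```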

2 Likes

Dunno man.
AI creeps me out…

I’m guessing it still hasn’t ingested the SU13 debacle.

1 Like

I was being generous and only used ver 3.0, whose data goes up to Jan 2023. The later version 4.? is more up to date and may tell a different tale.

The point I am trying to make is that ChatGPT can look at and digest way more data in a second than a human can in days of internet searches, and then come to a conclusion that, even now, is typically more accurate than most humans would reach.

It can also write a very Convincing Job Resume !!! (even if it is all lies, it is very much what your typical head hunter is looking for).

And it can write MSFS JS & C++ in a fraction of the time a Dev can. It may need a little tweaking, but it's basically pretty good.

2 Likes

According to ChatGPT, its latest data update was Jan 2022.

ChatGPT:
My training data includes information up until January 2022. I don’t have real-time updates, so I cannot provide information on events or developments that occurred after that date. If you have specific questions about events or information that emerged after January 2022, I recommend checking the latest reliable sources for the most up-to-date information.

1 Like

So, I’m a year late – (so is MSFS 2024 !)

1 Like

Fixed it for you: And it can write MSFS JS & C++ in a fraction of the time a Dev can. It may need a little tweaking, but it's basically pretty good with known algorithms for simple problems. :winking_face_with_tongue:

1 Like

Agreed, it's not up to doing the complex (outside the SDK) UI mods that you have managed to do !! (that is, until it gets to peek at your source code !!)

I'm skeptical it can currently solve any problem that involves conjuring a new approach, even if it's relatively simple. There is no reasoning ability in LLMs, despite a vocal segment proclaiming them to be nascent AGI. They're not; they're just statistical token prediction machines that know only what they've ingested and can get easily confused if the prompt is missing guardrails.

As such, they can solve relatively common problems seemingly easily, because all they are really doing is substituting for a Stack Exchange search and choosing the top answers. Or making one up based on that (or GitHub code), which is why you need to double-check the answers.

There are a lot of tricks going on with prompt engineering approaches to alleviate these weaknesses (LangChain, agent GPTs, etc.), but they all boil down to attempts to patch over the fact that there is no intelligence in the machine, currently.

Still, they're great for narrowly defined tasks even now, but knowing where and when to use them is the key. They're also very dependent on the old "garbage in, garbage out" rule with respect to data.

3 Likes

Yes, very much a case of "Garbage In, Garbage Out; then, once you have seen the garbage out, modify the garbage in to LESS garbage in, to give LESS garbage out".

Rinse & repeat

Maybe this is AI teaching me to ask more Intelligent Questions?

So, assuming MSFS 2020 is millions of lines of code that may have errors, what happens if ChatGPT or similar looks through it all? Could it fix those errors and check that the code is improved? Or even straight-out rewrite and improve it?

Short answer: No

Long answer:
LLM (large language model, aka ChatGPT et al.) AI doesn't understand code per se; it's just a statistical engine that has eaten a buttload of text and predicts the next word in a sequence. Feed it enough data and it can produce a semblance of coherent output, but ask the wrong question without constraints and it can easily produce gibberish, falsehoods, or fluent BS while sounding completely confident. Like many business people and politicians we know, so if you use that low bar as the metric for human intelligence we've already reached AGI (artificial general intelligence), but I'd hate to die on that hill.
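
A toy caricature of what "predict the next word" means (my own illustration; real LLMs use neural networks over subword tokens and enormous corpora, not a bigram count table, but the shape of the loop is the same: pick a likely continuation, append, repeat):

```cpp
// Toy "next word predictor": count word-follows-word frequencies in a tiny
// corpus, then greedily emit the most frequent continuation each step.
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main() {
    const std::string corpus =
        "the sim crashed to desktop the sim stutters the sim crashed again";

    // Build a bigram table: how often does each word follow each other word?
    std::map<std::string, std::map<std::string, int>> bigrams;
    std::istringstream in(corpus);
    std::string prev, word;
    in >> prev;
    while (in >> word) {
        ++bigrams[prev][word];
        prev = word;
    }

    // Greedy generation: always append the most frequent known continuation.
    std::string current = "the";
    std::cout << current;
    for (int i = 0; i < 8; ++i) {
        auto it = bigrams.find(current);
        if (it == bigrams.end()) break;          // no known continuation
        std::string best;
        int bestCount = 0;
        for (const auto& [next, count] : it->second) {
            if (count > bestCount) { bestCount = count; best = next; }
        }
        current = best;
        std::cout << ' ' << current;
    }
    std::cout << "\n";   // prints: the sim crashed again
    return 0;
}
```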

Asking a question when writing code, e.g. how to do some small code snippet in a particular language, where before you might have looked it up on Stack Exchange but now you ask ChatGPT or Bing, is different from fixing issues and debugging code in an existing codebase.

You can analyse code bases via what’s called static and/or dynamic analysis to theoretically determine quality and other metrics for debugging, but results vary dramatically depending on the standards used to write the application and adherence to those standards. What starts out with good intentions in software development often goes out the window for a bunch of different reasons. The rewrite of MSFS 2024 is a good example here, and was the right decision to make.
You can only put lipstick on a pig and make it dance for so long! I’m sure they will be applying some of these techniques in MSFS 2024, to varying degrees.

What you ask is a well-researched area though. The system would need to be constructed with a formal specification language above the implementation language, in order for an AI system to be able to understand what the code is supposed to do. E.g. Microsoft's own TLA+ ("Introduction to TLA" - Microsoft Research) is the benchmark here.
However, very few systems are written this way currently, though this may change in the future. I would expect any automated ATC system, an area under heavy research, to be architected like this. Games? Not so much. Simulations? Maybe. MSFS is more in the game software engineering camp at the moment though.

Other approaches are possible, like NASA's space code, which is highly constrained in order to make it more robust and less error-prone. An AI system would probably be able to understand such a codebase, given that it has a very defined set of rules governing what can be written.
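
To give a flavour of what "highly constrained" means in practice, here is a made-up fragment in the spirit of rule sets like JPL's "Power of Ten" (not actual flight software): fixed loop bounds, no heap allocation, every return value checked, assertions on what the code relies on.

```cpp
// Illustrative only: a tiny function written in a deliberately constrained style.
#include <cassert>
#include <cstdio>

constexpr int kMaxSamples = 64;            // hard upper bound, known at compile time

// Average a fixed-size buffer; returns false instead of trusting bad input.
bool AverageAltitude(const double samples[], int count, double* outAvg) {
    assert(outAvg != nullptr);
    if (count <= 0 || count > kMaxSamples) {
        return false;                      // reject out-of-range input explicitly
    }
    double sum = 0.0;
    for (int i = 0; i < count; ++i) {      // loop bound provably <= kMaxSamples
        sum += samples[i];
    }
    *outAvg = sum / count;
    return true;
}

int main() {
    const double samples[kMaxSamples] = { 1500.0, 1520.0, 1510.0 };
    double avg = 0.0;
    if (!AverageAltitude(samples, 3, &avg)) {   // return value is always checked
        std::puts("invalid input");
        return 1;
    }
    std::printf("average altitude: %.1f ft\n", avg);
    return 0;
}
```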

2 Likes

In software development there is the "don't repeat yourself" (DRY) principle that is often applied to the structure of code. On the flip side of this is the principle of "replace-ability". Think of replaceable parts and tool-and-die making in other industries. A good example is Log4J: it is reused throughout a code base many, many times, which makes it very difficult to replace. Then there is SLF4J, which is a framework for using any logging solution. Think of it like an adapter that allows you to attach any logging solution you like to your code. SLF4J is designed for replace-ability (see the sketch at the end of this post).

Many code frameworks have what are called "generators". These are tools that generate code representing the basic structure of a basic application. The code generated by these tools can be thought of as replaceable starting points.

The MSFS SDK provides a set of example aircraft that can be thought of as replaceable starting points. You replace the model with your own, wire up the flight model, and you are off to the races.

It would be interesting if an AI could generate a replaceable flight model based on the parameters of the aircraft. Providing samples of each type of aircraft in the SDK is a simple approach to doing the same thing; here's a single-engine flight model, so attach a single-engine model to it and it will fly.

The X-Plane approach is a crude attempt to provide a flight model for the aircraft model of your choice. However, it ends up following the reusable approach more than the replaceable approach: the model that works with the flight model code is hidden inside the model that you see in the sim. This is why all the planes feel the same with their system. To use the logging analogy, they try to provide a framework for using any model you want (SLF4J), but all the results end up looking like Log4J.

So, to train an AI to write flight model code, you would feed it all the code for the aircraft in MSFS in a way it can learn the semantics of the system. From what I can tell, there are two inputs to learn: the code for the model (weights and measures), and the code for the flight model (lift, roll, etc.). The look of the aircraft would still need to be custom. In other words, the AI would know that the aircraft has landing gear, but it would not know how the gear looks. It would know the gear goes up and down, but it would not know how the gear looks going up and down. Same for the liveries, propeller animations, etc.

So, here's the wishlist item: given a specific aircraft model, generate a replaceable flight model.
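
And here is the promised sketch of the replace-ability idea from the SLF4J analogy (names are illustrative, not from any real library): application code depends only on a small Logger interface, so the concrete backend can be swapped without touching any call sites.

```cpp
// Minimal facade sketch of "replace-ability": code talks to a small Logger
// interface, and the concrete backend behind it can be swapped freely.
#include <fstream>
#include <iostream>
#include <memory>
#include <string>

struct Logger {                               // the stable facade
    virtual ~Logger() = default;
    virtual void Info(const std::string& msg) = 0;
};

struct ConsoleLogger : Logger {               // one replaceable backend
    void Info(const std::string& msg) override { std::cout << "[info] " << msg << '\n'; }
};

struct FileLogger : Logger {                  // another backend, same interface
    explicit FileLogger(const std::string& path) : out(path, std::ios::app) {}
    void Info(const std::string& msg) override { out << "[info] " << msg << '\n'; }
    std::ofstream out;
};

// Application code never names a concrete logger, so the backend is replaceable.
void RunPreflightChecks(Logger& log) {
    log.Info("fuel quantity checked");
    log.Info("flight plan loaded");
}

int main() {
    std::unique_ptr<Logger> log = std::make_unique<ConsoleLogger>();
    // Swapping the backend is one line; RunPreflightChecks is untouched:
    // log = std::make_unique<FileLogger>("preflight.log");
    RunPreflightChecks(*log);
    return 0;
}
```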

I like this thread, OP. Scarily accurate AI evaluation of the current state of the simgame.