Not to diverge too much, but this is important, and a great example of where this tech fails.
LLMs (Large Language Models) like ChatGPT are useful for creative or fun tasks where accuracy and trust don't matter, but they are extremely risky to use for tasks where accuracy and trust are key. This is unlikely to change soon given the nature of the tech, yet this aspect is being overshadowed by an AI hype train.
“As with many AI-driven innovations, ChatGPT does not come without misgivings. OpenAI has acknowledged the tool’s tendency to respond with ‘plausible-sounding but incorrect or nonsensical answers’ – an issue it considers challenging to fix:” https://twitter.com/Reuters/status/1618382078382379008
See also: https://diginomica.com/generative-ai-will-it-be-summer-humanity-or-legal-winter-vendors
OpenAI initially rolled out the AI hype train while ignoring or downplaying these trust/accuracy issues, and it took a number of persistent AI researchers exposing the holes before even OpenAI had to admit how bad it is in those key areas. There's a reason Google hasn't rolled out their "ChatGPT," and that's it: they can't afford the risk.
It will get better, but we need to be as hyper-aware of the risks and constraints of AI tech as we are impressed by the shiny tech demos.
MSFS World Simulator
Digital twins are a huge growth area going forward, especially combined with VR/AR, and much of the tech stack that MSFS is using and proving out is directly applicable. MSFS has potential in this area, whether in part, in whole, or through niche spin-offs of the full stack.