I can’t help but be a little disappointed with the functionality of the in-game ATC at present. I recognise that it’s perhaps not the simplest thing to implement, but I’m surprised to see they’ve reused a lot of the legacy code for this area.
Having seen and used a lot of the Azure tech stack and what it can do, I feel it’s a bit of a missed opportunity not to showcase some of the speech and language AI features — for example, being able to speak back to ATC with a mic and have it run through Natural Language Processing, rather than simply acknowledging via the keyboard.
I appreciate that for the ultimate in ‘realism’ there will always be VATSIM et al, and an AI is never going to replicate that feeling, but I think there’s definitely a space in the middle for simmers who want the higher level of realism that ATC provides, without committing to full-blown lifelike procedures and their associated consequences.
I know it’s a base platform and I’m sure they have aspirations/plans to build on it — has anyone seen any quotes from Asobo on this area? Would be interested to see them.