Flight Plan using ChatGPT

So here are some interesting facts:

  1. UT25 is the Monument Valley Airport, not the Cache Valley Hospital Heliport. The heliport is actually 23UT but doesn’t exist in the game.
  2. Neither UT12 nor Howell / Bonham Airport seems to exist. There is a Howell Airport in Idaho and a Bonham Airport in Texas.
  3. UT52 is the Mary North Airport in Mary, Turkmenistan, not the Bear River Migratory Bird Refuge Airport which also doesn’t seem to exist.
  4. 74V is the Roosevelt Municipal Airport, not the Aviation Acres Airport which is 23CD in Burlington, Colorado.
  5. The most interesting thing I learned is that if you take the lat/long coordinates from the lists and input those as waypoints, you get a very plausible flight plan that fits within the constraints of the prompt. It produces a flight plan that is 71 nm long and can be flown in about 50 minutes, according to the airplane performance data I found on the internet for the Aviat Husky A-1C.
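That distance-and-time check is easy to reproduce: sum the great-circle (haversine) leg distances over the waypoint coordinates and divide by cruise speed. The waypoints below are made-up illustrative coordinates in southern Utah, not the ones ChatGPT produced; a minimal sketch:

```python
from math import radians, sin, cos, asin, sqrt

R_NM = 3440.065  # mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in nautical miles."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R_NM * asin(sqrt(a))

def route_length_nm(waypoints):
    """Sum the leg distances over a list of (lat, lon) waypoints."""
    return sum(haversine_nm(*a, *b) for a, b in zip(waypoints, waypoints[1:]))

# Made-up waypoints, purely for illustration
route = [(37.017, -110.201), (37.444, -110.569), (37.700, -110.710)]
dist = route_length_nm(route)
print(f"{dist:.1f} nm, about {dist / 80 * 60:.0f} min at 80 kt, no wind")
```

At these leg lengths the haversine result matches what a planner like Little Navmap reports to within a fraction of a mile; wind and climb/descent are ignored, just as in the AI's estimate.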

Here is the flight plan laid out in Little NavMap:

Well, he is in trouble from the very beginning of his journey.
He cannot accept this job.
He holds a “Private” Pilot’s License. He needs a “Commercial” Pilot’s License with an instrument rating.
He is being paid to do this job. Sorry, Mr. PP, you need to go get your commercial license.
I guess the RAF brass didn’t check your credentials. Smile…
Of course, he could just go anyway and pray they never check him out. It will be fun!

Prompts for MS Asobo in ChatGPT… all in fun, please don’t take too seriously…

  1. Respond to forum users complaining about CTDs
  2. Create creative reasoning for why bug xyz hasn’t been fixed yet
  3. Auto-respond to people talking about the “other” sim…
  4. ATC! … Should be fun…
  5. Mission/scenario generator (it can write code)
  6. Virtual copilot for those transoceanic flights….

:smile::smile::smile::smile::smile::smile::smile::smile::smile::smile:

I have wondered about having an AI write a script for starting MSFS with a default plane at a given location, without having to go through all the menus. However, I am not a computer-code know-it-all.

I have used both ChatGPT and the new Google Bard. Between the two, I will say that Google is more reliable in the sense that it bases much of its information on its search engine. Granted, you still have to verify the information as you would with ChatGPT. Google Bard does provide a “Google it” button at the bottom of your query, so you can verify everything the AI looked at for your question.

As a recent graduate from graduate school, I can say the school had issued notifications to the students that using ChatGPT was considered cheating/plagiarism. However, they admitted that there was no way to know if that was indeed the case. Their plagiarism program, Turnitin, does not pick up the language, but a keen-eyed professor could if they paid attention to a student’s past writing style versus one written with AI.

I think the AI programs are new, but they will improve the accuracy of their responses within the next few years. This is just the beginning.

There was a pretty good article in The Atlantic yesterday regarding the hazards of ChatGPT and other language models. It basically says what I was trying to say a month ago in this thread - that because they sound confident and official, it will be extremely hard for people without knowledge/expertise to discern between fiction, error, and reality, which we’ve also seen in this thread.

I don’t mind the use of it for brainstorming and generating ideas, but it will be used clumsily and mark my words, nefariously as well. I feel myself turning very strongly against the continued unrestrained development of these applications.

That happens with real people as well: people who are confident and act like they know what they’re talking about, when they’re probably the last person who should be talking about it, and others will ooh and ahh at what they have to say.

The article covers why the LLMs make that more difficult than with a person. Then you have the sheer amount of noise that’s going to come from LLMs versus the small number of people who are able to bluster in the same way.

IM not so HO - most humans have a very high opinion of their humanity. We humans ‘think’ we are individually very special and unique. Most of us have grown lazy in all sorts of ways, and intellectual laziness is the most common attribute of us humans. Most of our ‘testing’ is not testing knowledge but testing our ability to take tests. What all LLMs and AI in general ‘prove’ is that we are not superior knowledge machines, just lazy in our evaluations of the world.

dw

ps we humans do have a process to keep ‘real’ facts real. We call it cryptography.
A person can attach a cryptographic signature to any digital ‘thing’ to mark it as authentic. This will ‘prove’ it’s from the person that created it.
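As a concrete sketch of that idea, the snippet below uses Python's standard-library `hmac` module. Note the caveat: HMAC is a shared-secret scheme, proving a message came from someone holding the key, whereas the third-party-verifiable provenance described above would use public-key signatures (e.g. Ed25519), which require a library outside the standard one. The key and message here are hypothetical.

```python
import hmac
import hashlib

SECRET = b"example-shared-key"  # hypothetical key, for illustration only

def sign(message: bytes) -> str:
    """Produce an authentication tag for the message under the shared key."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check the tag in constant time; any tampering invalidates it."""
    return hmac.compare_digest(sign(message), tag)

msg = b"This flight plan really came from me."
tag = sign(msg)
print(verify(msg, tag))          # True for the untampered message
print(verify(msg + b"!", tag))   # False once the message is altered
```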

dw

So what’s the difference between a fallible human and a fallible machine? I can think of several.

No difference at all ---- GIGO garbage in garbage out.

dw

Except for all the things talked about in the article. Dunning-Kruger multiplied nearly infinitely.

At least with one person it’s easier to tell, to suss them out, usually using those “other” cues like language, etc., but also by eliciting direct feedback. And a human with the right amount of expertise and self-awareness will say “I’m not sure, let me get back to you on that,” or “I think…”, etc. (those cues again). We can have a discussion, clarify if necessary, and come to an understanding. Communication and learning occur, perhaps on both sides.

But tests are not just about knowledge recall; an examiner is absolutely able to assess a person’s critical-thinking skills, going beyond the rote and application levels, which is a big part of instructing and of aviation examinations (check rides, practical tests, oral tests, etc.).

The bots are just going to put their nonsense out and leave us all to fact-check without any indication (at this point) from where it came, which will take more time and resources than we have. No thanks.

Google’s Bard is not any better. She/he/it made up airports and distances also.

Sure, here is a flight plan for you to fly in Utah with 5 to 8 waypoints, at least 2 dirt runways, and that can be flown in one hour while flying a plane that can fly at 80 knots. The flight plan is as follows:

Waypoint 1: KPUB - Panguitch Airport (ICAO: KPUB)
my comment – this is Pueblo Memorial in Colorado. From this point, it goes off the rails…

Waypoint 2: U44 - Boulder Mountain Airport (Decimal Latitude and Longitude: 38.023833, -112.125000)

Waypoint 3: U45 - Brian Head Airport (Decimal Latitude and Longitude: 38.100000, -112.025000)

Waypoint 4: U46 - Duck Creek Village Airport (Decimal Latitude and Longitude: 38.166667, -112.016667)

Waypoint 5: U47 - Modena Airport (Decimal Latitude and Longitude: 38.200000, -111.983333)

Waypoint 6: U48 - Cedar City Regional Airport (ICAO: KCEC)

Waypoint 7: U49 - Brian Head Ski Resort (Decimal Latitude and Longitude: 38.100000, -112.025000)

Waypoint 8: U50 - Bryce Canyon Airport (ICAO: KBYC)

This flight plan is approximately 70 nautical miles long and can be flown in one hour at 80 knots. It includes two dirt runways, KPUB and U44. The flight plan does not cross any other waypoints.

Please note that this is just a suggested flight plan. You may need to adjust it based on the weather conditions and your own flying experience.

Yup GIGO

dw

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.