Providing online training for business aviation professionals globally.


Ep 35 – Artificial Intelligence and Aviation

Podcast | October 18, 2023

Welcome, aviation enthusiasts, to another exciting episode of our podcast, where we explore the cutting-edge developments shaping the business aviation industry. Today, we embark on a fascinating journey into the realm of artificial intelligence and its transformative potential in this dynamic field. Strap in, as we delve into how AI is set to revolutionize business aviation, leaving an indelible mark on safety, efficiency, and innovation.


Picture this: an aviation landscape where aircraft are no longer limited by human capabilities alone, where smart algorithms work in harmony with highly skilled pilots and ground personnel. Artificial intelligence, or AI, holds the key to unlocking this new frontier. By harnessing the power of machine learning, predictive analytics, and automation, AI is poised to shape a future that will enhance every facet of the business aviation experience.


Safety has always been paramount in aviation, and AI is here to augment our efforts further. Imagine advanced AI systems that meticulously analyze mountains of data, detecting patterns, trends, and anomalies to predict potential risks before they materialize. By seamlessly integrating AI into safety management systems, we can proactively identify hazards, streamline safety protocols, and minimize human error—an invaluable asset for pilots, maintenance crews, and regulators alike.


But AI’s impact on business aviation extends far beyond safety. Consider the realm of operational efficiency. AI-driven algorithms can optimize flight routes, taking into account real-time weather data, air traffic, and fuel consumption. This means faster, more efficient flights, reduced operational costs, and a smaller environmental footprint. By learning from countless flight patterns and historical data, AI can make accurate predictions, enabling airlines to make informed decisions about maintenance schedules, crew planning, and fleet management.


Now, let’s turn our attention to the passengers—the lifeblood of the business aviation industry. AI is poised to transform the passenger experience, revolutionizing how we interact with aircraft and the services they provide. Imagine stepping aboard an aircraft that adapts to your preferences and needs seamlessly. AI-powered personalization can customize in-flight entertainment, optimize cabin temperature and lighting, and even anticipate individual food and beverage preferences.


Furthermore, AI can enhance customer service throughout the entire journey. Smart chatbots equipped with natural language processing capabilities can provide instant support, answer inquiries, and handle reservations. AI-powered virtual assistants can guide passengers through complex airport terminals, ensuring a smooth and stress-free experience. The possibilities are endless when AI becomes the invisible but indispensable companion in our aviation adventures.


As we conclude this opening segment, it is clear that AI is a force to be reckoned with in the business aviation industry. Its integration promises to elevate safety, increase efficiency, and transform the passenger experience. But with every technological leap forward, we must also address questions of ethics, privacy, and security. Striking the delicate balance between technological advancement and responsible implementation will be crucial as we navigate this exciting AI-powered future.


So, fasten your seatbelts and join us as we explore the frontiers of AI integration in business aviation. From safety management to operational optimization and passenger experience, we will uncover the incredible potential that lies ahead. Welcome to a world where aviation meets artificial intelligence. This is the training report, and I’m your host, Brent Fishlock. Let’s take off into the future together!


Some of you who have listened to this podcast a few times may have noticed that my writing seemed different, more polished. It is more polished and different because it was written by AI. I have a friend in the TrainingPort office with access to ChatGPT, and I asked him to create a short podcast about AI and business aviation.


His request was worded like this:


“I run a podcast focused on the business aviation industry. Typical topics we discuss on the show are safety related items, change in policy and/or regulations, trends we see in the industry, etc. For an upcoming podcast I’m considering focusing the show on how AI could integrate into business aviation and what areas of the industry might be most affected/benefit the most. I’m thinking that I’ll let AI write the first five minutes of the podcast monologue. Can you please compose a compelling 5-minute podcast opening on this, the use AI in business aviation?”


ChatGPT took just a few seconds to write the 8 paragraphs, and I thought it was pretty good. My experience with AI is very limited, but what I’ve seen is astounding. There are many questions, of course, and it will take many stumbles to find the right balance between regulation, copyright, plagiarism, corporate greed, business best practices, personal choice, and much more.


In interviews, AI CEOs themselves are asking for regulation. Don’t be reactive with regulation, they say; we must get out in front of this. I don’t remember any other industry making statements like that.


Well, what is this? Is this really Skynet unleashing the Terminator?


ChatGPT launched in November 2022 and signed up 100 million users within 3 months. Growth is now reported at 54% per month. It’s unbelievable. ChatGPT (and there are others, of course) is a chatbot built on a large language model, or LLM.


Firstly, what is a chatbot? It’s like having a conversation with an AI: you make a query, and the AI responds. You can ask the AI to modify its answer, and it does. Something like, ‘make your response funnier’. And it just does it.
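That request-and-response loop can be sketched in a few lines of Python. This is only an illustration: `fake_model` below is a stand-in for a real large language model, and the message format (a list of `role`/`content` turns) mirrors the shape most chat APIs use. The key point is that the whole conversation is sent back each time, which is what lets the model revise its earlier answer.

```python
# Minimal sketch of the request/response loop behind a chatbot.
# `fake_model` is a hypothetical stand-in for a real LLM; a real API
# would take the same message list and return generated text.

def fake_model(messages):
    """Pretend model: returns a canned reply based on the last user turn."""
    last = messages[-1]["content"]
    if "funnier" in last:
        return "Why did the jet break up with the turboprop? Too much drag."
    return "Business aviation covers private and corporate air transport."

def chat(history, user_text):
    """Append the user's turn, call the model, and record its reply."""
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat(history, "What is business aviation?")
chat(history, "Make your response funnier.")
# The full conversation is retained, which is what allows the
# follow-up 'make it funnier' turn to modify the earlier answer.
print(len(history))  # 4 messages: two user turns, two assistant replies
```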


Ok, what is a large language model? “It’s a neural network, more about that in a second, trained on huge quantities of information from the internet so it can learn—meaning they generate altogether new responses, rather than regurgitating specific canned responses.” That was one of my questions. Is the output just copied from somewhere else on the internet? According to this definition, no.


IBM defines a neural network as a subset of machine learning where algorithms are used for deep learning. A neural network’s name and structure are inspired by the human brain, mimicking the way biological neurons signal to one another.


A large language model is an algorithm that uses huge amounts of data to understand, summarize, generate, and even predict new content. Another phrase out there is ‘generative AI’, which is similar to LLMs in that both are designed to create new content.


So, a machine that can learn like a human and you can talk to it.


AI chatbots have been around since 1966, when ELIZA, an early language program, was created at MIT. The LLM morphed from the language model, using much more data to expand the AI’s ability to come to a conclusion based on evidence and reasoning. A large language model can have over a billion parameters. A parameter is a variable in the model which the AI can use to infer new content. So, an LLM uses over a billion parameters to create what you ask it to create.
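To make “a billion parameters” concrete, here is a back-of-the-envelope count for a stack of dense neural-network layers. The layer sizes below are purely illustrative, not taken from any real model; each layer contributes one weight per input–output pair plus one bias per output.

```python
def dense_layer_params(n_in, n_out):
    """Parameter count for one fully connected layer:
    one weight per input-output pair, plus one bias per output."""
    return n_in * n_out + n_out

# A single layer with 4096 inputs and 4096 outputs (illustrative sizes):
one_layer = dense_layer_params(4096, 4096)
print(one_layer)  # 16781312

# Stack 60 such layers and the count already passes a billion:
total = 60 * one_layer
print(total)  # 1006878720
```

Real LLMs are built from more elaborate components than plain dense layers, but the arithmetic is the same: the parameter counts of many layers add up quickly.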


The speed of LLMs really took off in 2017, when transformer neural networks were developed. Content development times dropped dramatically, making AI more appealing to more industries. Not to be outdone by MIT, Stanford developed ‘foundation’ models, which are so big that they serve as the foundation for new models.


Should we be cautious about using large language models?


The developers say ‘yes’, and they are saying it directly to the US Congress. Less threatening than someone launching a nuclear strike with AI are the simpler challenges of LLMs, such as:


  • Overall costs: expensive graphics processing hardware, massive amounts of data, and hosting fees.
  • Is there a known bias, or has it been removed? I’d say the AI was self-promoting in its introductory podcast on business aviation.
  • Can the response of the LLM be explained? How was the response generated?
  • What if the answer the LLM gives is simply wrong? And if it is wrong, how do you fix it so it’s correct the next time? LLMs are so big that finding the reason a response was incorrect is very difficult.
  • We can’t leave the hackers out of this, so they get their own term, and it’s a ‘glitch token’. A glitch token is an input that makes the LLM malfunction. Glitch tokens were first seen in 2022.

So, at the beginning of this podcast, ChatGPT explained what AI can do for business aviation, and I scratched the surface of what AI is.


AI is already being used. Recently, OpsGroup, of which I am a fan, performed a 5-day blitz on organizing NOTAMs, involving most if not every part of the industry, from writing to reading. The group used ChatGPT to determine whether AI could understand and categorize NOTAMs to make them easier to read and sort through. The short answer is yes: 98% of the time, ChatGPT was correct.


OpsGroup said the goal “was to design and test a prototype system to post-process NOTAMs, tag them, summarize them, and sort and filter them into a newly designed briefing package.” This seems like an excellent use of AI. How many times have you had to dig for the one NOTAM that is crucial to the flight?
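OpsGroup hasn’t published their code, so this is only a hypothetical sketch of the tag-and-sort idea: build a prompt asking the model to tag a raw NOTAM, parse a structured JSON reply, and use the tags to sort a briefing package. The `stub_model` function here is a simple keyword stand-in for the real LLM call, and the NOTAM texts are made up for illustration.

```python
import json

def build_prompt(notam_text):
    """Ask the model to tag a raw NOTAM and reply in a fixed JSON shape."""
    return (
        "Tag this NOTAM. Reply as JSON with keys "
        '"category" and "summary".\n\n' + notam_text
    )

def stub_model(prompt):
    """Keyword stand-in for a chat-model call; a real system would
    send the prompt to an LLM API and receive JSON back."""
    if "RWY" in prompt and "CLSD" in prompt:
        return json.dumps({"category": "runway closure",
                           "summary": "Runway closed."})
    return json.dumps({"category": "other", "summary": "See raw text."})

def tag_notam(notam_text):
    """Prompt the (stub) model and pull the category out of its reply."""
    reply = json.loads(stub_model(build_prompt(notam_text)))
    return reply["category"]

# Made-up NOTAMs for illustration.
notams = [
    "A1234/23 RWY 06L/24R CLSD DUE WIP",
    "A1235/23 OBST CRANE ERECTED 1NM FINAL RWY 24R",
]
# Sort runway closures to the top of the briefing package.
ordered = sorted(notams, key=lambda n: tag_notam(n) != "runway closure")
print(tag_notam(notams[0]))  # runway closure
```

The structured-JSON reply is the important design choice: once the model’s output has a fixed shape, the tagging, summarizing, and filtering steps become ordinary programming.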


I like this quote: “The future is still bright for people with critical thinking skills and the ability to use prior experiences to perform complex tasks. AI could infringe on the human workplace so stay sharp out there.”

In The News: Approach Minima Change in Canada


Ok let’s change gears for a moment. In the news is a section of the podcast where I talk about other happenings in aviation.


It has been proposed that approach minima in Canada be changed from ‘advisory’ to ‘prescribed’, meaning the minima on the chart are the minimum weather conditions that must be present at the runway in order to attempt the approach.


Today, due to the complexity and variations in minima based on the type of operations and approvals, it is difficult for ATC and even pilots to determine whether the planned approach is banned. In many parts of the world, you will not be cleared for an approach if the approach ban is on. In Canada, you are cleared even if the ban is on. ATC doesn’t know what your operating minima are unless they ask, and even then, it can be pilot-specific due to experience requirements.


The TSB Recommendation A20-01, which would change minima from advisory to prescribed, was issued in 2020. Transport Canada says it agrees with the recommendation; however, there are many factors at play, including operations in the North, service disruptions, rotary operations, airport infrastructure, and so on.


TC is gathering an industry working group this fall. Stand by for more.


Thanks for listening. Have a great day.



