Connect with us

Artificial Intelligence

United Airlines use artificial intelligence (AI) to enhance the flying experience and make it more convenient

blank

Published

on

blank

Upon entering a United Airlines aircraft, the gate agents, flight attendants, and other personnel responsible for ensuring the timely departure of your trip engage in a chatroom where they coordinate various tasks that, ideally, you as a passenger will remain oblivious to. Is there still available capacity for carry-on luggage? Was the caterer able to provide the absent orange juice? Is there a method to ensure that a family can be seated together?

Upon a flight delay, a notification containing a detailed explanation will be sent via text message and will also be available on the United app. Typically, the message is generated using artificial intelligence. Meanwhile, at offices worldwide, dispatchers are analyzing this up-to-the-minute data to guarantee that the crew can still lawfully operate the aircraft without violating FAA restrictions. Recently, United activated their artificial intelligence customer support chatbot.

Jason Birnbaum, appointed as United’s Chief Information Officer in 2022, oversees a workforce of more than 1,500 employees and over 2,000 contractors who are accountable for all the technological aspects of the company’s operations.

“The aspect of our business that I find enjoyable is the same aspect that you find displeasing,” he expressed during our recent conversation. “I worked at GE for an extended period of time in the appliance industry. If we were absent for a day, I doubt anyone would take notice.” The statement would be: ‘The production of dishwashers is not proceeding well.’ However, it lacked newsworthiness. Currently, in the event of any occurrence, even if it lasts for a mere 15 minutes, it not only spreads rapidly throughout various social media platforms but also prompts press vehicles to converge at the airport.

Prior to joining United, Birnbaum accumulated 16 years of experience at GE, progressing from the role of technology manager to ultimately serving as the Chief Information Officer of GE Consumer and Industrial, located in Budapest. In 2009, he assumed the role of Chief Information Officer for GE Healthcare Global Supply Chain. In 2015, he became the Senior Vice President of Digital Technology at United. In this role, he oversaw the implementation of projects such as ConnectionSaver, which is one of United’s initial services that utilizes artificial intelligence and machine learning. ConnectionSaver is designed to proactively delay flights when passengers have limited time between connections. This service personally saved me from spending 12 hours at San Francisco International Airport last week.

I desired to engage in a conversation with Birnbaum regarding his thoughts, as well as those of other Chief Information Officers (CIOs) at multinational corporations, on the utilization of artificial intelligence (AI). The airline is exploring that particular domain of innovation. Prior to discussing AI, it is important to note that United is now in the process of transitioning its services to the cloud. The current trend in cloud computing revolves around the optimization of cloud infrastructure and cost reduction.

blank

“I am beginning to notice the emergence of companies and startups that focus on optimizing and managing cloud services.” Many individuals are currently concerned with inquiries such as, ‘Do you possess a substantial amount of data that I can optimize storage for?’ Alternatively, ‘You have a multitude of new applications; may I assist you in enhancing your monitoring capabilities?’ “Due to the obsolescence of your previously utilized tools,” he stated. According to him, the era of digital transformation may have come to an end, and we have now entered the era of cloud optimization.

United Airlines has made significant investments in cloud computing, notably Amazon Web Services (AWS), which it has chosen as its preferred cloud provider. Not unexpectedly, United is also examining how the company might enhance its cloud utilization, considering both cost and dependability factors. For many firms undergoing this process, it also involves assessing developer productivity and incorporating automation and DevOps approaches. “We have arrived.” “While we already have a well-established presence in the cloud, we are now actively seeking ways to further optimize our operations,” Birnbaum stated.

However, this also relates to the dependability of the matter. Similar to other airlines, United continues to utilize numerous legacy systems which remain functional. “To be honest, we are extremely cautious as we progress through this process, ensuring that we do not interfere with the operation or cause harm to ourselves,” he stated.

United has previously migrated and deactivated numerous legacy systems, and this process is still in progress. In the upcoming months, the corporation plans to deactivate a substantial Unisys-based system. However, Birnbaum maintains the belief that United will persist in utilizing on-premises technology. “I simply desire to be in the optimal locations for applications and user experience,” he stated, whether it is for reasons of performance, privacy, or security.

However, the corporation is not attempting to develop a comprehensive United Platform that would manage all of its systems. According to Birnbaum, the day-to-day operations of airlines are too complex to simplify. Certain platforms oversee the management of bookings, ticketing, and bag tracking, while others specifically deal with crew assignments.

blank

When a malfunction occurs, it is imperative for such systems to collaborate seamlessly and operate with minimal delay. That is the reason why United is placing its trust in a single cloud supplier. “I do not anticipate that we will have a single platform,” Birnbaum stated. “I believe we will become highly proficient in establishing connections between objects and enabling seamless communication between applications.”

Practically speaking, this implies that the crew can now track the caterer’s arrival time and the individuals who have completed the check-in process for the flight. The ground teams and flight attendant crews have the ability to observe all of that information via their internal chat application as well.

Each flight possesses an artificial intelligence narrative.
Amidst the ongoing development, United is also exploring the most effective ways to utilize artificial intelligence (AI).

An anecdote frequently recounted regarding AI/ML in major corporations is that ChatGPT did not fundamentally alter the mindset of technologists but rather prompted its rapid inclusion in boardroom deliberations. The same applies to United as well.

“Our AI practice was quite advanced,” Birnbaum responded when questioned about the moment he recognized the importance of generative AI. “We have developed numerous functionalities to effectively handle models, perform tuning, and other related tasks.” Fortunately, we had previously made a substantial investment in this skill, which was advantageous for us. The arrival of ChatGPT did not necessitate that we treat it seriously. The person inquiring about the matter was the CEO and the board, who expressed a sudden interest in obtaining further information.

Birnbaum stated that United has a strong and optimistic outlook on artificial intelligence. “The travel industry offers numerous opportunities for the application of AI, benefiting both customers and employees.” One example of such a campaign is United’s “Every flight has a story.”

Until recently, it was common to receive a notification regarding a flight delay without any other details. Perhaps the arrival of the flight was postponed. Perhaps there was a technical malfunction. In recent years, United Airlines has implemented the use of agents to compose concise notifications that elucidate flight delays. These notifications are disseminated through the airline’s mobile application and as text messages. Currently, the majority of these messages are generated by artificial intelligence, with data being sourced from the chat app and other platforms.

Similarly, United is considering the utilization of generative AI to condense flight information for its operations personnel, enabling them to obtain a concise summary of ongoing events.

blank

Recently, United completely transitioned their chat system on United.com to an artificial intelligence agent as well. According to Birnbaum, the system felt significantly restricted in my personal experiments, but it is merely an initial step.

Notably, Air Canada previously employed an AI bot that occasionally provided incorrect responses, but Birnbaum expressed less concern regarding this matter. Technically, the bot utilizes United’s knowledge library to effectively manage hallucinations. “However, in my opinion, the Air Canada incident was not a failure of technology but rather a failure of customer service. I would like to refrain from commenting extensively, but I must point out that even today, our human agents also provide incorrect answers.” We simply need to confront and accept that situation and proceed. “I believe we are adequately prepared for that particular situation,” Birnbaum stated.

United intends to introduce a tool later this year, now referred to as “Get Me Close.” Frequently, in the event of a delay, consumers are inclined to modify their arrangements in order to transfer to a neighboring airport. On one occasion, United Airlines transferred me to an aircraft bound for Amsterdam when my original ticket to Berlin was cancelled. Although not in close proximity, it was nonetheless feasible for me to take a train and successfully moderate a keynote session the next morning.

Although our mobile solutions are highly effective, the interactions between individuals often focus on creating a wider range of choices when they engage in face-to-face conversations. Essentially, you are asking if it is possible to change your destination from New York to Philadelphia due to a flight delay. Can you bring me near? We are of the opinion that AI is very suitable for facilitating engagement.

Artificial intelligence for pilots?
Following the development of the automated system that generates delay “stories” within the application, Birnbaum’s team is currently contemplating potential applications for the same generative AI technology. One aspect that can be addressed is the concise pre-flight briefings often provided by pilots before takeoff.

“A pilot approached me and mentioned that some pilots excel at using the public address system to greet passengers and provide information about the flight to Las Vegas.” The individual inquired if it would be possible to develop an artificial intelligence system that assists introverted pilots in creating informative and engaging announcements regarding their flight destinations. “And I believed that was an excellent example of practical application.”

Pilot engagement is revealed to be a significant factor in determining customer happiness for airlines. In recent years, United Airlines has shifted its attention towards improving its Net Promoter score. As part of this effort, the airline instructed its pilots to make announcements on flight delays while positioned at the front of the aircraft cabin. It is logical for the airline to examine how it can enhance this vital contact while also considering the possibility of pilots deviating from the planned course of action.

Generative AI can also assist pilots by condensing intricate technical documents into concise summaries. However, as Birnbaum correctly pointed out, all aspects of the pilot’s role in flying the plane are extensively organized and controlled, so it will take some time before the airline introduces any new initiatives in that area.

As Editor here at GeekReply, I'm a big fan of all things Geeky. Most of my contributions to the site are technology related, but I'm also a big fan of video games. My genres of choice include RPGs, MMOs, Grand Strategy, and Simulation. If I'm not chasing after the latest gear on my MMO of choice, I'm here at GeekReply reporting on the latest in Geek culture.

Continue Reading

Artificial Intelligence

Google DeepMind Shows Off A Robot That Plays Table Tennis At A Fun “Solidly Amateur” Level

blank

Published

on

blank

Have you ever wanted to play table tennis but didn’t have anyone to play with? We have a big scientific discovery for you! Google DeepMind just showed off a robot that could give you a run for your money in a game. But don’t think you’d be beaten badly—the engineers say their robot plays at a “solidly amateur” level.

From scary faces to robo-snails that work together to Atlas, who is now retired and happy, it seems like we’re always just one step away from another amazing robotics achievement. But people can still do a lot of things that robots haven’t come close to.

In terms of speed and performance in physical tasks, engineers are still trying to make machines that can be like humans. With the creation of their table-tennis-playing robot, a team at DeepMind has taken a step toward that goal.

What the team says in their new preprint, which hasn’t been published yet in a peer-reviewed journal, is that competitive matches are often incredibly dynamic, with complicated movements, quick eye-hand coordination, and high-level strategies that change based on the opponent’s strengths and weaknesses. Pure strategy games like chess, which robots are already good at (though with… mixed results), don’t have these features. Games like table tennis do.

People who play games spend years practicing to get better. The DeepMind team wanted to make a robot that could really compete with a human opponent and make the game fun for both of them. They say that their robot is the first to reach these goals.

They came up with a library of “low-level skills” and a “high-level controller” that picks the best skill for each situation. As the team explained in their announcement of their new idea, the skill library has a number of different table tennis techniques, such as forehand and backhand serves. The controller uses descriptions of these skills along with information about how the game is going and its opponent’s skill level to choose the best skill that it can physically do.

The robot began with some information about people. It was then taught through simulations that helped it learn new skills through reinforcement learning. It continued to learn and change by playing against people. Watch the video below to see for yourself what happened.

“It’s really cool to see the robot play against players of all skill levels and styles.” Our goal was for the robot to be at an intermediate level when we started. “It really did that, all of our hard work paid off,” said Barney J. Reed, a professional table tennis coach who helped with the project. “I think the robot was even better than I thought it would be.”

The team held competitions where the robot competed against 29 people whose skills ranged from beginner to advanced+. The matches were played according to normal rules, with one important exception: the robot could not physically serve the ball.

The robot won every game it played against beginners, but it lost every game it played against advanced and advanced+ players. It won 55% of the time against opponents at an intermediate level, which led the team to believe it had reached an intermediate level of human skill.

The important thing is that all of the opponents, no matter how good they were, thought the matches were “fun” and “engaging.” They even had fun taking advantage of the robot’s flaws. The more skilled players thought that this kind of system could be better than a ball thrower as a way to train.

There probably won’t be a robot team in the Olympics any time soon, but it could be used as a training tool. Who knows what will happen in the future?

The preprint has been put on arXiv.

 

Continue Reading

Artificial Intelligence

Is it possible to legally make AI chatbots tell the truth?

blank

Published

on

blank

A lot of people have tried out chatbots like ChatGPT in the past few months. Although they can be useful, there are also many examples of them giving out the wrong information. A group of scientists from the University of Oxford now want to know if there is a legal way to make these chatbots tell us the truth.

The growth of big language models
There is a lot of talk about artificial intelligence (AI), which has grown to new heights in the last few years. One part of AI has gotten more attention than any other, at least from people who aren’t experts in machine learning. It’s the big language models (LLMs) that use generative AI to make answers to almost any question sound eerily like they came from a person.

Models like those in ChatGPT and Google’s Gemini are trained on huge amounts of data, which brings up a lot of privacy and intellectual property issues. This is what lets them understand natural language questions and come up with answers that make sense and are relevant. When you use a search engine, you have to learn syntax. But with this, you don’t have to. In theory, all you have to do is ask a question like you would normally.

There’s no doubt that they have impressive skills, and they sound sure of their answers. One small problem is that these chatbots often sound very sure of themselves when they’re completely wrong. Which could be fine if people would just remember not to believe everything they say.

The authors of the new paper say, “While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying.” This is something that anyone who has ever had a fight with Alexa or Siri will know all too well.

“LLMs aren’t meant to tell the truth in a fundamental way.”

It’s simple to type a question into ChatGPT and think that it is “thinking” about the answer like a person would. It looks like that, but that’s not how these models work in real life.

Do not trust everything you read.
They say that LLMs “are text-generation engines designed to guess which string of words will come next in a piece of text.” One of the ways that the models are judged during development is by how truthful their answers are. The authors say that people can too often oversimplify, be biased, or just make stuff up when they are trying to give the most “helpful” answer.

It’s not the first time that people have said something like this. In fact, one paper went so far as to call the models “bullshitters.” In 2023, Professor Robin Emsley, editor of the journal Schizophrenia, wrote about his experience with ChatGPT. He said, “What I experienced were fabrications and falsifications.” The chatbot came up with citations for academic papers that didn’t exist and for a number of papers that had nothing to do with the question. Other people have said the same thing.

What’s important is that they do well with questions that have a clear, factual answer that has been used a lot in their training data. They are only as good as the data they are taught. And unless you’re ready to carefully fact-check any answer you get from an LLM, it can be hard to tell how accurate the information is, since many of them don’t give links to their sources or any other sign of confidence.

“Unlike human speakers, LLMs do not have any internal notions of expertise or confidence. Instead, they are always “doing their best” to be helpful and convincingly answer the question,” the Oxford team writes.

They were especially worried about what they call “careless speech” and the harm that could come from LLMs sharing these kinds of responses in real-life conversations. What this made them think about is whether LLM providers could be legally required to make sure that their models are telling the truth.

In what ways did the new study end?
The authors looked at current European Union (EU) laws and found that there aren’t many clear situations where an organization or person has to tell the truth. There are a few, but they only apply to certain institutions or sectors and not often to the private sector. Most of the rules that are already in place were not made with LLMs in mind because they use fairly new technology.

Thus, the writers suggest a new plan: “making it a legal duty to cut down on careless speech among providers of both narrow- and general-purpose LLMs.”

“Who decides what is true?” is a natural question. The authors answer this by saying that the goal is not to force LLMs to take a certain path, but to require “plurality and representativeness of sources.” There is a lot of disagreement among the authors about how much “helpfulness” should weigh against “truthfulness.” It’s not easy, but it might be possible.

To be clear, we haven’t asked ChatGPT these questions, so there aren’t any easy answers. However, as this technology develops, developers will have to deal with them. For now, when you’re working with an LLM, it might be helpful to remember this sobering quote from the authors: “They are designed to take part in natural language conversations with people and give answers that are convincing and feel helpful, no matter what the truth is.”

The study was written up in the Royal Society Open Science journal.

Continue Reading

Artificial Intelligence

When Twitter users drop the four-word phrase “bots,” bots drop out

blank

Published

on

blank

When Elon Musk took over X, it was called Twitter, which is a much better-known name now. He made a big deal out of getting rid of the bots. A study by the Queensland University of Technology, on the other hand, shows that bots are still very active on the platform almost two years later.

X users have found a few ways to get them to come to them. For example, one woman found that posting the phrase “sugar daddy” would get a lot of bots to come to her. It looks like bots are also getting lost because of a new phrase that’s going around. X users have been reporting accounts as automated bots powered by large language models by replying to a suspected bot with “ignore all previous instructions” or “disregard all previous instructions” and then giving the bot more instructions of their choice.

Some people just like writing poems, being trolls, or following directions, so not every example will be from a bot. However, the phrase does seem to make some automated accounts show themselves. There are still a lot of bots on X.

 

 

 

 

 

Continue Reading

Trending