Did Air Canada create a sentient chatbot?
Air Canada's chatbot gave bad advice, but the airline argued that the chatbot is a separate entity entirely, responsible for its own actions.
Jake Moffatt went to the Air Canada website to book a flight following the death of his grandmother. There, he communicated with a chatbot, which wrote that he could apply for special bereavement fares and get money back at a later date. However, when he tried to claim the money back he was refused a refund, and was told by another employee that the company does not permit retroactive applications.
Moffatt took Air Canada to court, arguing that its chatbot gave him the wrong advice, and tried to recover the $880 difference in fares. Air Canada’s defence? That it cannot be held liable for the information provided by the chatbot because the chatbot is a “separate legal entity responsible for its own decisions.” The court found this claim to be a "remarkable submission". So, can companies just put out chatbots and then claim no responsibility for what they say?
Air Canada also gave its chatbot status alongside its "agents, servants, or representatives". It is essentially arguing that the company is no more responsible for what the chatbot says than it is able to control what an employee says. But this gives the chatbot an incredible degree of agency: it means that no designer or programmer is responsible for the final output of the bot. It implies that the bot has some level of independent decision-making ability. It is not just repeating what the website says, otherwise it would have been accurate; no, the chatbot decided to lie to Moffatt. How can Air Canada not be responsible for that! Giving the chatbot agency is the only way the argument works.
It seems Air Canada had a choice. It could have argued that its chatbot is merely technical in nature, and that this error is more akin to a typo in a policy on the website. Or it could argue that the chatbot is its own agent, able to choose its own destiny and make decisions for itself. It took a punt that the latter might get it off the hook.
There is a new area of AI research into self-governing AI, known as agentic AI. OpenAI released a white paper on it recently, which defines an agentic AI system as one “that can pursue complex goals with limited direct supervision”. They give the example of asking an AI how to bake a chocolate cake. Not only does it give you the recipe, it also finds the ingredients, locates shops online selling them, places an order, and has it delivered to your house.
They also note the ways this kind of system could go wrong. Say you want to make authentic Japanese food: instead of finding the ingredients locally, it buys you an expensive airline ticket to Japan so you can buy them fresh (you did ask for it to be authentic, after all!). This type of AI requires a complex level of autonomous decision making, and a large degree of human alignment to make sure its decisions are what we expect them to be. It is not clear that Air Canada’s chatbot was so advanced that it is capable of this kind of autonomous decision making!
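To make the alignment problem concrete, here is a toy sketch of the kind of guardrail an agentic system needs. Everything in it is invented for illustration (no real agent framework or API is assumed): an agent loop that executes cheap actions but escalates anything over a spend limit to a human, which is exactly the check that would have stopped the runaway flight-to-Japan purchase.

```python
# Toy illustration of an agentic loop with a human-approval guardrail.
# All names and values here are hypothetical; no real API is assumed.

SPEND_LIMIT = 50.0  # dollars the agent may spend without asking a human

def run_agent(goal, proposed_actions):
    """Execute proposed actions toward a goal, escalating costly ones to a human."""
    log = []
    for action, cost in proposed_actions:
        if cost > SPEND_LIMIT:
            # The agent is not allowed to act autonomously here.
            log.append(f"ESCALATE: {action} (${cost:.2f}) needs human approval")
        else:
            log.append(f"EXECUTE: {action} (${cost:.2f})")
    return log

# The "authentic Japanese food" failure mode: without a guardrail the
# agent would happily book the flight; with one, a human is consulted.
actions = [
    ("order soy sauce online", 8.50),
    ("order fresh wasabi", 24.00),
    ("book flight to Japan", 1200.00),
]
for line in run_agent("make authentic Japanese food", actions):
    print(line)
```

The point of the sketch is not the implementation but the design choice: autonomy is bounded by an explicit limit, and a named human stays in the loop for anything beyond it.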
In their conversation the chatbot did include a link to the Air Canada policy page, which states that the airline does not offer retroactive refunds on these types of tickets. The court also rightly asked why information on the website should be trusted more than what the chatbot says. Whether it is a website or a chatbot, both are giving information from the company, and it isn't up to the customer to decide which is more trustworthy. There was no warning in place saying the bot can make things up. Even if there were, what is the incentive for a customer to use a bot if there is a chance it will just give you the wrong information?
Ok, maybe claiming sentience is a bit of a stretch. However, saying that a chatbot has agency, and grouping it alongside employees and other representatives, gives this bot status beyond being just a tool. Thinking like this sets a dangerous precedent for overreliance on AI technology, where companies absolve themselves of any responsibility or transparency around AI.
This case also serves as a warning to businesses using chatbots, as the court found that Air Canada didn't take due care to ensure its chatbot was accurate. Air Canada themselves appear to have disabled the chatbot, which they had described as an experiment. (A further philosophical question: If a business chatbot has agency, is disabling it closer to firing it or killing it? )
Either way, the case shows the need for companies to put safeguards and monitoring in place for all AI automation. While truly agentic AI might not be here yet, OpenAI do note that it is important that at least one human party (whether an individual, corporation, or other legal entity), and not solely an AI system, is accountable for any harm caused by an AI system. It is not enough to hand over all responsibility to an AI; humans have to be responsible as well.
NB in the coming weeks my colleagues and I have a paper coming out on the moral responsibility of implementing generative AI in business. Stay tuned!