Bl(AI)me game: My chatbot did it! Moffatt v. Air Canada

Introduction

Air Canada’s chatbot told Jake Moffatt of British Columbia, Canada, that a bereavement discount for a flight could be applied retroactively, as a rebate claimed after the purchase was made. A human employee later told Moffatt that retroactive bereavement payments were not permitted. (Source)

Outrage ensued and Moffatt sued.

Our sincere condolences to Jake Moffatt for the loss of his grandmother. If it were up to me, I would have reimbursed the entire flight from Vancouver to Toronto.

I’d like to focus on one particular paragraph from the decision:

Mr. Moffatt says while using Air Canada’s website, they interacted with a support chatbot. While Air Canada did not provide any information about the nature of its chatbot, generally speaking, a chatbot is an automated system that provides information to a person using a website in response to that person’s prompts and input. The parties implicitly agree that Mr. Moffatt was not chatting with an Air Canada employee.

Moffatt v Air Canada, 2024 BCCRT 149 para 14 (Emphasis added)

It is interesting to note that the Tribunal chose to frame the definition of a chatbot around providing information specifically to a person.

The implicit agreement of the parties also ended any hope of that chatbot getting employee benefits if it were to be let go. We all make mistakes, little bot; you should’ve reviewed your contract and negotiated better before you were created and plugged in.

The Case at Hand

Here are the goalposts:

To prove the tort of negligent misrepresentation, Mr. Moffatt must

[1] show that Air Canada owed them a duty of care,

[2] its representation was untrue, inaccurate, or misleading,

[3] Air Canada made the representation negligently,

[4] Mr. Moffatt reasonably relied on it, and

[5] Mr. Moffatt’s reliance resulted in damages.

Moffatt v Air Canada, 2024 BCCRT 149 para 25 (Formatted for clarity)

The commercial relationship created the duty of care, which required Air Canada to take reasonable care to ensure its representations were accurate and not misleading. (para 26) Tribunal Member Rivers found that Air Canada did not take reasonable care to ensure its chatbot was accurate, and that it was reasonable in the circumstances for Moffatt to rely on the chatbot’s information. He also agreed that Moffatt would not have taken the last-minute flight had the correct information been provided.

The Tribunal awarded $650.88 in damages, $36.14 in pre-judgment interest under the Court Order Interest Act, and $125 in CRT fees, for a total of $812.02. (para 44)

The takeaway for corporations reading this: You are responsible for your chatbot.

[…] While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled “Bereavement travel” was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website.

Moffatt v Air Canada, 2024 BCCRT 149 para 28 (emphasis added for corporations)

Employee of the Month: Monsieur AI

Here’s the fun part: Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. (para 27)

I’ve been mentally preparing myself for the clash of epic philosophical proportions that will come when we as a human race have to grapple with AI personhood.

I was not ready for this monumental argument to be broached over 812 Canadian dollars and 2 cents.

Tribunal Member Rivers rejects this argument outright in paragraph 27:

While a chatbot has an interactive component, it is still just a part of Air Canada’s website

That really is the end of that discussion, isn’t it?

Philosophical Considerations and Legal Implications

The capabilities and capacity of the chatbot were never in question for Tribunal Member Rivers.

It was simply obvious that this chatbot did not have legal personhood, and its role as a chatbot was the only defining feature required to classify it as such.

No Turing test, no mental competency criteria, no three-part test so often beloved by judges and lawyers alike… It was simply a settled matter.

Tribunal Member Rivers is not to blame in any way. He was dealing with the case before him, between a self-represented litigant and an “employee of Air Canada”. This was not the case to stir the pot. It was also not an injustice, in my opinion. The right outcome was reached.

However, there is a refreshing amount of clarity and certainty in this decision that has been lacking in a lot of AI-related case law. No skirting around the issue, no reliance on jurisdictional loopholes, no refusing to comment because the case was resolved in another way… Tribunal Member Rivers simply states that this chatbot is no more than a part of the website, equal as a source of information to a static webpage. (para 27)

My inner philosopher always wants a controversy, but this provides a level of certainty to corporations and future litigants that is sorely needed. You are responsible for the information your chatbot provides, in the same way you are responsible for making sure the information, guarantees, and warranties on your website are all consistent.

Talk Duty to Me

So the duty of care rests with the commercial party that puts a chatbot on its website. From a consumer protection standpoint, this is understandable.

Tribunal Member Rivers asserts that Air Canada did not take reasonable care to ensure its chatbot was accurate (para 28), but does not state, explicitly or implicitly, what appropriate reasonable care would look like in this particular case or in general. Obviously, a sufficient level of care is one that would result in the chatbot providing accurate information 100% of the time.

But as commercially available AI technology advances and we as a society come to rely on AI services more and more, what standard of reasonable care can we really expect?

This is an area where I expect a rich body of case law to develop and where many lawyers will spend many hours compiling situations and differentiating facts and details.

I imagine the buck will be passed around quite a lot, and that countersuits and cross-claims dragging AI chatbot developers into the courtroom will eventually have to produce some equilibrium.

The consumer protection considerations are not unreasonable. One solution is to restrict the AI from answering certain types of questions and accept the costs otherwise. It was, at the end of the day, $812.02 in costs. Air Canada can afford that. This exact problem can be addressed and fixed by the IT team or the chatbot provider. The earth will continue to turn and the sun will rise again.
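For the technically inclined, here is a minimal sketch of what that kind of restriction could look like. Everything in it is hypothetical: the topic list, the URLs, and the generate_reply stub stand in for whatever a real deployment would actually use.

```python
# Hypothetical sketch: route policy-sensitive questions away from the AI
# model and toward the authoritative webpage, rather than letting the
# model improvise an answer. Topics and URLs are invented examples.

RESTRICTED_TOPICS = {
    "bereavement": "https://example.com/bereavement-travel",
    "refund": "https://example.com/refunds",
}


def generate_reply(question: str) -> str:
    """Stand-in for the real AI model call (hypothetical)."""
    return f"(model-generated answer to: {question})"


def answer(question: str) -> str:
    lowered = question.lower()
    for topic, policy_url in RESTRICTED_TOPICS.items():
        if topic in lowered:
            # Never paraphrase policy; defer to the source of truth.
            return f"For questions about {topic}, please see {policy_url}"
    return generate_reply(question)


print(answer("Can I apply for a bereavement fare after my flight?"))
# -> For questions about bereavement, please see https://example.com/bereavement-travel
```

Even a crude keyword filter like this would not have improvised an answer to Moffatt’s question; it would have pointed him straight to the “Bereavement travel” page instead of contradicting it.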

Another solution is to stick to sales representatives for client interactions and refer clients to the convoluted terms and conditions and the static web pages for information. It is more expensive for the companies and frustrating for clients who need an easy answer, but it is a cost we may choose to take on in order to ensure that whenever a service is provided, it meets a certain standard.

One frustrating solution I encounter often enough, and which is certainly safer for the companies, is the “chatbot” with preprogrammed questions to select from and preprogrammed answers for those questions. The world’s most mind-numbing choose-your-own-adventure book.
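For illustration, here is a toy version of that menu-driven design. The questions and answers are invented, but the structure is the point: nothing off-script can ever be said.

```python
# Toy "choose your own adventure" chatbot: the user can only select from
# preprogrammed questions, so every answer is vetted in advance.
# Questions and answers are invented for illustration.

FAQ = [
    ("Can I get a bereavement fare after booking?",
     "Bereavement fares must be requested before travel. See the Bereavement travel page."),
    ("How do I request a refund?",
     "Refund requests are handled through the Refunds page."),
]


def run_menu_bot() -> None:
    while True:
        for number, (question, _) in enumerate(FAQ, start=1):
            print(f"{number}. {question}")
        choice = input("Pick a question number (or 'q' to quit): ").strip()
        if choice.lower() == "q":
            break
        if choice.isdigit() and 1 <= int(choice) <= len(FAQ):
            print(FAQ[int(choice) - 1][1])
        else:
            print("Please choose one of the listed numbers.")


if __name__ == "__main__":
    run_menu_bot()
```

Safe for the company, because nothing unvetted is ever said; mind-numbing for the customer, for exactly the same reason.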

Final Thoughts

The discussion of duties owed and standards of care will depend not only on the level of sophistication of the AI in question, but also on the sensitivity of the function performed and the information communicated.

It really will have to develop on a case-by-case basis. A one-size-fits-all approach is going to create more of a mess, but a general guideline could at least provide something to talk about.

As Tribunal Member Rivers delivered the decision on Valentine’s Day 2024, there was nothing other than years of legal training and experience guiding the Tribunal on the duty of care expected. The Air Canada employee was making arguments and hoping for the best.

A standardized guideline for consumer protection due diligence best practices for AI use within a jurisdiction’s economy would at least provide a metric to adhere to or deviate from. A common language.

As for AI personhood, I will keep you updated. But I do hope that whatever decision we make will not be too cruel either to humans or to legal persons (organic or not) with feelings.


ABOUT THE AUTHOR

Nawar Kamel is CEO and Co-Founder of Experto AI Inc., and a licensed Canadian lawyer in Ottawa, ON, Canada.

Nawar started his academic path studying philosophy and went on to earn his master’s in philosophy, focused on social contract theory, from York University. He then graduated from the University of Ottawa Faculty of Common Law and worked as a litigator, spending his days fighting in the courts on behalf of his clients, until he founded Experto AI Inc., which was established to create AI tools geared toward lawyers and legal researchers.
