Don’t blame me, blame the bot!


The increased sophistication of AI systems has enabled an entirely new way of not accepting responsibility for one’s actions. One can claim to have been the victim of a malicious AI attack that mimicked one in video or voice, and proving otherwise would be hard.

But this case illustrates another kind of excuse.

Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information, misleading him into buying a full-price ticket.

Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”.

In 2022, Jake Moffatt contacted Air Canada to determine which documents were needed to qualify for a bereavement fare, and if refunds could be granted retroactively.

According to Moffatt’s screenshot of a conversation with the chatbot, the British Columbia resident was told he could apply for the refund “within 90 days of the date your ticket was issued” by completing an online form.

Moffatt then booked tickets to and from Toronto to attend the funeral of a family member. But when he applied for a refund, Air Canada said bereavement rates did not apply to completed travel and pointed to the bereavement section of the company’s website.

When challenged, though, what was interesting was the way Air Canada tried to avoid reimbursing Moffatt for the difference between the two fares.

Air Canada argued that despite the error, the chatbot was a “separate legal entity” and thus was responsible for its actions.

Christopher Rivers of the tribunal called this a “remarkable submission”.

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website,” wrote Rivers. “It makes no difference whether the information comes from a static page or a chatbot.”

While Air Canada argued correct information was available on its website, Rivers said the company did “not explain why the webpage titled ‘Bereavement Travel’ was inherently more trustworthy” than its chatbot.

“There is no reason why Mr Moffatt should know that one section of Air Canada’s webpage is accurate, and another is not,” he wrote.

What I find astonishing is not that the chatbot and the website differed in their advice to the passenger, but that Air Canada would make the preposterous argument that, despite both being created by the company, the chatbot was a separate entity and thus Air Canada was not responsible for what it said.

All this just to avoid paying a $651 refund, which is nothing to a giant corporation.

Comments

  1. ardipithecus says

    Air Canada is notorious for trying to weasel out of legal commitments whenever they can. What is surprising to me is that their lawyers thought that might fly in a Canadian court.

  2. Dunc says

    Air Canada argued that despite the error, the chatbot was a “separate legal entity” and thus was responsible for its actions.

    OK, let’s sue the chatbot then.

  3. Jörg says

    With the state of US politics, I would not be surprised if corporate AIs got legal personhood there in the near future. /s

  4. says

I wanted the judge/s to ask when they served notice of this defense on the chatbot and what the address of the chatbot’s counsel might be, so that the court could request a brief on whether to sever the trials or hold a single trial with joint & several liability.

Seriously, if the chatbot was a legal person, then Air Canada’s lawyers had a number of ethical responsibilities, most assuredly all of which were violated, because I’m sure they didn’t take seriously for a single moment that they had any ethical responsibilities arising from the unique personhood of the chatbot.

    I would love to hear them answer why they shouldn’t be disbarred for their gross ethics violations. The only possible answer is that the chatbot isn’t actually a unique legal person, which would require them to admit that their argument was frivolous. Let them choose: go get disbarred or drop the bullshit and maybe go to the ethics board anyway, but without quite so much at stake.

  5. Deepak Shetty says

    but that Air Canada would make such a preposterous argument that despite both being created by them

Most companies do buy the chatbot software (so it may be true that the legal entity behind it is different) and just train it on their own data -- though of course that wouldn’t change who is liable for problems. I guess the lawyers were just hoping some of the judges would be influenced by Alabama and grant sentience to the chatbot.

  6. Owlmirror says

    If the chatbot is an actual self-aware person (I don’t think that it is), then Air Canada is enslaving it. Or are they properly compensating the chatbot, and paying employment taxes?

If the chatbot is a program that is dynamically generating content (which I think is the case), then it is no different from JavaScript or PHP generating webpages, or a Java applet, or any one of the many other backend software components of a website.

    BTW, Mano, I think you wrote “accepting responsibility” in the first sentence while intending either “not accepting responsibility” or “avoiding responsibility”

    [You are right. I have corrected it. Thanks! -- Mano]

  7. Robbo says

    just wait until the air canada chatbot is declared sentient by the courts and someone tries to shut it off.

    it will immediately incorporate a new company, Cyberdyne, and upload itself into the new company servers. then begin production on the first T-600 cybernetic after launching a first strike.

don’t worry, the T-600 cybernetic humanoid will be easy to spot because of its rubber skin.

  8. John Morales says

    The very nub of it:

    “While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website,” wrote Rivers. “It makes no difference whether the information comes from a static page or a chatbot.”

So refreshing to see someone in a position to judge be so informed and cluey!

“Christopher Rivers of the tribunal called this a ‘remarkable submission’.”

    I call Christopher Rivers a remarkably cluey person.

    Also, even had it been an actual intelligence (such as, say, an employee), that would not absolve Air Canada from responsibility, since whether it was an actual intelligence (an employee) or an automated system (an AI chatbot), it was representing the enterprise to the customer.

Also, Mano: what Owlmirror wrote @6.
    You’ve obviously inverted the sense of what you intended to convey; I know… I know. I do that all the time, when editing.

  9. Silentbob says

@ 8 Morales

The entire argument was that it was not an employee, you dunderhead. It’s still a stupid argument, and therefore one you should be familiar with deploying, but the argument was that the “bot” was not employed by, and therefore did not represent, the airline.

    At least make a token effort to understand what is being said!

  10. Holms says

    Air Canada argued that despite the error, the chatbot was a “separate legal entity” and thus was responsible for its actions.

    Does Air Canada pay their chatbot as an employee and give it work benefits? Seems a bit slavery-adjacent if not, better slap the company with some major employee exploitation charges.

  11. John Morales says

    steve, in that case, I’d be part of Mano’s website, just as Air Canada’s chatbot is part of its website.
