96 votes

Air Canada successfully sued after its AI chatbot gave BC passenger incorrect information: airline claimed it wasn't liable for what its own AI told customers

36 comments

  1. [26]
    Barrelephants
    Link
    "In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions." This is laughable now, but it won't be long until AI starts entering that uncanny...

    "In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions."

    This is laughable now, but it won't be long until AI starts entering that uncanny valley where they seem real enough that legal arguments like this might hold up.

    42 votes
    1. [5]
      unkz
      Link Parent

      Seems irrelevant — even if an AI becomes a legal entity, it will be an entity in the employ of or contracting to Air Canada. If an employee gave incorrect info, the company would be just as liable.

      The likely result of this is a disclaimer at the start of every session saying that nothing the chatbot says can be relied on.

      58 votes
      1. [3]
        Eji1700
        Link Parent

        Which is just such trash. It's like saying "nothing an employee says can be relied upon" and then just having them straight up lie or mislead you for profit.

        Someone needs to kick these "you sign away your life" TOS agreements in the teeth, and hard, but I doubt it's about to happen.

        42 votes
        1. [2]
          GunnarRunnar
          Link Parent

          If you're providing a service, you're responsible for it. No matter if it's a single employee or dumbass AI or just incorrect information on your website. I don't understand how we have even opened the door for other possibilities.

          23 votes
          1. first-must-burn
            Link Parent

            First, I heartily agree with you that the provider should be responsible for the service they provide.

            I don't understand how we have even opened the door for other possibilities.

            It's pretty simple. The logic of capitalism is that whatever they are selling as a service or product is irrelevant and the only thing that matters is maximizing profit. This company got burned because they got out ahead of the pack, but you can bet there are plenty of lobbyists pushing for legal cover for this kind of thing.

            15 votes
      2. TanyaJLaird
        Link Parent

        I don't think Air Canada thought this all the way through. An AI that is so sophisticated that it is its own legal entity should also meet the qualifications of personhood. If I were some absolute dictator and someone brought that claim before me, I would monkey's paw that wish so quick. "We agree, Air Canada, that you are not responsible for the actions of your AI. We are, however, going to imprison your entire leadership team on charges of slavery."

        12 votes
    2. [19]
      Carighan
      Link Parent

      Do they? Remember, what the media has successfully rebranded as "AI" is a stochastic parrot system.

      And just like humans blabbering something they've picked up (without understanding any of it), yeah, sure, it can fool people. Fake it till you make it is a thing for a reason.

      But it's also not in any way revolutionary or even noteworthy. If anything, LLMs are inferior at this because they do stochastic recombinations where none are necessary. Just quoting a certain input source verbatim would fool the people asking much better: the wording wouldn't be as obviously hacked together, and they couldn't tell it was just stealing someone else's words anyway.

      More importantly, LLMs are, as said above, just recombining things you put in. Think of a "traditional" chatbot, very old generation, that has a long long list of fixed responses to fixed questions. Now merely replace the fixed questions with a matcher that assigns percentages ("This seems to be like 60% about travel advice, 30% about booking and 10% about weather"), takes answers according to the percentages, and hacksaws them together based on how existing texts it can access put words together.
      It's a Chinese room. It has no way to actually know the content of the questions or the replies it's giving; it is just matching pieces of output to pieces of input, of which it understands neither.
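
      Very roughly, that picture looks like this (a toy sketch; the categories, weights, and canned lines are all made up):

      ```python
      import random

      # Toy canned-answer fragments per category (all invented for illustration).
      ANSWERS = {
          "travel advice": "Check entry requirements before you fly.",
          "booking": "Bookings can be changed online up to 24h before departure.",
          "weather": "See the destination page for forecasts.",
      }

      def classify(question: str) -> dict[str, float]:
          # Stand-in for the percentage matcher described above. A real system
          # would score the question with a model; here the numbers are hardcoded.
          return {"travel advice": 0.6, "booking": 0.3, "weather": 0.1}

      def respond(question: str) -> str:
          scores = classify(question)
          # Sample fragments in proportion to the scores and stitch them together.
          picks = random.choices(list(scores), weights=list(scores.values()), k=2)
          return " ".join(ANSWERS[c] for c in picks)

      print(respond("Can I rebook my flight if it rains?"))
      ```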

      How can it have responsibility, then, if it does not understand its actions or their context? After all, that's why children have limited liability, as do those not of sound mind at the time of their actions. And LLMs are 100% never of sound mind in this context, and cannot change this. They play a 5,000-piece puzzle with all the pieces turned face down.

      12 votes
      1. [13]
        Ashura_Savarra
        Link Parent

        I wish people would stop putting "AI" in air quotes or calling LLMs "not AI." It's an AI. The ghosts in Pac-Man are AI. All it needs to do in order to qualify for that distinction is be able to make choices without direct user input.

        What LLMs aren't, what no AI we currently know of is, is AGI. For that, it would need to understand the larger context of its decisions, and there's ongoing debate as to whether that's even technically possible. But they are definitely still AI. To say otherwise is misleading.

        22 votes
        1. [6]
          DefinitelyNotAFae
          Link Parent
          • Exemplary

          The problem is that the average human has spent their entire lives hearing the term AI in reference to something near-sapient or actually fully sapient thanks to our science fiction and the news articles that cover it. So using the technically correct term is causing genuine confusion with the general population. And I think, personally, that AI marketing has taken advantage of this and is partially to blame.

          But I'm not sure insisting that we call Pac-Man ghosts AI is going to be an effective solution.

          19 votes
          1. [2]
            Ashura_Savarra
            Link Parent

            I agree completely. But I also think the opening salvo of an argument should never be semantic. Especially when it's incorrect.

            Suddenly everyone's a self-appointed expert.

            Every time I've seen the point that LLMs are just repeating things they've read without true understanding, it's been made by a human doing exactly that. The irony is really special.

            15 votes
            1. DefinitelyNotAFae
              Link Parent
              I think that using "AI" helps delineate that and avoids getting into the semantic debate somewhat but then again here we are

              I think that using "AI" helps delineate that and avoids getting into the semantic debate somewhat but then again here we are

              7 votes
          2. [2]
            raze2012
            Link Parent

            And I think, personally, that AI marketing has taken advantage of this and is partially to blame.

            100%. From Pac-Man to GPT-5, AI as a term has been stretched thin to mean anything from a simple "scripted state machine" to your favorite sci-fi/cyberpunk story on androids.

            But at the same time, that's language. We successfully redefined "literally" as its own antonym. I sort of wish we could use that power to plug actual gaps in our language instead of further overloading common words, but no one person decides how society moves and talks.

            4 votes
            1. DefinitelyNotAFae
              Link Parent

              I get that it's language; that's why I think putting quotes around AI to provide some attempt at clarity is fine. The marketing manipulation part of it isn't fine IMO. Probably legal and all, but not fine, because it's counting on that misunderstanding.
              It is odd, as a non-tech person, to watch the two sides of the AI debate each time.

              I love literally as its antonym because it's only the antonym in a hyperbolic sense. So literally means figuratively but only when hyperbolically used. I love it.

              1 vote
          3. teaearlgraycold
            (edited )
            Link Parent
            8 years ago "machine learning" hype was similarly overblown. I felt at the time that "automated statistics" would be a much better term. It demystifies ML in a way that suddenly helps anyone...

            8 years ago "machine learning" hype was similarly overblown. I felt at the time that "automated statistics" would be a much better term. It demystifies ML in a way that suddenly helps anyone that's taken a stats class understand what's happening in the machine.

            2 votes
        2. [4]
          arghdos
          Link Parent

          All it needs to do in order to qualify for that distinction is be able to make choices without direct user input

          If statements, the ultimate AI

          3 votes
          1. [3]
            semitones
            Link Parent

            You joke, but the distinction between AI and Not AI exists somewhere on a spectrum of complexity.

            One if statement wouldn't count as AI for most people. But plenty of video game opponents, etc., are probably just complicated logic, e.g. many if statements. The programmers made all the choices earlier, but the "AI opponent" or NPC still appears to make choices in the moment.
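
            A whole game "AI" in that sense can be as plain as this (hypothetical NPC logic, just to make the point):

            ```python
            def npc_action(health: int, distance: float, has_ammo: bool) -> str:
                # "Choices without direct user input", made entirely of if statements.
                if health < 20:
                    return "flee"
                if distance < 2:
                    return "melee"
                if distance < 10 and has_ammo:
                    return "shoot"
                return "patrol"
            ```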

            7 votes
            1. [2]
              teaearlgraycold
              Link Parent

              This reminds me of Wolfenstein 3D's enemies. I read the amazing "Game Engine Black Book" (well, both of them) and it mentions how the enemies are really simple and just walk towards the player while shooting. But by having activation zones that switch specific enemies on when entered - and placing enemies down occluded corridors - John Carmack was able to give the feeling of an ambush. As you walk towards a 4-way intersection suddenly you'll activate 2 guards down the perpendicular corridor and they pop out at the same time. It felt very intelligent at the time, but really it's no more complicated than a trip-wire dumping a bucket of water on you.
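
              The whole trick fits in a few lines (a sketch of the idea, not the actual engine code; all names made up):

              ```python
              from dataclasses import dataclass, field

              @dataclass
              class Guard:
                  x: float
                  y: float
                  active: bool = False  # dormant until a zone trips it

                  def update(self, px: float, py: float) -> None:
                      if not self.active:
                          return
                      # The entire enemy "AI": step straight toward the player.
                      self.x += 1 if px > self.x else -1
                      self.y += 1 if py > self.y else -1

              @dataclass
              class ActivationZone:
                  x0: float
                  y0: float
                  x1: float
                  y1: float
                  guards: list[Guard] = field(default_factory=list)

                  def check(self, px: float, py: float) -> None:
                      # The trip-wire: entering the zone switches its guards on.
                      if self.x0 <= px <= self.x1 and self.y0 <= py <= self.y1:
                          for g in self.guards:
                              g.active = True
              ```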

              3 votes
              1. ThrowdoBaggins
                Link Parent

                Trip-wires, the ultimate AI

                (Sorry, I had to! Feel free to mark this as noise)

        3. [2]
          Carighan
          Link Parent

          Don't you mean that they're CI, not AI? Or did the terminology change much in the past 20 years? Because that's how I originally learned it: we can - and could, for a long long time - do CI, but we're struggling with how to even approach GI.

          2 votes
          1. Ashura_Savarra
            Link Parent

            I'm having a hard time finding usage of the term "ACI" that's more than a few months old. It's being used to differentiate current gen systems from older, more limited ones, while also distinguishing it from AGI. If there's an older usage, it's been drowned out in my searches by the current buzz.

            3 votes
      2. [3]
        DawnPaladin
        Link Parent

        As someone who uses an LLM as a programming assistant daily, LLMs are more powerful than you're making them out to be.

        LLMs do not have sapience or personhood, but they are capable of independent reasoning. Every day I take original code I've written (which could not have been in its training dataset), pass it to my AI assistant, and ask it to fix an error I'm seeing. Very often it's able to identify the source of the error, explain what I'm doing wrong, and suggest a fix that's customized to my codebase and coding style. It can read the code of a library that was released after its knowledge cutoff, explain it, and suggest how to integrate it into my private codebase. I can ask it how to achieve a goal in my program, and it will come up with a plan for how to achieve that goal.

        If that's not understanding, if it's not reasoning, then it's close enough as makes no difference.

        11 votes
        1. [2]
          Carighan
          Link Parent

          Hrm, interesting. Might I ask which assistant and which language that is for?

          I tried ChatGPT 3.5, ChatGPT 4, and IntelliJ's AI Assistant for Java.

          My results were:

          • It's amazing for lookups of specific non-negotiable data. In particular ChatGPT can parse and return from javadoc or a manual page far faster than I could possibly find it.
          • It's okay - in particular the latter - for menial code work. Quite good, in fact. Only... that's doing stuff a single Lombok annotation or an Alt+Insert would also perform.
          • It's absolutely rubbish for anything beyond that. It cannot add documentation that explains the how+why (only the what+how, which is not the point and in fact worsens code quality across a company over time), it absolutely cannot code, and when it pieces stuff together it only performs superficial steps, meaning it can only do things that are inherently trivial to implement. See above again. It never once did something that went beyond that, sadly.

          I was super disappointed by that use case in particular, especially with IntelliJ's assistant as that's hyper-specific to the Java/Kotlin use case. And yet all it did was convince me thoroughly that if I ever want to ditch Lombok fully, it has me covered.

          2 votes
          1. DawnPaladin
            Link Parent

            I use Cursor, a fork of VSCode, with GPT-4 via an OpenAI API key, on a TypeScript project.

            LLMs have relatively short context windows, so Cursor provides lots of good tools for giving GPT-4 the right context in which to do useful work (a rough sketch of the pattern follows the list):

            • Show the AI specific files from your codebase
            • Index your whole codebase and make an educated guess about what to show the AI
            • Have the AI read a documentation website
            • Have the AI search the Web
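
            Under the hood, the general pattern is roughly "paste the chosen context into the prompt". A generic sketch against the public OpenAI chat API, not Cursor's internals; the file names and question are hypothetical:

            ```python
            from pathlib import Path
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            # Hypothetical files, picked by hand or by a codebase index.
            context_files = ["src/app.ts", "src/routes.ts"]
            context = "\n\n".join(f"// {p}\n{Path(p).read_text()}" for p in context_files)

            response = client.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "system", "content": "You are a coding assistant."},
                    {"role": "user", "content": f"{context}\n\nWhy does /login return 404?"},
                ],
            )
            print(response.choices[0].message.content)
            ```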

            I will say that while GPT-4 is low-to-medium-level competent at almost everything, it's not an expert at anything. If you've been focused on the same codebase for years, GPT may not have much to offer you. GPT is at its strongest when you're learning something new. In particular it's good at getting you past the stage where you don't know what questions to ask. When I was learning programming, I would frequently get stuck in situations where Google and StackOverflow were unhelpful, but a human tutor could recognize the problem and get me unstuck quickly; GPT-4 provides a tutor who's always instantly on call.

            If you've been wanting to pick up skills in a new language or library, I highly recommend downloading Cursor and using GPT-4 to help you learn. GPT-4 makes learning new things much easier; you might be able to start and finish a learning project much faster than you expect.

            2 votes
      3. [2]
        Minty
        Link Parent

        The LLMs matching customer queries to FAQ answers are pretty great and effective, though. Better than traditional keyword matching bots. Which is how all of these should work if the companies want to maximize utility and control the outputs.
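
        Concretely, that control can look like this: embed the query, find the closest vetted FAQ entry, and return its answer verbatim. A minimal sketch using sentence-transformers (the FAQ entries are made up):

        ```python
        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("all-MiniLM-L6-v2")

        # Made-up FAQ: question -> vetted, human-written answer.
        faq = {
            "How do I change my booking?": "Bookings can be changed up to 24h before departure.",
            "What is the baggage allowance?": "One checked bag up to 23 kg is included.",
        }
        questions = list(faq)
        question_embeddings = model.encode(questions, convert_to_tensor=True)

        def answer(query: str) -> str:
            scores = util.cos_sim(
                model.encode(query, convert_to_tensor=True), question_embeddings
            )[0]
            best = questions[int(scores.argmax())]
            return faq[best]  # serve the vetted answer verbatim; no free generation

        print(answer("Can I move my flight to another day?"))
        ```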

        9 votes
        1. Carighan
          Link Parent

          Yeah, these can be great for improving how we interpret "loose" search queries. But they should not recombine answers beyond links to the existing answers, basically. Or only in very tightly controlled environments.

          5 votes
    3. sparksbet
      Link Parent

      If anything, improvements to AI systems would make this legal argument less cogent, since it would be more analogous to a human customer service employee.

      4 votes
  2. [2]
    zptc
    Link

    Air Canada's argument was even worse than the article states according to the linked decision from the judge.

    Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions.

    So they don't think they would have been responsible for what a human "representative," presumably someone on their payroll, would have said?

    also

    To the extent Air Canada argues it is not liable due to certain terms or conditions of its tariff, I note it did not provide a copy of the relevant portion of the tariff. It only included submissions about what the tariff allegedly says. Air Canada is a sophisticated litigant that should know it is not enough in a legal process to assert that a contract says something without actually providing the contract. The CRT also tells all parties to provide all relevant evidence. I find that if Air Canada wanted to raise a contractual defence, it needed to provide the relevant portions of the contract. It did not, so it has not proven a contractual defence.

    They made arguments without the relevant evidence or explanations needed. Even without the AI part this seems like someone dropped the ball.

    17 votes
    1. semitones
      Link Parent

      So they don't think they would have been responsible for what a human "representative," presumably someone on their payroll, would have said?

      I've been wondering about this for some time, because I've noticed some companies have salespeople who will say anything to make the sale, but it does not turn out to be true.

      6 votes
  3. [2]
    Fiachra
    Link

    Sometimes the incompetent execution of a concept can be very helpful to illustrate the intent behind it, so you can recognise it when it's better disguised.

    "We're not responsible for our algorithm when it gives wrong information to customers" was obviously never going to fool anyone, but "we're not responsible for our algorithm when it disproportionately denies loan approval to particular groups" does fool some people. What they both stand to gain is a sacrificial lamb, a machine eternally occupying Barney Stinson's P.L.E.A.S.E. role: someone they can assign any risky/illegal task to, who can be blamed later, thereby protecting the company. Not that the illegal conduct is the point per se, but when regulations conflict with profit it's in their interest to do due dilligence badly, fudge legal boundaries as much as they can get away with, let as many cases slip through the cracks as they can. AI helps them fudge further and insulate against the consequences.

    I expect countless more examples will come along over the next few years.

    "We didn't fail to moderate illegal hate speech, it was our ChatGPT based auto-mod!"

    "We didn't fail to screen for money laundering, it was our anti-crime AI!"

    "We didn't discriminate against disabled people in our hiring practices, it was our 'personal values online questionnaire'!"

    7 votes
    1. PuddleOfKittens
      Link Parent

      I think the term you're looking for is "moral crumple zone", which is used with self-driving cars to refer to how, in an emergency, the car reverts control to the "driver" 1 second before the crash (which is way under human reaction time and does nothing other than give the corporation a potential shield from legal liability, since they can honestly claim their software wasn't driving at the moment of the crash).

      3 votes
  4. [4]
    kovboydan
    Link

    Apologies to Canadians, but I don’t know much about public records or legal research in your jurisdiction and your “articling” process is confusing to me.

    The linked article is terribly light on details so this is at best a guess, but the only reasonable legal argument I can think of that could be reported thusly is:

    1. Air Canada uses a third party’s “AI” on its website.
    2. The third party’s “AI” provides incorrect information.
    3. Air Canada is sued based on failing to perform as suggested by the “AI.”
    4. Air Canada argued the “AI” is a separate legal entity and that Air Canada isn’t responsible. More specifically, they argue that the company providing the “AI” used on Air Canada’s website is a separate legal entity and that is the entity that should be sued.
    5. The judge recognizes a very normal, run-of-the-mill argument made by defendants but rules against Air Canada because their website is their website.

    4 votes
    1. [3]
      TheJorro
      Link Parent

      It might be better to read the decision directly if the article's reporting on it isn't as helpful as you'd like.

      11 votes
      1. [2]
        kovboydan
        Link Parent

        Thanks for the link. It’s possible the filings would change my view but this reads like really bad lawyering on Air Canada’s side.

        Using the terms of the contract with the customer as a defense without submitting the contract into evidence? Bad.

        Not enforcing that indemnification clause in the contract they have with whoever their chatbot is from? Bad.

        Possibly not having an indemnification clause in that contract? Worse.

        Treating a product like a person and forgetting about whatever Canada’s version of respondeat superior is? Hilariously bad.

        9 votes
        1. TheJorro
          Link Parent

          Right? Reading through it, I couldn't figure out if I was much more ignorant of practicing law than I thought I was, or if they hired the first person off the tarmac they saw to do their legal defense.

          So much of this just strikes me as the end result of non-stop glad-handing towards executive whims, and their actual legal team knew they had nothing usable at all because no due diligence was done in the first place. Heads should roll over something this poorly done, but considering how AC has been run, they'll probably shrug and then do it again.

          5 votes
  5. chocobean
    Link

    Oh phew.

    I misread the Tildes title and thought "Air Canada successfully sued" meant AC was the one who sued, successfully, a customer.

    But it is the reverse: AC was found to be at fault.

    Air Canada's chatbot gave a B.C. man the wrong information. Now, the airline has to pay for the mistake

    11 votes
  6. Hollow
    Link

    Air Canada, for its part, argued that it could not be held liable for information provided by the bot.

    "In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website," Rivers wrote.

    "It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."

    55 votes