Air Canada successfully sued after its AI chatbot gave BC passenger incorrect information: airline claimed it wasn't liable for what its own AI told customers
Link information (scraped automatically; may be incorrect)
- Title: Air Canada's chatbot gave a B.C. man the wrong information. Now, the airline has to pay for the mistake
- Published: Feb 15, 2024
"In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions."
This is laughable now, but it won't be long before AI enters that uncanny valley where it seems real enough that legal arguments like this might hold up.
Seems irrelevant — even if an AI becomes a legal entity, it will be an entity in the employ of, or contracting to, Air Canada. If an employee gave incorrect info, the company would be just as liable.
The likely result of this is a disclaimer at the start of every session saying that nothing the chatbot says can be relied on.
Which is just such trash. It's like saying "nothing an employee says can be relied upon" and then just having them straight up lie or mislead you for profit.
Someone needs to kick these "you sign away your life" TOS agreements in the teeth, and hard, but I doubt it's about to happen.
If you're providing a service, you're responsible for it. No matter if it's a single employee or dumbass AI or just incorrect information on your website. I don't understand how we have even opened the door for other possibilities.
First, I heartily agree with you that the provider should be responsible for the service they provide.
It's pretty simple. The logic of capitalism is that whatever they are selling as a service or product is irrelevant; the only thing that matters is maximizing profit. This company got burned because it got out ahead of the pack, but you can bet there are plenty of lobbyists pushing for legal cover for this kind of thing.
I don't think Air Canada thought this all the way through. An AI that is so sophisticated that it is its own legal entity should also meet the qualifications of personhood. If I were some absolute dictator and someone brought that claim before me, I would monkey's paw that wish so quick. "We agree Air Canada that you are not responsible for the actions of your AI. We are however going to imprison your entire leadership team under charges of slavery."
Do they? Remember, what the media has successfully rebranded as "AI" is a stochastic parrot system.
And just like humans just blabbering something they've picked up (without understanding any of it), yeah sure it can fool people. Fake it till you make it is a thing for a reason.
But it's also not in any way revolutionary or even noteworthy. If anything, LLMs are inferior at this, because they do stochastic recombination where none is necessary. Just quoting a single input source verbatim would fool the people asking far better: the wording wouldn't be as obviously hacked together, and they couldn't tell it was just stealing someone else's words anyway.
More importantly, LLMs are, as said above, just recombining things you put in. Think of a "traditional" chatbot, very old generation, with a long list of fixed responses to fixed questions. Now merely replace the fixed questions with a matcher that assigns percentages ("This seems to be about 60% travel advice, 30% booking and 10% weather"), takes answers according to the percentages, and hacksaws them together based on how existing texts it can access put words together.
It's a Chinese room. It has no way to actually know the content of the questions or the replies it's giving; it is just matching pieces of output to pieces of input, and it understands neither.
How can it have responsibility, then, if it does not understand its actions or their context? After all, that's why children have limited liability, as do those not of sound mind at the time of their actions. And LLMs are 100% never of sound mind in this context, and cannot change this. They play a 5000-piece puzzle with every piece face down.
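The "old chatbot plus a percentage matcher" described above can be sketched in a few lines. This is a toy illustration only; the intents, keywords, and canned answers are all invented, and real systems would use a learned model rather than word overlap:

```python
# Toy sketch of a canned-answer chatbot with a percentage matcher.
# Intents, keywords, and answers are made up for illustration.

CANNED = {
    "travel": "Here is our general travel advice page.",
    "booking": "You can change a booking under My Trips.",
    "weather": "Check the airport's weather advisory page.",
}

KEYWORDS = {
    "travel": {"travel", "trip", "advice", "visa"},
    "booking": {"book", "booking", "ticket", "change"},
    "weather": {"weather", "storm", "delay", "snow"},
}

def match_scores(question: str) -> dict:
    """Assign each intent a rough share based on keyword hits."""
    words = set(question.lower().split())
    hits = {intent: len(words & kws) for intent, kws in KEYWORDS.items()}
    total = sum(hits.values()) or 1
    return {intent: n / total for intent, n in hits.items()}

def answer(question: str) -> str:
    """Reply with the canned answer for the best-matching intent."""
    scores = match_scores(question)
    best = max(scores, key=scores.get)
    return CANNED[best]
```

The point of the sketch is that nothing in it "knows" what travel or booking is; it only counts word overlaps and parrots back pre-written text, which is the Chinese-room objection in miniature.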
I wish people would stop putting "AI" in air quotes or calling LLMs "not AI." It's an AI. The ghosts in Pac-Man are AI. All it needs to do in order to qualify for that distinction is be able to make choices without direct user input.
What LLMs aren't, what no AI we currently know of is, is AGI. For that, it would need to understand the larger context of its decisions, and there's ongoing debate as to whether that's even technically possible. But they are definitely still AI. To say otherwise is misleading.
The problem is that the average human has spent their entire lives hearing the term AI in reference to something near-sapient or actually fully sapient thanks to our science fiction and the news articles that cover it. So using the technically correct term is causing genuine confusion with the general population. And I think, personally, that AI marketing has taken advantage of this and is partially to blame.
But I'm not sure insisting that we call Pac-Man ghosts AI is going to be an effective solution
I agree completely. But I also think the opening salvo of an argument should never be semantic. Especially when it's incorrect.
Suddenly everyone's a self-appointed expert.
Every time I've seen the point that LLMs are just repeating things they've read without true understanding, it's been made by a human doing exactly that. The irony is really special.
I think that using "AI" helps delineate that and avoids getting into the semantic debate somewhat but then again here we are
100%. From Pac-Man to GPT-5, AI as a term has been stretched thin to cover everything from a simple scripted state machine to your favorite sci-fi/cyberpunk story about androids.
But at the same time, that's language. We successfully redefined "literally" as its antonym. I sort of wish we could use that power to plug actual gaps in our language instead of further overloading common words, but no one person decides how society moves and talks.
I get that it's language, it's why I think putting quotes around AI to provide some attempt at clarity is fine. The marketing manipulation part of it isn't fine IMO. Probably legal and all but not fine because it's counting on that misunderstanding.
It is odd, as a non-tech person, to watch the two sides of the AI debate each time.
I love literally as its antonym because it's only the antonym in a hyperbolic sense. So literally means figuratively but only when hyperbolically used. I love it.
8 years ago "machine learning" hype was similarly overblown. I felt at the time that "automated statistics" would be a much better term. It demystifies ML in a way that suddenly helps anyone that's taken a stats class understand what's happening in the machine.
If statements, the ultimate AI
You joke, but the distinction between AI and Not AI exists somewhere on a spectrum of complexity.
One If statement wouldn't count as AI, by most people. But plenty of video game opponents, etc., are probably just complicated logic, e.g. many if statements. The programmers made all the choices earlier, but the "AI opponent" or NPC is still appearing to make choices in the moment.
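A game-opponent "AI" of the sort described above really can be nothing but if statements. A minimal, invented example (the thresholds and actions are arbitrary):

```python
# A guard "AI" made entirely of if statements. Every choice here was
# really made by the programmer ahead of time; the NPC only appears to
# decide in the moment. Thresholds are arbitrary example values.

def guard_action(distance_to_player: float, health: int) -> str:
    if health < 20:
        return "flee"
    if distance_to_player < 5:
        return "attack"
    if distance_to_player < 15:
        return "chase"
    return "patrol"
```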
This reminds me of Wolfenstein 3D's enemies. I read the amazing "Game Engine Black Book" (well, both of them) and it mentions how the enemies are really simple and just walk towards the player while shooting. But by having activation zones that switch specific enemies on when entered - and placing enemies down occluded corridors - John Carmack was able to give the feeling of an ambush. As you walk towards a 4-way intersection suddenly you'll activate 2 guards down the perpendicular corridor and they pop out at the same time. It felt very intelligent at the time, but really it's no more complicated than a trip-wire dumping a bucket of water on you.
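The activation-zone trick described above can be sketched as a lookup: enemies sit dormant until the player's position falls inside a trigger region, and then every enemy wired to that region wakes at once. The zone coordinates and enemy names here are invented for illustration:

```python
# Sketch of the Wolfenstein-style ambush trick: dormant enemies wired to
# a trigger zone all wake when the player steps into it. Coordinates and
# enemy names are invented.

TRIGGER_ZONES = {
    # zone id -> (x_min, x_max, y_min, y_max)
    "intersection": (10, 12, 0, 20),
}

AMBUSH_GROUPS = {
    # zone id -> enemies that zone activates
    "intersection": ["guard_left_corridor", "guard_right_corridor"],
}

def enemies_activated(player_x: float, player_y: float) -> list:
    """Return every enemy woken by the zones the player stands in."""
    woken = []
    for zone, (x0, x1, y0, y1) in TRIGGER_ZONES.items():
        if x0 <= player_x <= x1 and y0 <= player_y <= y1:
            woken.extend(AMBUSH_GROUPS[zone])
    return woken
```

Like the trip-wire analogy: the "intelligence" is entirely in where the level designer placed the zone, not in the enemies.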
Trip-wires, the ultimate AI
(Sorry, I had to! Feel free to mark this as noise)
Don't you mean that they're CI, not AI? Or did the terminology change much in the past 20 years? Because that's how I originally learned it: we can, and have long been able to, do CI, but we're struggling to even approach GI.
I'm having a hard time finding usage of the term "ACI" that's more than a few months old. It's being used to differentiate current gen systems from older, more limited ones, while also distinguishing it from AGI. If there's an older usage, it's been drowned out in my searches by the current buzz.
As someone who uses an LLM as a programming assistant daily, LLMs are more powerful than you're making them out to be.
LLMs do not have sapience or personhood, but they are capable of independent reasoning. Every day I take original code I've written (which could not have been in its training dataset), pass it to my AI assistant, and ask it to fix an error I'm seeing. Very often it's able to identify the source of the error, explain what I'm doing wrong, and suggest a fix that's customized to my codebase and coding style. It can read the code of a library that was released after its knowledge cutoff, explain it, and suggest how to integrate it into my private codebase. I can ask it how to achieve a goal in my program, and it will come up with a plan for how to achieve that goal.
If that's not understanding, if it's not reasoning, then it's close enough as makes no difference.
Hrm, interesting. Might I ask which assistant and which language that is for?
I tried ChatGPT 3.5 and 4, and IntelliJ's AI Assistant, for Java.
My results were:
I was super disappointed by that use case in particular, especially with IntelliJ's assistant as that's hyper-specific to the Java/Kotlin use case. And yet all it did was convince me thoroughly that if I ever want to ditch Lombok fully, it has me covered.
I use Cursor, a fork of VSCode, with GPT-4 via an OpenAI API key, on a TypeScript project.
LLMs have relatively short context windows, so Cursor provides lots of good tools for giving GPT-4 the right context in which to do useful work.
I will say that while GPT-4 is low-to-medium-level competent at almost everything, it's not an expert at anything. If you've been focused on the same codebase for years, GPT may not have much to offer you. GPT is at its strongest when you're learning something new. In particular it's good at getting you past the stage where you don't know what questions to ask. When I was learning programming, I would frequently get stuck in situations where Google and StackOverflow were unhelpful, but a human tutor could recognize the problem and get me unstuck quickly; GPT-4 provides a tutor who's always instantly on call.
If you've been wanting to pick up skills in a new language or library, I highly recommend downloading Cursor and using GPT-4 to help you learn. GPT-4 makes learning new things much easier; you might be able to start and finish a learning project much faster than you expect.
The LLMs matching customer queries to FAQ answers are pretty great and effective, though. Better than traditional keyword matching bots. Which is how all of these should work if the companies want to maximize utility and control the outputs.
Yeah, these can be great for improving how we interpret "loose" search queries. But they should not recombine answers beyond links to the existing answers, basically. Or only in very tightly controlled environments.
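The "only link to existing answers" design suggested above can be sketched as pure retrieval: match the query against vetted FAQ entries and return a link, never generated text. The FAQ entries and URLs below are invented, and the word-overlap similarity is a crude stand-in for a real embedding model:

```python
# Sketch of a retrieval-only FAQ bot: it can only point at existing,
# vetted answers, never recombine or generate text. Entries and URLs
# are invented; word overlap stands in for semantic similarity.

FAQ = [
    ("How do I get a bereavement fare refund?", "https://example.com/faq/bereavement"),
    ("How do I change or cancel a booking?", "https://example.com/faq/booking"),
    ("What is the checked baggage allowance?", "https://example.com/faq/baggage"),
]

def similarity(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity: shared-word ratio."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def best_link(query: str) -> str:
    """Return the URL of the closest FAQ entry; the bot outputs links only."""
    question, url = max(FAQ, key=lambda qa: similarity(query, qa[0]))
    return url
```

Because the output space is a fixed set of approved URLs, the company fully controls what customers can be told — exactly the property Air Canada's free-form chatbot lacked.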
If anything, improvements to AI systems would make this legal argument less cogent, since it would be more analogous to a human customer service employee.
Air Canada's argument was even worse than the article states, according to the judge's linked decision.
So they don't think they would have been responsible for what a human "representative," presumably someone on their payroll, would have said?
also
They made arguments without submitting the relevant evidence or explanations needed. Even without the AI part, this seems like someone dropped the ball.
I've been wondering about this for some time, because I've noticed some companies have salespeople who will say anything to make the sale, even when it turns out not to be true.
Sometimes the incompetent execution of a concept can be very helpful to illustrate the intent behind it, so you can recognise it when it's better disguised.
"We're not responsible for our algorithm when it gives wrong information to customers" was obviously never going to fool anyone, but "we're not responsible for our algorithm when it disproportionately denies loan approval to particular groups" does fool some people. What they both stand to gain is a sacrificial lamb, a machine eternally occupying Barney Stinson's P.L.E.A.S.E. role: someone they can assign any risky or illegal task to, who can be blamed later, thereby protecting the company. Not that the illegal conduct is the point per se, but when regulations conflict with profit it's in their interest to do due diligence badly, fudge legal boundaries as much as they can get away with, and let as many cases slip through the cracks as they can. AI helps them fudge further and insulate against the consequences.
I expect countless more examples will come along over the next few years.
"We didn't fail to moderate illegal hate speech, it was our ChatGPT based auto-mod!"
"We didn't fail to screen for money laundering, it was our anti-crime AI!"
"We didn't discriminate against disabled people in our hiring practices, it was our 'personal values online questionnaire'!"
I think the term you're looking for is "moral crumple zone", which is used for self-driving cars, referring to how in an emergency the car hands control back to the "driver" one second before the crash. That's well under human reaction time and does nothing other than give the corporation a potential shield from legal liability, since they can honestly claim their software wasn't driving at the moment of the crash.
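The timing problem behind the moral crumple zone is simple arithmetic. A surprised driver needs on the order of 1.5 seconds or more to perceive, decide, and act, so a one-second handover can never result in a real intervention; both numbers below are rough illustrative figures, not measured data:

```python
# Toy illustration of the moral-crumple-zone timing problem. Both
# constants are rough illustrative figures, not measured data.

HANDOVER_BEFORE_IMPACT_S = 1.0   # when control is given back
HUMAN_TAKEOVER_TIME_S = 1.5      # perceive + decide + act, surprised driver

def human_can_intervene(handover_s: float, reaction_s: float) -> bool:
    """The human only has a real chance if the handover precedes impact
    by more than their total takeover time."""
    return handover_s > reaction_s
```

With these numbers the check is always False: the handover transfers blame, not control.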
Apologies to Canadians, but I don’t know much about public records or legal research in your jurisdiction and your “articling” process is confusing to me.
The linked article is terribly light on details so this is at best a guess, but the only reasonable legal argument I can think of that could be reported thusly is:
It might be better to read the decision directly if the article's reporting on it isn't as helpful as you'd like.
Thanks for the link. It’s possible the filings would change my view but this reads like really bad lawyering on Air Canada’s side.
Using the terms of the contract with the customer as a defense without submitting the contract into evidence? Bad.
Not enforcing that indemnification clause in the contract they have with whoever their chatbot is from? Bad.
Possibly not having an indemnification clause in that contract? Worse.
Treating a product like a person and forgetting about whatever Canada’s version of respondeat superior is? Hilariously bad.
Right? Reading through it, I couldn't figure out if I was much more ignorant of practicing law than I thought I was, or if they hired the first person off the tarmac they saw to do their legal defense.
So much of this just strikes me as the end result of non-stop glad-handing toward executive whims, and their actual legal team knew they had nothing usable at all because no due diligence was done in the first place. Heads should roll over something this poorly done, but considering how AC has been run, they'll probably shrug and then do it again.
Oh phew.
I misread the Tildes title and thought "Air Canada successfully sued" meant AC was the one who sued, successfully, a customer.
But it is the reverse: AC found to be at fault