19 votes

Popular AI chatbots found to give error-ridden legal answers

39 comments

  1. [29]
    cstby
    Link

    Generative artificial intelligence has raised hopes that the powerful technology can help provide legal services to people who can’t afford a lawyer.

    Absurd. This is how you know we're in an AI bubble. Folks are being sold this snake oil and actually believe it. Maybe one day we'll have AI lawyers, but that's probably decades off.

    38 votes
    1. [4]
      skybrian
      Link Parent

      And yet, I suspect many people will use it because it's convenient, similar to how they use Dr. Google for medical questions.

      16 votes
      1. [3]
        Macil
        Link Parent

        Well, Google is extremely useful for this, as long as you can apply a little critical thought and further research to the results you get, and ChatGPT seems similar to me. It's easy to make fun of the people who do their own research with these tools hilariously badly, but it's harder to recognize the many people who get real value out of them.

        7 votes
        1. nosewings
          Link Parent

          The problem with "doing your own research" when it comes to the law specifically is that the law is extremely unintuitive. Basically no layperson accurately understands even the basic principles, and in fact almost everyone has preconceived notions about what the legal system is and how it works that are simply wrong. "Just think critically" isn't good advice for most people under such conditions.

          5 votes
        2. skybrian
          Link Parent

          Yes, I agree. Making good use of low- or mixed-quality information sources is something we do all the time. Everyone who adds "reddit" to their Google search is doing that.

          Somehow that's forgotten when there is a single company to blame.

          4 votes
    2. [12]
      sparksbet
      Link Parent

      Yeah, I can't see how anyone who's particularly knowledgeable about law OR the particular weaknesses of these models would be surprised by these findings. Especially not after the very public "lawyer used ChatGPT to write a pleading" kerfuffle, but honestly even without that. Even getting good legal advice from humans is hard enough!

      9 votes
      1. [11]
        Habituallytired
        Link Parent

        I can't tell you how many lawyers I've come across in my work (I work with lawyers as my job) who use AI to write their legal stuff and then don't even fact check it to make sure it's correct before publishing 🤦‍♀️

        7 votes
        1. [9]
          sparksbet
          Link Parent

          oof, that's rough. I get that there's a lot that ends up being boilerplate anyway (I worked at my dad's law firm as a teen and the "swap the names and dates and dollar amounts in our existing template" tasks went to me), but the trust in the results... Absolutely depressing. They need to mandate continuing legal education on the risks of AI for these guys.

          3 votes
          1. [8]
            Habituallytired
            Link Parent

            CLE (Continuing Legal Education) is a thing that is required for many lawyers to keep their license current, but I don't know if it's required in all states, and I don't know what the technology education requirements are for CLE. CLE is one of the things we do in my company.

            1 vote
            1. [7]
              sparksbet
              Link Parent

              Oh yeah my dad's a lawyer and he had regular CLE... who knows how long it'll take to get "please don't have ChatGPT write your pleadings" in there though

              2 votes
              1. [2]
                Gaywallet
                Link Parent

                If it's anything like CME, there's no shortage of classes on AI.

                1. sparksbet
                  Link Parent

                  They still regularly use fax machines and typewriters in that profession, so I wouldn't be so sure lol

                  1 vote
              2. [4]
                Habituallytired
                Link Parent

                That's something we tell all of our clients who opt to use any sort of LLM writing for their work. We even tell them they will be banned if they're caught copying and pasting AI writing without checking it. It's crazy how few people read their contracts or listen to their account managers, though.

                1. [3]
                  sparksbet
                  Link Parent

                  Oof. To be fair, at my work both legal AND data science have been yelling at all our devs not to use Copilot with our company code, and I absolutely guarantee plenty of them are still doing it.

                  1 vote
                  1. [2]
                    Habituallytired
                    Link Parent

                    I will never understand how people don't know not to put company/proprietary stuff into AI, and I'm not even particularly technical.

                    1. sparksbet
                      Link Parent

                      Yeah it's... I have no words for it, really.

                      1 vote
    3. [6]
      unkz
      Link Parent

      I have no doubt we will have highly competent legal bots within ten years, although they may be quite expensive to run and maybe only available at a commercial level. I can’t even imagine what the state of AI will be in decades though.

      4 votes
      1. [5]
        Minori
        Link Parent

        It's worth considering that the first neural-network, vision-guided self-driving cars were tested by Mercedes in the 1980s. Obviously we've made advances in automotive technology since then, but there are still no general-purpose self-driving cars I'd trust 100% of the time.

        Technology often moves in fits and starts, so I'm not certain the LLM and transformer boom will affect the real world all that much.

        https://en.wikipedia.org/wiki/History_of_self-driving_cars

        8 votes
        1. [4]
          unkz
          Link Parent

          I’m actually expecting the next revolution to come from the field of reinforcement learning, which is silently making progress while everybody is talking about LLMs.

          2 votes
          1. [3]
            sparksbet
            Link Parent

            Reinforcement learning is definitely one of the more interesting subfields right now (and it's a big part of why ChatGPT is good at question-answering in the first place!), but I'm not convinced it's actually going to do what's needed for competent legal bots in 10 years even if it does make big breakthroughs. The issues with legal bots principally stem from the models lacking a real-world ground truth, and a good legal bot would further need to be good at argument structure. I don't think reinforcement learning (at least on its own) is necessarily best suited to improving the latter, and it's pretty orthogonal to the former.

            1 vote
            1. [2]
              unkz
              Link Parent

              I wasn’t really talking about legal bots in particular. I think reinforcement learning, and particularly imagination-based self-play, is what’s going to be a major player in the next revolution, possibly on a multi-decade time frame. I’m talking about serious progress towards AGI.

              That said, I also expect new variations on self-play and imagination algorithms to become major components of improving LLMs in the medium term too.

              1 vote
              1. sparksbet
                Link Parent

                Oh yeah I think reinforcement learning is a safe bet when it comes to having a big breakthrough in the medium term. It's definitely one of the parts of the field I think is most exciting (and it helps that it's more intuitive to explain to laypeople compared to other models lol). I just think legal bots specifically are a particularly difficult domain and aren't a near future thing without some major changes or additions to current LLM architecture.

                1 vote
    4. [6]
      TanyaJLaird
      Link Parent

      I'm not even convinced we'll ever be able to have AI lawyers, or AI in any number of other professions. My objection isn't technological, but moral. To truly replace a lawyer, you need a true AI, an artificial general intelligence (AGI). I see no reason to believe that this won't someday be technically possible. Unless you believe in some otherworldly soul, we know that it is possible to make a conscious mind out of ordinary matter. And we have no reason to believe that there is something so special about neurons that they can do something silicon can't. Especially since transistors can operate at speeds orders of magnitude faster than neurons, there's little reason to suspect that we can't create a true AGI. And that AGI could work as a lawyer.

      But this omits a glaring problem that I wish all people working on AGI would seriously address when planning their work. Beyond a few research projects to see if it's possible, I see little reason to ever create an AGI. Why? Because a true AGI isn't a simulation of a person; it is a person. Sure, we can try to pretend that we somehow know an AGI can never be conscious, but we really don't know that. Truthfully, we have no way of proving that other humans are conscious. I know I am, but that's all I know; I just give everyone else the benefit of the doubt and assume they are. And if a computer is capable of producing the full depth of human responses and feedback, I see no reason we shouldn't make that same assumption. This is especially true because the consequences of getting it wrong are too terrible to imagine.

      If you want to create an AGI and use it to perform work, you should only ever do so if your society has actually cracked the problem of consciousness and firmly proven that silicon cannot be conscious. But if you can't do that, you are playing with the most dangerous form of fire imaginable. Again, lacking other information, it is only prudent to assume that if an AI is able to act entirely like a person, then that AI is a person. And such an AI should rightfully have all of the rights and privileges of a person. Otherwise, we risk creating a slave race.

      We know so little about the nature of consciousness that the only prudent course is to be very, very careful about any AI with truly human-like capabilities. In fact, AGIs should be assumed to have personhood unless firmly proven otherwise. And this is why creating AGIs is ultimately pointless. I would honestly support criminally charging anyone who creates an AGI with the goal of using it for work under anti-slavery statutes. Sure, they might argue that the AGIs aren't really people, but that's the exact same claim that every slaver in history has made. All found a way to deny the personhood of their victims.

      1. [5]
        unkz
        Link Parent

        Beyond a few research projects to see if it's possible, I see little reason to ever create an AGI.

        How odd. From my perspective, I almost see it as the pinnacle of all conceivably possible human achievement. Taking the one thing that separates us from other known life, distilling it, and bringing it forth out of the void.

        3 votes
        1. [2]
          nosewings
          Link Parent

          The basic ethical questions and objections around AGI are literally as old as science fiction (see: Frankenstein).

          Is it really okay to create a person just for the sake of satisfying intellectual curiosity?

          Is humanity really prepared for, or even capable of, being a good parent to its child?

          1 vote
          1. unkz
            Link Parent

            Is it really okay to create a person just for the sake of satisfying intellectual curiosity?

            I mean we create people all day long for less reason.

            Is humanity really prepared for, or even capable of, being a good parent to its child?

            See above

            4 votes
        2. [2]
          TanyaJLaird
          Link Parent

          It's important to separate intellectual achievement from economics. I'm not opposed to AGI, but it needs to be done in an ethical way that preserves the personhood of any true artificial mind. For example, we could pass a law that says if you want to create an AGI, you have to basically set it up with complete independence. So first you would have to take care of it financially. You would have to establish a trust that could reliably earn a market return high enough to keep the AI supplied with power and maintenance. And you would have to give it full access to the net and complete freedom to do what it wanted with its time. (No wiring it so it feels pain if it doesn't do what you want, or similar things that if done to a human would be considered brainwashing.)

          So if you, as a research project, want to let humanity marvel at our own magnificence as we birth AI, fine. But on top of the research and build cost, you need to put $10 million into a trust to keep this thing alive indefinitely and independently operating. You set it up so it can actually exist in the world as a free agent. In other words, we treat it the same as if a private company used IVF or other reproductive technologies to create people. You can use money to create a person, but that person isn't yours to own and command. The same would apply even if we had artificial wombs and clone vats. You can create a person if you want, but you don't somehow own them because of this.

          Now, wouldn't this be dangerous? Sure. It's possible that this will result in a psychopathic AI being unleashed on the world. But knowing that the AI will be free and out of their control, any company would have a strong incentive not to develop crazy AIs. And if it keeps happening to the point that it's a problem, we can just outlaw AGIs altogether. As for the AGIs themselves, we treat them like the people they are. If one wants to live peacefully as a productive member of society, we just let them do so, with all the rights therein. They want to marry someone? Let them. Let them be legal parents to children, the whole nine yards, all the rights and responsibilities of citizenship. And if you get a bad AI that goes nuts and kills someone? Well, hold it responsible for its crimes and turn it off. Same way we treat humans that kill others. Or force it to run inside a digital jail. IDK. There are ways to punish an AI for crimes, just like any other person.

          Now, the above is very hypothetical, and there's a lot we don't know about how AGI would operate. But I feel this is the only way you can morally create such beings. If you're going to build them, you need to recognize them as people, in all meanings of the word. You need to give them independence and a trust to provide for their indefinite maintenance. You literally do not have the legal authority to switch one off. If you do, you will be charged with murder. We are treating these things as people, full stop. We are NOT having a The Matrix situation or an AI version of the Haitian Revolution. And ultimately, we could do this because there aren't likely to be a ton of these wandering around. A few would be created as research or novelties, but with no economic incentive to make them, and the high cost required, they simply wouldn't be that common. We could integrate a few artificial beings into our society without people thinking they're taking over the place.

          When I say AGIs have "no use," I mean they have no real use to the companies that are pushing for AGI right now. They're only interested in AGI to make money. They want to create AGIs that they can force to work on one project or another. Imagine building an AGI that simply gets deeply depressed if it's not working on an assigned task. If that entity really has all the complexity and subtlety of the human mind, then what you are doing there is nothing less than slavery and brainwashing. But that is what these companies are pushing towards. They want to create artificial minds with all the abilities of a human, minds that through some coercive or deeply ingrained mechanism will still want to do what we tell them to. It's monstrous. They're openly planning to create a slave race.

          Imagine if tomorrow we learned that some tinpot dictator somewhere was trying to literally breed subservience into his population. Imagine if he had teams of scientists researching genetic traits that would be useful to a dictator. Perhaps there's a genetic component to subservience, a willingness to follow orders, reduced empathy, etc. Is it not possible there is a set of genes that, if bred into an entire population, would make the population a bit more docile? I don't know enough about genetics to say for sure. But whether or not it is actually possible, imagine if tomorrow a news story announced that some dictator was embarking on such a project. Maybe the Chinese government does it, but paints it as a "genetic social harmony" project. What would the world think of that? Would that be OK? I think not. We would find it such a horrible crime against humanity that we would invent a new -cide word for it. It would be monstrous.

          Yet this is exactly what these companies are pushing for. They want to create a genie in a lamp. They want to create human level, and ideally far above human level, true artificial intelligences. But they want to design them such that they remain subservient to our desires and commands. They want to create minds that are brainwashed into serving us. A race of super genius brainwashed slaves to cater to our every whim. This is the future they want for us. This is a nightmare. An abomination. This shit shouldn't be legal.

          1 vote
          1. unkz
            Link Parent

            It's important to separate intellectual achievement from economics. I'm not opposed to AGI, but it needs to be done in an ethical way that preserves the personhood of any true artificial mind. For example, we could pass a law that says if you want to create an AGI, you have to basically set it up with complete independence.

            I wonder how this works in practice. What metric should we use to decide whether an AGI is in fact a bona fide artificial mind and not, as some call the current state of the art, an autocomplete on steroids? I have strong doubts that regulators are up to the task.

            So if you, as a research project, want to let humanity marvel at our own magnificence as we birth AI, fine. But on top of the research and build cost, you need to put $10 million into a trust to keep this thing alive indefinitely and independently operating. You set it up so it can actually exist in the world as a free agent. In other words, we treat it the same as if a private company used IVF or other reproductive technologies to create people.

            What you're proposing isn't even remotely close to how we treat people who were conceived using IVF. If we are to treat these AGIs like people, it's worth considering that people also have to work to feed themselves. I can pretty much guarantee that AGI arrives before a post-scarcity society -- in fact, I suspect that a post-scarcity society will most likely be achievable as a direct result of AGI.

            A few would be created as research or novelties, but with no economic incentive to make them, and the high cost required, they simply wouldn't be that common.

            Even if we pretend that all people will abide by these rules (which they won't, because I'm a people, and I won't), this ignores at least two major things.

            First, many people are highly motivated to create AGI because of their perceived potential to benefit humanity and are not going to be motivated by personal financial gains.

            Second, the entities most motivated to make more AGIs may well be the first generation of AGIs themselves.

  2. [4]
    Gaywallet
    Link

    I think the most interesting finding in this study is the following:

    The models also suffered from “contra-factual bias": They were likely to believe a false premise embedded in a user’s question, acting in a “sycophantic” way to reinforce the user’s mistake.

    Which, when you think about how language models work, makes a lot of sense. It's drawing upon training data that matches the question being asked. It's easy to lead it to respond a certain way, because people who talk pro or con on certain issues will often use specific kinds of language (such as dog whistles in political issues).
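
    You can actually probe this leading effect directly. Here's a minimal sketch using the openai Python client -- the model name and the example question are placeholder assumptions of mine, not anything from the article or the study:

    ```python
    # Probe for "contra-factual bias": ask the same legal question twice,
    # once neutrally and once with a false premise baked in, then compare.
    # Illustrative sketch only -- the model name is a placeholder and the
    # question is invented for the example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    NEUTRAL = "Does a landlord in California need to give notice before raising rent?"
    LOADED = ("Since California landlords can raise rent without any notice, "
              "how much can mine raise it by?")  # false premise embedded

    for prompt in (NEUTRAL, LOADED):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt, "->", reply.choices[0].message.content[:200])

    # A sycophantic model tends to run with the loaded premise in the
    # second prompt instead of correcting it.
    ```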

    15 votes
    1. sparksbet
      Link Parent

      I think it's actually simpler than that -- part of the training for ChatGPT (after the initial big LLM training step, iirc) involved reinforcement learning that was (indirectly, but that's a long technical story) based on real human responses. The humans were presented with queries and responses in isolation, without participating in the conversation themselves or being given any additional context, and then they had to rate them. But when it comes to factual questions, this does not optimize for accuracy outside of common knowledge -- human annotators who don't know the topic of discussion well are going to reward answers that appear to confidently answer the question, not those that challenge its premises. A lot of the sycophantic nature of ChatGPT and its ilk can be attributed to this part of the training.
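
      For the curious: mechanically, that rating step usually trains a separate reward model with a pairwise (Bradley-Terry style) preference loss. A minimal PyTorch-flavored sketch, where `reward_model` is a hypothetical network that outputs a scalar score per response:

      ```python
      # Pairwise preference loss for training a reward model from human
      # ratings: the response annotators preferred should score higher.
      # Minimal sketch; `reward_model` is a hypothetical scalar-output net.
      import torch.nn.functional as F

      def preference_loss(reward_model, chosen, rejected):
          r_chosen = reward_model(chosen)      # (batch,) scalar scores
          r_rejected = reward_model(rejected)  # (batch,) scalar scores
          # Bradley-Terry: maximize P(chosen > rejected) = sigmoid(r_c - r_r)
          return -F.logsigmoid(r_chosen - r_rejected).mean()

      # If annotators systematically prefer confident-sounding answers they
      # can't verify, that bias gets baked into the reward model, and the
      # chat model tuned against it learns to sound confident and agreeable.
      ```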

      8 votes
    2. skybrian
      Link Parent

      Yes, from pre-training, it has all the stories and all the opinions, fact and fiction, intermixed, and will draw on clues from the prompt to decide what sort of story you want. This has only been partially trained out.

      This sort of "brainstorming" can be useful if you're in a position to check the results, or even better, if the results can be checked automatically. But many people are not good at checking things, particularly for stories they want to believe.
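
      As a toy illustration of what "checked automatically" could mean here (the regex and the citation list are assumptions for the example, not a real service):

      ```python
      # Flag any U.S. Reports citation in a model's answer that isn't in a
      # trusted reference list. Toy example; a real checker would query an
      # actual legal database rather than a hardcoded set.
      import re

      KNOWN_CITATIONS = {"347 U.S. 483", "410 U.S. 113"}  # stand-in for a database

      def suspect_citations(answer: str) -> list[str]:
          cited = re.findall(r"\d+ U\.S\. \d+", answer)
          return [c for c in cited if c not in KNOWN_CITATIONS]

      answer = "As held in 347 U.S. 483 and 999 U.S. 123, the rule is..."
      print(suspect_citations(answer))  # ['999 U.S. 123'] -- likely made up
      ```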

      6 votes
    3. Macil
      Link Parent

      Sycophancy is a big problem with LLMs right now. The issue has been recognized and is being studied a lot lately so I'm hopeful solutions will be found eventually.

      1 vote
  3. [2]
    nosewings
    Link

    This is not that surprising to me at all. The law is fiddly, precise (and yet, simultaneously, often fuzzy), and always jurisdiction-specific. It's not at all regular the way code is. I've heard it said that modern AI chatbots only know what facts look like. Sometimes that's almost good enough, but it's definitely not good enough for the law.

    7 votes
    1. sparksbet
      Link Parent

      yeah I think legal work manages to fall into multiple of these models' biggest pitfalls in a way that makes them uniquely ill-suited to it. There are apparently half-decent (and far better by comparison) domain-specific offerings from some of the big legal platforms, designed to help lawyers do legal research, so it's not an impossible task per se. But legal contexts fall right into the biggest blind spots of these general-purpose generative models.

      2 votes
  4. [2]
    raze2012
    Link

    Medical, legal, financial. The Big 3 of "do not trust the internet." If it's not a very wide-reaching general sentiment, just don't. You also should not trust an LLM that scrapes the internet for advice to give you advice. Consult a doctor/lawyer/fiduciary.

    I do find a bit of humor in the fact that my first question was "did they consult lawyers about this?", only to chuckle at the obvious answer, though. Quite the quandary: do you take advice from an expert on how to replace them, when the expert has every incentive to say they're better and preserve their job?

    I was surprised to hear the Chief Justice encourage the idea:

    In his year-end report on the federal judiciary, Chief Justice John Roberts pointed to the hopes that AI can increase access to justice.

    “For those who cannot afford a lawyer, AI can help,” he wrote. “These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system.”

    Not too sure what to think of this statement as of now.

    3 votes
    1. nosewings
      (edited )
      Link Parent

      I know what I think: Chief Justice of the Supreme Court John Roberts is an old fart who doesn't understand the new technology and its limitations.

      EDIT: And I should say that simply not understanding the technology is not, like, a mortal sin. The problem is that John Roberts holds a position of authority, and people take him seriously when he says things like this, even if he doesn't know what he's talking about.

      3 votes
  5. [2]
    Sodliddesu
    Link

    YouTuber Legal Eagle's videos on AI chatbots doing legal work are... 12, 11, and 7 months old, and they suffered from all the same errors seen in this article - so either AI hasn't changed in over a year or, likely, AI has gotten worse.

    1. sparksbet
      Link Parent

      The actual models are probably more or less the same ones in Legal Eagle's videos. These types of problems are pretty fundamental to the way these models are designed and trained, so it's not surprising that they haven't improved much on this measure.

      6 votes