25 votes

AI hallucination cases - When lawyers use hallucinated legal content

15 comments

  1. [15]
    MimicSquid
    Link

    This is an archive of cases where lawyers submitted AI hallucinations or erroneous information to the court as fact and were caught and referenced in the decision. As of posting, there are 905 verified cases, and the maintainer acknowledges that it is a work in progress to identify the scope of the problem.

    Personally, it's interesting to see exactly how broadly AI use in legal filings is occurring. This archive only references the places where AI was a) used, b) noticed as incorrect, and c) mentioned in the decision. Regardless of its efficacy, it's being used broadly. I'm not sure what my takeaway is, though? AI systems should hallucinate less or not at all? Sure, fine. People shouldn't use such ineffective tools? Ok, yeah. Companies should be held responsible for false statements made by their AI? Maybe, but that's a nightmare to determine liability and enforce it. I dunno.

    12 votes
    1. [13]
      boxer_dogs_dance
      (edited )
      Link Parent

      I have a law license. Lawyers and law firms in the US (and I believe in other common law countries as well) should most definitely be held liable for false statements of the law generated by an AI/LLM. Even when researching laws and related cases in paper books, and when using the older research databases such as Westlaw, lawyers and paralegals are ethically obligated to verify that the case or version of the statute that they cite is STILL GOOD LAW. Case rulings issued by appeals court judges change law all of the time in common law systems. Lawyers who write in a court case to claim that the law says one thing or another are ethically obligated to verify that their statement is true as of the date they wrote it/date it is filed with the court. Adding Artificial Intelligence to the mix of research tools does not change that obligation.

      Lawyers who rely solely on AI to verify their legal statements are careless and/or lazy, and what they are doing is unethical by standards that existed long before AI. This behavior is absolutely unprofessional.

      39 votes
      1. [7]
        nic
        Link Parent

        900 sounds high, but this must be out of what, well over a hundred million legal case documents a year in the USA alone, even more globally, and there must be over a million lawyers in the USA producing this content. So that's less than 0.1% of lawyers and an infinitesimally small share of documented cases. I'm frankly surprised it isn't a little higher?
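        To make the rough arithmetic above explicit, here is a toy calculation. The document and lawyer counts are the guesses from this comment, not verified figures:

        ```python
        # Back-of-envelope version of the estimate above.
        # Both denominators are assumptions, not real statistics.
        caught_cases = 900                 # verified entries in the archive
        filings_per_year = 100_000_000     # assumed annual US legal filings
        us_lawyers = 1_000_000             # assumed number of US lawyers

        share_of_filings = caught_cases / filings_per_year
        share_of_lawyers = caught_cases / us_lawyers

        print(f"{share_of_filings:.6%} of filings")   # ~0.000900%
        print(f"{share_of_lawyers:.4%} of lawyers")   # ~0.0900%
        ```

        Even doubling or halving either guess leaves both shares well under a tenth of a percent.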

        5 votes
        1. [6]
          MimicSquid
          Link Parent

          This is specifically only the ones where it's called out, and is a manually gathered list. It's very likely that it's much more common than is provable.

          9 votes
          1. [4]
            cdb
            Link Parent

            I wonder if the narrow scope of focusing only on recording the number of cases where AI is used and was wrong is ripe for a base-rate fallacy interpretation. If we see this database getting larger, one might assume this is an increasing problem. Without comparing it to the prevalence of AI use overall or comparing with the amount of incorrect references and arguments without AI use, it's hard to say if this is a problem that is getting better or worse.

            I mean, I highly suspect that AI use without proper review would result in errors, given that hallucination is a real and common thing, but I can't see this as evidence AI is causing a problem here without more information.
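            The base-rate point can be made concrete with a toy comparison. Every number below is invented; the real denominators are exactly what's unknown here:

            ```python
            # Toy illustration: a growing count of caught AI errors says
            # nothing by itself about whether AI raises the error rate.
            # All four figures are made up for illustration.
            ai_filings, ai_errors = 2_000_000, 900            # assumed AI-assisted filings / caught errors
            manual_filings, manual_errors = 98_000_000, 50_000  # assumed manual filings / citation errors

            ai_rate = ai_errors / ai_filings
            manual_rate = manual_errors / manual_filings

            # With these invented denominators, the manual rate is actually
            # higher -- change the assumptions and the comparison flips.
            print(f"AI error rate:     {ai_rate:.4%}")
            print(f"Manual error rate: {manual_rate:.4%}")
            ```

            The same raw count of 900 supports opposite conclusions depending on the unknown totals, which is the base-rate fallacy in miniature.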

            2 votes
            1. CannibalisticApple
              Link Parent

              I don't think the point is that AI itself is the problem. The real problem is the individuals who are using it without reviewing it in any shape or form, given how easy it is to fact check whether the cited cases or statutes... well, exist. It's more a symptom of people being lazy or using a tool they don't understand, neither of which reflects well.

              In the legal field, you really want your lawyers to know exactly what they're doing and talking about. If they can't even do a basic fact check on the documents they submit, that calls all their other knowledge into question.

              10 votes
            2. [2]
              MimicSquid
              Link Parent

              I think it's clear that it's causing a problem; you can see the evidence of that. What's not clear is if the errors seen here are greater than the overall benefit of the tool, and I have no idea how to confirm or disprove that.

              2 votes
              1. cdb
                (edited )
                Link Parent

                I think it depends on what you mean by "causing a problem." Is it a new potential source of non-existent citations? Yes. Is it a problem in that it's increasing the error rate overall? Need more data to say.

                The main thing I'm trying to say is that without knowing the base rate, the fact that this list exists, or that it keeps getting longer, doesn't mean the situation is getting worse. Like you say, the overall benefit can't really be determined from looking at this list. But the way human brains work, reporting "bad list has many items" creates a narrative and perception that things are getting worse.

                I also see the potential for bias here. This site links to a paid service for verifying citations. There's an incentive to magnify the perception that this is a serious issue.

                2 votes
          2. nic
            Link Parent

            Legal arguments face intense adversarial scrutiny. When hallucinations are discovered, the judge is going to document this, as it is a serious charge of misconduct against the attorney.

            This list is effectively an advertisement for a product that automatically scans for invalid references to case law. It's for lawyers to make sure they didn't include any hallucinations in their citations. If the author can productize that, they can automatically scan for findings of misconduct.

            So while the list was manually reviewed (because lawyers are usually careful about this stuff) it almost certainly wasn't manually created.

            They're even picking up cases where people represented themselves and were found to have cited fake cases.

            So sure, LLM usage is surely more common, but not by orders of magnitude more, at least not for the important bits that underpin key legal decisions.

            2 votes
      2. [3]
        Wulfsta
        Link Parent

        Out of curiosity, what processes are used to verify some referenced law is true at a time of writing or submission?

        5 votes
        1. boxer_dogs_dance
          (edited )
          Link Parent

          Cases are published and indexed in books called reporters and are also collected in a couple of research databases. Both Shepard's and Westlaw keep track of the status of all cases with regard to whether they have been limited or overturned by a more recent case from a higher court. Checking this either physically or electronically is fundamental to legal research in the US.

          7 votes
        2. CannibalisticApple
          Link Parent

          Not a lawyer, but I suspect simply searching for the cited law can verify it. In all the news stories I've heard about lawyers getting in trouble for this, the false information (whether a nonexistent law or a fictitious case) could be discovered by just looking it up. A lot of the statements seem to be submitted without any proofreading or review, given how easy it is to search and confirm that the citations don't exist or are wrong.

          4 votes
      3. [2]
        patience_limited
        Link Parent

        If a client's case is prejudiced by filings containing false AI citations (I can see a judge acting out over this), is there any precedent to allow the client relief for inadequate representation or malfeasance?

        3 votes
        1. sparksbet
          Link Parent

          There have been cases where lawyers were subject to sanctions for using AI that cited non-existent cases, but I'm not (yet) aware of any suits for inadequate representation over that.

          2 votes
    2. glesica
      Link Parent

      I'm not sure what my takeaway is, though?

      Tracking people using AI badly is more a commentary on their industry or practices (in this case, lawyers) than on AI specifically. Imagine a sudden outbreak of attorneys using psychics to help write their filings: basically the same thing. It's attorneys doing dumb / unethical things and the legal profession reacting to that.

      6 votes