10 votes

AI hallucination cases - When lawyers use hallucinated legal content

6 comments

  1. [6]
    MimicSquid
    Link
    This is an archive of cases where lawyers submitted AI hallucinations or erroneous information to the court as fact and were caught and referenced in the decision. As of posting, there are 905 verified cases, and the maintainer acknowledges that identifying the full scope of the problem is a work in progress.

    Personally, I find it interesting to see just how broadly AI is being used in legal filings. This archive only references the cases where AI was a) used, b) noticed as incorrect, and c) mentioned in the decision. Regardless of its efficacy, it's being used broadly. I'm not sure what my takeaway is, though? AI systems should hallucinate less or not at all? Sure, fine. People shouldn't use such ineffective tools? Ok, yeah. Companies should be held responsible for false statements made by their AI? Maybe, but determining liability and enforcing it is a nightmare. I dunno.

    5 votes
    1. [4]
      boxer_dogs_dance
      (edited)
      Link Parent
      I have a law license. Lawyers and law firms in the US (and I believe in other common law countries as well) should most definitely be held liable for false statements of the law generated by an AI/LLM. Even when researching laws and related cases in paper books, and when using the older research databases such as Westlaw, lawyers and paralegals are ethically obligated to verify that the case or version of the statute that they cite is STILL GOOD LAW. Case rulings issued by appeals court judges change law all of the time in common law systems. Lawyers who claim in a court filing that the law says one thing or another are ethically obligated to verify that their statement is true as of the date it is written and filed with the court. Adding artificial intelligence to the mix of research tools does not change that obligation.

      Lawyers who rely solely on AI to verify their legal statements are careless and/or lazy, and what they are doing is unethical based on standards that existed long before AI. This behavior is absolutely unprofessional.

      14 votes
      1. Wulfsta
        Link Parent
        Out of curiosity, what processes are used to verify some referenced law is true at a time of writing or submission?

        1 vote
      2. [2]
        nic
        Link Parent
        900 sounds high, but this must be out of what, well over a hundred million legal case documents a year in the USA alone, and even more globally. There must be over a million lawyers in the USA producing this content, so that's less than 0.1% of lawyers and an infinitesimally small share of documented cases. I'm frankly surprised it isn't a little higher?

        1. MimicSquid
          Link Parent
          This is specifically only the ones where it's called out, and is a manually gathered list. It's very likely that it's much more common than is provable.

          3 votes
    2. glesica
      Link Parent
      > I'm not sure what my takeaway is, though?

      Tracking people using AI badly is more a commentary on their industry or practices (in this case, lawyers) than on anything to do with AI specifically. Imagine if there were a sudden outbreak of attorneys using psychics to help write their filings; it would be basically the same thing: attorneys doing dumb/unethical things and the legal profession reacting to that.

      3 votes