12 votes

Hallucination-free RAG: Making LLMs safe for healthcare

2 comments

  1. skybrian
    From the blog post:

    In this example, everything displayed with a blue background is guaranteed to be verbatim from source material. No hallucinations. LLMs remain imperfect, so the model may still choose to quote the wrong part of the source material, but only “real” quotations are displayed on blue - they are deterministically generated.

    We think Deterministic Quoting is an “enabler” for deploying LLMs where there are serious consequences to incorrect information […]

    Many LLM systems can be designed to deterministically quote. This article provides motivation and explains a basic implementation.
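
    A minimal sketch of the general idea, as I understand it (the placeholder syntax and function names below are my own invention, not the article's code): the LLM never emits quote text itself, only references to numbered fragments of the source document, and the display layer substitutes the verbatim text. A displayed quote can therefore be the wrong one, but never a fabricated one.

    ```python
    import re

    def index_source(text: str) -> dict[int, str]:
        """Split the source document into numbered fragments (here: sentences)."""
        fragments = re.split(r"(?<=[.!?])\s+", text.strip())
        return dict(enumerate(fragments))

    def render(llm_output: str, fragments: dict[int, str]) -> str:
        """Replace {{QUOTE:n}} placeholders with verbatim source text.

        Unknown IDs render as a visible error rather than letting any
        model-generated text pass itself off as a quotation.
        """
        def substitute(match: re.Match) -> str:
            frag_id = int(match.group(1))
            if frag_id not in fragments:
                return "[invalid quote reference]"
            return f'"{fragments[frag_id]}"'  # verbatim by construction

        return re.sub(r"\{\{QUOTE:(\d+)\}\}", substitute, llm_output)

    source = ("The patient was prescribed 5 mg daily. "
              "Dosage was doubled after two weeks.")
    frags = index_source(source)
    # The LLM is shown the numbered fragments and instructed to answer
    # only with placeholders, e.g.:
    answer = "Per the record, {{QUOTE:1}}"
    print(render(answer, frags))
    # -> Per the record, "Dosage was doubled after two weeks."
    ```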

    6 votes
    1. redshift
      I wonder if it will allow you to expand the context around a quote while still guaranteeing that it's from the original source. It's too easy to quote selectively, and the LLM wouldn't even know (can't know) that it's creating bias. Of course, most people won't bother to expand the context even if they have that capability, so I still see this as harmful.
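
      Mechanically it seems easy to bolt on: if the verified quote is already located as a character span in the source document, expanding it is just slicing more of that same document, which carries the same verbatim guarantee. A rough sketch (my own assumption, not anything from the article):

      ```python
      def expand_quote(source: str, start: int, end: int, margin: int = 200) -> str:
          """Widen a verified quote's [start, end) span within the original text.

          The result is a slice of the same source document, so it is verbatim
          by construction. `margin` is an arbitrary choice; the span is snapped
          outward to whitespace so words aren't cut in half.
          """
          new_start = max(0, start - margin)
          new_end = min(len(source), end + margin)
          while new_start > 0 and not source[new_start - 1].isspace():
              new_start -= 1
          while new_end < len(source) and not source[new_end].isspace():
              new_end += 1
          return source[new_start:new_end]

      note = "Patient reports mild pain. No fever observed. Discharged same day."
      quote = "No fever observed."
      start = note.index(quote)
      print(expand_quote(note, start, start + len(quote), margin=15))
      # -> reports mild pain. No fever observed. Discharged same
      ```

      That only solves the mechanics, though: the system can offer the wider context, but it can't make anyone read it.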

      3 votes