7 votes

Medicine’s AI knowledge war heats up

4 comments

  1. skybrian
    From the article:

    Before the mid-1990s, everybody had their favorite textbook – every doctor was either a Harrison’s or a Cecil person. Then, seemingly overnight, the tomes (including one I edited, Hospital Medicine) began gathering dust on shelves everywhere, as a new kind of tool easily demonstrated its superiority in point-of-care medical knowledge retrieval.

    Today, UpToDate remains an extraordinary resource, created by over 7,500 human experts charged with culling and interpreting the medical literature and guidelines to produce chapters on every conceivable clinical topic – and keeping the chapters, as the name says, up to date. From the time it emerged in the late 1990s until about 2023, it deservedly had what felt like an unshakeable position as the dominant point-of-care tool for health systems and clinicians.

    And then, before you could say Clayton Christensen, OpenEvidence displaced UpToDate – particularly among doctors-in-training, often the vanguard of tech-driven change – because it could perform a trick that UpToDate couldn’t: take an entire clinical case, in all its staggering complexity, and produce an AI-generated “curbside consult” that was impressively accurate, context-specific, and, yes, up to date. Suddenly, UpToDate’s approach, which had seemed revolutionary a generation earlier, seemed stale.

    ...

    Thus my sense of déjà vu, as OpenEvidence is currently doing to UpToDate what UpToDate did to the textbook publishers. Based on its investments to date, OpenEvidence has been valued at $6 billion; one wonders whether that valuation was based partly on the assumption that UpToDate would sit back and allow its lunch to be eaten by its upstart rival.

    If so, investors in OpenEvidence may be in for a rude awakening. In last week’s announcement, UpToDate said it would soon roll out its own AI-based tool, called UpToDate Expert AI. Note the careful branding, designed to highlight the fact that the UpToDate tool won’t scour the entire medical literature or the unfiltered internet for insights. Instead, UpToDate’s AI will draw its wisdom exclusively from its thousands of continuously updated chapters, written by experts.

    ...

    Finally, there’ll be another competitor, one that will approach decision support from a very different angle. Epic, the largest electronic health record (EHR) vendor, also announced a new set of AI tools last month. One of them, named “Art” for clinicians, is designed to review individual patients’ records as well as Epic’s database (“Cosmos”) of deidentified records on millions of patients cared for in health systems that use the company’s EHR.

    I had no idea that some doctors were using AI tools already and they're billion dollar businesses. How often do they use these tools?

    5 votes
    1. Gaywallet
      I had no idea that some doctors were using AI tools already and they're billion dollar businesses. How often do they use these tools?

      AI has been used extensively in healthcare for ages. Radiology has been leading the field for quite some time, as images are a great use case for pattern recognition. Of note, radiology has been using AI since before LLMs existed - these are not LLM-prompted models like the vast majority of mainstream "AI" is. Here's a paper from a few years ago that gives a high-level overview of some AI use cases in healthcare, which I think is an important starting point. AI is also highly important in drug discovery and in understanding genomics (protein folding being a huge example from this space).

      I think it's important to come from this viewpoint when we talk about AI in healthcare, because approximately 99% of the time we're not talking about LLMs. We're talking about pattern-recognition tools, trained on the data they are attempting to draw a pattern from. The outputs are numbers, not text; they are not prone to quite the same issues that LLMs are, and nobody confuses them with "thinking".
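      A minimal sketch of what "the outputs are numbers, not text" means in practice. Everything here is made up for illustration - the feature names, weights, and threshold are not from any real clinical model, which would learn its parameters from labeled scans:

      ```python
      import math

      # Hypothetical weights for three numeric image features
      # (e.g. lesion diameter, border irregularity, density).
      # A real model would learn these from labeled training data.
      WEIGHTS = [0.8, 1.2, -0.5]
      BIAS = -1.0

      def lesion_risk_score(features):
          """Return a logistic score in (0, 1) from numeric features.
          No prompt, no text generation - just a function from
          numbers to a number."""
          z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
          return 1.0 / (1.0 + math.exp(-z))

      score = lesion_risk_score([1.5, 0.9, 0.3])
      print(round(score, 3))  # → 0.756
      ```

      The point is the shape of the system: a fixed numeric function whose output can be validated against ground truth, rather than free text that has to be read and second-guessed.
      
      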

      With that being said, healthcare is extremely complicated, deals with every kind of data imaginable, and also involves a good amount of human interfacing, so there are LLM applications as well. For example, in our system we have an LLM specifically for drafting responses to patients in our digital health record app. It proposes a templated response in certain circumstances. In our testing and stand-up of this tool, it was incredibly important to center the physician in the interaction - it was designed to save them work and to be entirely controllable by them. Ultimately we found it saved very little time (the difference was statistically insignificant and small in magnitude), but clinicians rated it positively because it reduced the cognitive burden of writing a response.

      Another example we're trying to pilot is again a human-centered design: we're looking into whether prompted LLMs can help summarize SAFE events (narrative-heavy reports that describe issues like non-compliance with standard processes) and help standardize the documentation. A human always sees the full original report, but they are also presented with a pre-completed form where they can confirm each item is correct, or correct anything that is wrong or hallucinated.
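      A rough sketch of that confirm-or-correct loop. Everything here is hypothetical - the field names, the schema, and the stubbed model call are not from any real system; a production version would call an actual LLM with a schema-constrained prompt:

      ```python
      def llm_draft_form(report_text):
          """Stub for a prompted-LLM extraction step that drafts a
          structured form from a narrative safety report."""
          # A real implementation would call a model here.
          return {"event_type": "medication", "process_deviation": True}

      def reviewed_form(report_text, human_corrections):
          """Merge the model's draft with the reviewer's corrections.
          The reviewer sees the full original report and their
          corrections always win over the draft."""
          draft = llm_draft_form(report_text)
          return {**draft, **human_corrections}

      report = "Nurse noted dose given 2h late; barcode scan skipped."
      final = reviewed_form(report, {"event_type": "timing"})
      # The human-corrected field overrides the draft; unchallenged
      # fields pass through from the model's draft.
      ```

      The design choice this illustrates: the model only pre-fills, and nothing reaches the record without a human either confirming or overwriting each field.
      
      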

      In the non-LLM space, there's a ton of predictive modeling: estimating patient clinical complexity, identifying and fast-tracking individuals for rapid referral based on dermatology pictures when cancer is suspected, identifying and readying patients who are more likely to be discharged today, and so on. These kinds of models often predate LLMs, but they are certainly still under regular development, because they help to identify opportunity - be it through improved clinical care, improved access, reduced cost, or some other metric.

      8 votes
    2. CannibalisticApple
      A few articles have come up on here about doctors using AI to help with cancer screenings and such, so it seems to be growing fairly common. I am not involved in medicine, but given the sheer volume of medical information available, I think medicine is one of the fields where AI can be really beneficial. It's physically impossible for a single doctor to comb through all the available data when trying to come up with diagnoses or treatments, so at bare minimum AI can help point them in the right direction. This article about using AI to search existing drugs to propose new treatments for rare diseases is my favorite example.

      My main concerns personally are A) the quality of the sources used by the AI, and B) the absence of human oversight. Particularly the first point: once a model gets "contaminated" by erroneous data, it's nearly impossible to remove that. So for something as critical as medicine, it's important to closely curate the sources and training data.

      5 votes
  2. paris
    Please tag this as AI, thank you!

    1 vote