What are doctors for?
Has anyone seen AI explain electrical outlet wiring by drawing it? Or draw a car engine?
AI is not capable of many things that people think it does well.
It can be a great helping hand, but that's where it should stay - it should provide help, not decide.
And that's why we need doctors. Even they can make errors, but I'd rather have my doctor examine me than have an AI tell me there's no medical problem, or miss the point entirely.
I think that’s the New Yorker’s equivalent of a clickbait headline. The article itself has a lot more nuance.
https://archive.is/M8ayJ
From the article:
Talking to CaBot was both empowering and unnerving. I felt as though I could now receive a second opinion, in any specialty, anytime I wanted. But only with vigilance and medical training could I take full advantage of its abilities—and detect its mistakes. A.I. models can sound like Ph.D.s, even while making grade-school errors in judgment. Chatbots can’t examine patients, and they’re known to struggle with open-ended queries. Their output gets better when you emphasize what’s most important, but most people aren’t trained to sort symptoms in that way. A person with chest pain might be experiencing acid reflux, inflammation, or a heart attack; a doctor would ask whether the pain happens when they eat, when they walk, or when they’re lying in bed. If the person leans forward, does the pain worsen or lessen? Sometimes we listen for phrases that dramatically increase the odds of a particular condition. “Worst headache of my life” may mean brain hemorrhage; “curtain over my eye” suggests a retinal-artery blockage. The difference between A.I. and earlier diagnostic technologies is like the difference between a power saw and a hacksaw. But a user who’s not careful could cut off a finger.
…
Tasking an A.I. with solving a medical case makes the mistake of “starting with the end,” according to Gurpreet Dhaliwal, a physician at the University of California, San Francisco, whom the Times once described as “one of the most skillful clinical diagnosticians in practice.” In Dhaliwal’s view, doctors are better off asking A.I. for help with “wayfinding”: instead of asking what sickened a patient, a doctor could ask a model to identify trends in the patient’s trajectory, along with important details that the doctor might have missed. The model would not give the doctor orders to follow; instead, it might alert her to a recent study, propose a helpful blood test, or unearth a lab result in a decades-old medical record. Dhaliwal’s vision for medical A.I. recognizes the difference between diagnosing people and competently caring for them. “Just because you have a Japanese-English dictionary in your desk doesn’t mean you’re fluent in Japanese,” he told me.
CaBot remains experimental, but other A.I. tools are already shaping patient care. ChatGPT is blocked on my hospital’s network, but I and many of my colleagues use OpenEvidence. The platform has licensing agreements with top medical journals and says it complies with the patient-privacy law HIPAA. Each of its answers cites a set of peer-reviewed articles, sometimes including an exact figure or a verbatim quote from a relevant paper, to prevent hallucinations. When I gave OpenEvidence a recent case, it didn’t immediately try to solve the mystery but, rather, asked me a series of clarifying questions.
Penda Health, a network of medical clinics in Kenya, treats an enormous range of patients, from newborns sickened by malaria to construction workers who have fallen off buildings. Kenya has long struggled with a limited health-care infrastructure. Penda recently began using AI Consult, a tool that employs OpenAI models and runs in the background while clinicians record medical histories, order tests, and prescribe medicines. A clinician who overlooks a patient’s anemia would get an alert to consider ordering an iron test; another, treating a child with diarrhea, might be advised to forgo antibiotics in favor of an oral rehydration solution and zinc supplements.
An evaluation of the program, which was conducted in collaboration with OpenAI and has not been peer-reviewed, reported that clinicians who used AI Consult made sixteen per cent fewer diagnostic errors and thirteen per cent fewer treatment errors. They seemed to learn from the program: the number of safety alerts dropped significantly over time. AI Consult made mistakes; in testing, it confused a cough syrup with a similarly named antibiotic. The absolute number of medical errors at Penda also remained high—at times because clinicians ignored the model’s advice. “They know that this patient doesn’t necessarily need an antibiotic, but they also know that the patient really wants it,” Robert Korom, Penda’s chief medical officer, said. Still, a Penda clinician deemed the program a “tremendous improvement.” Its success may have come from its focus not on diagnosis but on helping clinicians navigate the possibilities.
A similar principle could guide patients. If A.I. tools continue to misdiagnose and hallucinate, we might not want them to diagnose us at all. Yet we could ask them to rate the urgency of our symptoms, and to list the range of conditions that could explain them, with some sense of which ones are most likely. A patient could inquire about “red-flag symptoms”—warning signs that would indicate a more serious condition—and about which trusted sources the A.I. is drawing on. A chatbot that gets details wrong could still help you consider what to ask at your next appointment. And it could aid you in decoding your doctor’s advice.
…