From the article:
Talking to CaBot was both empowering and unnerving. I felt as though I could now receive a second opinion, in any specialty, anytime I wanted. But only with vigilance and medical training could I take full advantage of its abilities—and detect its mistakes. A.I. models can sound like Ph.D.s, even while making grade-school errors in judgment. Chatbots can’t examine patients, and they’re known to struggle with open-ended queries. Their output gets better when you emphasize what’s most important, but most people aren’t trained to sort symptoms in that way. A person with chest pain might be experiencing acid reflux, inflammation, or a heart attack; a doctor would ask whether the pain happens when they eat, when they walk, or when they’re lying in bed. If the person leans forward, does the pain worsen or lessen? Sometimes we listen for phrases that dramatically increase the odds of a particular condition. “Worst headache of my life” may mean brain hemorrhage; “curtain over my eye” suggests a retinal-artery blockage. The difference between A.I. and earlier diagnostic technologies is like the difference between a power saw and a hacksaw. But a user who’s not careful could cut off a finger.
…
Tasking an A.I. with solving a medical case makes the mistake of “starting with the end,” according to Gurpreet Dhaliwal, a physician at the University of California, San Francisco, whom the Times once described as “one of the most skillful clinical diagnosticians in practice.” In Dhaliwal’s view, doctors are better off asking A.I. for help with “wayfinding”: instead of asking what sickened a patient, a doctor could ask a model to identify trends in the patient’s trajectory, along with important details that the doctor might have missed. The model would not give the doctor orders to follow; instead, it might alert her to a recent study, propose a helpful blood test, or unearth a lab result in a decades-old medical record. Dhaliwal’s vision for medical A.I. recognizes the difference between diagnosing people and competently caring for them. “Just because you have a Japanese-English dictionary in your desk doesn’t mean you’re fluent in Japanese,” he told me.
CaBot remains experimental, but other A.I. tools are already shaping patient care. ChatGPT is blocked on my hospital’s network, but I and many of my colleagues use OpenEvidence. The platform has licensing agreements with top medical journals and says it complies with the patient-privacy law HIPAA. Each of its answers cites a set of peer-reviewed articles, sometimes including an exact figure or a verbatim quote from a relevant paper, to prevent hallucinations. When I gave OpenEvidence a recent case, it didn’t immediately try to solve the mystery but, rather, asked me a series of clarifying questions.
Penda Health, a network of medical clinics in Kenya, treats an enormous range of patients, from newborns sickened by malaria to construction workers who have fallen off buildings. Kenya has long struggled with a limited health-care infrastructure. Penda recently began using AI Consult, a tool that employs OpenAI models and runs in the background while clinicians record medical histories, order tests, and prescribe medicines. A clinician who overlooks a patient’s anemia would get an alert to consider ordering an iron test; another, treating a child with diarrhea, might be advised to forgo antibiotics in favor of an oral rehydration solution and zinc supplements.
An evaluation of the program, which was conducted in collaboration with OpenAI and has not been peer-reviewed, reported that clinicians who used AI Consult made sixteen per cent fewer diagnostic errors and thirteen per cent fewer treatment errors. They seemed to learn from the program: the number of safety alerts dropped significantly over time. AI Consult made mistakes; in testing, it confused a cough syrup for an antibiotic of a similar name. The absolute number of medical errors at Penda also remained high—at times because clinicians ignored the model’s advice. “They know that this patient doesn’t necessarily need an antibiotic, but they also know that the patient really wants it,” Robert Korom, Penda’s chief medical officer, said. Still, a Penda clinician deemed the program a “tremendous improvement.” Its success may have come from its focus not on diagnosis but on helping clinicians navigate the possibilities.
A similar principle could guide patients. If A.I. tools continue to misdiagnose and hallucinate, we might not want them to diagnose us at all. Yet we could ask them to rate the urgency of our symptoms, and to list the range of conditions that could explain them, with some sense of which ones are most likely. A patient could inquire about “red-flag symptoms”—warning signs that would indicate a more serious condition—and about which trusted sources the A.I. is drawing on. A chatbot that gets details wrong could still help you consider what to ask at your next appointment. And it could aid you in decoding your doctor’s advice.
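To make the last paragraph of the excerpt concrete, here is a minimal, hypothetical sketch of how such a patient-side query could be structured: ask for an urgency rating, a ranked list of possible conditions, red-flag symptoms, and sources, rather than a single diagnosis. Nothing here comes from the article; the prompt wording, the JSON field names, and the example symptoms are assumptions, and the actual chatbot call is left out.

```python
import json

# Hypothetical structure for a "wayfinding"-style query: instead of asking
# "what do I have?", ask for urgency, a ranked differential, red flags, and sources.
TRIAGE_PROMPT_TEMPLATE = """You are assisting a patient, not diagnosing them.
Symptoms: {symptoms}
Relevant history: {history}

Reply in JSON with exactly these keys:
- "urgency": one of "self-care", "see a doctor this week", "urgent care today", "emergency now"
- "possible_conditions": list of {{"name", "likelihood"}} objects, most likely first
- "red_flag_symptoms": warning signs that should prompt immediate care
- "questions_for_doctor": things worth raising at the next appointment
- "sources": the trusted references the answer draws on
"""

EXPECTED_KEYS = {
    "urgency", "possible_conditions", "red_flag_symptoms",
    "questions_for_doctor", "sources",
}


def build_prompt(symptoms: str, history: str) -> str:
    """Fill in the template with the patient's own description."""
    return TRIAGE_PROMPT_TEMPLATE.format(symptoms=symptoms, history=history)


def validate_reply(raw_reply: str) -> dict:
    """Check that a chatbot's reply has the structure we asked for.

    This is the 'sophisticated user' step: if the model skipped the
    red flags or cited no sources, treat the answer as incomplete.
    """
    reply = json.loads(raw_reply)
    missing = EXPECTED_KEYS - reply.keys()
    if missing:
        raise ValueError(f"Reply is missing fields: {sorted(missing)}")
    return reply


if __name__ == "__main__":
    prompt = build_prompt(
        symptoms="chest pain when lying down, relieved by leaning forward",
        history="no known heart disease; recent viral illness",
    )
    print(prompt)  # this text would be sent to whichever chatbot you use
```

The validation step is the point: if the reply skips the red flags or cites no sources, a careful user treats it as incomplete rather than as an answer.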
A clinician who overlooks a patient’s anemia would get an alert
To me this is the best case - basically code review for doctors. Our AI code review is so-so. It sometimes catches a real issue, but if there isn't anything real, it is usually reaching or posting comments that don't make sense in context (like multithreading issues in lambdas). IMO, it can augment but should not replace anything a doctor is doing.
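Since the comparison to advisory code review comes up here, a rough sketch of what an "augment, don't replace" alert could look like, using the two examples from the excerpt. This is only an illustration: the real AI Consult runs on OpenAI models rather than hand-written rules, and every field name below is invented.

```python
from dataclasses import dataclass, field


@dataclass
class Visit:
    """Hypothetical, simplified record of a single clinic visit."""
    age_years: float
    diagnoses: set[str] = field(default_factory=set)
    tests_ordered: set[str] = field(default_factory=set)
    prescriptions: set[str] = field(default_factory=set)


def advisory_alerts(visit: Visit) -> list[str]:
    """Return suggestions only; never modify the clinician's orders.

    Mirrors the two examples from the excerpt: an overlooked iron study
    for anemia, and antibiotics for a child with simple diarrhea.
    """
    alerts = []
    if "anemia" in visit.diagnoses and "iron studies" not in visit.tests_ordered:
        alerts.append("Anemia noted but no iron studies ordered; consider adding them.")
    if (
        "diarrhea" in visit.diagnoses
        and visit.age_years < 5
        and any("antibiotic" in rx for rx in visit.prescriptions)
    ):
        alerts.append(
            "Uncomplicated diarrhea in a young child: consider oral rehydration "
            "solution and zinc instead of antibiotics."
        )
    return alerts  # the clinician decides what, if anything, to do with these


if __name__ == "__main__":
    visit = Visit(
        age_years=3,
        diagnoses={"diarrhea"},
        prescriptions={"antibiotic: amoxicillin"},
    )
    for alert in advisory_alerts(visit):
        print("ALERT:", alert)
```

As with an AI review bot, the output is a comment, not a veto; the excerpt notes that errors at Penda stayed high partly because clinicians were free to ignore the advice.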
I found this an interesting note in the article. It is one of those things that kind of seems obvious, but is also just interesting to read about.
Earlier this year, a poison-control center in Arizona reported a drop in total call volume but a rise in severely poisoned patients. The center’s director suggested that A.I. tools might have steered people away from medical attention.
What are doctors for?
Has anyone seen AI explain electrical outlet wiring by drawing it? Or draw a car engine?
AI is not capable of many of the things people think it does well.
It can be a great helping hand, but that's where it should stay: it should provide help, not decide.
And that's why we need doctors. Even they can make errors, but I'd rather have my doctor examine me than an AI telling me there is no medical problem, or missing the point entirely.
I think that’s the New Yorker’s equivalent of a clickbait headline. The article itself has a lot more nuance.
I think it's really unfair that people compare AI to incredibly expensive professionals, instead of to some random person off Fiverr. An AI can very cheaply diagnose you, but so can Some Guy On Fiverr. The AI will probably screw up and lie about basic, verifiable facts, but so will Some Guy On Fiverr.
I understand the sentiment, but the "AI" they use and train for this hyper-specific pattern recognition is not your run-of-the-mill LLM à la GPT. These systems are far more capable of sifting through thousands of images and recognizing when one matches a certain disease pattern. Doctors then verify whether the system is correct, because false positives happen.
Even with false positives, these systems are far less likely to just automagically come up with nonsense.
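A small sketch of the workflow described in this comment, with the imaging model reduced to a score per scan: anything above a threshold is queued for a doctor to confirm, and nothing is treated as a diagnosis on the model's say-so. The scores and the threshold are made up for illustration.

```python
# Hypothetical scores from an imaging model: probability that a scan
# matches a disease pattern. In practice these would come from a trained
# classifier, not a hard-coded dict.
scan_scores = {
    "scan_001": 0.03,
    "scan_002": 0.91,
    "scan_003": 0.47,
    "scan_004": 0.88,
}

REVIEW_THRESHOLD = 0.5  # tuned to catch true positives at the cost of false ones


def flag_for_review(scores: dict[str, float], threshold: float) -> list[str]:
    """Return the scans a doctor should look at, highest score first."""
    flagged = [scan for scan, p in scores.items() if p >= threshold]
    return sorted(flagged, key=scores.get, reverse=True)


if __name__ == "__main__":
    for scan in flag_for_review(scan_scores, REVIEW_THRESHOLD):
        print(f"{scan}: score {scan_scores[scan]:.2f} -> send to radiologist")
```

The threshold is deliberately permissive: a false positive costs a doctor a second look, while a false negative costs a missed disease.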
It's actually more about the users than about AI or that guy from Fiverr. People are lazy and they often don't fact-check; they just accept the first thing they see as the ultimate truth. Not everyone, of course, but many people are like that.
AI isn't inherently bad, but users should approach it with a bit of scepticism. It can do programming, it can do homework, it can write an essay for you, but that doesn't mean it does any of it perfectly. Yet that is the thing many people don't understand about it.
I don’t know much about Fiverr. Do people really use it for medical advice, or is that a whimsical comparison?
I think it’s more obviously relevant to compare AI to professionals if you would normally go to a professional for advice. It seems relevant that the AI is far cheaper and easier to access. But also, the results are decidedly mixed and you need to be a sophisticated user to interpret them.
Unfortunately, in the US at least, sometimes you need to be a sophisticated user of medical services to get the most out of them, too. (Apparently there are professional patient advocates, or more often a close relative will do this unpaid.)
Another relevant comparison might be between using AI and doing an Internet search and reading articles from the Mayo clinic or similar places.
It's definitely a whimsical comparison. It feels appropriate given how people talk about how much AI reduces costs, though.
I'm still waiting for the articles about how stockholders are looking at A.I. to replace C.E.O.s for the huge cost savings.
Someone should do it, that would be hilarious!
Had I the power to do so, it would already be done. :-)
Maybe a better writer could publish an article describing such a possibility.
The article could examine what CEOs actually do and what A.I. could replace. Perhaps the stockholders could hire a vibe CEO at a fraction of the cost: someone to type the questions about decisions in.
"Hey ChatGPT, how should I reorg my company?"
There you go, AI CEO!
Well, confirmed that we’re still around peak hype. I’d think the New Yorker would write better headlines at least. The attention baiting is gross. Is it so hard to choose something moderate like ‘Reexamining the role of doctors in the age of AI diagnosis’?
https://archive.is/M8ayJ