AI wants to please. How you phrase your questions, and the sequence of questions you ask, can steer it toward certain conclusions, and those conclusions can be wrong. In this case, the patient paid for it with his life.
A recent New York Times article tells one of the saddest stories I've read in a long time — and I think it's an incredibly important one to discuss.
It's about Joe Riley, a man diagnosed with a type of leukemia who ultimately chose not to pursue the treatment his oncologist recommended — because of information he had been given by an AI tool (Perplexity). He later passed away.
This is his son Ben's story as much as it is his father's, and it raises questions every patient and every clinician needs to sit with.
Joe was diagnosed with leukemia. His oncologist recommended a treatment plan.
Joe, however, had generated a Perplexity report about his condition, and that report contradicted his oncologist's recommendation while citing real, published research papers to support its conclusions.
Ben did something most family members would never think to do: he contacted two of the authors of the papers that the Perplexity report was citing.
He sent them the AI-generated report and essentially asked: "Your research is being used to tell my dad he shouldn't do what his oncologist is recommending. Is that actually what your research shows?"
The authors' response was striking. Both of them said essentially:
"That is absolutely not what my research should lead you to conclude. Your father should listen to his oncologist."
Even with that direct pushback from the very authors whose papers the AI had quoted, Ben's father was not persuaded. He did not pursue treatment, and he passed away.
Near the end of the article, Ben essentially asks his father: "Your oncologist disagrees with the report. The two researchers cited in the report disagree with the report. Do you really believe you know more than these experts because of this Perplexity output?"
His father said yes.
This isn't a case of a confused patient who couldn't find reliable information. Ben's dad had his oncologist's recommendation, had looked at the source papers themselves, and had heard directly from the very researchers the AI cited.
And the AI report still won. That's what makes this story so hard.
I want to say this clearly: AI is a wonderful tool. It has given patients access to information in ways we've never had before. I use it. My patients use it. That's not going away.
But I want to give everyone pause when using AI to diagnose or treat yourself, because:
These tools are built to be helpful and agreeable. The way you phrase your questions, the framing you use, the follow-up prompts — all of those nudge the AI toward certain conclusions.
If you are already skeptical of the medical system (as many patients understandably are in the US), AI can easily end up validating that skepticism and telling you exactly what you already want to believe.
That's what appears to have happened here.
I see this all the time in my own practice: patients coming in with confident, incorrect AI-generated explanations of their own anatomy, their own diagnoses, their own surgical options. AI hallucinates. It misinterprets. It draws connections that aren't actually in the data.
Even medically trained AI tools like Open Evidence, which do have access to real medical literature, are only as good as the literature they draw on and the way authors characterize their own findings.
Authors misrepresent their own studies all the time. AI cannot reliably catch this. Neither can a layperson reading the summary.
Reportedly, Ben's dad did look at the source papers the Perplexity report cited. But he didn't have the underlying oncology training to recognize that those papers didn't actually say what the AI report claimed.
That's the trap. The AI cited real research — it just mischaracterized what the research showed. Without the domain expertise to catch that, the citations looked legitimate and reassuring, and they reinforced a belief that was ultimately fatal.
I want to be really clear about something: I get why patients do this.
The US healthcare system is expensive, slow, and hard to access.
Add in that patients have had their concerns dismissed by doctors for decades, and you end up with a population that is (reasonably) skeptical of the medical establishment.
AI steps into that gap and offers 24/7, on-demand, confident-sounding, free second opinions.
I understand why patients are drawn to that. I just want everyone to also understand that it can be dangerous — and in this case, it was fatal.
I don't have a perfect answer here. The cat is out of the bag. A hundred different AI tools are now part of every patient's decision-making process, and that isn't going to reverse.
But when it comes to healthcare decisions, especially life-or-death ones like cancer treatment, please be extraordinarily careful about letting an AI output override the clinician actually examining you.
A few guardrails worth holding onto: bring AI outputs to your clinician rather than using them in place of one; treat citations as a starting point, not proof, because AI can cite real papers and still mischaracterize what they show; and be most skeptical when the AI confirms exactly what you already wanted to believe.
AI is an incredible tool for learning, for synthesizing information, and for helping patients feel more informed. It is not a substitute for a qualified clinician who has examined you, has your full history, and can be held accountable for the advice they give.
Joe Riley's story is devastating. I don't want it to happen again. If you or someone you love is leaning on an AI tool to make a major health decision — especially one that contradicts their doctors — please, please pause, bring the information to the clinician, and have the harder conversation in person.
And to Ben Riley: thank you for sharing your dad's story so that it might protect someone else.