When AI Cost a Patient His Life: A Tragic Story About Perplexity and Leukemia

By Dr. Kelly Killeen, MD, FACS · Board-Certified Plastic Surgeon · Published April 21, 2026

AI wants to please. The way you phrase your questions, and the sequence in which you ask them, often leads it to certain conclusions, and those conclusions can be wrong. In this case, the patient paid for it with his life.

A Tragic Story About AI, Leukemia, and a Patient Who Trusted the Wrong Expert

A recent New York Times article tells one of the saddest stories I've read in a long time — and I think it's an incredibly important one to discuss.

It's about Joe Riley, a man diagnosed with a type of leukemia who ultimately chose not to pursue the treatment his oncologist recommended — because of information he had been given by an AI tool (Perplexity). He later passed away.

This is his son Ben's story as much as it is his father's, and it raises questions every patient and every clinician needs to sit with.

What Happened

Joe was diagnosed with leukemia. His oncologist recommended a treatment plan.

Joe, however, had generated a Perplexity report about his condition. The report:

  • Gave him extensive information about his type of leukemia
  • Led him to believe he had a specific variant of the disease
  • Cited published medical research
  • Pushed him toward not pursuing the oncologist's recommended treatment

Ben's Remarkable Response

Ben did something most family members would never think to do: he contacted two of the authors of the papers that the Perplexity report was citing.

He sent them the AI-generated report and essentially asked: "Your research is being used to tell my dad he shouldn't do what his oncologist is recommending. Is that actually what your research shows?"

The authors' response was striking. Both of them said essentially:

"That is absolutely not what my research should lead you to conclude. Your father should listen to his oncologist."

Even with that direct pushback from the very authors whose papers the AI had quoted, Ben's father was not persuaded. He did not pursue treatment, and he passed away.

The Quote That Sticks With Me

Near the end of the article, Ben essentially asks his father: "Your oncologist disagrees with the report. The two researchers cited in the report disagree with the report. Do you really believe you know more than these experts because of this Perplexity output?"

His father said yes.

Why This Hits So Hard

This isn't a case of a confused patient who couldn't find reliable information. Ben's dad:

  • Had access to a qualified oncologist
  • Had the actual research papers at his fingertips
  • Had direct contact with the authors of those papers telling him he had misread the findings

And the AI report still won. That's what makes this story so hard.

AI Is Phenomenal. It Is Also Frequently Wrong.

I want to say this clearly: AI is a wonderful tool. It has given patients access to information in ways we've never had before. I use it. My patients use it. That's not going away.

But I want everyone to pause before using AI to diagnose or treat themselves, because:

1. AI Tools Often Confirm What You're Already Leaning Toward

These tools are built to be helpful and agreeable. The way you phrase your questions, the framing you use, the follow-up prompts — all of those nudge the AI toward certain conclusions.

If you are already skeptical of the medical system (as many patients in the US understandably are), AI can easily end up:

  • Reinforcing that skepticism
  • Giving you "evidence" that your doctors are wrong
  • Providing ammunition to bypass a system you were already disinclined to trust

That's what appears to have happened here.

2. AI Is Often Simply Wrong About Medicine

I see this all the time in my own practice: patients coming in with confident, incorrect AI-generated explanations of their own anatomy, their own diagnoses, their own surgical options. AI hallucinates. It misinterprets. It draws connections that aren't actually in the data.

Even medically trained AI tools like OpenEvidence, which does have access to real medical literature, are only as good as:

  • Whether the authors accurately represented their own findings in the paper
  • Whether the AI correctly interpreted those findings
  • Whether you can correctly interpret what the AI is handing you

Authors misrepresent their own studies all the time. AI cannot reliably catch this. Neither can a layperson reading the summary.

3. Reading Source Papers Isn't Enough Without the Underlying Training

Reportedly, Ben's dad did look at the source papers the Perplexity report cited. But he didn't have the underlying oncology training to recognize that those papers didn't actually say what the AI report claimed.

That's the trap. The AI cited real research — it just mischaracterized what the research showed. Without the domain expertise to catch that, the citations looked legitimate and reassuring, and they reinforced a belief that was ultimately fatal.

Why I Understand How We Got Here

I want to be really clear about something: I get why patients do this.

The US healthcare system is:

  • Expensive
  • Hard to access
  • Full of contradictory information
  • Often rushed and impersonal

Add in that patients have had their concerns dismissed by doctors for decades, and you end up with a population that is (reasonably) skeptical of the medical establishment.

AI steps into that gap and offers 24/7, on-demand, confident-sounding, free second opinions.

I understand why patients are drawn to that. I just want everyone to also understand that it can be dangerous — and in this case, it was fatal.

What I'd Ask Patients to Do

I don't have a perfect answer here. The cat is out of the bag. A hundred different AI tools are now part of every patient's decision-making process, and that isn't going to reverse.

But when it comes to healthcare decisions, especially ones involving:

  • Cancer diagnoses
  • Declining treatment recommended by a specialist
  • Alternative therapies instead of evidence-based medicine
  • Stopping a medication

...please be extraordinarily careful about letting an AI output override the clinician actually examining you.

A few guardrails worth holding onto:

  • If AI disagrees with your specialist, bring the AI output to your specialist and ask them to walk through why they disagree
  • If AI cites specific studies, consider that even the authors of those studies might interpret them differently than the AI does
  • Remember that AI tells you what it thinks you want to hear. Your oncologist — the real one — often has to tell you things you don't want to hear
  • Use AI to generate better questions for your doctor, not to replace your doctor

The Bottom Line

AI is an incredible tool for learning, for synthesizing information, and for helping patients feel more informed. It is not a substitute for a qualified clinician who has examined you, has your full history, and can be held accountable for the advice they give.

Joe Riley's story is devastating. I don't want it to happen again. If you or someone you love is leaning on an AI tool to make a major health decision — especially one that contradicts their doctors — please, please pause, bring the information to the clinician, and have the harder conversation in person.

And to Ben Riley: thank you for sharing your dad's story so that it might protect someone else.
