3 Things We Lose When AI-Powered Chatbots Answer Our Medical Questions

By David Burda, News Editor & Columnist, 4sight Health
Twitter: @davidrburda
Twitter: @4sighthealth_

I’ve never used a dating app, not even to research a story, but based on the ads I see on TV, hear on the radio and read about on social media, dating apps have a chronic problem: People lie.

There’s a significant difference between how people describe themselves on the apps and who they are in person. Five-foot-eight in person becomes six feet tall online. Fifty-four in real life becomes 44 on the app. Single on the app can mean married for five years, that sort of thing.

What about doctors and how well they interact with patients? Is there a difference between how they interact with patients in person and online? And what if that online interaction isn’t with a real doctor but with an AI-powered chatbot?

I thought about that recently after all the media coverage of a new study published in JAMA Internal Medicine last month.

Researchers from the University of California, San Diego, Bryn Mawr College, Johns Hopkins University and the U.S. Navy wanted to know who answers a patient’s question better: a real doctor or an AI-powered chatbot. To find out, the researchers looked at 195 online exchanges between real doctors and patients. They then posed the same patient questions to an AI-powered chatbot. A team of evaluators composed of licensed healthcare professionals in pediatrics, geriatrics, internal medicine, oncology, infectious disease and preventive medicine blindly compared the answers from the real doctors with the answers from the chatbot.

Here’s what the researchers found:

  • Evaluators preferred the chatbot’s responses to the doctors’ responses 78.6% of the time.
  • Evaluators rated the chatbot’s responses higher in quality (4.1 vs. 3.3 on a 5-point scale).
  • Evaluators rated the chatbot’s responses as more empathetic (3.7 vs. 2.1 on a 5-point scale).

“These results suggest that artificial intelligence assistants may be able to aid (physicians) in drafting responses to patient questions,” the researchers said.

If that happens — doctors start using robots to answer patient questions — what do patients lose? I think patients lose a lot.

  • First, all responses would sound the same regardless of who the doctor is, where they practice, how old they are and so on. Patients lose insight into an individual doctor’s personality: polite or gruff, kind or mean, knowledgeable or incompetent, patient or impatient.
  • Second, every patient asking the same question would get the same response. The chatbot would answer the same way every time, without variation or going off script. Standardization, not customization. Patients lose the ability to ask detailed, specific questions about their individual medical situations.
  • Third, patients lose the ability to interact with a physician. No interruptions. No back-and-forth. No follow-up questions. It’s a sterile, impersonal exchange.

Think of your experience now as a consumer when you confront an electronic phone tree that sends you on an endless chase to find the right department or a website chatbot that asks scripted questions and replies with preprogrammed answers. It’s one thing when you want to know why your new toaster isn’t working. It’s another when you have a fever and feel swelling under your jaw.

I also think it says something about a medical practice that would rather have you talk to a machine than to a real person. Tell your most intimate medical secrets to this computer first. Thank you.

Come to think of it, why didn’t the researchers ask consumers to compare the responses from the real doctors and the AI-powered chatbot? I honestly don’t care what a geriatrician evaluator thinks.

To learn more on this topic, please read “Healthcare’s Failure to Communicate” on 4sighthealth.com.

Thanks for reading.

This article was originally published on 4sight Health and is republished here with permission.