
40 million people globally are using ChatGPT for healthcare – but is it safe?




ZDNET’s key takeaways

  • 5% of messages to ChatGPT globally concern healthcare.
  • Users ask about symptoms and seek insurance advice, for example.
  • Chatbots can provide dangerously inaccurate information.

More than 40 million people worldwide rely on ChatGPT for daily medical advice, according to a new report from OpenAI shared exclusively with Axios.

The report, based on an anonymized analysis of ChatGPT interactions and a user survey, also sheds light on some of the specific ways people are using AI to navigate the intricacies of healthcare. Some are prompting ChatGPT with queries about insurance denial appeals and possible overcharges, for example, while others are describing their symptoms in hopes of receiving a diagnosis or treatment advice.

It should come as no surprise that a large number of people are using ChatGPT for sensitive personal matters. The three-year-old chatbot, along with others like Google’s Gemini and Microsoft’s Copilot, has become a confidant and companion for many users, a guide through some of life’s thornier moments.

Also: Can you trust an AI health coach? A month with my Pixel Watch made the answer obvious

Last spring, an analysis conducted by Harvard Business Review found that psychological therapy was the most common use of generative AI. The new OpenAI report is therefore just another brick in a rising edifice of evidence showing that generative AI will be — indeed already is — much more than simply a search engine on steroids. 

What’s most jarring about the report is the sheer scale at which users are turning to ChatGPT for medical advice. It also underscores some urgent questions about the safety of this type of AI use at a time when many millions of Americans are suddenly facing new and major healthcare-related challenges.

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

A closer look

According to Axios, the OpenAI report found that more than 5% of all messages sent to ChatGPT globally are related to healthcare. As of July of last year, the chatbot reportedly processed around 2.5 billion prompts per day, which means it's fielding at least 125 million healthcare-related questions every day (and likely more than that now, since its user base is still growing).
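For readers who want to sanity-check that math, here's a minimal back-of-the-envelope sketch in Python. The inputs are the figures reported by Axios in this article (including the roughly 70% off-hours share discussed below), not an official OpenAI breakdown, and the 5% figure is a floor:

```python
# Back-of-the-envelope check of the figures reported by Axios.
# These are the article's reported numbers, not official OpenAI metrics,
# and the 5% share is a lower bound ("more than 5%").

daily_prompts = 2_500_000_000  # ~2.5 billion ChatGPT prompts/day as of July
health_share = 0.05            # at least 5% of messages concern healthcare
off_hours_share = 0.70         # ~70% happen outside normal clinic hours

health_prompts = daily_prompts * health_share
print(f"Healthcare-related prompts/day: {health_prompts:,.0f}")        # 125,000,000
print(f"Of those, outside clinic hours: {health_prompts * off_hours_share:,.0f}")  # 87,500,000
```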

Also: Using AI for therapy? Don’t – it’s bad for your mental health, APA warns

Many of those conversations — around 70%, according to Axios — are happening outside the normal working hours of medical clinics, underscoring a key benefit of this kind of AI use: unlike human doctors, chatbots are always available. Some people have also leveraged chatbots to spot billing errors and cases in which exorbitantly high medical costs can be disputed.

Desperate times, desperate measures

The widespread embrace of ChatGPT as an automated medical expert is coinciding with what, for many Americans, has been a stressful start to the year due to a sudden spike in the cost of healthcare coverage. 

With the expiration of pandemic-era Affordable Care Act tax subsidies, more than 20 million ACA enrollees have reportedly seen their monthly premiums increase by an average of 114%. It's likely that some of those people, especially younger, healthier, and more cash-strapped Americans, will opt to forgo health insurance entirely, perhaps turning instead to chatbots like ChatGPT for medical advice.

Risks

AI might always be available to chat, but it’s also prone to hallucination — fabricating information that’s delivered with the confidence of fact — and therefore no substitute for an actual, flesh-and-blood medical expert.

One study conducted by a cohort of physicians and posted to the preprint server arXiv in July, for example, found that some industry-leading chatbots frequently responded to medical questions with dangerously inaccurate information. OpenAI's GPT-4o and Meta's Llama produced this kind of response at an especially high rate: 13% of answers in each case.

Also: AI model for tracking your pet’s health data launches at CES

“This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools,” the authors of the July paper noted.

OpenAI is currently working to improve its models’ abilities to safely respond to health-related queries, according to Axios.

ZDNET’s takeaway

For the time being, generative AI should be approached like WebMD: It's often useful for answering basic questions about medical conditions or the complexities of the healthcare system, but it shouldn't be treated as a definitive source for, say, diagnosing a chronic ailment or treating a serious injury.

Also: Anthropic says Claude helps emotionally support users – we’re not convinced

And given its propensity to hallucinate, it's best to take AI's responses with an even bigger grain of salt than anything you'd glean from a quick Google search, especially when it comes to more sensitive personal questions.
