Study finds AI chatbots give wrong medical advice half the time
Source: Mastodon
A new analysis of popular AI‑driven chatbots finds that they dispense incorrect medical advice roughly half the time, raising fresh alarms about the technology’s readiness for everyday health‑care use. The study, conducted by researchers at the University of Tokyo and published in the *Journal of Medical Internet Research*, evaluated responses from five leading models—including ChatGPT, Gemini, and two proprietary Korean and Chinese bots—against a set of 200 clinically vetted questions covering symptoms, medication dosing, and chronic‑disease management. Across all five models, 48% of the answers contained factual errors, dangerous omissions, or advice that contradicted established clinical guidelines.
The findings matter because chatbots have moved from novelty to a de‑facto first point of contact for millions seeking quick health information. In Scandinavia, where digital health services already dominate, patients increasingly turn to conversational AI for triage, mental‑health support, and medication reminders. Misleading guidance can delay proper treatment, exacerbate conditions, or even trigger harmful self‑medication. The study also notes that the error rate spikes when queries involve nuanced contexts—such as comorbidities or pediatric dosing—areas where human clinicians still hold a decisive edge.
Regulators and industry players are already feeling the pressure. The European Medicines Agency has hinted at forthcoming guidelines for AI‑generated health content, while major providers are piloting “medical‑review layers” that flag high‑risk answers for human verification. In the short term, users are urged to treat chatbot output as a supplement, not a substitute, for professional advice and to verify any recommendation with a qualified practitioner.
What to watch next: the research team will release a follow‑up paper this summer testing the impact of real‑time fact‑checking modules on error rates. Meanwhile, the Nordic health‑tech community is expected to convene a panel at the upcoming AI‑Health Summit in Copenhagen to debate mandatory transparency standards for medical chatbots. The outcome could shape how quickly, and under what safeguards, AI assistants become integrated into public health systems.