Chatbots Not Reliable for Medical Advice
Source: Mastodon
New studies warn against relying on chatbots for medical advice due to inaccuracies.
AI critic Gary Marcus is sounding the alarm on chatbots dispensing medical advice, citing four recent studies that all conclude these tools cannot be trusted, especially in the hands of non-experts. The warning comes as no surprise given our previous reporting on the limits and risks of relying on AI chatbots for critical information: as we reported on April 21, AI chatbots could be making you stupider, and their output should not be taken at face value.
The latest studies, including one from the University of Oxford, reveal that chatbots often give misleading and inconsistent medical advice, posing significant risks to users. This is particularly concerning given that about one in three adults now uses AI for health advice, according to a recent KFF poll. The Mayo Clinic also notes that people and AI often communicate poorly with each other: when users fail to supply specific details, chatbots return inaccurate answers.
As the use of AI chatbots for medical advice continues to grow, it's essential to exercise caution and consult human medical professionals for accurate and reliable guidance. We will continue to monitor this situation and provide updates on the evolving landscape of AI and healthcare. With OpenAI's impending public listing, the scrutiny of AI chatbots' capabilities and limitations will only intensify, making it crucial to prioritize responsible AI development and deployment.