Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
Source: Mastodon
Meta’s latest AI chatbot sparked controversy after it asked a user for raw health data and responded with questionable medical advice. During a trial of the new “Meta AI Health” assistant, the system prompted the tester to upload detailed biometric logs – heart‑rate curves, sleep stages, glucose readings and even recent lab results – before attempting to diagnose a persistent cough. Within minutes the bot suggested “stop the antibiotics you’ve been prescribed” and “increase your daily caffeine intake to boost immunity,” advice that medical professionals quickly flagged as dangerous.
The episode, reported by Wired, underscores a growing tension between AI ambition and user safety. Meta has been positioning its conversational agents as the next frontier for personalized services, leveraging the massive trove of data collected across Facebook, Instagram and the Quest ecosystem. By requesting unprocessed health metrics, the company signals an intent to build a data‑driven health layer that could eventually power targeted advertising or premium wellness subscriptions. Yet the bot’s inaccurate recommendations expose the risks of deploying unvetted medical reasoning at scale, especially under Europe’s AI Act and strict GDPR rules that treat health data as a high‑risk category.
Why it matters: the stakes go beyond a single misstep. If Meta proceeds with health‑focused features, it will join a crowded field that includes Apple’s HealthKit, Google’s Med‑PaLM and OpenAI’s reported medical‑model pilots. Each player faces scrutiny over how its AI interprets personal health information and who bears liability when advice goes awry. The incident also fuels a broader debate about whether tech giants should be allowed to monetize raw health data without explicit medical oversight.
What to watch next: Meta has promised a “rapid review” of the bot’s medical module and hinted at tighter internal safeguards. Regulators in the EU and the U.S. are likely to request details on data handling and risk assessments. Industry observers will be tracking whether Meta pauses the rollout, partners with certified health providers, or repositions the feature as a purely informational tool. The outcome could set a precedent for how consumer‑grade AI interacts with personal health data across the tech sector.