# Meta’s recent court losses centre on the company’s failure to disclose internal research about product harms
Source: Mastodon
Meta Platforms suffered a string of court defeats this week after judges ruled that the company had unlawfully withheld internal research documenting the potential harms of its social‑media products. The rulings stem from a 2024 lawsuit filed by a coalition of state attorneys general, which demanded that Meta turn over studies linking Instagram and Facebook use to mental‑health issues, election‑related misinformation, and algorithmic bias. Meta’s refusal to produce the reports led to default judgments and, in one case, a $250 million fine for contempt of court.
The decisions underscore a growing legal expectation that tech firms must be transparent about the risks their services pose, even when the findings are uncomfortable. For regulators, the verdicts provide a tool to compel disclosure without waiting for a full regulatory rulemaking process. For the industry, they raise the spectre of costly litigation and reputational damage if internal safety work remains hidden.
Meta’s setbacks have already prompted a shift among rivals. OpenAI announced an expanded safety‑reporting framework that will make its internal risk assessments available to the Federal Trade Commission on a quarterly basis. Anthropic, fresh off its own court win and the subsequent lobbying battles that we covered on 31 March, said it is reviewing its disclosure policies to avoid a similar fate. Both firms are betting that proactive transparency will stave off lawsuits and build trust with policymakers.
What to watch next: a federal appellate panel will hear Meta’s appeal of the contempt fines in June, and the FTC is expected to issue draft rules on AI‑related risk disclosures later this summer. Congressional committees have signalled intent to hold hearings on corporate responsibility for algorithmic harms, and any further court orders could force the industry into a new era of mandatory safety reporting.