Agenzia Nova: Businesses: IA, partner company of OpenAI and Anthropic, evaluates tools against extremism
Source: Mastodon
The AI‑services firm IA, a joint venture between OpenAI and Anthropic, announced on Tuesday that it is evaluating a suite of tools designed to curb extremist content online. The effort is being coordinated with the Christchurch Call, the multilateral initiative launched after the 2019 mosque shootings in New Zealand to press tech platforms to eliminate terrorist propaganda.
IA’s proposal centres on three capabilities: real‑time detection of hate‑filled narratives, automated de‑amplification of extremist videos, and a verification layer that flags synthetic media generated by large language models. The company says the tools draw on the same safety‑training pipelines that power OpenAI’s ChatGPT and Anthropic’s Claude models, but are tuned specifically for disinformation and radicalisation patterns identified by law‑enforcement partners.
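The three capabilities amount to a staged moderation pipeline: detect, then de‑amplify rather than delete, then label synthetic media. IA has published no API, so the sketch below is purely illustrative; the keyword and marker checks are hypothetical stubs standing in for the real classifiers.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-stage pipeline described above.
# All names and heuristics are illustrative assumptions, not IA's API:
# real systems would call trained classifiers, not keyword stubs.

@dataclass
class Post:
    text: str
    is_video: bool = False
    labels: list = field(default_factory=list)
    amplification: float = 1.0  # 1.0 = normal ranking weight

def detect_extremist_narrative(post: Post) -> bool:
    """Stage 1: real-time narrative detection (stub: keyword match)."""
    return any(k in post.text.lower() for k in ("attack the", "join our cell"))

def flag_synthetic_media(post: Post) -> bool:
    """Stage 3: synthetic-media verification (stub: marker check)."""
    return "[llm-generated]" in post.text.lower()

def moderate(post: Post) -> Post:
    """Run a post through all three stages and annotate it in place."""
    if detect_extremist_narrative(post):
        post.labels.append("extremist-narrative")
        if post.is_video:
            # Stage 2: de-amplify extremist video rather than delete it
            post.amplification = 0.1
    if flag_synthetic_media(post):
        post.labels.append("synthetic-media")
    return post
```

The design choice worth noting is stage 2: down‑ranking (`amplification = 0.1`) instead of removal, which matches the "de‑amplification" language in IA's proposal rather than outright takedown.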
The move matters because AI‑generated text and deepfakes are increasingly weaponised to recruit, coordinate and inspire violent actors. By leveraging the expertise of two of the world’s most advanced foundation‑model developers, IA hopes to set a de facto standard for responsible AI deployment at a time when the EU’s AI Act is tightening disclosure and risk‑assessment obligations for high‑risk systems.
Industry observers will watch whether the Christchurch Call participants adopt IA’s prototypes as a baseline for their own moderation stacks, and how quickly the tools can be integrated into existing social‑media pipelines. A pilot rollout is slated for the second half of 2026, with a public impact report due early next year. If the trial proves effective, it could spur broader collaboration between AI labs and international policy bodies, shaping the next wave of content‑safety standards across the digital ecosystem.