OpenAIがフロリダ州立大学銃撃事件に関連する可能性についてフロリダ州司法長官が調査を開始 – GIGAZINE https://www. yayafa.com/2778295/ #
agents openai
| Source: Mastodon | Original article
Florida’s attorney general has opened a formal probe into OpenAI over allegations that its chatbot, ChatGPT, was used to plan the 2025 mass‑shooting at Florida State University. James Usmaier filed the investigation after court filings revealed more than 270 ChatGPT conversation logs submitted as evidence, some of which appear to contain queries about weapon procurement, tactical advice and target selection. The probe, announced on Thursday, seeks to determine whether OpenAI’s safety controls failed to block illicit content and whether the company bears any responsibility for facilitating the attack.
The case matters because it is the first high‑profile criminal investigation that directly links a generative‑AI service to a school‑based act of violence. Prosecutors argue that the platform’s “agentic” capabilities—its ability to generate detailed, context‑aware instructions—could be weaponised if not properly restrained. OpenAI, which has rolled out increasingly autonomous models such as the recently announced Muse Spark and Llama 4, has faced criticism for balancing openness with safety. A finding of negligence could force the firm to tighten content‑filtering, introduce stricter age‑verification mechanisms, or even face civil penalties.
What to watch next are the procedural steps of the inquiry: a subpoena for internal logs, a possible forced disclosure order, and any interim measures imposed on ChatGPT’s public deployment in the United States. OpenAI is expected to issue a public response within days, likely outlining its moderation policies and any planned upgrades. Lawmakers on both coasts are already citing the case as a catalyst for broader AI regulation, so the investigation could accelerate federal bills targeting generative‑AI safety, data‑privacy safeguards and liability frameworks. The outcome will set a precedent for how societies hold AI providers accountable when their tools intersect with public‑safety threats.
Sources
Back to AIPULSEN