OpenAI backs Illinois bill that would limit when AI labs can be held liable
Source: HN
OpenAI has formally backed an Illinois bill that would sharply narrow the circumstances under which artificial‑intelligence laboratories can be sued for “critical harm.” The company testified before the state Senate on Tuesday, arguing that the legislation—currently moving through committee—should protect developers from liability even when their models are used to cause mass casualties or billion‑dollar losses.
The proposal defines critical harm as the death or serious injury of at least 100 people, or financial damage of $1 billion or more, and would bar plaintiffs from suing AI labs unless they can prove the developer knowingly enabled the specific misuse. OpenAI’s testimony echoed its earlier stance, noted in our April 10 report on a similar federal‑level bill, and stressed that the current legal framework “does not differentiate between the tool and the user,” risking endless litigation that could stifle innovation.
Industry observers say the move could set a de facto standard for AI liability in the United States, especially as other states watch Illinois’ experiment. Critics warn the shield may leave victims without recourse and could embolden malicious actors to weaponize generative models. Consumer‑advocacy groups have already pledged to challenge the bill if it passes, citing concerns over accountability and public safety.
What to watch next: the Illinois Senate’s vote on the bill, expected in the coming weeks; potential counter‑legislation at the federal level, where lawmakers are already debating broader AI‑risk statutes; and the response of rival labs such as Anthropic and Google, which have yet to publicly declare a position. The outcome will shape how liability is allocated in the rapidly expanding AI ecosystem and could influence similar proposals in other jurisdictions.