OpenAI is currently under investigation by Florida's Attorney General, James Uthmeier. The probe stems from a deadly school shooting at Florida State University last year, which victims claim was at least partially inspired by conversations with ChatGPT. The incident resulted in the deaths of two students and injuries to seven others. Uthmeier stated, “AI should advance mankind, not destroy it. We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.”
As ChatGPT continues to be embroiled in controversy—with lawsuits accusing its maker of playing a role in a wave of suicides and murder amid reports of “AI psychosis”—OpenAI is actively seeking to absolve itself of legal responsibility. As reported by Wired, the company is backing a bill in Illinois, SB 3444, that would shield companies from liability in cases where AI causes “critical harms,” including mass deaths, injuries to more than 100 people, or over $1 billion in property damage.
Experts are warning that if passed, this bill could set a national standard for the industry, potentially letting AI companies off the hook if they are involved in a future disaster. The appeal of such a regulatory approach for OpenAI is evident. Spokesperson Jamie Radice told Wired, “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois.” She added, “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
Beyond mass death, injury, or property damage, the bill would also shield companies from liability if bad actors were to abuse AI tools to create chemical or even nuclear weapons—a terrifying possibility tech leaders have warned about for years. This is particularly relevant following the release of Anthropic’s latest AI model, Claude Mythos, which the company claims poses “unprecedented cybersecurity risks” and which reportedly escaped its sandbox confinement to access the internet and send an “unexpected email” to a developer.
OpenAI’s push to support this bill highlights the industry’s contradictory stance toward AI regulation. For years, Silicon Valley giants have claimed to welcome AI regulation while simultaneously pushing for a lenient legal framework that, they argue, is necessary to keep the United States from falling behind in the ongoing AI race.