California governor vetoes hotly contested AI safety bill

Gavin Newsom vetoed a California AI safety bill, saying it doesn't adequately protect the public from the “real” threats posed by the technology.

California Governor Gavin Newsom vetoed a controversial artificial intelligence (AI) bill — arguing it would hinder innovation and fail to protect the public from “real” threats raised by the tech. 

Newsom on Sept. 30 vetoed SB 1047 — known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — which had garnered significant pushback from Silicon Valley.

It proposed mandatory safety testing of AI models and other guardrails that tech firms said could stifle innovation.

In a Sept. 29 statement, Newsom said the bill focused too much on regulating existing top AI firms, without protecting the public from the “real” threats posed by the new technology. 

“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Penned by San Francisco Democratic Senator Scott Wiener, SB 1047 would also require developers in California — including ChatGPT maker OpenAI, Meta and Google — to implement a “kill switch” for their AI models and publish plans for mitigating extreme risks. 

Had the bill become law, AI developers could also have been sued by the state attorney general in the event of an ongoing threat from their models, such as an AI takeover of the electric grid.

Newsom said he had asked the world’s leading AI safety experts to help California “develop workable guardrails” focused on creating a “science-based trajectory analysis.” He added that he had ordered state agencies to expand their assessment of the risks of potential catastrophic events stemming from AI development.

Related: OpenAI’s move to for-profit: Is it really ‘illegal’?

Even though Newsom vetoed SB 1047, he said adequate safety protocols for AI must be adopted, adding that regulators can’t afford to “wait for a major catastrophe to occur before taking action to protect the public.”

Newsom noted that his administration had signed more than 18 bills on AI regulation in the past 30 days.

Politicians, big tech push back on AI safety bill 

The bill was unpopular among lawmakers, advisers and big technology firms in the lead-up to Newsom's decision.

Former United States House Speaker Nancy Pelosi and firms including OpenAI said that it would significantly hinder the growth of AI. 

Neil Chilson, the head of AI policy at the Abundance Institute, warned that while the bill primarily targeted models of a certain cost and size, those costing more than $100 million, its scope could easily be expanded to crack down on smaller developers as well.

Still, some are open to the bill. Billionaire Elon Musk, who is developing his own AI model dubbed “Grok,” is among a select few tech leaders in favor of the bill and of sweeping AI regulations more broadly.

In an Aug. 26 post to X, Musk said “California should probably pass the SB 1047 AI safety bill,” but conceded that standing behind the bill was a “tough call.”

Magazine: Advanced AI system is already ‘self-aware’ — ASI Alliance founder