Lawmakers in California have pushed forward an artificial intelligence (AI) safety bill that has become a subject of controversy in the tech industry, drawing pushback from key players and elected officials.
On Aug. 28, Senate Bill (SB) 1047 passed the State Assembly and now awaits a final say from the state’s governor, Gavin Newsom, who has until Sept. 30 to sign it into law or veto it.
AI legislation enactment
SB 1047 was penned by Scott Wiener, a Democratic state senator from San Francisco. Wiener argues that legislation must be put in place before further advances in AI become uncontrollable.
The bill makes safety testing mandatory for many of the most advanced AI models: those that cost over $100 million to develop or that surpass a set computing-power threshold.
It would require developers in California, including big names like ChatGPT maker OpenAI, Meta and Google, to outline a “kill switch” for such models should they become unmanageable, and to have third-party auditors conduct safety audits of the models.
The state attorney general would also be able to sue developers if their models pose an ongoing threat, such as an AI takeover of the power grid.
SB 1047 pushback
While Wiener argues the bill is necessary to prevent irreversible societal damage from AI systems gone awry, not everyone is in favor of SB 1047.
The bill has received substantial pushback from key players in the tech industry and from state politicians, who say it will hinder the state’s innovators.
Notably, Speaker Emerita Nancy Pelosi has called the bill “more harmful than helpful” to California’s pursuit of leading the way in AI development.
Neil Chilson, head of AI policy at the Abundance Institute, has warned that while the bill primarily targets models of a certain cost and caliber, it works by attempting to create a "reasonable care" standard for AI training.
He said that standard could stretch beyond its intended scope and affect smaller companies and models as well.
Tech companies have been particularly critical of the bill, with OpenAI leading the pushback and arguing it will hinder growth. Wiener has disputed these claims.
Google and Meta Platforms have also made their concerns known to California Governor Newsom in a letter.
In the tug-of-war of opinions on the bill, however, Amazon-backed Anthropic has shown support, saying the benefits will “likely outweigh the costs,” though some elements remain ambiguous.
Eric Daimler, a former computer science professor at Carnegie Mellon and an Obama administration alumnus, said he is “deeply concerned” about AI safety and has called on Washington to look to SB 1047 as an example, though he also believes there is a “more effective way forward.”
SB 1047 is not the only California AI bill under discussion. SB 1220, which would ban the use of AI in call centers that provide welfare and health services such as SNAP and Medicaid, has also raised questions in the industry.
However, California’s AB 3211, which would require watermarks on AI-generated content, has gathered support from tech companies like OpenAI and Microsoft.
Billionaire Elon Musk, whose xAI is developing the Grok model, has expressed support for sweeping AI safety regulation.
Cointelegraph has reached out to California legal experts and AI developers to better understand the climate surrounding the bills.