A California bill that would require AI developers to adopt safety protocols to prevent “critical harms” against humanity has caused a stir in Silicon Valley’s tech community.
California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” also known as SB 1047, would require AI developers to implement safety protocols to prevent events such as mass casualties or major cyberattacks.
It was proposed by California Democratic legislators in February.
The proposed regulations also mandate an “emergency stop” button for AI models, require annual third-party audits of AI safety practices, create a new Frontier Model Division (FMD) to oversee compliance, and impose heavy penalties for violations.
However, the bill has drawn opposition in Congress. US Congressperson Ro Khanna released a statement opposing SB 1047 on Aug. 13, expressing concern that “the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California’s spirit of innovation.”
Khanna, who represents Silicon Valley, acknowledged the need for AI legislation “to protect workers and address potential risks including misinformation, deepfakes, and an increase in wealth disparity.”
The bill has also been met with opposition from Silicon Valley, with venture capital firms such as Andreessen Horowitz (a16z) arguing that it will burden startups and stifle innovation.
On Aug. 2, a16z chief legal officer Jaikumar Ramaswamy sent a letter to Senator Scott Wiener, one of the bill’s creators, claiming it will “burden startups because of its arbitrary and shifting thresholds.”
There has also been pushback from prominent industry researchers, such as Fei-Fei Li and Andrew Ng, who believe it will harm the AI ecosystem and open-source development.
On Aug. 6, computer scientist Li told Fortune:
“If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and ‘little tech.’”
Meanwhile, Big Tech companies claim that overregulating AI will restrain free speech and could push tech innovation out of California.
Meta’s chief AI scientist, Yann LeCun, said the legislation would hurt research efforts, claiming “regulating R&D would have apocalyptic consequences on the AI ecosystem” in a post on X in June.
The bill passed the Senate with bipartisan support in May and now heads to the Assembly, where it must pass by Aug. 31.