California Considers Unique Safety Regulations for AI Companies

California lawmakers are looking to pass legislation requiring artificial intelligence companies to add protective measures and testing practices. The goal is to ensure that AI programs cannot be manipulated to attack and disable the electric grid or help build chemical weapons.

Legislators are planning to vote on the bill, which focuses solely on reducing risks associated with AI, risks that experts have warned about since the technology's debut. Technology companies such as Meta and Google oppose the regulations, arguing that the bill unnecessarily targets companies when it should instead target individuals who misuse the technology.

Democratic state Sen. Scott Wiener, the bill's author, said it would provide reasonable safety measures by preventing "catastrophic harms" from powerful artificial intelligence models. The requirements would apply only to models that cost more than one hundred million dollars in computing power to train.

“This is not about smaller AI models. This is about incredibly large and powerful models that, as far as we know, do not exist today but will exist in the near future,” Wiener said.

Gov. Gavin Newsom of California said the state is both an early adopter and a regulator of the technology and could itself deploy generative AI tools. His administration is also considering new rules to address AI-driven discrimination in hiring practices.

Rob Sherman, Meta vice president, said, “The bill will make the AI ecosystem less safe, jeopardize open-source models relied on by startups and small businesses, rely on standards that do not exist, and introduce regulatory fragmentation.”
