Senator Wiener’s groundbreaking artificial intelligence bill advances to the Assembly floor, with amendments in response to industry engagement

SACRAMENTO – The Assembly Budget Committee passed Senate Bill 1047 by Senator Scott Wiener (D-San Francisco) with significant amendments introduced by the author. SB 1047 is a bill designed to ensure the safe development of large-scale artificial intelligence systems by establishing clear, predictable, and reasonable safety standards for developers of the largest and most powerful AI systems. The bill now moves to the full Assembly. It will be brought to the floor for a vote on August 20 and must be passed by August 31.

“The Assembly will vote on a strong AI safety measure that has been revised in response to feedback from AI leaders in industry, academia, and the public sector,” said Senator Wiener. “We can advance both innovation and safety; the two are not mutually exclusive. While the amendments do not reflect 100% of the changes requested by Anthropic – a global leader in both innovation and safety – we accepted a number of very reasonable amendments, and I believe we have addressed the core concerns of Anthropic and many others in the industry. These changes build on the significant amendments to SB 1047 I previously made to address the unique needs of the open source community, which is a critical source of innovation.

“With Congress deadlocked on regulating AI – aside from banning TikTok, Congress hasn’t passed major technology regulation since the era of floppy disks – California must act to anticipate the foreseeable risks of rapidly advancing AI while encouraging innovation.”

The key changes to SB 1047, which will be published in the coming days, are:

  • Perjury penalty removed – Criminal penalties for perjury are replaced with civil penalties; the bill no longer includes any criminal penalties. Opponents had misrepresented this provision, and a civil penalty still serves as a deterrent against lying to the government.
  • Frontier Model Division eliminated – Removes the proposed new state regulatory agency (the Frontier Model Division, or FMD). Enforcement of SB 1047 has always rested with the Attorney General’s office, and this change streamlines the regulatory structure without significantly affecting the ability to hold bad actors accountable. Some of the FMD’s functions are transferred to the existing Government Operations Agency.
  • Legal standard adjusted – The standard by which developers must demonstrate they have met their obligations under the law has changed from “reasonable assurance” to “reasonable care,” defined in centuries-old common law as the level of care a reasonable person would exercise. The bill spells out some elements of reasonable care in AI development, including whether developers consulted NIST standards in creating their safety plans and how their safety plan compares to those of others in the industry.
  • New threshold to protect startups’ ability to fine-tune open source models – Establishes a threshold determining which fine-tuned models fall under SB 1047: only models that cost at least $10 million to fine-tune are covered. A developer who spends less than $10 million fine-tuning a model has no obligations under the law, which exempts the vast majority of developers who fine-tune open source models.
  • Pre-harm enforcement narrowed, but not eliminated – Limits the Attorney General’s ability to seek civil penalties where no harm has occurred and there is no imminent threat to public safety.

SB 1047 is supported by the two most cited AI researchers of all time: the “Godfathers of AI,” Geoffrey Hinton and Yoshua Bengio. Today Professor Bengio published an opinion article in Fortune supporting the bill.

Of SB 1047, Professor Hinton, former AI lead at Google, said: “Forty years ago, when I was training the first versions of the AI algorithms behind tools like ChatGPT, no one – including me – would have predicted how far AI would advance. Powerful AI systems hold incredible promise, but the risks are also very real and should be taken extremely seriously.

“SB 1047 takes a very commonsense approach to balancing these concerns. I’m still excited about AI’s potential to save lives through improvements in science and medicine, but it’s critical that we have laws with real force to address the risks. California is a natural place to start because this technology has taken off there.”

False claims about the bill are circulating on the Internet, leading to divided opinions among AI leaders.

In recent weeks, other AI industry leaders have spoken out in favor of SB 1047. Simon Last, co-founder of Notion, was the latest to voice his support in a comment published last week.

Experts at the forefront of AI have expressed concern that failure to take appropriate precautions could have serious consequences, including risks to critical infrastructure, cyberattacks, and the development of novel biological weapons. A recent survey found that 70% of AI researchers believe safety should be given greater priority in AI research, while 73% expressed “significant” or “extreme” concern that AI could fall into the hands of dangerous groups.

In line with President Biden’s Executive Order on Artificial Intelligence and their own voluntary commitments, several leading AI developers in California have made great strides in developing safe development practices and implementing important measures such as cybersecurity protections and safety evaluations of AI systems’ capabilities.

Last September, Governor Newsom issued an executive order directing state agencies to begin preparing for AI and assessing its impact on vulnerable communities. The administration published a report in November examining the most beneficial uses and potential harms of AI.

SB 1047 balances AI innovation and safety by:

  • Setting clear standards for developers of AI models trained with more than 10²⁶ floating-point operations of computing power, which cost over $100 million to train and would be far more powerful than any AI in existence today
  • Requiring developers of such large “frontier” AI models to take basic precautions such as pre-deployment safety testing, red teaming, cybersecurity, safeguards to prevent the misuse of dangerous capabilities, and post-deployment monitoring
  • Creating whistleblower protections for employees of frontier AI laboratories
  • Empowering the California Attorney General to take legal action if the developer of an extremely powerful AI model causes severe harm to Californians or if the developer’s negligence poses an imminent threat to public safety
  • Establishing CalCompute, a new public cloud computing cluster, to enable startups, researchers, and community groups to participate in the development of large-scale AI systems and align their benefits with the values and needs of California communities

SB 1047 was co-authored by Senator Roth (D-Riverside), Senator Susan Rubio (D-Baldwin Park), and Senator Stern (D-Los Angeles), and sponsored by the Center for AI Safety Action Fund, Economic Security Action California, and Encode Justice.

###
