Does California’s AI bill go too far or not enough? It depends who you ask.

A bill to regulate artificial intelligence safety in California that pits influential politicians and veteran academics against some of the world’s largest technology companies appears close to final passage. But Governor Gavin Newsom, who has previously warned against overregulation of AI, has given no public indication as to whether he will sign the bill.

The legislation could serve as a model for how other states and the federal government deal with the tension between their desire to regulate AI and companies’ desire to be able to innovate — and make money — in peace, especially when those companies are among the most powerful in the country. Colorado passed a comprehensive law of its own in May and is likely to amend it further amid criticism from tech companies and business groups.

The sweeping legislation, sponsored by Senator Scott Wiener, whose district includes San Francisco, would require developers of the largest AI systems, those costing more than $100 million to train, to test whether their models could be used to attack critical infrastructure, carry out cyberattacks or acts of terrorism, or create weapons.

It would also establish CalCompute, a public “cloud” of computers to help host and develop AI tools, provide cloud computing services, promote equitable technology development and research “the safe and secure deployment of large-scale artificial intelligence models,” the bill says. Wiener’s bill would also provide new protections for whistleblowers at companies developing AI tools, including contractors.

The latter provision follows claims by Daniel Kokotajlo, a former OpenAI employee, that the company was too reckless in developing its generative AI chatbot ChatGPT and violated its security protocols, and that he was subjected to the company’s extremely strict offboarding protocols.

The bill has already passed the California Senate and is currently being considered in committees of the California Assembly.

“With Congress failing to move forward and the future of the Biden administration’s executive order uncertain, California has an indispensable role to play in ensuring that we develop this extremely powerful technology with fundamental safeguards in place so that society can safely experience the significant, massive benefits of AI,” Wiener said in a Senate speech in May.

The bill has received bipartisan support and comes as California seeks to take a leadership role among state governments on AI, including by experimenting with generative AI in government operations.

But even leading AI researchers who support the law say it could have gone further. In a letter to state leaders earlier this month, renowned AI researchers Geoffrey Hinton of the University of Toronto, Yoshua Bengio of the Université de Montréal and Stuart Russell of the University of California at Berkeley, as well as Lawrence Lessig of Harvard Law School, warned of the “serious risks that the next generation of AI poses if developed without sufficient care and oversight.”

“It has no licensing system, it doesn’t require companies to get approval from a government agency before training or deploying a model, it relies on companies’ self-assessment of risks, and it doesn’t even hold companies strictly liable if a disaster occurs,” they wrote. “Relative to the scale of the risks we face, this is a remarkably lax law.”

Others aren’t so sure. Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute, who is considered the “godmother of AI,” said in an op-ed that while the bill is “well-intentioned,” its restrictions on open-source development could harm innovation in California and elsewhere. Wiener responded in a written statement that the bill does not prohibit open source but allows the attorney general to initiate enforcement proceedings in limited cases.

The California Chamber of Commerce has spoken out against the bill, as have companies such as Facebook parent company Meta, venture capital firm Andreessen Horowitz and a coalition of think tanks, business and political leaders, including the conservative American Legislative Exchange Council.

The latter coalition said it is an “unreasonable and impractical standard” to require developers to guarantee, even before training, that their AI models cannot be used for malicious purposes. They also argue that complying with various safety standards “would be expensive and time-consuming for many AI companies” and could therefore force them to leave the state.

The bill also appears to be a flashpoint in California’s political future. Representative Nancy Pelosi, Speaker Emeritus of the House and an influential Democratic politician in California – and nationally – released a statement in mid-August saying Wiener’s bill was “well-intentioned but poorly thought out” and “would stifle innovation and harm the U.S. AI ecosystem.” Wiener is considered a contender for Pelosi’s House seat in the greater San Francisco area when she retires.

Pelosi echoed many of the concerns raised by Representative Zoe Lofgren, ranking member of the House Science Committee and also a Democrat from California.

In a written response to these criticisms, Wiener said he rejected “the false claim that in order to innovate we must leave security solely in the hands of technology companies and venture capitalists.”

“While the vast majority of innovators in the AI space are very ethical people who want to do the right thing for society, we have learned the hard way over the years that pure industry self-regulation does not have a positive impact on society,” Wiener continued.
