California waters down AI safety bill to appease industry opposition

California lawmakers have amended a bill that would hold artificial intelligence (AI) companies responsible for the harm their products cause. The original bill had faced significant opposition from the industry, including from AI company Anthropic.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047) is designed to protect whistleblowers and give the state of California the authority to intervene if it has reason to believe that an AI-related disaster is imminent.

However, California State Senator Scott Wiener (D-San Francisco), who introduced the bill, acknowledged that changes had been made, citing input from San Francisco-based AI safety and research firm Anthropic.

“While the additions do not reflect 100 percent of the changes requested by Anthropic – a global leader in innovation and safety – we have accepted a number of very reasonable proposed changes and I believe we have addressed the core concerns of Anthropic and many others in the industry,” Wiener said in an Aug. 15 statement.

Originally, the bill would have allowed the state to sue companies for negligence and inadequate safety measures, even if the violations did not result in a “catastrophic event,” and would have created a state regulatory agency responsible for implementing and enforcing safety measures.

After negative feedback from the tech industry, including a comprehensive list of suggestions from Anthropic, Wiener claimed his office had now found a happy medium:

“These additions build on the significant changes to SB-1047 that I previously made to address the unique needs of the open source community, which is a critical source of innovation.”

Two of the provisions Anthropic particularly criticized were that AI companies could be sued before any harm had occurred, and that the bill would create a new “Frontier Model Division” to oversee cutting-edge AI models.

But it wasn’t just the industry that raised concerns. Congresswoman Zoe Lofgren (D-CA) wrote to Wiener on Aug. 7, warning: “There is a real risk that companies will decide to locate in other jurisdictions or simply not release their models in California.”

That would be a major blow to a state that is currently home to 35 of the world’s top 50 AI companies, according to an executive order issued by Gov. Gavin Newsom (D-CA) last September calling for an investigation into the development, use and risks of AI technology.

In the end, the pressure won out, and amendments to SB-1047 scaled back its enforcement provisions. Penalties were limited to injunctive relief, such as the ability to obtain a court order requiring a model to be taken down. Provisions making it criminal perjury to lie about models were dropped, on the grounds that existing law against lying to the government is sufficient. The language that would have created a Frontier Model Division is gone, although some of its proposed responsibilities would be transferred to other government agencies. And the legal standard developers must meet when attesting to compliance was lowered from “reasonable assurance” to “reasonable care.”

Despite these compromises, SB-1047 would still allow the state to hold any AI developer liable for harm caused by their products. Specifically, that means “mass casualties or damages totaling at least five hundred million dollars ($500,000,000).”

It is extremely difficult to predict all the types of damage an AI model can cause. But because the threshold is set so high, it is hard to argue that any developer whose AI causes such severe damage should not be at least among those responsible.

The bill will now face a final vote in the California State Assembly, which is expected to take place before August 31. Unless the governor vetoes it, technology companies in California will then face a new regulatory environment.
