California lawmaker behind SB 1047 reignites push for mandated AI safety reports
California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, which would require the world's largest AI companies to publish safety and security protocols and to issue reports when safety incidents occur.
If signed into law, the bill would make California the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.
Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California's governor then called on a group of AI leaders, including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state's AI safety efforts.
California's AI policy group recently published its final recommendations, citing a need for "requirements on industry to publish information about their systems" in order to establish a "robust and transparent evidence environment." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.
"The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be," Senator Wiener said in the release.
SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve: creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California's AI industry.
"These are concerns that my organization and others have been talking about for a while," said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. "Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take."
The bill also creates whistleblower protections for employees of AI labs who believe their company's technology poses a "critical risk" to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.
Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
Unlike SB 1047, Senator Wiener's new bill does not make AI model developers liable for the harms of their AI models. SB 53 was also designed not to burden startups and researchers that fine-tune AI models from leading AI developers, or that use open-source models.
With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom's desk.
On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.
The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the "patchwork" of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
"Ensuring AI is developed safely should not be controversial; it should be foundational," said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership."
So far, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and has even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Leading AI model developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also declined to publish a safety report for its GPT-4.1 model; a third-party study later suggested that the model may be less aligned than previous AI models.
SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.