California Governor Gavin Newsom has signed SB 53, the state's new AI safety and transparency bill, which one advocate argues is proof that state-level regulation can protect safety without impeding the progress of artificial intelligence. Adam Billen, vice president of public policy at the youth-led advocacy group Encode AI, put it this way: "The reality is that policymakers themselves know that we have to do something, and they know from working on a million other issues that there is a way to pass legislation that genuinely does protect innovation—which I do care about—while making sure that these products are safe."
Enforcing Existing Safety Protocols
At its core, SB 53 is a first-in-the-nation law that requires large AI laboratories to be transparent about their safety and security protocols. Specifically, it focuses on how these companies intend to prevent their models from contributing to catastrophic risks, such as being used to facilitate cyberattacks on critical infrastructure or to create bioweapons. The law mandates that companies adhere to these protocols, with enforcement by California's Office of Emergency Services. Billen noted that many companies already do what the bill requires, such as conducting safety testing and releasing model cards, and argued that the law is necessary precisely because some firms are beginning to "skimp" on those standards.
Billen highlighted that some AI firms maintain policies that allow them to relax their safety standards under competitive pressure. OpenAI, for example, has publicly stated that it may "adjust" its safety requirements if a rival releases a high-risk system without similar safeguards. Billen argued that the new law works by enforcing companies' existing safety promises, preventing them from cutting corners in response to market competition or financial incentives.
Industry Pushback and Federal Preemption
Despite muted public opposition to SB 53 compared to its vetoed predecessor, SB 1047, the general rhetoric from Silicon Valley and many AI labs is that any AI regulation is detrimental to progress and will hurt the U.S. in the race to beat China. This opposition has taken the form of powerful individuals and organizations collectively funding super PACs to support pro-AI politicians and lobbying earlier this year for an AI moratorium that would have banned states from regulating AI for a decade. Encode AI led a coalition of more than 200 organizations that successfully blocked that proposal, but Billen warned that the fight continues.
Senator Ted Cruz, who championed the moratorium, is now pursuing a new strategy through the SANDBOX Act. Introduced in September, the bill would let AI companies apply for waivers to temporarily bypass certain federal regulations for up to ten years. Billen also anticipates a forthcoming bill that would establish a federal AI standard, pitched as a compromise but in practice overriding state laws and achieving federal preemption through the back door. He warned that narrowly scoped federal legislation could "delete federalism for the most important technology of our time."
The China Race and Chip Exports
While Billen agrees that the AI competition with China is real, he argues that killing state-level bills, which primarily address issues like deepfakes, transparency, and consumer safety, is not the solution. "Are bills like SB 53 the thing that will stop us from beating China? No," he said, calling the claim "intellectually dishonest." If policymakers are serious about maintaining American progress, he contends, they should focus on tools such as export-control legislation in Congress and on ensuring that American companies have the chips they need.
Legislative measures like the Chip Security Act aim to curb the diversion of advanced AI chips to China through export controls, while the existing CHIPS and Science Act seeks to boost domestic chip production. Major tech companies such as OpenAI and Nvidia, however, have shown reluctance toward these efforts. Nvidia has a strong financial incentive to keep selling chips to China, and Billen speculated that OpenAI may be hesitant to advocate for stricter export controls in order to stay in the good graces of Nvidia, a critical supplier. Adding to the mixed signals, the Trump administration expanded the export ban on advanced AI chips to China in April, then reversed course, allowing Nvidia and AMD to sell some chips to China in exchange for a 15% revenue share.

Billen concluded that while lawmakers on the Hill are moving toward bills like the Chip Security Act, the narrative pushed to kill "quite light touch" state bills will persist. He celebrated SB 53 as an example of democracy and federalism working, even if the process is "very ugly and messy."
Author’s Opinion
The argument that state-level AI safety laws like SB 53 will cede the global lead to China is fundamentally misguided; these laws are better understood as an indispensable form of quality control and risk mitigation that can differentiate American AI. By codifying ethical and security standards, California is building a reputation for "trustworthy AI," which will prove a more valuable long-term export than unregulated, high-risk products. The industry's push for federal preemption and its opposition to even light-touch state laws suggest a preference for unchecked speed over public safety, confirming that legislators must intervene so that the immediate pursuit of profit does not compromise the long-term integrity and safety of the technology.
Featured image credit: Wikimedia Commons