
ZDNET’s key takeaways
- New White House policy guidance wants to override most state AI laws.
- Proposed federal legislation is largely light-touch, worrying some states.
- Researchers are still dissatisfied with federal approaches to AI safety.
On Friday, the Trump administration released new policy guidance for Congress on how AI should be federally regulated, reviving its push to rein in state AI laws.
After a failed attempt to limit state AI legislation this past summer, the administration resumed its efforts with a December executive order and an ensuing AI Litigation Task Force focused on challenging state laws it believes would hamper competitive development.
Also: 5 ways rules and regulations can help guide your AI innovation
Here’s what the new framework wants Congress to do, an overview of the most significant state AI laws already in effect, and why experts think they matter.
What the guidance suggests
In keeping with this administration’s approach thus far, the new guidance — which we’ve been waiting for since the AI Action Plan this summer — aims to keep federal AI regulation minimal while still overriding several state AI laws.
In the absence of federal regulation addressing many states’ concerns about AI, local bills have cropped up across the country. The Trump administration and AI companies argue that state laws create an inconvenient regulatory patchwork that stymies innovation. They apply the same argument to AI safety regulation, especially at the federal level, saying it slows development, harms jobs in the tech sector, and cedes ground in the AI race to countries like China.
Experts I’ve spoken with disagree that safety is antithetical to progress.
Also: Trump’s AI plan says a lot about open source – but here’s what it leaves out
The framework says that state laws must not “act contrary to the United States’ national strategy to achieve global AI dominance.” That means not allowing states to “regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.”
It also suggests that states shouldn’t be allowed to “penalize AI developers for a third party’s unlawful conduct involving their models,” which targets the still-murky area of liability around model misuse.
There is some potential movement at the federal level, though: the framework calls on Congress to codify a pledge by AI companies to cover the rising energy costs of data centers.
Allowing some state protections
Still, certain parts of the framework would let state laws stand, including those governing the use of AI tools to upskill workforces and AI in schools.
The framework would not preempt state zoning laws governing where data centers and other AI infrastructure can be built, and would allow states to use AI at their discretion for “services they provide like law enforcement and public education.” In practice, that could mean vastly different integrations of AI in policing and schools from state to state. Given early concerns about AI in policing and its potential for civil rights violations, that’s notable.
Also: China’s open AI models are in a dead heat with the West – here’s what happens next
The framework would allow states to keep laws that address fraud and protect consumers. It would also let states enforce their own child protection laws when it comes to AI, including legislation around AI-generated child sexual abuse material (CSAM) and privacy.
Limiting state oversight
The attempt by Congress this past summer to ban states from passing AI regulations for 10 years would have withheld broadband and AI infrastructure funds from states that did not comply. The moratorium was defeated in a landslide, temporarily preserving states’ rights to legislate AI in their territory. That’s partly why it’s unclear whether this new call for restrictions on state laws will have bipartisan support.
“Federal HIPAA requirements allow for states to pass more stringent state healthcare privacy laws,” data protection lawyer Lily Li, who founded Metaverse Law, told ZDNET. “Here, there is no federal AI law that would preempt many of the state laws, and Congress has rebuffed prior efforts to add federal AI preemption to past legislation.”
Also: AI will accelerate tech job growth – former Tesla president explains where and why
On December 11, President Trump signed an executive order stating a renewed intention to centralize AI laws at the federal level to ensure US companies are “free to innovate without cumbersome regulation.” The order argues that “excessive State regulation thwarts this imperative” by creating a patchwork of differing laws, some of which it alleges “are increasingly responsible for requiring entities to embed ideological bias within models.”
On January 9, the Department of Justice announced an AI Litigation Task Force “whose sole responsibility shall be to challenge State AI laws” that are inconsistent with a “minimally burdensome national policy framework for AI.”
However, Li does not expect the AI Litigation Task Force to substantially impact state regulation, at least in California (more on that state’s law below).
“The AI litigation task force will focus on laws that are unconstitutional under the dormant commerce clause and First Amendment, preempted by federal law, or otherwise unlawful,” she told ZDNET. “The 10th Amendment, however, explicitly reserves rights to the states if there’s no federal law, or if there’s no preemption of state laws by a federal law.”
SB 53 and the RAISE Act
Earlier this year, first-of-their-kind AI safety laws in California and New York — both states well-positioned to influence tech companies — went into effect. Here’s what two of the country’s most ambitious state AI laws currently cover.
California SB 53, the new AI safety law that went into effect on January 1, requires model developers to publicize how they’ll mitigate the biggest risks posed by AI, and to report safety incidents involving their models (or face fines of up to $1 million if they don’t). Though not as thorough as previously attempted legislation in the state, the new law is practically the only one of its kind in a largely unregulated AI landscape. Most recently, it was joined by the RAISE Act, a similar law passed in New York at the end of December.
The RAISE Act likewise lays out reporting requirements for safety incidents involving models of all sizes, but sets a higher maximum fine of $3 million after a company’s first violation. And while SB 53 gives companies 15 days to notify the state of a safety incident, RAISE requires notification within 72 hours.
Also: Nvidia wants to own your AI data center from end to end
SB 1047, an earlier version of SB 53, would have required AI labs to safety-test models costing over $100 million and to develop a shutdown mechanism, or kill switch, to control them should they misbehave. That bill failed in the face of arguments that it would stifle job creation and innovation, a common response to regulation efforts, especially from the current administration.
SB 53 uses a lighter hand. Like the RAISE Act, it targets companies with gross annual revenue of more than $500 million, a threshold that exempts many smaller AI startups from the law’s reporting and documentation requirements.
“It’s interesting that there is this revenue threshold, especially since there has been the introduction of a lot of leaner AI models that can still engage in a lot of processing, but can be deployed by smaller companies,” Li told ZDNET. She noted that Gov. Gavin Newsom vetoed SB 1047, in part, because it would impose growth-inhibiting costs on smaller companies, a concern also echoed by lobbying groups.
Also: Worried AI will take your remote job? You’re safe for now, this study shows
“I do think it’s more politically motivated than necessarily driven by differences in the potential harm or impact of AI based on the size of the company or the size of the model,” she said of the threshold.
Compared to SB 1047, SB 53 focuses more on transparency, documentation, and reporting than on actual harm. The law requires guardrails around catastrophic risks: cyberattacks; chemical, biological, radiological, and nuclear weapon attacks; bodily harm or assault; and situations where developers lose control of an AI system.
Additional protections – and limits
California’s SB 53 also requires AI companies to protect whistleblowers. This stood out to Li, who noted that while other parts of the law mirror the EU AI Act, meaning many companies are already prepared for them, whistleblower protections are unique in tech.
Also: Why you’ll pay more for AI in 2026, and 3 money-saving tips to try
“There really haven’t been a lot of cases in the AI space, obviously, because it’s new,” Li said. “I think that is a bigger concern for a lot of tech companies, because there is so much turnover in the tech space, and you don’t know what the market’s going to look like. This is something else that companies are worried about as part of the layoff process.”
She added that SB 53’s reporting requirements make companies more concerned about creating material that could be used in class-action lawsuits.
Gideon Futerman, special projects associate at the Center for AI Safety, doesn’t think SB 53 will meaningfully impact safety research.
“This won’t change the day-to-day much, largely because the EU AI Act already requires these disclosures,” he explained. “SB 53 doesn’t impose any new burden.”
Also: Cloud attacks are getting faster and deadlier – here’s your best defense plan
Neither law requires that AI labs have their models tested by third parties, though, at the time of writing, New York’s RAISE Act does mandate annual third-party audits. Still, Futerman considers SB 53 progress.
“It shows that AI safety regulation is possible and has political momentum. The amount of real safety work happening today is still far below what is needed,” he said. “Companies racing to build superintelligent AI while admitting these systems could pose extinction-level risks still do not really understand how their models work.”
Where this leaves AI safety
“SB 53’s level of regulation is nothing compared to the dangers, but it’s a worthy first step on transparency and the first enforcement around catastrophic risk in the US. This is where we should have been years ago,” Futerman said.
Regardless of state and federal regulations, Li said governance has already become a higher priority for AI companies, driven by their bottom lines. Enterprise customers are pushing liability onto developers, and investors are weighing privacy, cybersecurity, and governance in their funding decisions.
Also: OpenAI’s rumored ‘superapp’ could finally solve one of my biggest issues with ChatGPT
Still, she said that many companies are just flying under the radar of regulators while they can.
“Transparency alone doesn’t make systems safe, but it’s a crucial first step,” Futerman said. He hopes future legislation will fill remaining gaps in the national security strategy.
“That includes strengthening export controls and chip tracking, improving intelligence on frontier AI projects abroad, and coordinating with other nations on the military applications of AI to prevent unintended escalation,” he added.