
What to know about California’s executive order on AI

Gavin Newsom mandates strict AI safety, privacy, and transparency rules for state contractors, asserting California’s regulatory independence amid tensions with Donald Trump’s federal stance on artificial intelligence oversight

Gavin Newsom of California during The New York Times Dealbook Summit at Jazz at Lincoln Center in Manhattan, Dec. 3, 2025. Newsom plans to issue an order requiring safety and privacy guardrails for artificial intelligence companies contracting with the state. (Image: The New York Times)

California Gov. Gavin Newsom on Monday issued a first-of-its-kind executive order requiring safety and privacy guardrails from artificial intelligence companies that contract with the state.

California has been a leader in tech lawmaking and was the first state to pass a law mandating safety and transparency from the biggest AI companies. Newsom, a Democrat, signed the order partly as a message to President Donald Trump, who has been trying to bat down state attempts to regulate AI.

Here’s what’s in his executive order.

Contractor Vetting

Companies vying for government contracts will first have to explain their safety and privacy policies around AI. The state will look carefully at policies on how the companies prevent exploitation of individuals, including the spread of child sexual abuse materials.

The government will also consider whether AI models, the technology that powers chatbots and other tools, are used to monitor individuals or to block certain speech. Companies must also explain how they avoid bias in their systems.

Independence From Federal Contracting Standards

If the federal government designates a company a supply chain risk, which the Pentagon has recently done with AI startup Anthropic, California will conduct its own assessment. If the company isn’t determined to be a risk, the state may allow it to remain a contractor.

This is significant because the Pentagon’s legal tussle with Anthropic, which had provided the Defense Department with AI technologies for use on classified systems, has exposed a rift in the administration’s pursuit of AI for war use. The Pentagon terminated its contract with Anthropic after the company said the government could not use its models for mass domestic surveillance and the deployment of autonomous weaponry.

Watermarking Requirement

The governor also called on state officials to begin watermarking AI-generated or manipulated videos that they create.


The technique is aimed at guarding against the spread of misinformation. It would also allow consumers to tell the difference between human-generated and AI-generated images produced by the state.

 
