
“Preemption is a question for Congress, which they’ve considered and rejected, and should continue to reject,” said Travis Hall, state director for the Center for Democracy and Technology. His warning comes amid a constitutional and political storm brewing over a Trump administration draft executive order that would direct the Justice Department to sue states that pass laws regulating artificial intelligence.

The order marks a steep escalation in federal efforts to override state-level AI governance. It would charge the DOJ with challenging such laws on the grounds that they interfere with interstate commerce, establish a federal task force to review state AI statutes for potential free speech conflicts, and empower the Commerce Department to withhold broadband funding from states deemed non-compliant. It follows an unsuccessful bid in Congress to impose a blanket moratorium on state AI regulation, a proposal that collapsed under bipartisan concerns about public safety, energy costs, and the constitutional limits of federal power.

1. Constitutional Limits to Federal Preemption
The Tenth Amendment reserves to the states powers not delegated to the federal government. While Congress may expressly preempt state regulation of interstate commerce, in the absence of such legislation executive orders face high constitutional barriers. Under dormant Commerce Clause jurisprudence, particularly the Pike balancing test, courts weigh the burden a state law places on interstate commerce against its putative local benefits. State AI laws, such as those adopted in California, New York, and Colorado, would require frontier model training and deployment to be managed in ways that cannot be segmented by geography; compliance across many regimes would be technically and economically complex. Courts, however, would demand proof that the benefits of such laws (such as preventing catastrophic AI risks) are real and at least proportionate to their nationwide costs.

2. Technical and Economic Stakes of State AI Laws
Large-scale model training is a single, centralized process involving enormous datasets, distributed compute resources, and months-long optimization cycles. Whereas farming or manufacturing processes could conceivably be adapted to local standards, retraining AI models at scale to accommodate divergent state rules is infeasible. In practice, one state’s regulation can become a de facto national standard, shaping model architecture, safety protocols, and downstream applications. Industry leaders say this “patchwork” is a barrier to innovation, though state advocates argue it fills a regulatory gap in high-risk domains like elections, health care, and employment.

3. Energy and Infrastructure Pressures from AI Expansion
The AI boom has pushed data center growth to unprecedented levels: U.S. facilities were forecast to consume 183 terawatt-hours of electricity in 2024, more than 4 percent of national demand. The largest hyperscale AI data centers have annual power demand equivalent to that of 100,000 households, and next-generation sites are expected to use up to 20 times more. Large, concentrated clusters of data centers, such as those in Virginia, have strained local electrical grids, driving up capacity market prices and residential bills. To date, state-level regulations often target facilities’ secondary impacts, from renewable energy sourcing to water-usage reporting, rules that could be preempted under a federal framework.
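The scale of these figures is easier to grasp with a quick back-of-the-envelope check. The sketch below converts the article’s household-equivalent comparison into terawatt-hours; the average U.S. household consumption of roughly 10,500 kWh/year is an assumed figure, not from the article.

```python
# Rough sanity check of the data center energy figures cited above.
# Assumption (not from the article): average U.S. household electricity
# use of ~10,500 kWh/year, an approximate order-of-magnitude figure.

HOUSEHOLD_KWH_PER_YEAR = 10_500   # assumed average household consumption
KWH_PER_TWH = 1e9                 # 1 TWh = 1 billion kWh

# A hyperscale site with demand equivalent to 100,000 households:
site_twh = 100_000 * HOUSEHOLD_KWH_PER_YEAR / KWH_PER_TWH
print(f"Hyperscale site: ~{site_twh:.2f} TWh/year")    # ~1.05 TWh

# Next-generation sites at up to 20x that demand:
next_gen_twh = 20 * site_twh
print(f"Next-gen site:  ~{next_gen_twh:.1f} TWh/year") # ~21 TWh

# Against the 183 TWh forecast for all U.S. data centers in 2024:
print(f"One next-gen site = {next_gen_twh / 183:.1%} of the 2024 total")
```

Under these assumptions, a single hyperscale site draws on the order of 1 TWh per year, and one next-generation site at the upper bound would account for a double-digit share of the entire 2024 forecast, which helps explain the grid strain described above.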

4. National Security Dimensions
The White House has framed a single federal standard for AI as integral to outpacing China in the race for global technology leadership. Federal officials argue that fragmented state rules could delay domestic AI deployment in defense, logistics, and intelligence, reducing strategic readiness; they point to China’s state-backed investments in AI, quantum computing, and biotechnology, estimated at $900 billion over the past decade, and to performance gaps on key AI benchmarks that have narrowed by 80 percent. Yet some security analysts caution that sidelining state innovation would weaken resilience, since emerging threats such as AI-enabled cyberattacks and synthetic media manipulation are often best addressed through local governance.

5. Comparative Governance Models
Other jurisdictions take different approaches. The European Union’s AI Act sets a harmonized regulatory floor while leaving room for member states to adopt more stringent measures in particular areas. China’s model concentrates authority in the central government, embedding AI review mechanisms in national industrial policy and national security planning. U.S. states have acted as “laboratories of democracy,” experimenting with sectoral protections, from Colorado’s algorithmic discrimination rules to Minnesota’s election deepfake bans, that could influence eventual federal frameworks.

6. Industry Influence and Legislative Strategy
Large AI companies and their trade associations have lobbied for federal preemption, citing compliance burdens and threats to competitiveness. The moratorium in Congress’s failed reconciliation bill reflected those demands without accompanying federal protections, drawing sharp criticism that it would entrench corporate power without safeguarding the public. Current White House efforts to insert preemption language into the National Defense Authorization Act represent a strategic pivot: using must-pass legislation to avoid a standalone debate.

7. Balancing Promotion and Protection in AI Policy
The National Security Commission on Artificial Intelligence favors a government-directed technology strategy that pairs the promotion of innovation with the protection of critical sectors: winning the AI talent race, securing supply chains for semiconductors and data center components, and shaping the international AI order in ways consistent with democratic values.

An appropriately designed Technology Competitiveness Council might harmonize economic, security, and scientific considerations and even provide a venue for resolving tensions between federal and state authorities on AI governance. The administration’s draft order therefore sits at the juncture of constitutional law, technical feasibility, energy infrastructure, and geopolitical strategy. Whether it becomes a binding directive or remains a political bargaining chip, the implications for the distribution of AI regulatory authority, and for America’s competitive posture in the global AI race, are profound.

