Federal AI power grab could end state protections for kids and workers

Just as AI begins to upend American society, Congress is considering a move that would sideline the states, barring them from enforcing commonsense safeguards.
Tucked into the recently passed House reconciliation package is Section 43201, a provision that would pre-empt nearly all state and local laws governing "artificial intelligence models," "artificial intelligence systems," and "automated decision systems" for the next 10 years.
Last night, the Senate released its own version of the moratorium, which would withhold federal funding for broadband infrastructure from states that don't fall in line.
Supporters argue that a moratorium is needed to avoid a patchwork of state rules that could jeopardize U.S. AI competitiveness.
But this sweeping approach threatens to override legitimate state efforts to curb Big Tech’s worst abuses—with no federal safeguards to replace them. It also risks undermining the constitutional role of state legislatures to protect the interests and rights of American children and working families amid AI’s far-reaching social and economic disruptions.
In the absence of congressional action, states have been the first line of defense against Big Tech. Texas, Florida, Utah, and other states have led the way in protecting children online, safeguarding data privacy, and reining in platform censorship.
Section 43201 puts many of those laws—even those not directly related to AI—at risk.
The provision defines "automated decision systems" broadly, potentially capturing core functions of social media platforms, such as TikTok’s For You feed or Instagram’s recommendation engine.
At least 12 states have enacted laws requiring parental consent or age verification for minors accessing these platforms. But because those platforms are built around algorithmic feeds, laws targeting them could easily be construed as regulating "automated decision systems" and thus be swept up in the moratorium.
Further, Section 43201 might also block provisions of existing state privacy laws that restrict the use of algorithms—including AI—to predict consumer behavior, preferences, or characteristics.
Even setting aside concerns with the moratorium’s expansive scope, it suffers from a more fundamental flaw.
The moratorium threatens to short-circuit American federalism by undermining state laws that ensure AI lives up to the promise outlined by Vice President JD Vance.
Speaking at the Paris AI Summit, he warned against viewing "AI as a purely disruptive technology that will inevitably automate away our labor force."
Instead, Vance called for "policies that ensure that AI… make[s] our workers more productive" and rewards them with "higher wages, better benefits, and safer and more prosperous communities."
That vision is nearly impossible without state-level action. Legislators, governors, and attorneys general from Nashville to Salt Lake City are already advancing creative, democratically accountable solutions.
Tennessee’s novel ELVIS Act protects music artists from nonconsensual AI-generated voice and likeness cloning. Utah’s AI consumer protection law requires that generative AI model deployers notify consumers when they are interacting with an AI.
Other states, including Arkansas and Montana, are building legal frameworks for digital property rights with respect to AI models, algorithms, data, and model outputs.
All of this is now at risk.
As laboratories of democracy, states are essential to navigating the inevitable and innumerable trade-offs entailed by the diffusion of emerging technologies. Federalism enables continuous experimentation and competition between states—exposing the best and worst approaches to regulation in highly dynamic environments.
That’s critical when confronting AI’s vast and constantly evolving sphere of impact on children and employment—to say nothing of the technology’s wider socio-economic effects.
Sixty leading advocacy and research organizations have warned that AI chatbots pose a significant threat to kids. They cite harrowing stories of teens driven to suicide, addiction, sexual perversion, and self-harm at the hands of Big AI.
Even industry leaders are sounding alarms: Anthropic CEO Dario Amodei estimates that AI could push unemployment as high as 20% within the next five years.
Innovation inherently brings disruption—but disruption without guardrails can harm the very communities AI is purportedly meant to uplift. That’s why 40 state attorneys general, Democrats and Republicans alike, signed a letter opposing Section 43201, warning that it would override "carefully tailored laws targeting specific harms related to the use of AI."
To be sure, not all laws are drafted equally well. States like California and Colorado are imposing European-style AI regulations that are particularly detrimental to "Little Tech" and open-source model developers.
But Congress shouldn't throw out federalism with the "doomer" bathwater. Rather than a blanket pre-emption, it should consider narrow, targeted limits aimed squarely at high-risk bills, those modeled on California and Colorado's approach, that would foist doomer AI standards on the rest of the nation.
Absent a comprehensive federal AI framework, states must retain freedom to act—specifically, to ensure that AI bolsters American innovation and competitiveness in pursuit of a thriving middle class.
America’s AI future has great potential. But our laboratories of democracy are key to securing it.