Outlook ’25: Will There Be Movement on AI Legislation in the United States?

Artificial intelligence proved, for another year, to be one of the most widely discussed technologies influencing a variety of industries. It promises to wield the same influence in 2025.

But as development accelerates and companies adopt systems to please customers, streamline internal operations and sharpen their deployments, questions over how the technology will be regulated have risen to the surface, particularly in the United States.

As President-elect Donald Trump prepares to take office this month, experts said the nation is unlikely to see much forward movement at the federal level on regulation. Instead, states may take center stage—and some of the administration’s most outspoken characters may have a heavy hand in influencing agencies and lawmakers.

The year ahead is set to be full of uncertainty, increased public scrutiny of the technology space and an interesting dichotomy between industry and government.

Deregulation station 

Experts project that Trump will take a more deregulatory approach to the technology than his predecessor, President Joe Biden. During his presidency, Biden put forth the Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, more widely known as the AI EO.

But the president-elect has already promised to roll back the provisions from the AI EO, which, by and large, asked federal agencies to create guidelines for the safe and successful use of the technology. Some agencies, like the Department of Labor (DOL), have already set those guidelines into circulation. 

Reiko Feaver, partner and technology practice chair at CM Law, said it’s likely that those guidelines will go by the wayside under the incoming Trump-Vance administration.

“To the extent there might have been some openness to federal legislation for AI, I think the appetite to regulate AI in the new administration is going to be a lot lower than it is now,” Feaver said. “I can see a lot of the Biden White House guidelines and directives being repealed or not followed.” 

Part of the issue, to Trump and his MAGA followers, is a belief that legislation stifles innovation and industry. Michelle De Mooy, director of the Tech and Public Policy Program at Georgetown University’s McCourt School of Public Policy, countered that, when developed responsibly, legislation often helps industry titans understand the limits of their influence, rather than stifling some far-off, presently intangible possibility for the technology.

“But try telling that to the new administration,” she quipped. 

De Mooy and other experts believe that some subsectors of AI may be regulated—particularly those around children’s safety and unauthorized deepfakes. That, De Mooy posits, is because these kinds of issues are ones legislators hear about from their constituents; industrial-level AI isn’t necessarily plaguing the mind of the average American, but personal safety may be. 

But the convergence of public interest and industry interest may be coming sooner rather than later, as an increasing number of data centers pop up around the United States.

Industry interests

Microsoft announced in 2024 that it had entered into an agreement with Constellation Energy to restart one unit of Pennsylvania’s Three Mile Island nuclear plant. Microsoft will use the energy generated there to power data centers as it continues to develop AI systems. 

De Mooy said she thinks the public will start to take notice as other Big Tech behemoths join in on snatching up leases, land and contracts to build gargantuan facilities in small communities. 

“One issue that I think is going to become more and more of an issue on the public’s behalf, is data centers, is the energy question around AI. I’m starting to see them, you’re starting to see them—[Microsoft] is buying Three Mile Island, for God’s sake,” De Mooy told Sourcing Journal. “This is becoming more and more of a growing issue in communities, and it’s something that I think the tech companies cannot hide for very much longer.” 

That tension could test legislators and agencies, particularly among those reliant on Big Tech donors. Already, Amazon and Meta have both pledged $1 million donations to Trump’s inauguration, as has OpenAI CEO Sam Altman. 

Industry also has a hand in the direction of agencies, experts said. 

Last year, Trump announced that he had created a new position in his administration: White House AI and Crypto Czar. He has since appointed venture capitalist David Sacks to the spot, an announcement that caused a surge of excitement in the cryptocurrency sphere. Trump also noted that X and Tesla owner Elon Musk would serve alongside Vivek Ramaswamy, heading up the newly created Department of Government Efficiency (DOGE). 

Because none of those men will serve in official agency capacities, they will not need to be confirmed by a Senate vote. While that means that their direct power for creating rules or legislation is limited, their power and influence shouldn’t be underestimated, De Mooy said. 

“They have positioned themselves to have the ear of the president, and to have the ear of very wealthy and influential businesses in this country,” she said. “What that means is, they are at the helm of global industry. What they say and do is going to matter a great deal… These men can move markets, and clearly will do that to their own advantage.” 

Agencies’ agency

In the absence of comprehensive federal legislation on AI, some agencies have started to create guidelines, experimental programs and pilots for the technology. 

Others, like the Federal Trade Commission (FTC), have used existing legislation and rules to enforce the responsible use of AI systems. For instance, the FTC brought a case against Rite Aid, alleging it had unfairly used facial recognition technology (FRT) powered by AI to prevent retail theft. Ultimately, the FTC settled the case with Rite Aid, restricting it from using any facial recognition technology for five years, subjecting it to model disgorgement—the destruction of the algorithms and models used to operate the system—putting limits on its future use of biometric data and more. 

The agency warned that Rite Aid would serve as a model for future cases—and noted that, while AI is still a developing technology, it’s subject to all applicable laws. 

But the FTC has also, under Lina Khan’s tenure, pursued a slew of antitrust actions that made her, at times, unpopular among industry players. De Mooy and Helen Christakos, partner at A&O Shearman, said Trump’s potential deregulatory approach could also extend to antitrust; they expect the president-elect to take a more hands-off approach and to instruct federal regulators to act in a similar way.

“It is generally thought that Trump will take a less restrictive approach on antitrust, and we expect that we’re going to see an increase in M&A in the AI space,” Christakos told Sourcing Journal. 

Ultimately, that could see Big Tech players snapping up startups and young companies touting AI at a more rapid rate than today’s FTC might find permissible. 

Existing legislation and the states 

Christakos and De Mooy said existing federal legislation may provide companies some framework around what’s permissible when it comes to the development of AI. 

“Privacy laws are the laws that have the greatest impact on the development of AI, in addition to the standalone AI laws that are passed,” Christakos said.

Christakos and De Mooy said intellectual property and copyright laws will also bear heavily on the continued development and use of AI systems. At present, a number of cases alleging copyright infringement against open-source, general-purpose AI systems are already playing out. De Mooy said she expects the results of those cases to determine the trajectory of AI’s continued development. 

“There’s a couple key pivot points in the development of AI that I think will make a big difference related to policy, and one of them is the copyright issue. What that ruling says…will be the difference between AI going one way and AI developing another way in this country,” she said.

Amid so much uncertainty and deregulation at the federal level, Feaver and De Mooy said they expect most of the meaningful legislation on AI in the U.S. to be handed down at the state level; that trend has already started. 

“The states were really active in 2024, and I think that in the absence of federal action, they will continue to be,” De Mooy said. 

California, Colorado and Utah have already passed and signed into law AI-specific legislation. Meanwhile, legislators in other states, like Massachusetts, Illinois and Ohio, are actively working on bills on AI. 

And such efforts would be far from futile, even if federal legislation does come along in the next several years. A federal AI bill or package might change how states have chosen to govern the technology thus far, but Christakos noted that states’ legislation could prove more stringent than federal law—and where federal regulations don’t supersede them, those state laws could be left intact. 

Feaver said that, in analyzing bills proposed by state legislators, she has seen several key themes emerge. She expects the same to remain true throughout 2025. 

“They all have the same basic themes, which is fair use, transparency, compliance program monitoring, security, knowledge of the data that you’re using,” Feaver said. “Given the predominance of those specific concepts, the other states will probably be addressing the same things.”