The Trump administration is considering oversight of AI models, the New York Times reported on Monday.
The Trump administration is discussing oversight of new AI models, per the New York Times.
Tech policy experts say such oversight could slow innovation.
They added that such regulation should come from legislation, not an executive order.
President Donald Trump’s administration is discussing government oversight of the rollout of new AI models, the New York Times reported on Monday.
The administration is considering an executive order to create a working group of tech executives and public officials to decide how oversight would be carried out, the Times reported, citing US officials and people familiar with the discussions.
The report said that White House officials told executives from Anthropic, Google, and OpenAI about some of these plans.
The oversight would mark a U-turn from Trump’s stance on AI. In his second term, the president has prioritized deregulation and innovation, arguing that tight controls could hamper American competitiveness against China.
Here is what tech and policy experts are saying about the potential regulation:
Daniel Castro, president at Information Technology and Innovation Foundation
Daniel Castro, the president of science and tech think tank ITIF, listed several reasons the potential executive order is a “terrible idea” on X.
First, he said it is a “full embrace” of the precautionary principle — the idea that governments need to protect the public from risks even without full scientific certainty.
“It would mean firms need government permission to innovate. That flips the default from building freely to asking first,” Castro wrote on X.
He cited other risks, including tech companies having to bend to each new administration, other countries pulling ahead, and slower innovation.
“Innovation would move at the speed of Washington, not Silicon Valley,” he wrote. “Every product launch, feature update, or model release would slow down. Government is not known for speed.”
Janet Vestal Kelly, CEO of Alliance for a Better Future
Janet Vestal Kelly, who heads the Alliance for a Better Future, a conservative-leaning advocacy group, called the potential vetting of AI models “welcome news.”
“AI is the most powerful technology the world has ever seen, and left to their own devices, Big Tech companies will run roughshod over kids, workers, and American values,” she wrote in a statement shared on X on Monday.
“With the right approach, the US can have AI that protects kids, jobs and our nation, and also win the AI race against China,” Kelly added.
Adam Thierer, analyst at R Street Institute
Adam Thierer, an innovation policy analyst at free-market think tank R Street Institute, said that sweeping AI regulation should not be Trump’s decision alone.
“Any sort of preemptive, pre-release ‘vetting’ of AI models by the White House could be tantamount to a de facto licensing regime, which should certainly not be done via executive orders,” he wrote on X.
Thierer, who has testified before Congress about frontier AI models, noted that one of Trump’s first priorities in his second term was rescinding a Biden-era executive order on AI.
“Executive orders have their place, but Congress needs to help establish a more stable framework that achieves these goals,” he wrote.
Conor Grennan, CEO of AI Mindset
Conor Grennan, an AI consultant and former chief AI architect at NYU’s Stern School of Business, raised concerns about this “vetting” process.
“Who gets the blame if something goes wrong, AI or the vetters? Or what if an AI leader criticizes Trump, is that model suddenly ‘too dangerous’? And how does this vetting happen, exactly?” he wrote in a post on LinkedIn.
He said that while guardrails are necessary, the government making these decisions “is going to get messy fast.”
Eli Dourado, head of strategic investments at Astera Institute
Eli Dourado, who leads strategic investments at Astera, a nonprofit AI research institute, said that the proposal could mean many different things.
“If it’s a mandatory review of AI models before they can be released, that’s in direct conflict with the courts’ First Amendment prior-restraint doctrine,” he wrote, referring to the constitutional principle that the government cannot suppress speech before it occurs.
He added: “Seems unlikely to fly.”
Taylor Barkley, director of federal government affairs at Abundance Institute
Taylor Barkley, a director at tech policy organization Abundance Institute, said the potential regulation would amount to a pre-approval process similar to the UK’s.
He called it a “giant step backwards for innovation and an undoing of President Trump’s excellent policy on AI so far” because it would slow implementation and raise barriers to entry.
He said it also gives regulators more power than innovators and undermines the administration’s goal of cutting red tape.
“For all these reasons and more, Congress needs to clarify proper regulatory measures by passing a national AI framework,” Barkley wrote on X.
Chris McGuire, senior fellow at Council on Foreign Relations
Chris McGuire, a senior fellow for China and emerging tech at the Council on Foreign Relations, said the move would be a “sorely needed regulatory pivot.”
In a post on X, he wrote that if the US government wants to vet models pre-release, it also needs a plan to preserve security when they’re live.
“This means putting in place certain cyber and physical security requirements that apply to frontier AI labs and any cloud providers that host advanced AI models,” McGuire wrote. “It also means export controlling advanced AI model weights, to prevent them from being transferred to untrusted entities.”
He added that the Biden administration had pursued a similar AI policy.
“The rule was complex and by no means perfect, but it was an honest attempt to get at a real problem the Trump administration now needs its own answer to,” he wrote.
Thomas Woodside, senior policy advisor at Secure AI Project
Thomas Woodside, a cofounder and advisor at AI policy group Secure AI Project, said that a one-time approval process for AI models may not be the right approach. Instead, he argued oversight should be continuous, since systems are frequently updated and can pose risks even during development and internal use.
Like other tech analysts, Woodside said any such framework should be established through legislation rather than executive action.