Trump rescinding Biden AI EO will make industry more chaotic, experts say


In the new year, the incoming Trump administration is expected to make many changes to existing policies, and AI regulation will not be exempt. These will likely include rescinding the AI executive order signed by current President Joe Biden.

The Biden order established government oversight offices and encouraged model developers to implement safety standards. While the order’s rules focus on model developers, its repeal could present challenges for enterprises to overcome. Some companies, such as Trump ally Elon Musk’s xAI, could benefit from the order being lifted, while others are expected to face new problems. These may include dealing with a patchwork of state regulations, less open sharing of data sources, less government-funded research, and more emphasis on voluntary responsible AI programs.

Patchwork of local regulations

Prior to the EO’s signing, policymakers held several listening tours and hearings with industry leaders to determine how best to appropriately regulate the technology. Under the Democratic-controlled Senate, there was a strong possibility that AI regulations could move forward, but insiders believe the appetite for federal regulations around AI has cooled significantly.

Gaurab Bansal, managing director of Responsible Innovation Labs, said during the ScaleUp: AI conference in New York that the lack of federal oversight of AI could lead states to write their own policies.

“There’s a sense that both parties in Congress are not going to regulate AI, so it’s going to be states that can run the same playbook as California’s SB 1047,” Bansal said. “Companies need standards for consistency, but it will be bad when there is a patchwork of standards in different areas.”

The California State Legislature passed SB 1047, which would have mandated a “kill switch” for models among other controls, sending the bill to Gov. Gavin Newsom’s desk. Newsom’s veto of the bill was praised by industry figures such as Meta’s Yann LeCun. Bansal said states are more likely to pass similar bills.

Dean Ball, a researcher at George Mason University’s Mercatus Center, said companies may find it difficult to navigate different rules.

“These laws may well create complex compliance regimes and a patchwork of laws for both AI developers and companies hoping to use AI; how a Republican Congress will respond to this potential challenge is unclear,” Ball said.

Voluntary responsible AI

Industry-led responsible AI has always existed, but the burden on companies to be proactive about responsibility and fairness may increase as their customers demand a focus on safety. Model developers and enterprise users should spend time implementing responsible AI policies and building standards that comply with laws such as the European Union’s AI Act.

During the ScaleUp: AI conference, Sarah Bird, Microsoft’s chief product officer of responsible AI, said that many developers and their customers, including Microsoft, are preparing their systems for the EU’s AI Act.

But even while no sweeping law regulates AI, Bird said it is always good practice to build responsible AI and safety into models and applications from the start.

“This will be helpful for startups; a lot of the high level of what the AI Act is asking you to do is just common sense,” Bird said. “If you’re building models, you should control the data that goes into them; you should test them. For smaller organizations, compliance is easier if you do it from the ground up, so invest in a solution that controls your data as it grows.”

However, it can be more difficult to understand what is in the data used to train the large language models (LLMs) that companies use. Jason Corso, a professor of robotics at the University of Michigan and co-founder of computer vision company Voxel51, told VentureBeat that the Biden EO encouraged a lot of openness from model developers.

“We can’t fully know the effect of a sample on a model that presents a high degree of potential bias risk, can we? So model users’ businesses could be at stake if there is no governance around the use of those models and the data that went into them,” Corso said.

Fewer research dollars

AI companies are enjoying significant investor interest right now. But the government has often supported research that some investors think is too risky. Corso noted that the new Trump administration may choose not to invest in AI research to save on costs.

“I just worry about not having the government resources to put behind those kinds of early-stage, high-risk projects,” Corso said.

However, a new administration does not mean that money will not be allocated to AI. While it is unclear whether the Trump administration will abolish the newly created AI Safety Institute and other AI oversight offices, the Biden administration guaranteed budgets through 2025.

“A pending question that must color Trump’s replacement for the Biden EO is how to organize the agencies and distribute the dollars appropriated under the AI Initiative Act. This bill is the source of many of the authorities and activities Biden has mandated, and funding for agencies like NIST will continue through 2025. With those dollars already allocated, many activities will likely continue in some form, but what that form will look like has yet to be revealed,” said Mercatus Center researcher Matt Mittelsteadt.

We’ll know how the next administration views AI policy in January, but companies should prepare for what comes next.