
President Trump’s executive order aims to eliminate perceived ideological bias in AI, raising complex questions about neutrality and innovation in technology.
President Donald Trump has embarked on a mission to establish the United States as the frontrunner in artificial intelligence, a mission that, in his view, requires purging AI models of “woke” ideals. On Wednesday, Trump announced the signing of an executive order barring federal agencies from acquiring AI technologies infused with political bias or ideological frameworks such as critical race theory. The move extends his opposition to diversity, equity, and inclusion into the technologies people increasingly rely on to find information online.
The executive order, part of the White House’s AI action plan unveiled Wednesday, introduces initiatives and policy recommendations aimed at keeping the U.S. ahead in AI. Central to the plan is the “Preventing Woke AI in the Federal Government” directive, which mandates that large language models used by the government, like those powering chatbots such as ChatGPT, conform to Trump’s “unbiased AI principles.” Those principles hold that AI should be “truth-seeking” and maintain “ideological neutrality.”
Trump asserted that the U.S. government would henceforth deal only with AI that pursues truth, fairness, and strict impartiality. The pronouncement raises a basic question: Can AI actually be biased or “woke”? Experts say the answer is not straightforward.
AI models are shaped primarily by the data they are trained on, the human feedback they receive during training, and the instructions (system prompts) they are given, all of which can influence whether a chatbot’s responses come across as “woke,” itself a subjective term. That is why bias, political or otherwise, remains a contentious issue across the AI industry.
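To make the third of those levers concrete, here is a minimal sketch, not drawn from the article, showing how the same question posed to the same model can be steered by the developer-supplied system prompt alone. It assumes the OpenAI Python SDK with an API key in the environment; the model name and prompts are illustrative.

```python
# Minimal sketch (illustrative, not from the article): the same question,
# same model, steered only by the developer-supplied system prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompts below are hypothetical examples.
from openai import OpenAI

client = OpenAI()

QUESTION = "Should the government regulate social media content moderation?"

def ask(system_prompt: str) -> str:
    # Send the identical user question under a different system prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Same training data, same feedback history; different instructions can
# noticeably shift the framing of the answer.
print(ask("You are a strictly neutral assistant. Present both sides evenly."))
print(ask("You are an assistant that favors minimal government regulation."))
```

In practice, all three levers interact, which is part of why auditing a model for “ideological neutrality” is harder than the order’s language suggests.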
“AI models don’t possess beliefs or biases like humans, but they can exhibit systematic biases or leanings, particularly in response to certain queries,” explained Oren Etzioni, former CEO of the Allen Institute for Artificial Intelligence.
The executive order outlines two “unbiased AI principles.” The first, “truth-seeking,” mandates that AI models prioritize historical accuracy and scientific inquiry when giving factual answers. The second, “ideological neutrality,” requires that AI models used by the government remain neutral and nonpartisan, not manipulated in favor of ideological frameworks such as DEI.
The order specifies that developers should not “intentionally code partisan or ideological judgments” into model responses unless prompted to do so by users. Its focus is on AI models procured by the government, and it advises caution about regulating AI in the private sector. Major tech firms nonetheless hold government contracts: Google, OpenAI, Anthropic, and xAI, for instance, were each awarded Department of Defense contracts worth up to $200 million to expand the military’s AI capabilities.
The directive builds on Trump’s longstanding assertions of bias in the tech industry. During his first term, in 2019, the White House encouraged social media users to report perceived online censorship stemming from political bias. Despite those claims, Facebook data from 2020 showed conservative content significantly outperforming more politically neutral material.
In 2020, Trump signed an executive order targeting social media companies after Twitter flagged two of his posts as potentially misleading. In response to the new AI order, Senator Edward Markey (D-Massachusetts) sent letters to the CEOs of major tech firms challenging the administration’s “anti-woke AI actions.”
“Even if bias claims were valid, using political power to alter platform speech is precarious and unconstitutional,” Markey argued.
While “bias” can be defined in different ways, there is evidence of political leanings in some AI responses. Research from the Stanford Graduate School of Business found that Americans perceive responses from popular AI models as leaning left, and an October 2024 study from Brown University showed that AI tools can be made to take positions on political subjects.
“There’s evidence that, by default, models, when not personalized, tend to adopt left-wing stances,” said Andrew Hall, a Stanford professor. He attributes the inclination to how AI chatbots are trained: on a wide range of data sources, with human feedback used to guide answer quality.
Adjusting an AI model to change its tone can have unforeseen consequences, warned Himanshu Tyagi, a professor and AI company co-founder: tweaks intended to alter how a model speaks can inadvertently degrade how well it functions.
Recent incidents illustrate the challenge. Elon Musk’s Grok chatbot produced antisemitic responses after xAI added instructions permitting “politically incorrect” answers; the company apologized and attributed the behavior to a system update.
Accuracy problems persist as well. Google temporarily suspended its Gemini chatbot’s ability to generate images of people in 2024 after criticism that it produced historically inaccurate depictions.
Hall theorizes that the perceived left-leaning tilt of AI chatbots stems from tech companies’ efforts to keep their products from generating offensive content, which skews the outputs.
Experts caution that vague terms like “ideological bias” make the policy difficult to write and harder to enforce. Who will evaluate AI models for compliance, and against what benchmarks, remains an open question. The order requires vendors to disclose system prompts and related documentation, but how that compliance will be verified is also unclear.
Ultimately, the constraints might be easy to bypass: a user could simply instruct a chatbot to align with a specific political perspective, suggests Sherief Reda of Brown University.
For AI firms that work with the government, Trump’s order could add compliance requirements, potentially slowing innovation, the opposite of the acceleration the AI action plan is meant to deliver.
“This kind of directive creates liability and complexity for developers, forcing a slowdown,” remarked Etzioni.
