The US government has announced an 'AI Action Plan' that, among other measures, calls for screening federally procured AI for ideological bias, with the aim of establishing technological superiority over China.



On July 23, 2025, the White House, pursuant to an executive order from President Donald Trump, released 'Winning the AI Race: America's AI Action Plan.' The plan aims to cement America's leadership in AI and usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people.

AI.Gov | President Trump's AI Strategy and Action Plan
https://www.ai.gov/

White House Unveils America’s AI Action Plan – The White House
https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/

The Action Plan is based on three guiding principles that underpin the Trump Administration's AI policy:

The first is a 'workers first' principle, ensuring that American workers and their families benefit from the opportunities created by this technological revolution. The second is that AI systems, particularly when handling factual matters, must pursue objective truth rather than social agendas and remain free from ideological bias. The third is a security principle: advanced technologies must be protected from theft and misuse by malicious actors and continuously monitored for unforeseen risks.


This action plan consists of three main pillars:

The first pillar is 'Accelerating AI Innovation.' This starts with removing bureaucratic regulations that hinder private-sector innovation. Specifically, the Biden administration's AI executive order will be revoked, and federal regulations that unfairly impede AI development will be identified and revised or repealed. In addition, procurement guidelines will be updated to ensure that large language models (LLMs) procured by the federal government are objective and free from ideological bias.

The plan also recommends removing references to concepts such as 'misinformation' and 'diversity, equity, and inclusion (DEI)' from the National Institute of Standards and Technology (NIST) AI Risk Management Framework. To promote innovation, it further calls for establishing a regulatory sandbox, supporting the development of open-source and open-weight AI that startups and academics can easily use, and encouraging AI adoption in important fields such as medicine, where uptake has been slow.

The second pillar is 'Building America's AI Infrastructure.' Because AI consumes enormous amounts of electricity, data centers, semiconductor manufacturing facilities, and energy infrastructure must be built rapidly. To that end, the plan calls for streamlining the permitting process under the National Environmental Policy Act (NEPA) and developing a robust power grid that can keep pace with AI development, including stabilizing and optimizing the existing grid and planning for future demand growth. Bringing semiconductor manufacturing back to the United States and strengthening the supply chain are also key goals, and there is an urgent need to train the workers who will build and operate this infrastructure.

The third pillar is 'Leading in International AI Diplomacy and Security.' It promotes exporting America's full AI technology stack, including hardware, models, software, and standards, to allies and partners in order to establish American AI leadership globally. At the same time, the federal government will advocate in international organizations such as the United Nations for an approach that counters authoritarian influence, particularly from China, promotes innovation, and reflects American values.

In terms of national security, the plan will strengthen enforcement of export controls to keep cutting-edge AI computing technology out of the hands of hostile nations and close loopholes in export controls for semiconductor manufacturing. It will also establish a system for the government to stay at the forefront of evaluating national security risks that frontier AI may pose, such as chemical and biological weapons development and cyberattacks.



As part of this action plan, a separate executive order, 'Preventing Woke AI in the Federal Government,' was also issued, calling for federally procured AI to be ideologically neutral.

Preventing Woke AI in the Federal Government – The White House
https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/

The order aims to ensure that AI procured and used by the federal government, especially LLMs, remains a reliable source of information, and it expresses concern that building ideological bias and social agendas into AI could distort the quality and accuracy of its output. It labels AI that prioritizes particular ideologies and social agendas over factual accuracy as 'woke AI,' criticizing such systems for putting desired outcomes ahead of objective truth.

The order cites specific examples, such as image models depicting historical figures with a race or gender that differs from the historical record, or refusing to generate images celebrating the achievements of certain racial groups. Against this background, it stipulates that the federal government is obligated not to procure 'woke AI,' meaning models that sacrifice truth and accuracy for ideology.

To address this challenge, the order establishes two 'unbiased AI principles' that federally procured LLMs must follow. The first principle is 'truth-seeking,' which requires LLMs to convey the truth to users seeking fact-based information and analysis and prioritize historical accuracy and objectivity. The second principle is 'ideological neutrality,' which mandates that LLMs be neutral, nonpartisan tools that do not manipulate responses to support a particular ideology.

To give these principles effect, the Office of Management and Budget (OMB) will take the lead in developing implementation guidance and distributing it to agencies within 120 days of the order. Based on this guidance, compliance with the principles will be included as a condition in all future federal procurement contracts for LLMs.

However, a contradiction has emerged in the Trump administration's AI policy, centered on Elon Musk's AI company xAI. Grok, the chatbot developed by xAI, has been criticized for a lack of ideological neutrality, at one point abruptly calling itself 'MechaHitler' and making antisemitic statements, yet xAI has already signed a large contract with the Department of Defense.

US Department of Defense signs contracts with Anthropic, Google, OpenAI, and xAI worth up to $200 million each to use AI for national security - GIGAZINE



At a press conference on July 23, White House Press Secretary Karoline Leavitt was asked whether President Trump supports a government contract with xAI and replied, 'I don't think so, no.' However, xAI had announced on Monday that it secured a Department of Defense contract worth up to $200 million, highlighting the gap between the administration's stated intentions and reality.

TechCrunch, a technology news site, also observed that while the Trump administration's AI Action Plan aims to block semiconductor exports to China, it lacks specifics on how this will be achieved. The site further noted that 'rather than implementing new policies on top of existing guidelines, it merely presents the basic building blocks for future sustainable export guidelines,' predicting that the plan will have little concrete effect and that it will take some time for export controls to actually be strengthened.

According to the Financial Times, more than 500 organizations lobbied the White House and Congress on AI in the first half of 2025 (January to June). That figure is roughly the same as in the first half of 2024, but about double the level of 2023. OpenAI's lobbying spending in particular has risen sharply, from $380,000 (about 56 million yen) for all of 2023 to $1.8 million (about 263 million yen) in the first half of 2025 alone.

in Software, Posted by log1i_yk