Google has quietly stripped key ethical commitments from its AI principles, paving the way for potential military applications. The move is part of a larger trend among tech giants, including Meta, OpenAI, and Anthropic, of aligning their AI ambitions with U.S. national security goals. While the changes are framed as necessary for global competition, the growing militarization of AI raises concerns about human rights and the risks of AI-driven warfare.
Google’s Silent Policy Shift
Last week, Google updated its AI principles, removing long-standing commitments that previously ruled out military and surveillance applications. The company no longer explicitly prohibits:
- AI technologies that could cause harm.
- Weapons or systems designed to injure people.
- Surveillance tools violating international norms.
- AI applications that contradict international law and human rights.
This change follows a decision by U.S. President Donald Trump to revoke a Biden administration executive order that aimed to set guardrails for responsible AI development. Without those guardrails, major tech companies are now integrating AI into national security initiatives with fewer constraints.
One sentence in Google’s updated policy remains: the company says it will align with “widely accepted principles of international law and human rights.” However, critics argue that this vague language leaves too much room for interpretation—especially when previous, more explicit commitments have been erased.
Big Tech’s Move Into National Security
Google is not alone in this shift. Over the past several months, other major AI firms have moved closer to the defense sector.
In November 2024, Meta announced that its Llama AI models would be available to government agencies working on defense and national security, despite its own policies banning such uses. Around the same time, AI firm Anthropic partnered with Palantir and Amazon Web Services to provide AI solutions to U.S. intelligence and military agencies. OpenAI, known for developing ChatGPT, teamed up with defense startup Anduril Industries to integrate AI into military defense systems.
This trend is not happening in isolation. The Biden administration previously met with AI industry leaders to discuss national security concerns, and later established a task force to oversee AI development in critical sectors. These government-backed initiatives have encouraged tech companies to shift their AI policies to align with defense priorities.
Tensions With China Are a Key Factor
The U.S. government has been pushing to secure AI leadership, especially amid growing competition with China. Export restrictions imposed in 2022 blocked China’s access to high-end AI chips, triggering retaliatory restrictions on critical materials used in chip manufacturing. The rivalry escalated further when Chinese company DeepSeek unveiled its own AI models, reportedly developed using 10,000 Nvidia A100 chips purchased before U.S. restrictions took effect.
Against this backdrop, the loosening of ethical constraints by U.S. tech companies makes strategic sense. With national security concerns intensifying, AI firms are positioning themselves as critical allies of the U.S. government in the technological arms race. However, the implications of this alignment stretch far beyond geopolitical competition.
AI on the Battlefield: The Risks and Consequences
The use of AI in warfare is not hypothetical—it’s already happening.
In the ongoing war in Gaza, the Israeli military has employed AI tools to identify targets. Reports from soldiers indicate these tools often make inaccurate assessments, contributing to civilian casualties. Microsoft and Google have provided computing power to support these AI systems, further entangling commercial tech companies in military operations.
The growing use of AI in combat raises urgent ethical questions:
- How reliable are AI-powered targeting systems?
- Who is accountable when AI-driven decisions result in civilian deaths?
- Should private corporations be deciding how AI is used in war?
Human rights organizations, including Human Rights Watch, have criticized Google’s policy shift. They argue that removing explicit restrictions on AI-driven weapons undercuts established international law that protects civilians. While Google insists it will adhere to international law, it has not provided clear details on how it will enforce ethical standards internally.
The Future of AI in Military Applications
With tech giants now working more closely with defense agencies, AI’s role in military operations is set to expand. The U.S. and its allies are accelerating AI-driven defense projects, while China is making its own advances. The race for AI dominance is shaping the future of warfare—and reshaping how tech companies define their ethical boundaries.
For now, Google’s removal of key ethical clauses signals a shift in priorities. The question is no longer whether AI will be used in military operations, but how far companies will go in integrating their technology into warfare—and at what cost.