Opinion: AI In Warfare—The Tech Industry’s Quiet Shift Toward the Battlefield

Image generated using ChatGPT

The debate over autonomous weapons, tech security policies, and AI ethics in the military has been ongoing, but recent days have brought major developments: OpenAI signed a defense deal with the U.S. government, Spotify’s founder led a major investment in a European arms maker, and DeepSeek was reported to be aiding China’s military and intelligence operations.

Tensions around the use of artificial intelligence in warfare have intensified. This month, several tech companies announced new strategic partnerships with governments to develop defense projects, and, as with much in the AI space, the industry’s approach to military and weapons development has shifted sharply in recent months.

Just days ago, OpenAI and the U.S. government announced a $200 million deal to develop AI-powered defense tools. Details remain scarce, with officials emphasizing “administrative operations” as the primary application.

Meanwhile, Swedish entrepreneur and Spotify founder Daniel Ek has backed the German company Helsing by leading a €600 million investment round. Helsing, which originally focused on defense software, is now moving into drones, submarines, and aircraft.

Reuters recently revealed that DeepSeek is helping China’s military and intelligence operations. A senior U.S. official said that the AI startup has been helping China navigate the pressures of the U.S.-China trade war, and that its open-source models are aiding the Chinese government in surveillance operations.

Tech giants are collaborating with governments in ways we’re not used to seeing—at least not so publicly—and they’re getting involved in activities that traditionally haven’t been part of their role, like senior tech executives joining the U.S. Army Reserve.

What’s going on?

A Shift in Rhetoric

Tech companies went from “We would never use AI for military purposes” to “Maybe we will silently delete this clause from our policies” to “Great news, we are now building AI-powered weapons for the government!”

At least, that’s how it appears to the attentive observer.

Not long ago, AI giants seemed proud to declare they would never support military applications, but something changed. Google is a great example.

In 2017, the U.S. Department of Defense launched Project Maven, the Algorithmic Warfare Cross-Functional Team, an initiative to integrate AI into military operations. Google was initially involved, but internal protests—driven by employee concerns over ethics—prompted the company to withdraw temporarily.

Last year, as the company again edged toward military work, almost 200 Google DeepMind employees urged it to drop its military contracts.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” wrote the concerned employees.

This time, Google’s response was to wait and quietly update its AI ethics guidelines, removing the passage stating it would never develop AI technology that could cause harm. Demis Hassabis, Google’s AI head, explained that the company was simply adapting to a changing world.

While Google’s case illustrates the evolving relationship between AI and military use, it’s just one example of a broader industry-wide shift toward serving defense objectives.

AI Is Reshaping the Military and Defense Sector

The launch of Project Maven, or, as some might call it, “when the U.S. government realized machine learning could be extremely useful in warfare,” revealed one of the reasons the U.S. government is interested in AI.

AI systems’ abilities to process massive amounts of data, identify objects on the battlefield, and analyze imagery are especially appealing in the defense sector.

Enhanced Analysis Beyond Human Capabilities

Since 2022, both Ukraine and Russia have been integrating AI systems into their military operations.

The Ukrainian government has partnered with tech companies and deployed multiple strategies to make the most of AI models. It recently processed 2 million hours of battlefield footage, the equivalent of 228 years of video, to train AI models for military tasks. How many humans would it take to analyze that much data?

“This is food for the AI: If you want to teach an AI, you give it 2 million hours (of video), it will become something supernatural,” explained Oleksandr Dmitriev, founder of the non-profit digital system OCHI. The footage can help optimize weapons performance and improve combat tactics.
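To put those numbers in perspective, here is a quick back-of-the-envelope check. The viewing-time math matches the article’s 228-year figure; the analyst throughput is an illustrative assumption, not a reported number:

```python
# Back-of-the-envelope check on the battlefield-footage figures above.
HOURS_OF_FOOTAGE = 2_000_000

# Calendar time: one viewer watching nonstop, 24 hours a day, 365 days a year.
years_nonstop = HOURS_OF_FOOTAGE / (24 * 365)
print(f"Nonstop viewing: ~{years_nonstop:.0f} years")  # ~228 years, as the article says

# Workforce estimate (illustrative assumption: one analyst can review
# about 2,000 hours of video per year, i.e. a full-time job of pure viewing).
ANALYST_HOURS_PER_YEAR = 2_000
analysts_needed = HOURS_OF_FOOTAGE / ANALYST_HOURS_PER_YEAR
print(f"Analysts needed to clear it in one year: ~{analysts_needed:,.0f}")  # ~1,000
```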

Another system, Avengers, is an AI-powered intelligence platform developed by Ukraine’s Ministry of Defense Innovation Center; it processes live video from drones and identifies up to 12,000 enemy units weekly.

Drones: A Hot Commodity on the Battlefield

Drones on the battlefield, often referred to as “killing machines,” are currently among the most valuable technologies in modern warfare thanks to their autonomy, precision, and low cost. They allow warring nations to carry out high-impact strikes without risking human pilots, and at a fraction of the traditional expense.

By May this year, Russia had deployed over 3,000 Veter kamikaze drones in Ukraine. These systems are capable of identifying targets and executing attacks autonomously.

Just days ago, Ukrainian soldiers deployed the Gogol-M drone, a “mothership” drone that can travel up to 300 kilometers, carry other drones, evade radar by flying at low altitudes, and scan the ground beneath it to detect and attack enemy troops.

According to The Guardian, each attack using this powerful drone costs around $10,000, whereas a strike with a slightly older missile system would have cost between $3 million and $5 million.
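Taken at face value, those figures imply a cost gap of two to three orders of magnitude, as a quick calculation shows:

```python
# Rough cost comparison using the figures reported above.
drone_strike_cost = 10_000                        # USD per Gogol-M attack (The Guardian)
missile_low, missile_high = 3_000_000, 5_000_000  # USD per older missile strike

print(f"A missile strike costs roughly {missile_low / drone_strike_cost:.0f}x "
      f"to {missile_high / drone_strike_cost:.0f}x more than a drone attack")  # 300x to 500x
```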

The brand-new startup Theseus quickly raised $4.3 million after its young founders shared a post on the social media platform X last year, saying they had built a drone for less than $500 that could fly without a GPS signal.

Although drone technology is not yet as precise as some developers hope—especially when affected by weather conditions that reduce its “visibility”—it has shown great potential in the sector.

A Hard-to-Reach Global Consensus

It’s not just countries at war or the world’s major powers that are developing new AI-powered technologies for defense. Many nations have been integrating AI into cybersecurity efforts and autonomous weapons development for years. This isn’t just a 2025 phenomenon.

Since 2014, the United Nations has been trying to broker regulatory frameworks among its member states, without success.

Over 90 nations recently gathered at the U.N. General Assembly in New York to discuss the future of AI-controlled autonomous weapons and their regulation. They did not reach consensus; to date, the General Assembly has passed only a non-binding 2023 resolution warning of the need to address lethal autonomous weapons systems (LAWS).

The big debate now is whether to implement a global framework at all. Many countries agree on the need for new global guidelines that can regulate both private AI companies and nations. Others, such as the U.S., China, Russia, and India, prefer to keep the current international laws and let each nation independently create new ones according to its local needs, or interests. And we’ve just witnessed how chaotic the process of creating new AI regulations can be, even at the state level in California.

Tech Companies, More and More Involved

Activists such as Laura Nolan of Stop Killer Robots worry that no safety measures or legal frameworks constrain tech companies as they push into autonomous weapons and military AI software.

“We do not generally trust industries to self-regulate … There is no reason why defence or technology companies should be more worthy of trust,” Nolan told Reuters.

In 2024, researchers revealed that Chinese institutions had been using Meta’s open-source large language model Llama for military purposes. The Pentagon reached a deal with Scale AI to develop Thunderforge, an AI project to modernize military decision-making. And OpenAI partnered with Anduril, a military contractor that works with the U.S. military, the UK, Ukraine, and Australia.

Defense startups have also grown in Europe, gaining ground not only in the development of new technologies and projects but also in attracting top talent.

A Complicated Development

Another factor closely tied to tech companies’ involvement in national defense strategies is nationalism. More and more software developers and AI experts are choosing to work on projects that align with their ideals and cultural roots rather than simply chasing higher salaries. Some have even turned down U.S. offers at twice the pay, from the likes of Google or OpenAI, to join European ventures such as Helsing.

The threads of politics, technology, nationalism, and ideological battles are becoming increasingly intertwined—often leaving behind considerations of ethics, morality, and humanism.

Recent developments make it clear that tech giants are playing a huge role in military and national defense efforts around the world. The development of autonomous weapons and war-related technologies is advancing at breakneck pace, while the United Nations’ efforts to establish international agreements and regulations appear increasingly sidelined.

Without international agreements—and with ambitious tech companies backed by governments to develop the world’s most powerful weapons using AI—what does the future hold for humanity in the years to come?
