
Nations Debate Future of AI Weapons at UN
Nations gathered at the U.N. General Assembly in New York on Monday to discuss the future of AI-controlled autonomous weapons and potential regulations governing their use. Experts warn the issue is increasingly urgent, citing both a lack of international consensus and limited time to act.
In a rush? Here are the quick facts:
- Nations gather at the U.N. General Assembly in New York to discuss the future of autonomous AI-controlled weapons.
- Experts are concerned about the proliferation of unregulated autonomous weapons and the absence of regulatory frameworks for the technology.
- China, the United States, India, and Russia don’t support the creation of a binding global framework.
According to Reuters, AI is already playing a significant role in current conflicts, particularly in Ukraine and Gaza. A few months ago, Ukraine revealed it had collected around 2 million hours of battlefield footage to train AI systems.
Regulatory frameworks for AI technologies are struggling to keep pace. Although discussions on autonomous weapons have been ongoing at the Convention on Certain Conventional Weapons (CCW) in Geneva since 2014, no binding global regulations have been established to date.
U.N. Secretary-General António Guterres has set 2026 as the target for reaching an international consensus on a new legal framework.
“Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass,” said Alexander Kmentt, head of arms control at Austria’s foreign ministry, to Reuters.
Besides autonomous weapons, this week’s meetings will address other critical topics such as human rights and ethical concerns, and the involvement of non-state actors. While most countries support the creation of a binding global framework, others—such as China, the United States, India, and Russia—favor relying on existing international laws or national guidelines.
Multiple organizations, including Human Rights Watch, have expressed concern over the proliferation of unregulated autonomous weapons across various regions. The Future of Life Institute has identified approximately 200 autonomous weapons systems in use in locations such as Africa, the Middle East, and Ukraine.
“We do not generally trust industries to self-regulate… There is no reason why defence or technology companies should be more worthy of trust,” said campaigner Laura Nolan of Stop Killer Robots to Reuters.
The use of AI and autonomous systems to develop weapons is gaining ground in the tech industry. Google lifted its ban on using AI for weapons a few months ago, Chinese researchers have been using Meta's Llama model for military applications, and the startup Theseus recently raised $4.3 million to develop autonomous drones.