The Unseen Hand: How AI is Reshaping Modern Warfare and Life-or-Death Decisions
For years, the promise was that military robots would handle tasks deemed dirty, dull, difficult, and dangerous. Today, however, artificial intelligence (AI) is already making critical decisions about launching attacks, with human lives hanging in the balance. “These are life-and-death decisions for people,” says Dr. Kaja Kowalczewska, an assistant professor at the Digital Justice Center, a joint initiative of the University of Wrocław and Swansea University.
AI on the Battlefield: A Current Reality
The vision of fully autonomous weapons operating without human oversight is a strategic goal for the Pentagon. To achieve this, the US Department of Defense sought to leverage AI models from Anthropic. However, Anthropic’s CEO, Dario Amodei, drew a line, refusing to participate. This costly decision led to the termination of a USD 200 million contract, compelling tens of thousands of companies in the American defense sector to remove Anthropic’s “Claude” model from their systems within six months.
The void was swiftly filled by Sam Altman’s OpenAI, which secured an agreement with the Department of Defense. The move sparked a global outcry, even though algorithm-driven weapons have been deployed on various fronts for some time.
The Hidden Depths of AI in Conflict
Dr. Kaja Kowalczewska is an expert on the role of artificial intelligence in warfare. In an interview, she sheds light on the opaque world of military AI.
The Minab School Attack: An AI Error?
Marta Zinkiewicz: Was the attack on an Iranian school in Minab the result of an artificial intelligence error?
Dr. Kaja Kowalczewska: “Analysts claim so, but there is no official confirmation, and likely never will be. No nation will admit to possessing such systems. Discussions about AI’s use in warfare are based largely on publicly available information, which is scarce. What we know is merely the tip of the iceberg.”
What Lies Beneath the Surface?
“For over 13 years, discussions on this topic have been ongoing in Geneva. This week, the UN forum is also debating the applications of AI in armed conflicts,” Dr. Kowalczewska explains. “A decade ago, the conversation revolved around whether the decision-making process for any weapon – a rifle, chemical, or nuclear weapon – would ever be handed over to algorithms. Nations framed the discussion around ‘future systems,’ ‘yet-unused technologies,’ and ‘potential threats,’ avoiding concrete examples.”
Countries have long emphasized that purely defensive AI systems should not be banned. Such systems have been used in recent conflicts, including in defending against Russian attacks on Ukraine, and some are deployed in Poland. The best-known example is Israel’s “Iron Dome,” which uses algorithms to autonomously intercept incoming missiles.
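To make the distinction negotiators draw between defensive and offensive autonomy concrete, here is a minimal sketch in Python of what a purely defensive engagement rule might look like. Every class name, field, and threshold below is hypothetical and illustrative; none of it is drawn from Iron Dome or any other real system:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical radar track of an airborne object."""
    is_inbound_munition: bool          # classified as a projectile, not an aircraft
    impact_in_populated_area: bool     # predicted impact point
    seconds_to_impact: float

def defensive_engage(track: Track) -> bool:
    # A purely defensive rule: fire only at inanimate, inbound munitions,
    # and only when the timeline leaves no room for human reaction.
    return (track.is_inbound_munition
            and track.impact_in_populated_area
            and track.seconds_to_impact < 15.0)

# Example: a rocket 8 seconds from a populated area is engaged;
# anything that is not an incoming munition never can be.
print(defensive_engage(Track(True, True, 8.0)))    # True
print(defensive_engage(Track(False, True, 8.0)))   # False
```

The relevant design point is that the target class is fixed in advance: a rule of this shape can only ever fire at incoming munitions and can never select human targets, which is why states have argued such systems should fall outside any prospective ban.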
The Burden of Decision: From Soldier to Algorithm
However, the narrative has shifted. “Today, discussions focus on AI solutions that relieve soldiers of the ‘burden of decision’,” notes Dr. Kowalczewska. “With the advancement of large language models, the discourse has moved towards AI Decision Support Systems (AI DSS). These are chatbots that provide soldiers with attack recommendations based on military data. The ultimate decision, ostensibly, remains with a human.”
Such systems are reportedly used by NATO, and Israel deployed them during military operations in the Gaza Strip. “Initially, nobody was concerned, because ‘ultimately, there’s a human making the decision.’ However, it’s often forgotten – and numerous studies confirm this – that humans tend to over-rely on AI systems.” This over-reliance introduces a critical vulnerability: the potential for commanders to rubber-stamp AI recommendations without sufficient independent verification.
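What “a human makes the final decision” means in practice depends entirely on what the approval step demands. The following minimal Python sketch, in which every name, field, and threshold is hypothetical and not taken from any real military system, contrasts a rubber-stamp confirmation with a gate that forces independent verification:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A hypothetical AI DSS output: a suggested target plus the model's confidence."""
    target_id: str
    model_confidence: float              # 0.0-1.0, as reported by the model itself
    supporting_sources: list = field(default_factory=list)

def rubber_stamp(rec: Recommendation) -> bool:
    # The failure mode described above: a human nominally "decides",
    # but in practice simply confirms whatever the system suggests.
    return True

def verified_approval(rec: Recommendation, independent_sources: list) -> bool:
    # A stricter gate: approval requires corroboration gathered outside
    # the AI system, so the human check is not anchored to the model's
    # own confidence score.
    corroborated = [s for s in independent_sources
                    if s not in rec.supporting_sources]
    return len(corroborated) >= 2        # illustrative threshold, not doctrine

rec = Recommendation("site-47", model_confidence=0.93,
                     supporting_sources=["intercept-A"])
print(rubber_stamp(rec))                           # True: no real check happened
print(verified_approval(rec, ["intercept-A"]))     # False: nothing new was added
```

The sketch only illustrates the point above: if approving costs nothing and requires no evidence the model has not already supplied, the “human in the loop” adds little beyond the appearance of control.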
Accountability in the Age of AI Warfare
Who is Legally Responsible for AI-Driven Attacks?
So, ultimately, is a human legally responsible for a decision, such as an attack on an Iranian school?
Dr. Kaja Kowalczewska: “It’s not that simple. Theoretically, the state responsible for such an attack should investigate its military commanders, hold them accountable, and determine if international humanitarian law was violated – specifically, if it constituted a war crime. But these are general principles. To date, there is no international law specifically regulating the use of AI in military operations, although the UN Secretary-General has urged member states to adopt such regulations this year.”
Currently, the law does not differentiate whether an attack was carried out using a rifle, chemical weapons, or artificial intelligence. The fundamental principles remain the same: civilian persons and objects, such as a school, must not be targeted.
The situation becomes complex when such an incident occurs. “Who do we prosecute then? The commander who decided to use the software for the mission? The soldier who pressed the proverbial button? The Minister of Defense who approved the procurement of such a solution and introduced it into the army?” she questions.
Is Everyone or No One Accountable?
In such a situation, is everyone to blame, or, conversely, can everyone count on going unpunished?
Dr. Kaja Kowalczewska: “In practice, no one is held accountable. Criminal liability exists to deter acts that harm society. Committing war crimes is such an act; it should bring an advantage to no one, and that is precisely why it should not occur.”
International humanitarian law is universal. Irrespective of culture, language, or belief, all nations agreed to adopt it after World War II. It forms the core of our shared values, a compromise enshrined in law. Yet today, new military solutions and capabilities are being tested not in laboratories but in real-world conflicts.
The Future of AI in Warfare: A Precarious Path
“The military has long asserted that when robots finally arrived, they would handle tasks that were dirty, dull, difficult, and dangerous,” reflects Dr. Kowalczewska. “But today, we are talking about AI making decisions about launching attacks, decisions concerning human life and death. History provides numerous examples of weapons that were banned only after countless lives had been lost to their use. The same fate could befall artificial intelligence.”
Frequently Asked Questions (FAQ)
What is the current status of AI in military operations?
AI is increasingly moving beyond support roles to making critical decisions in military operations, including recommendations for launching attacks. While fully autonomous weapons are a Pentagon goal, AI-assisted decision support systems are already in use.
What are AI Decision Support Systems (AI DSS)?
AI DSS are chatbots or software that analyze military data and provide soldiers with recommendations for combat actions, such as attack strategies. While a human is technically supposed to make the final decision, studies show a tendency for human over-reliance on these systems.
Who is responsible if an AI-driven attack results in a war crime?
This is a highly complex legal and ethical dilemma. Currently, there is no specific international law regulating AI in military actions. General international humanitarian law prohibits targeting civilians, but pinpointing accountability among commanders, soldiers, or defense officials for AI-generated errors is challenging, often resulting in no one being held directly responsible.
Has any company refused to develop AI for autonomous weapons?
Yes, Anthropic, an AI firm, famously refused to collaborate with the Pentagon on developing AI for fully autonomous weapons, leading to the termination of a USD 200 million contract. OpenAI subsequently partnered with the Department of Defense.
Source: Original article by Marta Zinkiewicz with Dr. Kaja Kowalczewska. Opening photo: Gemini