Have we entered the age of AI warfare?

The Israeli military allegedly used an artificial intelligence system to identify potential Palestinian targets in Gaza based on apparent links to Hamas, according to an Israeli media investigation citing military intelligence sources.

The AI system, called Lavender, at one point identified up to 37,000 Palestinians as potential Hamas militants and targets for possible air strikes. The claim comes from testimony reportedly given by six Israeli intelligence officers to the Israel-based media organisations +972 Magazine and Local Call.

According to +972 Magazine, the Israeli army gave “sweeping approval for officers to adopt Lavender’s kill lists” in the early stages of the war. There was “no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based”.

What did the commentators say?

Israel’s alleged use of powerful AI systems in its war on Hamas “has entered uncharted territory for advanced warfare”. It not only raises a “host of legal and moral questions” but is also “transforming the relationship between military personnel and machines”, said The Guardian.

One source told +972 Magazine that military personnel served only as a “rubber stamp” for Lavender’s decisions, with about “20 seconds” devoted to each target before a bombing was authorised. This was despite knowledge that the system produced “errors” in about 10% of cases, and was “known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all”, said the magazine.


The Israeli military has strongly denied the claims. “The IDF [Israel Defense Forces] outright rejects the claim regarding any policy to kill tens of thousands of people in their homes,” it said in response to the allegations. 

It said that the Lavender system was “simply a database whose purpose is to cross-reference intelligence sources, to produce up-to-date layers of information on the military operatives of terrorist organisations”.

The reported use of powerful AI systems such as Lavender has enabled life-or-death decisions to be made on the basis of “statistical mechanisms” rather than human emotion and human-led judgement, said The Guardian. As one intelligence officer who allegedly used Lavender told the paper: “The machine did it coldly. And that made it easier.”

“Technological innovation has always changed warcraft,” said Andreas Kluth on Bloomberg in March. “It’s been that way since the arrival of chariots, stirrups, gunpowder, nukes and nowadays drones, as Ukrainians and Russians are demonstrating every day.” The most pressing “existential” question over the use of AI in warfare is now less about AI itself, and more to do with the level of human oversight. “Will the algorithm assist soldiers, officers and commanders, or replace them?”

The deployment of AI-enabled weapon systems has profound implications for the future of warfare, according to Dr Elke Schwarz, lecturer in political theory at Queen Mary, University of London. It may lead to the “objectification of human targets, leading to heightened tolerance for collateral damage”, and may weaken the moral agency of those operating AI-enabled targeting systems, “diminishing their capacity for ethical decision-making” in the heat of battle.


“We don’t want to get to a point where AI is used to make a decision to take a life when no human can be held responsible for that decision”, said Dr Schwarz. 

What next?

International efforts, such as the US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, have aimed to establish guidelines for the ethical deployment of AI in warfare, said Forbes. More than 50 countries have signed the declaration, which states that military use of AI must comply with international and humanitarian law and attempt to “minimize unintended bias and accidents”. 

But Israel is not a signatory to the non-binding declaration, nor are Russia, China and other major world powers. “Perhaps what is emerging about AI’s role in Gaza will encourage the world to negotiate an actual treaty on such things,” said Forbes. 

“Autonomous weapons are an early test of humanity’s ability to deal with weaponized AI, more dangerous forms of which are coming,” said Paul Scharre, the director of studies at the Center for a New American Security, in Foreign Affairs. “Global cooperation is urgently needed to govern their improvement, limit their proliferation, and guard against their potential use.”
