Computer scientist Stuart Russell met with officials from the UK Ministry of Defence in October to issue a stern warning: embedding artificial intelligence in weapons could wipe out humanity.
But the pioneering artificial intelligence researcher, who has spent the past decade campaigning against the use of AI to locate and kill human targets, was unable to extract any promises from those present at the meeting.
This week, Russell, a British professor at the University of California, Berkeley, who co-wrote one of the seminal textbooks on AI more than 25 years ago, will use BBC radio’s annual Reith Lectures to further his case.
His calls for an international moratorium on lethal autonomous weapons have resonated with the academic community. Last week, more than 400 German AI researchers published a letter to the German government asking it to stop the development of these systems by its armed forces.
“The killing of human beings should never be automated on the basis of algorithmic formulas,” the letter said. “Such dehumanization of life-and-death decision-making by autonomous weapon systems must be prohibited worldwide.”
Russell, who regularly meets with governments internationally, said the United States and Russia, as well as the United Kingdom, Israel and Australia, were still against a ban.
“There is always a communication failure; a lot of governments and militaries don’t understand what the objection is,” Russell said in an interview with the Financial Times. “In very simple terms, we don’t sell nuclear weapons in Tesco – and with these weapons, it will be exactly like that.”
Lethal AI weapons, he said, were “small, cheap, and easy to make”. Without any controls, they could soon be as ubiquitous as automatic rifles, more than 100m of which are in private hands.
In the second of his four Reith Lectures on ‘Living with Artificial Intelligence’, which airs on BBC radio from Wednesday, Russell warns that AI weapons are no longer science fiction but are developing rapidly, entirely unregulated. “You can buy them today. They are advertised on the web,” he said.
In November 2017, a Turkish arms maker called STM announced the Kargu, a fully autonomous, rugby-ball-sized killer drone that could carry out targeted strikes based on image and face recognition. According to the United Nations, the drone was used in the Libyan conflict in 2020 to selectively home in on targets, despite an arms embargo on Libya.
“STM is a relatively small manufacturer in a country that is not a technology leader. So you have to assume that there are programs going on in many countries to develop these weapons,” Russell said.
He also described the Israeli-made Harpy – a 12-foot-long fixed-wing aircraft that carries a 50-pound explosive payload – and its descendant, the Harop. The aircraft can be remotely piloted, or can operate autonomously once a geographical area and target have been specified by a human operator.
The Harop may have been sold to India and Azerbaijan, where it was spotted in a video produced by the army. A press release from Israel Aerospace Industries that year said “hundreds” of them had been sold.
Russell warned that the proliferation of AI weapons posed an imminent and existential threat. “A lethal AI-powered quadcopter could be as small as a tin of shoe polish . . . about three grams of explosive is enough to kill a person at close range. A regular container could hold a million lethal weapons, and . . . they can all be sent to do their work at once,” he said in his lecture. “So the inevitable endpoint is that autonomous weapons become cheap, selective weapons of mass destruction.”
In the absence of diplomatic action, academics have banded together to craft their ideal version of an AI weapons ban treaty. In 2019, a handful of computer scientists, engineers and ethicists gathered to attempt it, at the Boston home of MIT professor Max Tegmark, co-founder of the Future of Life Institute.
The two-day meeting included roboethicist Ron Arkin of Georgia Tech; Paul Scharre, a former US Army officer who studies the future of war; and Russell, among others. Eventually, they agreed that a treaty should mandate a minimum weight and explosive payload size, so that autonomous weapons could not be deployed as swarms. “What you’re trying to avoid are two guys in a truck launching a million weapons,” Russell said.
Ultimately, he believes, the only way to convince governments such as the United States, Russia and the United Kingdom that still resist a ban is to appeal to their sense of self-preservation. As he put it: “If the technical issues are too complicated, your kids can probably explain them.”