
Thousands of Top AI Experts Vow to Never Build Lethal Autonomous Weapons


A KZO reconnaissance drone of the Bundeswehr, the German armed forces, launches with the help of a booster rocket during the Thunder Storm 2018 multinational NATO military exercises on June 7, 2018 near Pabrade, Lithuania.
Photo: Getty

Hundreds of companies and thousands of individuals, many of them researchers and engineers prominent in the fields of robotics and artificial intelligence, vowed on Wednesday never to apply their skills toward the creation of autonomous killing machines.

In an effort led by the Future of Life Institute, a Boston-based nonprofit, as many as 160 AI-related companies in 36 countries and 2,400 individuals in 90 countries signed the pledge, declaring that autonomous weapons pose a “clear and present danger to the citizens of every country in the world” and vowing not to participate in their development.


“Artificial intelligence (AI) is poised to play an increasing role in military systems,” the pledge states. “There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

The signatories, who join 26 United Nations countries that have explicitly called for a ban on lethal autonomous weapons, include DeepMind, Google’s top AI research team; the European Association for AI; Clearpath Robotics/OTTO Motors; the XPRIZE Foundation; the Swedish AI Society; and University College London, among others. Leading AI researchers Demis Hassabis, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh, as well as Tesla and SpaceX founder Elon Musk, are among the individuals who signed the pledge.

“We cannot hand over the decision as to who lives and who dies to machines,” said Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, adding that lethal autonomous weapon systems, or LAWS, “do not have the ethics to do so.”


Nearly one year ago, 116 experts, among them Musk and Google AI expert Mustafa Suleyman, asked the United Nations to ban autonomous killing machines, calling them “weapons of terror.” “We do not have long to act,” the experts warned. “Once this Pandora’s box is opened, it will be hard to close.”

In the wake of the U.S. government’s escalation of military drone use around the world, engineers and scientists have warned that autonomous machines would be vulnerable to hackers, could be hijacked and turned against innocent populations, and would inevitably be easy for malicious actors to obtain or build on their own.

Seeking to illustrate life under the threat of autonomous killer drones, the Future of Life Institute helped produce the “Slaughterbots” video (below), which at one point depicts the murder of thousands of university students. The students, identified using facial recognition, are targeted by an unknown actor after sharing a video on social media “exposing corruption.” The video was created by the Campaign to Stop Killer Robots, an international coalition working to ban autonomous weapons that counts the Future of Life Institute (FLI) among its members.

[Video: “Slaughterbots”]

“No nation will be safe, no matter how powerful,” said Clearpath Robotics co-founder and CTO Ryan Gariepy in a statement.


Skype co-founder and FLI member Jaan Tallinn told Gizmodo that weapons that do not require human operators are the “perfect tool” for terrorists and rogue states. “By definition,” he said, “they don’t require much manpower and their use will likely be hard to attribute (just as cyberattacks are difficult to attribute today).”

“Ironically, by supporting the development of autonomous weapons,” added Tallinn, “the existing military powers might end up handing over their power to non-state actors and fringe groups.”

The signatories of the Lethal Autonomous Weapons Pledge further urged the UN, which will meet on the issue of autonomous weapons in August, to develop a commitment among countries that will lead to the prohibition of these weapons.


The full text of the pledge is below:

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.


Additional reporting by George Dvorsky.

