U.S. Military Adopts New Ethics Guidelines For Artificial Intelligence


A robot tank, part of the Multi-Utility Tactical Transport (MUTT) family of systems, operates on Red Beach during an exercise at Camp Pendleton, California in 2017.
Photo: U.S. Navy/DVIDS

The U.S. military has adopted new ethics guidelines for the use of artificial intelligence in its futuristic robot tanks and smart weapons systems, according to a new press release by the U.S. Department of Defense. And at least one outside expert is actually impressed with the result, provided the Pentagon adheres to its own rules.

The Defense Department consulted with “leading AI experts” and laid out five principles after 15 months of research, according to the DoD press release. The five principles hold that military AI must be: responsible, equitable, traceable, reliable, and governable.

From the press release:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

All of that leaves some loopholes, of course, as the U.S. military picks up where it left off in the 1980s, the last time DARPA tried to literally build Skynet. But Peter W. Singer, author of LikeWar: The Weaponization of Social Media and the upcoming Burn-In, a book on AI and the future of robotics, says that he’s “very supportive” of the new directive.

“In many ways, the US military is now ahead of not just other nation’s militaries, but most of the technology industry in thinking about the ethics of developing and using AI and increasingly autonomous robotics,” Singer told Gizmodo through Twitter DM.

“It will set a marker that others will have to react to, which is added value,” Singer continued. “Of course, the devil is the details of the algorithm so to speak, in that guiding principles are not the same thing as how it will eventually and actually be used in the field. But that is true for any ethical guidelines, not just AI ones.”

The Defense Department insists that it really will adhere to its new ethical guidelines, saying that it respects the law.

“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior,” the Trump regime’s Defense Secretary Mark T. Esper said in a statement.

He added that adopting “AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

It’s tough to believe that the Trump regime is at all interested in lawful behavior on the battlefield, given the president’s recent clemency for Navy SEAL Eddie Gallagher, a man who was convicted in July 2019 of posing with the body of a dead teenager in Iraq. Members of Gallagher’s own platoon described him as “freaking evil,” claiming that he’d shoot and kill “anything that moved.” His fellow troops even sabotaged the sight on his rifle in an attempt to protect the innocent civilians he allegedly picked off from a distance.

Trump called Gallagher a “great fighter” and accused the “deep state” of trying to make Gallagher out to be a bad guy. The “deep state” in this case was both Gallagher’s fellow SEALs and the Pentagon. So while some experts might be optimistic that these new guidelines will be followed as AI is rolled out by the U.S. military, we’re not going to hold our breath, at least while President Trump is still in office.

But we’ll see. Only time will tell whether our future looks more like Terminator 2 or WALL-E. At this rate, with COVID-19 circling the globe, we’re just hoping it doesn’t look like Contagion.
