
The Five Most Worrying Trends in Artificial Intelligence Right Now


Photo: Tomohiro Ohsumi (Getty)

Artificial intelligence is already beginning to spiral out of our control, a new report from top researchers warns. Not so much in a Skynet kind of sense, but more in a ‘technology companies and governments are already using AI in ways that amp up surveillance and further marginalize vulnerable populations’ kind of way.

On Thursday, the AI Now Institute, which is affiliated with New York University and is home to top AI researchers from Google and Microsoft, released a report detailing, essentially, the state of AI in 2018, and the raft of disconcerting trends unfolding in the field. What we broadly define as AI—machine learning, automated systems, etc.—is currently being developed faster than our regulatory system is prepared to handle, the report says. And it threatens to consolidate power in the tech companies and oppressive governments that deploy AI while rendering just about everyone else more vulnerable to its biases, its capacities for surveillance, and its myriad dysfunctions.


The report contains 10 recommendations for policymakers, all of which seem sound, as well as a diagnosis of the most potentially destructive trends. “Governments need to regulate AI,” the first recommendation exhorts, “by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.” A single, massive Department of AI that attempts to regulate the field writ large won’t cut it, the researchers warn—the report suggests regulators follow examples like the one set by the Federal Aviation Administration and tackle AI as it manifests, field by field.

But it also conveys a succinct assessment of the key problem areas in AI as they stand in 2018. As detailed by AI Now, they are:

  1. The accountability gap between those who build AI systems (and profit off of them) and those who stand to be impacted by those systems (you and me) is growing. Don’t like the idea of being subjected to artificially intelligent systems that harvest your personal data or determine various outcomes for you? Too bad! The report finds that the recourse most ordinary citizens have to challenge the artificially intelligent systems that may impact them is shrinking, not growing.
  2. AI is being used to amplify surveillance, often in horrifying ways. If you think the surveillance capacities of facial recognition technology are disturbing, wait till you see its even less scrupulous cousin, affect recognition. The Intercept’s Sam Biddle has a good write-up of the report’s treatment of affect recognition, which is basically modernized phrenology, practiced in real time.
  3. The government is embracing autonomous decision software in the name of cost savings, but these systems are often a disaster for the disadvantaged. From systems that purport to streamline benefits application processes online to those that claim to be able to determine who’s eligible for housing, so-called automated decision systems (ADS) are capable of encoding bias and erroneously rejecting applicants on baseless grounds. As Virginia Eubanks details in her book Automating Inequality, the people these systems fail are those who are least able to muster the time and resources necessary to contest the errors.
  4. AI testing “in the wild” is already rampant. “Silicon Valley is known for its ‘move fast and break things’ mentality,” the report notes, and that is leading companies to test AI systems in the public sector—or to release them into the consumer space outright—without substantial oversight. The recent track record of Facebook—the original move-fast-and-break-things company and AI evangelist—alone is example enough of why this strategy can prove disastrous.
  5. Technological fixes to biased or problematic AI systems are proving inadequate. Google made waves when it announced it was tackling the ethics of machine learning, but efforts like these are already proving too narrow and technically oriented. Engineers tend to think they can fix engineering problems with, well, more engineering. But what is really required, the report argues, is a much deeper understanding of the history and social contexts of the datasets AI systems are trained on.


The full report is well worth reading, both for a tour of the myriad ways AI entered the public sphere—and collided with the public interest—in 2018, and for a detailed recipe for how our institutions might stay on top of this ever more complicated situation.


This story is part of Automaton, an ongoing investigation into the impacts of AI and automation on the human landscape. For tips, feedback, or other ideas about living with the robots, I can be reached at bmerchant@gizmodo.com.
