Why Artificial Intelligence Researchers Should Be More Paranoid

Report highlights risks of AI, and urges some research be kept quiet.

A new report highlights risks of artificial intelligence, such as malicious self-driving cars and robots programmed to be assassins. (Ben Bours)

Life has gotten more convenient since 2012, when breakthroughs in machine learning triggered the ongoing frenzy of investment in artificial intelligence. Speech recognition works most of the time, for example, and you can unlock the new iPhone with your face.

People with the skills to build such systems have reaped great benefits—they’ve become the most prized of tech workers. But a new report on the downsides of progress in AI warns that they need to pay more attention to the heavy moral burdens created by their work.

The 99-page document unspools an unpleasant and sometimes lurid laundry list of malicious uses of artificial-intelligence technology. It calls for urgent and active discussion of how the technology could be misused. Example scenarios include cleaning robots repurposed to assassinate politicians, and criminals launching automated, highly personalized phishing campaigns.

One proposed defense against such scenarios: AI researchers becoming more paranoid, and less open. The report says people and companies working on AI need to think about building safeguards against criminals or attackers into their technology—and even to withhold certain ideas or tools from public release.

The new report has more than two dozen authors, from institutions including the universities of Oxford and Cambridge, the Elon Musk-funded institute OpenAI, digital-rights group the Electronic Frontier Foundation, computer-security company Endgame, and think tank Center for a New American Security.

Ethics has become a major topic of discussion in machine learning over the past year. The discussion has been triggered in part by government use of algorithms to make decisions that affect citizens, such as criminal defendants, and by incidents in which machine-learning systems display biases. Microsoft and IBM recently had to reeducate facial-analysis services they sell to businesses because the services were significantly less accurate at identifying the gender of people with darker skin.

Tuesday’s report is concerned with the more visceral harms that could result from AI software becoming much more capable and pervasive, for example in autonomous cars, or in software that can automate complicated office tasks. It warns that such systems could be easily modified to criminal or even lethal ends.

A compromised autonomous vehicle could be used to deliver explosives or to crash intentionally, for example. And work on software capable of hacking other software, such as projects sponsored by the Pentagon, might help criminals deploy more powerful and adaptable malware.

What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.

If taken up, that recommendation would stifle the unusual openness that has become a hallmark of AI research. Competition for talent has driven typically secretive companies such as Amazon, Microsoft, and Google to openly publish research, and release internal tools as open source.

Shahar Avin, a lead author of the new report and a researcher at Cambridge University’s Centre for the Study of Existential Risk, says the field’s innocent attitude is an outdated legacy of decades of AI over-promising and under-delivering. “People in AI have been promising the moon and coming up short repeatedly,” he says. “This time it’s different, you can no longer close your eyes.”

Tuesday’s report acknowledges that drawing a line between what should and shouldn’t be released is difficult. But it claims that the computer security, biotechnology, and defense communities have shown that it is possible to develop and enforce norms around responsible disclosure of dangerous ideas and tools.

Avin argues that in some cases the AI community is close to that line. He points to research by Google on how to synthesize highly realistic voices. In light of how Russian operatives attempted to manipulate the 2016 presidential election, he says, research that could aid the production of fake news should come with discussion of tools that might defend against it, such as methods to detect or watermark synthetic audio or video. Google did not respond to a request for comment.

Internet companies including Reddit are already battling pornographic videos doctored to feature the faces of celebrities, created with open-source machine-learning software known as Deepfakes.

Some people working on AI are already trying to open their eyes—and those of future AI experts—to the potential for harmful use of what they’re building. Ion Stoica, a professor at the University of California, Berkeley, who was not involved in the report, says he’s collaborating more actively with colleagues in computer security and thinking about public policy. He was a lead author of a recent survey of technical challenges in AI that identified security and safety as major topics for research and concern.

Stoica says Berkeley is also trying to expose the undergraduate and graduate students flocking to AI and machine-learning courses to that message. He’s optimistic that a field previously focused primarily on discovery can adopt the best practices seen among those who build business and consumer tech products, bridges, and aircraft. “We are trying to turn machine learning into more of an engineering discipline,” Stoica says. “There is a gap, but I think that gap is narrowing.”
