The last firewall
We’ve been transferring the contents of our minds into technology for thousands of years. Clay tablets and cave paintings helped us record our stories and where we buried our bones. More recently, we’ve begun externalizing gigabytes’ worth of our most essential memories, capacities and experiences into smart devices. At the same time, those devices are growing ever smaller and getting ever closer to our bodies: we’re beginning to wear Google Glass on our faces, Fitbits on our wrists, insulin pumps on our hips and, soon, glucose-measuring contact lenses on our eyeballs.
But our self-extending, IT-heavy devices are no longer merely near or on us. They are making their way into us. Pacemakers, defibrillators and cochlear implants are now in wide use, and smart prosthetics, brain-machine interfaces and deep-brain-stimulation devices are becoming so. Each of these mechanisms generates and transmits a stream of data that may be life-supporting and supremely personal.
“When we put chips inside the body, the game changes,” says professor of electrical engineering and computer sciences Jan Rabaey. He and a team of Berkeley engineers are working with surgeons at UC San Francisco on a new generation of implantable devices that can both read data directly from the brain and deliver signals back. One such tool records electrical activity through sensors implanted deep within the brain, analyzes it in real time and delivers therapeutic doses of electrical stimulation when and where it can best mitigate neurological and psychiatric disorders. Conditions that might be studied and treated with such a device include Parkinson’s disease, essential tremor and treatment-resistant obsessive-compulsive disorder. The potential of such a device to tailor therapeutic treatments to individual patients is tremendous, says Rabaey.
As tiny wireless transmitters beam data between implanted chips in the brain and then out of the brain to be analyzed and converted into action by an external interrogator, signals can be vulnerable to exposure and, potentially, manipulation. Most researchers aren’t thinking about security and privacy yet. They are focused more narrowly on the technological hurdles of converting biological signals, like neuronal activity, into something a machine can read and work with, says Rabaey. Perhaps understandably so. But Rabaey is acutely aware of the security and privacy threats implicit in such technology, and he is trying to get the attention of his colleagues.
“Right now,” says Rabaey, “we’re working on the very edge of what is even possible. If we can get these brain-phase-reading machines to work, that’s amazing. Most of us are just not focused on security and privacy. Those seem like problems for the future.” Maybe so, he adds, but the future is quickly hurtling this way.
Whether or not proliferation of brain-implanted devices will entirely “change the game,” as Rabaey says (Miguel Nicolelis, a neuroscientist at Duke University, suggests our merging with our machines constitutes a speciation event), it will certainly raise serious security and privacy issues.
“We’re creeping around in human thought, not just tracking human glucose levels,” says Robert Knight, Berkeley professor of neuroscience and director of the Knight Cognitive Neuroscience Research Laboratory.
Knight says, “If you’re designing a car, you don’t want to have to figure out how to add a bumper and an airbag after the rest of the thing is done. Safety features should be part of the primary design.”
Hacks come in many varieties, says Dawn Song, professor of electrical engineering and computer sciences and winner of a 2010 MacArthur Fellowship. In 2012, she and several colleagues published a study showing how they could secretly glean personal information (such as important numbers and home location) from the users of commercially available EEG headsets. The headsets, which read the signature electrical discharges emitted by different kinds of brain activity, cost between $100 and $300 and are increasingly popular among gamers and people practicing memory exercises, and as interface drivers for other kinds of computer applications. The two most popular manufacturers, Neurosky and Emotiv, have thousands of units on the market and predict the number will soon reach into the millions.
Song and her colleagues studied 36 undergraduates who engaged in various interactive exercises while wearing the EEG headsets. The students were shown a series of seemingly random photographs while their brain activity was observed. Without their knowledge, their brains were being plumbed for hints about such information as memorized PINs, the locations of their homes and whether or not they recognized various faces.
The headsets read electrical discharges from the firing of billions of neurons throughout the brain. But those signals are filtered through brain tissue and skull, and because each electrode captures the combined firings of so many neurons, the signals it records are both rough and weak, says Knight. “You cannot do anything like read a person’s mind by watching an EEG,” he says. “And you never will be able to.”
But if you can control which images a subject sees while simultaneously observing the EEG, you can infer things about the subject’s relationship to those images, says Song. In particular, Song’s team was interested in the characteristic spike in electrical activity that follows about 300 milliseconds after the subject sees something recognizable and significant, a phenomenon known as the P300 recognition response.
Song and her colleagues showed their subjects a sequence of images and numbers and looked for P300 signals. After showing many anonymous faces, the experimenters flashed an image of President Barack Obama, evoking a P300 response. After viewing a series of maps, subjects were shown a map of their own neighborhood, which triggered the same recognition response. By showing 10 maps, the researchers could guess which one contained the subject’s own neighborhood with about 60 percent accuracy. And when the subjects memorized a four-digit PIN and were then flashed a random series of numbers, Song and her colleagues were able to guess, with an accuracy of about 35 percent, which of the numbers started the PIN.
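To see how such guessing could work in principle, consider a minimal sketch. This is not Song’s actual analysis pipeline; the sampling rate, the window boundaries and the `epochs` data layout are all assumptions made for illustration.

```python
import numpy as np

# Illustration only: not Song et al.'s actual pipeline. Assume
# `epochs[stimulus]` holds EEG voltage traces (trials x samples),
# each time-locked to the moment the stimulus appeared, sampled
# at `fs` Hz (256 Hz is an assumed, typical consumer-headset rate).

def p300_score(trials: np.ndarray, fs: int) -> float:
    """Mean amplitude in the 250-500 ms post-stimulus window, where
    a P300 positivity would appear, relative to a baseline."""
    avg = trials.mean(axis=0)               # averaging trials suppresses noise
    baseline = avg[: int(0.1 * fs)].mean()  # first 100 ms as baseline
    window = avg[int(0.25 * fs): int(0.5 * fs)]
    return float(window.mean() - baseline)

def guess_target(epochs: dict, fs: int = 256) -> str:
    """Guess which stimulus the subject recognized: the one whose
    averaged response shows the strongest P300-window positivity."""
    return max(epochs, key=lambda stim: p300_score(epochs[stim], fs))
```

Fed epochs keyed by the digits 0 through 9, `guess_target` would return the digit whose averaged response looks most “recognized,” which is roughly the sense in which a first PIN digit can be ranked.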
Those accuracy figures are remarkable, but hardly good enough to enable malicious observers to glean Social Security numbers from the brains of users. Unless, Song says, the users wear the headsets over long periods, letting an observer collect many responses and zero in on certain meaningful combinations. That’s not impossible, says Song, if EEG headset users play online games that allow the games’ designers to control the images and to observe the brain-wave data associated with them. Users might think they were just playing an innocent game while they were actually revealing personal information. The scenario becomes more plausible given that the headset manufacturers publish application programming interfaces that allow third-party developers to write games and other apps that employ the devices.
“The attacker doesn’t even need to break into anything,” Song says. “He simply hides the attack in an EEG-headset-driven app, a game say, that the user downloads and plays. The game knows what the user is looking at and gets the brain signal readings at the same time.”
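That attack pattern is easy to sketch in outline. In the hypothetical loop below, `display.show()` and `headset.read()` stand in for the game’s renderer and the vendor SDK; they are not real Neurosky or Emotiv calls.

```python
import time

# Hypothetical attack loop, for illustration only. The "game"
# controls what the user sees and logs the time-locked EEG
# samples that arrive right after each stimulus.

def run_covert_trial(headset, display, stimuli, epoch_secs=0.8):
    """Show each stimulus and capture the EEG epoch that follows it.
    Returns {stimulus: [raw samples]} for offline P300 scoring."""
    epochs = {}
    for stim in stimuli:
        display.show(stim)                  # the "game" draws the image
        t0 = time.monotonic()
        samples = []
        while time.monotonic() - t0 < epoch_secs:
            samples.append(headset.read())  # time-locked EEG samples
        epochs[stim] = samples
    return epochs
```

The captured epochs could then be scored offline with something like the `guess_target` routine sketched earlier.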
The purpose of the experiment, Song says, was to assess whether this kind of brain-machine interface (BMI) could potentially pose privacy threats to users. Her conclusion: “Yes. Absolutely.”
Knight, who collaborates with Song on some EEG projects, agrees. His own startup invented an impressive EEG headset, sold last year to the Nielsen Company for neuromarketing (monitoring the brain while its wearer watches TV). Those headsets have their processors and storage built in, but can also stream encrypted data wirelessly.
Even if the data streams weren’t encrypted, there is no chance cyber pirates will be reading hapless minds any time soon, he says. “You can’t even identify an individual based just on their EEG signals. It’s very rough data.” Yet he agrees with Song and Rabaey that when it comes to brain-machine interfaces, whether they take readings from inside or outside the skull, it is none too early to start taking the issues of privacy, security and safety to heart.
Speaking of hearts, they may also need guarding. Several years ago, University of Michigan computer scientist Kevin Fu showed that it was possible to crack the encryption code of common defibrillators and pacemakers. The devices, which can be wirelessly adjusted, also transmit a stream of data so that doctors can monitor patients. Fu and his colleagues were able to break the code, flip the switch on the defibrillator (potentially deadly for its user), adjust the pacemaker (also potentially lethal) and eavesdrop on the data stream.
“Once you get into this device, you not only have all the medical information in it, but defibrillators are powerful stimulating devices,” says Rabaey. “If you can pirate remote control of a stimulating device, all hell breaks loose.”
Of course, the implantable cortical neuromodulation tool that Rabaey is working on (with faculty colleagues Elad Alon, Jose Carmena and Michel Maharbiz) is a stimulating device as well. Losing it to a malicious agent could likewise hand a hacker lethal control.
“What can you do about it?” asks Rabaey. First, to prevent jamming and denial-of-service attacks, “you create a failsafe mode,” he says. If the device ever loses its link or detects that it is under attack, it should go offline and lock into a safe mode, unlockable only when a key code is transmitted to the user from a clinic or device administrator. If anyone tries to manipulate the device into doing something irregular, it goes offline.
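The control logic of such a fail-safe is simple to sketch. Everything below is hypothetical: the mode names, the anomaly flags and the clinic-issued unlock code are assumptions for illustration, not the Berkeley team’s actual design.

```python
import hmac
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    SAFE = auto()  # parameters frozen at known-safe values, link refused

class ImplantController:
    """Hypothetical fail-safe: any anomaly locks the device into a
    safe mode that only a clinic-issued key code can release."""

    def __init__(self, clinic_unlock_code: bytes):
        self._unlock_code = clinic_unlock_code
        self.mode = Mode.NORMAL

    def on_event(self, link_lost=False, attack_detected=False,
                 irregular_command=False) -> None:
        # Lost link, detected jamming, or an irregular command:
        # go offline and stay there.
        if link_lost or attack_detected or irregular_command:
            self.mode = Mode.SAFE

    def try_unlock(self, code: bytes) -> bool:
        # Constant-time comparison, so the check itself leaks nothing.
        if self.mode is Mode.SAFE and hmac.compare_digest(code, self._unlock_code):
            self.mode = Mode.NORMAL
            return True
        return False
```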
Another approach, put forth by MIT professor Dina Katabi, is to conceal wireless transmissions between BMI components in a fog of static far louder than the signal itself. Each component would hold the code needed to subtract that loud noise, leaving behind the weak but meaningful signal. “Unless they have the code, it would be basically impossible for an intruder to sort the signal from the overwhelming noise,” says Rabaey.
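Numerically, the trick can be toy-modeled in a few lines. The sketch below is an assumption-laden illustration, not Katabi’s system: real designs work at the radio layer, and the shared seed stands in for a proper shared secret key.

```python
import numpy as np

# Toy model of noise masking: both endpoints share a secret seed,
# the transmitter buries the signal under pseudorandom noise far
# louder than the signal, and only a receiver that can regenerate
# the identical noise recovers it.

def mask(signal: np.ndarray, seed: int, noise_gain: float = 100.0) -> np.ndarray:
    noise = np.random.default_rng(seed).standard_normal(signal.shape)
    return signal + noise_gain * noise    # noise dominates on the air

def unmask(received: np.ndarray, seed: int, noise_gain: float = 100.0) -> np.ndarray:
    noise = np.random.default_rng(seed).standard_normal(received.shape)
    return received - noise_gain * noise  # subtract the known noise

signal = np.sin(np.linspace(0, 2 * np.pi, 64))  # stand-in BMI signal
assert np.allclose(unmask(mask(signal, seed=42), seed=42), signal)
```

An eavesdropper without the seed sees a waveform in which the signal is buried some hundred times below the noise; a receiver that can regenerate the identical noise subtracts it away exactly.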
“Security should be built from the ground up,” says Rabaey. Very often it is layered on later, from the top down. But if you start when you are still designing the physical layer of transmission, he says, you can build features into the die that make the chip very hard to spoof or alter. For instance, “If every chip had a unique number generator as part of its central design—you cannot change it, there is no software that allows you to mess around with that—that would be a big first step that you could start building from. You would always know if another device was pretending to be you,” he says.
“It always helps to throw a little randomness into the code, too,” he says. “If a wireless link is predictable, it’s always going to be much easier to read.”
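Put those two ideas together, an unchangeable chip identity plus per-exchange randomness, and you get something like a classic challenge-response handshake. The sketch below is purely illustrative; the key handling and message format are assumptions, not part of the Berkeley design.

```python
import hashlib
import hmac
import secrets

# Illustrative challenge-response: the chip proves its identity by
# keying a MAC with a secret bound to its burned-in ID, and a fresh
# random nonce makes every exchange look different on the air.

class Chip:
    def __init__(self, chip_id: bytes, device_key: bytes):
        self.chip_id = chip_id   # fixed at fabrication, not writable
        self._key = device_key   # secret bound to that ID

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self._key, self.chip_id + nonce, hashlib.sha256).digest()

def verify(chip: Chip, expected_id: bytes, device_key: bytes) -> bool:
    """Challenge the chip with a fresh nonce; an impostor without the
    key, or with a different ID, cannot produce the right response."""
    nonce = secrets.token_bytes(16)  # the "little randomness" in the link
    expected = hmac.new(device_key, expected_id + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(chip.respond(nonce), expected)
```

Because the nonce is fresh each time, a recorded exchange is useless for replay; because the response is keyed to the burned-in ID, another device “pretending to be you” fails the check.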
Rabaey thinks such read-write devices are going to enhance human life and promote health, first for disabled patients and eventually for healthy people, too. He imagines each of us having what he calls a personal “human Intranet,” linking our sensors and devices into our own medical, entertainment and educational network, one that would require a powerful “personal firewall” to stay secure.
“It is time to start thinking about this now,” Rabaey says, “while we are still building the physical layer. Or we can just add security later, but by then it will be too late to do it right.”