Lasers Can Be Used To Hack Smart Digital Assistants, Researchers Found

Voice-controlled digital assistants are relatively new technologies, but their widespread adoption suggests they are here to stay.

But AI-powered assistants carry risks, as many researchers have found in the past. Now, researchers have uncovered yet another vulnerability in these systems: Google Home/Assistant, Apple's Siri, Amazon's Alexa and others can be hacked using lasers.

Yes, lasers like the handheld laser pointers many people own.

Researchers at the University of Electro-Communications in Japan and the University of Michigan have discovered a way to take over these digital assistants, even from a distance, by shining a laser pointer at a device's microphone.

It even works with flashlights.

As an example, the researchers said they managed to open a garage door by shining a laser beam at a voice assistant connected to it.

They said they could easily flip light switches on and off, make online purchases, or open a front door protected by a smart lock. They could also remotely unlock and start a car that was connected to the device.

Taking the test further, the researchers aimed a laser from 140 feet up in the bell tower of the University of Michigan and successfully controlled a Google Home device located 230 feet away in a separate building.

In one test, the researchers focused the light beam with a telephoto lens and said they were able to hack a voice assistant located more than 350 feet away.

The findings highlight some possible worst-case scenarios, since many voice-command systems don't require authentication. Using just light, attackers wouldn't need a password or PIN to take over a device; they would only need line of sight to it.

"This opens up an entirely new class of vulnerabilities," said Kevin Fu, an associate professor of electrical engineering and computer science at the University of Michigan. "It’s difficult to know how many products are affected, because this is so basic."

Google Home - laser
Credit: The University of Electro-Communications; The University of Michigan

It began in 2018, when cybersecurity researcher Takeshi Sugawara met Kevin Fu at the University of Michigan to show him a trick he had discovered.

He showed Fu that by aiming a high-powered laser at the microphone of his iPad, he could make the iPad register sound that wasn't there. As Sugawara varied the laser's intensity at an audio frequency, the microphone picked up a distinct tone at that same frequency.
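The core of the trick is that the microphone responds to variations in light intensity the way it responds to variations in air pressure. A minimal simulation in Python sketches this idea: intensity can never be negative, so the "tone" rides as a small variation on top of a constant (DC) level, and what the microphone recovers is the varying part. The names and parameter values here are purely illustrative, not from the researchers' setup.

```python
import numpy as np

fs = 44_100          # sample rate (Hz)
tone_hz = 440        # audio frequency imposed on the beam
t = np.arange(0, 0.1, 1 / fs)

# Laser intensity = constant bias + small audio-rate variation.
# Light intensity cannot go negative, so modulation depth stays below 1.
bias = 1.0
depth = 0.5
intensity = bias * (1 + depth * np.sin(2 * np.pi * tone_hz * t))

# What the microphone effectively "hears" is the AC part of the intensity.
recovered = intensity - intensity.mean()

# The dominant frequency of the recovered signal matches the injected tone.
spectrum = np.abs(np.fft.rfft(recovered))
freqs = np.fft.rfftfreq(len(recovered), 1 / fs)
print(freqs[spectrum.argmax()])  # ≈ 440 Hz
```

Running this shows the peak of the recovered spectrum sitting at the modulation frequency, which is why varying the laser's intensity at 440 Hz makes the device hear a 440 Hz tone.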

Sugawara showed Fu how his iPad's microphone inexplicably converted the laser's light into an electrical signal, just as it would with sound.

Months later, a group of researchers gathered to study this curious photoacoustic quirk and found it could do far more disturbing things. They found that lasers can be used to silently "speak" to any computer that receives voice commands - including smartphones, Amazon Echo speakers, Google Home devices, and Facebook's Portal video chat devices.

This happens because inside a microphone there is a small plate called a diaphragm that vibrates when sound hits it. This component acts like the human eardrum, turning vibrations into signals.

The researchers found that sound isn't the only thing that can move the diaphragm: light can, too.

By focusing a laser or a flashlight directly at a microphone's diaphragm, the researchers found that the microphone converts the light into electrical signals. The rest of the system then responds as it would to sound.
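The same principle extends from a single tone to an arbitrary audio waveform, such as a spoken command: the waveform is mapped onto the laser's (nonnegative) intensity, and the diaphragm's response recovers a signal proportional to the original audio. The sketch below assumes a simple linear model of that conversion; the function names and the two-tone stand-in for a voice command are illustrative, not part of the researchers' published tooling.

```python
import numpy as np

def modulate_onto_laser(audio, depth=0.8):
    """Map an audio waveform in [-1, 1] onto a nonnegative laser
    intensity: I(t) = I_dc * (1 + depth * audio(t))."""
    audio = np.clip(audio, -1.0, 1.0)
    return 1.0 * (1.0 + depth * audio)

def diaphragm_response(intensity):
    """Assume the microphone's electrical output is proportional to
    the AC (varying) component of the incident light."""
    return intensity - intensity.mean()

fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
# Stand-in for a spoken command: a mix of two tones.
command = 0.6 * np.sin(2 * np.pi * 300 * t) + 0.4 * np.sin(2 * np.pi * 1200 * t)

recovered = diaphragm_response(modulate_onto_laser(command))

# The recovered signal is proportional to the original command,
# so the voice-recognition pipeline downstream treats it as speech.
corr = np.corrcoef(command, recovered)[0, 1]
print(round(corr, 3))  # ≈ 1.0
```

Because the recovered waveform correlates almost perfectly with the injected one, the device's speech recognizer has no way to tell the light-induced signal apart from a genuine voice.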

The researchers said they had notified Tesla, Ford, Amazon, Apple and Google of the light vulnerability.

The researchers said most microphones would need to be redesigned to remedy the problem. Dirt shields on many microphones, or even a piece of tape, won't eliminate the issue, Fu said.

Weaknesses like this can be surprising, since they are often worst-case scenarios in which devices can be exploited only under rare circumstances. Still, this isn't the first surprising vulnerability discovered in digital assistants.

"This is the tip of the iceberg," Fu said. "There is this wide gap between what computers are supposed to do and what they actually do. With the internet of things, they can do unadvertised behaviors, and this is just one example."

The computer science and electrical engineering researchers Takeshi Sugawara from the University of Electro-Communications in Japan, and Kevin Fu, Daniel Genkin, Sara Rampazzi and Benjamin Cyr at the University of Michigan released their findings in a paper titled "Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems".

Genkin was also one of the researchers who discovered Meltdown and Spectre, the two major security flaws that resided in the microprocessors of nearly all computers in the world.

Published: 05/11/2019