Is your smart home vulnerable to lasers?

Researchers at the University of Michigan have demonstrated a working attack on smart devices by using a laser to perform what is called laser-based audio injection. This attack uses light to stimulate a MEMS (Micro-Electro-Mechanical System) microphone into producing electrical signals in the same way sound waves do, allowing a ‘voice’ command to be sent to the device from a long distance.
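
At a high level, the attack amplitude-modulates the laser’s intensity with the audio waveform of the command, so the light reaching the microphone varies just as sound pressure would. Here is a minimal Python sketch of that modulation step; the bias and modulation values are illustrative assumptions, not figures from the research:

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz; a typical rate for voice audio
DURATION = 1.0         # seconds

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Stand-in for a recorded voice command (here just a 440 Hz test tone).
audio = 0.5 * np.sin(2 * np.pi * 440 * t)

I_BIAS = 200.0      # mA -- hypothetical laser diode bias current
MOD_DEPTH = 150.0   # mA -- hypothetical peak modulation swing

# Drive current = DC bias + audio-proportional swing. Fed through a laser
# driver, this varies the beam's optical power in step with the audio, and
# the microphone responds to the varying light as if it were sound.
drive_current = I_BIAS + MOD_DEPTH * audio
```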

While the research was published in late 2019, the YouTube channel Smarter Every Day has since produced a really nice real-life demonstration of the attack on popular smart speakers, with the assistance of one of the researchers. The attack can be performed effectively with readily available equipment costing less than $500, even using a sufficiently bright light source that is not a laser.

Interestingly, why this works is not well understood. The research paper concludes with a call to better understand the physics involved, as that understanding could suggest both potential defenses and other uses for the effect.

So what does this mean for us?

While the attack is effective, it has some inherent limitations. Key among these is the requirement that the light strike the microphone directly. The devices that have been confirmed as vulnerable are those with clearly visible microphone holes that can be targeted. Even some of these, such as an Echo Dot, are not necessarily practical targets, as the microphone is on the top and thus difficult to hit from a distance.

Devices with microphones behind some sort of barrier, such as the fabric mesh covering of Apple’s HomePod, are similarly hard to target, even if you could see where the microphone is.

Devices the researchers have tested include:

  • Google Home
  • Google Home Mini
  • Google Nest Cam IQ
  • Echo Plus 1st Generation
  • Echo Plus 2nd Generation
  • Echo
  • Echo Dot 2nd Generation
  • Echo Dot 3rd Generation
  • Echo Show 5
  • Echo Spot
  • Facebook Portal Mini
  • Fire TV Cube
  • ecobee4
  • iPhone XR
  • iPad 6th Gen
  • Samsung Galaxy S9
  • Google Pixel 2

Naturally, this list is not exhaustive; it simply illustrates the kinds of devices that can be exploited at a technical level.

While the list includes phones and tablets, these would typically be impractical targets given their mobile nature (there is less opportunity to target them) and the fact that they typically listen for the owner’s voice specifically. However, the researchers note that voice recognition is often limited to the wake phrase rather than the whole command, so a recording of the owner’s voice saying that phrase would be enough to inject any command into the device.

What is the risk?

As the YouTube video demonstrates, the risk level depends on the smart devices you have in your home, since the attacker can issue any command they wish. The most obvious exposure is access to the home, so smart locks and garage door openers are the biggest concern.

Again, the exposure depends on the devices and how they are implemented. Some platforms won’t allow high-risk commands to be issued remotely, or will require a PIN or some other confirmation before unlocking or opening the door. Typically, however, an attacker who can target a vulnerable device could gain access using this attack.
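
As a rough illustration of what such a confirmation step looks like, here is a hypothetical policy gate in Python. The intent names and PIN handling are invented for the example and don’t reflect any vendor’s actual implementation:

```python
# Hypothetical policy gate: high-risk intents require a PIN before they run.
HIGH_RISK_INTENTS = {"unlock_door", "open_garage", "disarm_alarm"}

def handle_intent(intent: str, spoken_pin: str | None, enrolled_pin: str) -> str:
    """Refuse high-risk commands unless the correct PIN accompanies them."""
    if intent in HIGH_RISK_INTENTS and spoken_pin != enrolled_pin:
        return f"Refused '{intent}': PIN missing or incorrect."
    return f"Executing '{intent}'."

# A laser-injected "unlock the front door" without the PIN is refused,
# while a low-risk command still goes through:
print(handle_intent("unlock_door", None, enrolled_pin="4921"))
print(handle_intent("turn_on_lights", None, enrolled_pin="4921"))
```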

Other attacks could be less severe, ranging from nuisance behaviors like manipulating lights to potentially more harmful effects like setting thermostats to extreme values.

What can we do?

The best defense is to ensure that any vulnerable device with a targetable microphone is positioned away from windows. If the laser cannot be directed at the microphone, the attack is not possible. Smart screens, smart TVs, and the growing number of devices with built-in voice assistants, such as thermostats and even smoke detectors, need to be considered. Don’t forget tablets used as dashboards or control panels; these have a microphone on the front face that can be attacked.

Another defense is to utilize voice recognition if it’s available. Smart speakers with user recognition may not process an arbitrary command from an unknown voice. I’ve tested this with the HomePod by playing the commands used in the YouTube video: Siri activated right away, but then asked who was speaking, as the voice was not recognized.
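
To make that gating behavior concrete, here is a toy sketch in Python. The speaker embeddings and the matching threshold are made up for illustration; real assistants use proprietary speaker-recognition models:

```python
import numpy as np

THRESHOLD = 0.7  # invented similarity cutoff for this example

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_command(command_emb: np.ndarray, enrolled: dict[str, np.ndarray]) -> str:
    """Accept the command only if the voice matches an enrolled speaker."""
    best = max(cosine(command_emb, v) for v in enrolled.values())
    if best < THRESHOLD:
        return "Voice not recognized -- who is speaking?"
    return "Command accepted."

rng = np.random.default_rng(0)
enrolled = {"owner": rng.normal(size=128)}   # made-up 128-dim embedding
injected = rng.normal(size=128)              # laser-injected 'voice'
print(gate_command(injected, enrolled))      # -> asks who is speaking
```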

Further Info

You can check out the layman’s summary of the research at lightcommands.com, and watch Smarter Every Day’s awesome video below.

David Mead

David Mead is an IT infrastructure professional with over 20 years of experience across a wide range of hardware and software solutions. David holds numerous IT certifications and has dedicated himself to helping others with technology throughout his career.
