A technique, dubbed the “Near-Ultrasound Inaudible Trojan” (NUIT), allows an attacker to exploit smartphones and smart speakers over the Internet, using sounds undetectable by humans.
The sensitivity of voice-controlled microphones could allow cyber-attackers to issue commands to smartphones, smart speakers, and other connected devices using near-ultrasound frequencies undetectable by humans for a variety of nefarious outcomes — including taking over apps that control home Internet of Things (IoT) devices.
The technique exploits voice assistants like Siri, Google Assistant, or Alexa and the ability of many smart devices to be controlled by sound. Most devices are so sensitive that they can pick up voice commands even if the sounds are not in the normal frequency range of human voices.
In a series of videos posted online, the researchers demonstrated attacks on a variety of devices, including iOS and Android smartphones, Google Home and Amazon Echo smart speakers, and Windows Cortana.
In one scenario, a user might be browsing a website that is playing NUIT attack commands in the background, with a voice-control-enabled mobile phone sitting near the computer. The first command issued by the attacker might be to turn down the assistant’s volume so that its responses are harder to hear, and thus less likely to be noticed. Subsequent commands could then ask the assistant to, say, use a smart-door app to unlock the front door. In less concerning scenarios, commands could cause an Amazon Alexa device to start playing music or give a weather report.
“This is not only a software issue or malware,” said Guenevere Chen, an associate professor in the UTSA Department of Electrical and Computer Engineering, in a statement. “It’s a hardware attack that uses the internet. The vulnerability is the nonlinearity of the microphone design, which the manufacturer would need to address.”
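To make the “nonlinearity” point concrete: a microphone whose response includes even a small quadratic term effectively multiplies the incoming signal by itself, and squaring an amplitude-modulated near-ultrasound tone recreates the hidden audio down in the speech band, where the voice assistant can hear it even though a human never did. The short simulation below is a minimal sketch of that effect; the 20 kHz carrier, 400 Hz test tone, and quadratic coefficient are illustrative assumptions, not parameters from the NUIT research.

```python
# Minimal sketch (assumed parameters): how a microphone's nonlinearity can
# "demodulate" an inaudible near-ultrasound signal back into the speech band.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                        # simulation sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)       # 100 ms of signal

baseband = np.sin(2 * np.pi * 400 * t)        # stand-in for a voice command (400 Hz tone)
carrier = np.sin(2 * np.pi * 20_000 * t)      # near-ultrasound carrier, ~20 kHz
transmitted = (1 + 0.8 * baseband) * carrier  # amplitude-modulated: essentially inaudible

# An ideal linear microphone would capture only energy near 20 kHz.
# A small quadratic term (assumed coefficient 0.1) squares the AM signal,
# producing difference frequencies that include the original 400 Hz tone.
mic_output = transmitted + 0.1 * transmitted ** 2

# The device's audio front end low-pass filters to the speech band,
# leaving the recovered "command" for the voice assistant to hear.
b, a = butter(4, 8_000 / (fs / 2), btype="low")
recovered = filtfilt(b, a, mic_output)

# The recovered signal now has a clear component at 400 Hz.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.rfft(recovered))
mask = (freqs > 50) & (freqs < 5_000)
peak = freqs[mask][np.argmax(spectrum[mask])]
print(f"Dominant speech-band component after the nonlinearity: {peak:.0f} Hz")
```

Running the sketch reports a dominant component at roughly 400 Hz, a tone that was never present in the audible band of the transmitted signal. That is the behavior the researchers say manufacturers would need to engineer out of their microphone designs.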
Defenses Against NUIT Cyberattacks
The latest attack turns any device that can play audio into a potential conduit for malicious commands aimed at nearby voice-controlled devices. Android phones could be attacked through inaudible signals embedded in a YouTube video playing on a smart TV, for instance; iPhones could be attacked through music playing from a smart speaker, and vice versa.
In most cases, the inaudible “voice” does not even have to be recognizable as the authorized user, said UTSA’s Chen in a recent statement announcing the research.
Out of the 17 smart devices tested, only Apple Siri devices require the attacker to steal or mimic the user’s voice; the other voice assistants can be activated by any voice, even a robot voice. The attack can even happen during a Zoom meeting: if a participant unmutes themselves, they can embed the attack signal and hack a phone placed next to the victim’s computer.
If you don’t use the speaker to broadcast sound, you’re less likely to get attacked by NUIT. “Using earphones sets a limitation where the sound from earphones is too low to transmit to the microphone. If the microphone cannot receive the inaudible malicious command, the underlying voice assistant can’t be activated by NUIT.”
The technique is demonstrated in dozens of videos posted online by the researchers:
https://sites.google.com/view/nuitattack/home
Thanks to Robert Lemos of Dark Reading for this informative and scary article.
https://www.darkreading.com/vulnerabilities-threats/siri-hackers-control-smart-devices-inaudible-sounds