Voice assistant devices could be more vulnerable than we think


Researchers in China and the U.S. have shown they can issue "hidden" commands to Apple's Siri, Amazon's Alexa and Google's Assistant. They achieved this by making subtle changes to audio files that cancel out the sound the speech recognition system was supposed to hear and replace it with audio that machines interpret differently, while remaining virtually undetectable to the human ear.
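To give a rough sense of how such an attack could be set up, here is a minimal, hypothetical Python sketch: a waveform is nudged by a small, bounded perturbation until a stand-in "recognizer" hears a different command. The toy linear scorer, the parameter values and the variable names are all illustrative assumptions, not details from the published research.

# Hypothetical toy sketch (not the researchers' actual method): nudge a
# waveform with a small, bounded perturbation until a stand-in "recognizer"
# hears a different command. The linear scorer below is a placeholder for a
# real speech recognition model.
import numpy as np

rng = np.random.default_rng(0)
SAMPLES = 16_000                                 # one second of 16 kHz audio

# Toy recognizer: scores two "commands" against fixed random templates.
templates = rng.standard_normal((2, SAMPLES))

def recognize(waveform):
    return int(np.argmax(templates @ waveform))  # index of the "heard" command

original = 0.1 * rng.standard_normal(SAMPLES)    # pretend this is a song clip
target = 1 - recognize(original)                 # command it does NOT hear yet
epsilon = 0.01                                   # cap on per-sample change

perturbation = np.zeros(SAMPLES)
for _ in range(200):
    if recognize(original + perturbation) == target:
        break
    # For this linear toy model, the gradient of the target's score margin
    # with respect to the waveform is simply the difference of templates.
    grad = templates[target] - templates[1 - target]
    perturbation = np.clip(perturbation + 2e-4 * np.sign(grad),
                           -epsilon, epsilon)    # keep the change small

print("now recognized as command", recognize(original + perturbation))
print("largest sample change:", float(np.abs(perturbation).max()))

In this toy setup the perturbation stays tiny relative to the original waveform yet flips the recognizer's output, which is the essence of the attack the researchers describe against real speech models.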

This is according to researchers at UC Berkeley, who recently published a paper on the CommanderSong concept, which shows how voice commands can be hidden inside music. While at present this is strictly an academic exercise, the researchers say it would be foolish to assume hackers won't discover the same methods.

"We wanted to see if we could make it even more stealthy", said phD computer security student Nicholas Carlini said.

Researchers in China demonstrated a year ago that ultrasonic transmissions could trigger popular voice assistants such as Siri and Alexa, in a method known as "DolphinAttack". Google says its virtual assistant has features that mitigate such undetectable commands. Apple points out that the HomePod can't do things like open doors, and that iPhones have to be unlocked before Siri will execute certain commands. The attack originally required the transmitter to be close to the target device, though a more powerful ultrasonic transmitter can extend the effective range; since then, researchers have improved on the technique and demonstrated a version that works from as far as 25 feet away.
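For a rough sense of the mechanism, here is a hypothetical Python sketch of the amplitude-modulation trick behind such ultrasonic attacks: a tone standing in for a spoken command is modulated onto a 25 kHz carrier, and a microphone's square-law nonlinearity recovers the audible envelope. The tone, carrier frequency and crude filter are illustrative assumptions, not parameters from the published work.

# Rough, hypothetical illustration of the DolphinAttack idea: amplitude-
# modulate an audible "command" onto an ultrasonic carrier that humans
# cannot hear, then show how a microphone's nonlinearity recovers it.
import numpy as np

SAMPLE_RATE = 192_000            # high rate needed to represent ultrasound
CARRIER_HZ = 25_000              # above the ~20 kHz limit of human hearing

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE        # one second of samples

# Stand-in for a spoken command: a 400 Hz tone (a real attack uses speech).
command = 0.5 * np.sin(2 * np.pi * 400 * t)

# Standard AM: carrier scaled by (1 + depth * command).
depth = 0.8
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
ultrasonic = (1 + depth * command) * carrier

# A microphone with a square-law nonlinearity effectively squares the input,
# which produces a baseband component proportional to the command.
demodulated = ultrasonic ** 2
# Crude low-pass filter (moving average) to discard the remaining ultrasound.
kernel = np.ones(256) / 256
recovered = np.convolve(demodulated, kernel, mode="same")

print("recovered signal correlates with the command:",
      round(float(np.corrcoef(recovered, command)[0, 1]), 3))

The correlation printed at the end is close to 1: the ultrasonic signal itself is inaudible, but after the microphone's nonlinearity the original command reappears in the audible band.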


There have been other (unfortunately successful) attempts to fool voice assistants, and there are few ways to stop such audio from being broadcast at people's "smart" devices. In the latest development of the research, UC Berkeley students worked out a way to hide commands within recordings of music or spoken text. It's a fair warning to companies designing digital assistants to get ahead of the problem rather than react after the fact. The group provided samples of songs in which voice commands have been embedded to make digital assistants do specific things, including visiting websites, turning on GPS, and making phone calls.

According to Carlini, music can be transcribed as arbitrary speech, and human beings cannot hear the targeted attacks play out. He wrote one of the first papers on audio attacks, titled "Cocaine Noodles" because devices interpreted the phrase "cocaine noodles" as "OK, Google".

Carlini went on to note: "We want to demonstrate that it's possible, and then hope that other people will say, 'Okay, this is possible, now let's try and fix it.'"
