- Smart speakers can hear commands inaudible to humans.
- Secret messages can be used to make calls and unlock doors.
- Researchers in China call the technique 'DolphinAttack'.
Research teams in China and the US have been able to secretly activate the AI assistants on smartphones and smart speakers. Once activated, the university labs dialed phone numbers and opened websites. More worryingly, the technique could be used for darker purposes such as unlocking doors, wiring money, or shopping online.
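The 'DolphinAttack' research reportedly works by amplitude-modulating a spoken command onto an ultrasonic carrier: the emitted sound sits above the range of human hearing, but nonlinearity in a microphone's hardware demodulates it back to an audible command. A minimal sketch of that modulation step (the sample rate, carrier frequency, and modulation depth here are illustrative assumptions, not the researchers' exact parameters):

```python
import numpy as np

def modulate_ultrasonic(voice, fs=192000, carrier_hz=25000, depth=0.8):
    """Amplitude-modulate a baseband 'voice' signal onto an ultrasonic carrier.

    The transmitted signal contains no energy at audible frequencies; a
    microphone's nonlinear response can shift the sidebands back to baseband.
    (Illustrative sketch only; parameters are assumptions.)
    """
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: carrier plus voice-shaped sidebands around carrier_hz.
    return (1 + depth * voice) * carrier

# Example: a 1 kHz tone standing in for a recorded voice command.
fs = 192000
t = np.arange(fs) / fs                      # one second of samples
voice = 0.5 * np.sin(2 * np.pi * 1000 * t)
signal = modulate_ultrasonic(voice, fs=fs)

# The spectrum peaks at the 25 kHz carrier, with sidebands at 24/26 kHz --
# all above the ~20 kHz limit of human hearing.
spectrum = np.abs(np.fft.rfft(signal))
peak_hz = np.argmax(spectrum) * fs / len(signal)
```

The point of the sketch is only that everything in the transmitted signal lies above 20 kHz; the attack's effectiveness depends on the target microphone's hardware, not on software.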
Researchers Nicholas Carlini and David Wagner spoke to The New York Times about how they embedded the secret message, “OK Google, browse to evil.com,” both in a seemingly harmless spoken sentence and in a short clip of Verdi’s ‘Requiem’. Both times, Mozilla’s open-source DeepSpeech speech-recognition software was fooled.
Carlini stated, “We want to demonstrate that it’s possible, and then hope that other people will say, ‘Okay, this is possible, now let’s try and fix it.’”
In response to the article by the American publication, Amazon, Google and Apple have all given assurances that their smart speakers are secure.
Anyone who remembers Amazon's Super Bowl ad from earlier this year will recall that Amazon is already aware of this problem. The ad was designed so that Alexa speakers in viewers' homes wouldn't respond, even though the word "Alexa" was uttered nearly ten times during the spot. In fact, Amazon filed a patent back in 2014 called "Audible Command Filtering" that describes several techniques for preventing Alexa from waking up.
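The patent's exact methods aren't detailed here, but one approach consistent with the ad's behavior is acoustic fingerprinting: the device compares incoming audio against fingerprints of known broadcast clips and suppresses the wake word on a match. A toy sketch under that assumption (the fingerprint scheme, band count, and threshold are all illustrative, not Amazon's actual implementation):

```python
import numpy as np

def spectral_fingerprint(audio, n_bands=8):
    """Coarse fingerprint: normalized energy in a handful of frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, n_bands)
    energies = np.array([band.sum() for band in bands])
    return energies / (energies.sum() + 1e-12)

def should_suppress(incoming, known_clip, threshold=0.05):
    """Ignore the wake word if incoming audio matches a known broadcast clip."""
    diff = np.abs(spectral_fingerprint(incoming) - spectral_fingerprint(known_clip))
    return diff.sum() < threshold

# Example: audio identical to the stored ad clip matches; a different
# (live) signal does not. Pure tones stand in for real audio here.
fs = 16000
t = np.arange(fs) / fs
ad_clip = np.sin(2 * np.pi * 440 * t)       # audio as broadcast in the ad
live_voice = np.sin(2 * np.pi * 5000 * t)   # a different, live signal
```

A real system would use far more robust fingerprints (and handle room acoustics and noise), but the design idea is the same: recognize the broadcast, not the speaker.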
Security has been a continual concern when it comes to the Internet of Things (IoT), and this research highlights an issue that could pose a serious obstacle in the future. It's a fair warning to companies designing digital assistants to get out in front of the problem rather than react after the fact.