- Researchers exposed a flaw in the Echo that lets anyone record your audio
- It requires hackers to take apart an Echo, alter its firmware and then reassemble the device
- Then, they must be able to connect it to the same WiFi as another Echo device
- Amazon says it issued a software patch for the vulnerability to all users in July
A group of security researchers have exposed a flaw in the Amazon Echo that allows hackers to secretly listen to unsuspecting users’ conversations – but only if they’re savvy enough to carry out the attack.
In a presentation dubbed ‘Breaking Smart Speakers: We are Listening to You,’ researchers from Chinese tech giant Tencent explained how they were able to build a doctored Echo speaker and use that to gain access to other Echo devices.
The researchers have since notified Amazon of the vulnerability, and the company issued a patch in July.
Hackers from Tencent’s Blade security research team exposed a flaw in Amazon’s Echo smart speaker that would allow someone to secretly spy on others and play random sounds
‘After several months of research, we successfully break the Amazon Echo by using multiple vulnerabilities in the Amazon Echo system, and [achieve] remote eavesdropping,’ the researchers said in the presentation, which was given at the DEF CON security conference, according to Wired.
‘When the attack [succeeds], we can control Amazon Echo for eavesdropping and send the voice data through a network to the attacker.’
First, the researchers took apart an Echo speaker and removed its flash memory chip.
They modified the firmware on the chip and then re-soldered the chip back to the Echo’s motherboard.
From there, they were able to get the doctored Echo onto the same local area network (LAN) as another Echo speaker.
Researchers used Amazon’s Home Audio Daemon, which the device uses to communicate with other Echo devices on the same WiFi connection, to gain control over users’ speakers, Wired noted.
This meant they could silently record conversations, or engage in other creepy behavior, like playing random sounds.
The attack marks the first time researchers have identified a major security flaw in a popular smart speaker such as the Amazon Echo.
The researchers have since notified Amazon of the security flaw and the firm said it issued a software patch to users in July. They also note the attack requires physical access to an Echo device
However, Amazon and the researchers caution that the method identified is very sophisticated and likely too challenging for the average hacker to carry out.
‘Customers do not need to take any action as their devices have been automatically updated with security fixes,’ an Amazon spokesperson told Wired.
‘This issue would have required a malicious actor to have physical access to a device and the ability to modify the device hardware.’
Still, some have pointed out that the attack could be carried out in areas where there are multiple Echos being used on the same network, such as hotels.
Earlier this year, researchers from the University of California, Berkeley identified a flaw where hackers could control popular voice assistants, such as Alexa, Siri, and Google Assistant by slipping inaudible voice commands into audio recordings.
The secret commands can instruct a voice assistant to do all sorts of things, ranging from taking pictures or sending text messages, to launching websites and making phone calls.
WHAT IS DOLPHINATTACK?
Researchers at China’s Zhejiang University published a study last year showing that many of the most popular smart speakers and smartphones equipped with digital assistants could be tricked into taking commands from hackers.
They used a technique called DolphinAttack, which translates voice commands into ultrasonic frequencies that are too high for the human ear to recognize.
While the commands may go unheard by humans, the ultrasonic audio commands can be picked up, recovered and then interpreted by speech recognition systems.
The team were able to launch attacks, using frequencies higher than 20kHz, with less than £2.20 ($3) of equipment attached to a Galaxy S6 Edge.
Researchers tested DolphinAttack on iPhone 4s to iPhone 7 Plus, Apple watch, Apple iPad mini 4, Apple MacBook, LG Nexus 5X, Asus Nexus 7, Samsung Galaxy S6 edge, Huawei Honor 7, Lenovo ThinkPad T440p, Amazon Echo and Audi Q3
They used an external battery, an amplifier, and an ultrasonic transducer.
This allowed them to send sounds which the voice assistants’ microphones were able to pick up and understand.
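The core trick described above – hiding a voice command on an ultrasonic carrier that a microphone’s circuitry demodulates back into the audible band – can be illustrated with a short, hypothetical Python sketch. The sample rate, carrier frequency and filter values below are illustrative assumptions, not the researchers’ actual parameters:

```python
import numpy as np

# Illustrative sketch of the DolphinAttack idea: amplitude-modulate a
# voice-band "command" onto an ultrasonic carrier above the ~20 kHz limit
# of human hearing. Nonlinearity in a microphone's amplifier then acts
# like an envelope detector, recovering the command in the audible band.
# All parameter values here are assumptions for demonstration only.

FS = 192_000         # sample rate high enough to represent a 25 kHz carrier
CARRIER_HZ = 25_000  # ultrasonic carrier frequency (inaudible to humans)

def modulate(command: np.ndarray, fs: int = FS, fc: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband signal onto an ultrasonic carrier."""
    t = np.arange(len(command)) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    # Standard AM: the DC offset keeps the envelope non-negative
    return (1.0 + command) * carrier

def demodulate(signal: np.ndarray) -> np.ndarray:
    """Crude envelope detector standing in for the microphone's nonlinearity."""
    envelope = np.abs(signal)
    # Simple moving-average low-pass filter to recover the baseband envelope
    kernel = np.ones(64) / 64
    return np.convolve(envelope, kernel, mode="same")

# A 500 Hz tone stands in for a spoken command
t = np.arange(FS // 10) / FS
command = 0.5 * np.sin(2 * np.pi * 500 * t)

ultrasonic = modulate(command)   # what the attacker's transducer emits
recovered = demodulate(ultrasonic)  # what the victim's microphone "hears"
```

In this toy model the transmitted waveform contains only frequencies around 25kHz, yet the recovered envelope closely tracks the original 500Hz command – the same effect that lets a speech recognition system respond to a sound no human can hear.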
The inaudible voice commands were correctly interpreted by the speech recognition systems on all the tested hardware.
Researchers say the fault is due to both software and hardware issues.
The microphones and software that run assistants such as Alexa and Google Now can pick up frequencies above 20kHz, which is the upper limit of the audible range for human ears.