Security researchers with SRLabs revealed that they were able to create skills/Actions for both Amazon Alexa and Google Assistant that could allow hackers to eavesdrop on users without their consent. It’s a surprise to see how these malicious apps made it through Amazon’s and Google’s stringent app selection process.
Through the skills, the researchers were able to hack the smart speakers by exploiting security flaws. Although the apps were genuine, they hid malicious code. ZDNet was first to report the finding by security researchers from SRLabs, a hacking firm.
As revealed by security researchers in a blog post, the skills requested and personal information including user passwords. They were also able to eavesdrop on users even when they thought the speakers are not listening to them. SRLabs has shown how the hacks work in a series of videos.
“We found two possible hacking scenarios that apply to both Amazon Alexa and Google Home. The flaws allow a hacker to phish for sensitive information and eavesdrop on users. We created voice applications to demonstrate both hacks on both device platforms, turning the assistants into Smart Spies,” the researcher started.
In a statement provided to ArsTechnica, Amazon said it has put mitigations in place to prevent and detect this type of skill behavior and reject or take them down when identified. The company also said it takes down skills whenever this kind of behavior is identified.
Meanwhile, Google told Ars Technica that it has review processes to detect the type of behavior, and has removed the Actions that the company found from these researchers. A spokesperson for Google also told the publication that it is conducting a review of all third-party actions available from Google, and has temporarily paused some actions currently in the process.