Alexa, Google Assistant Smart Speakers Can be Exploited for Phishing, Eavesdropping: Researchers

There has been a lot of debate lately regarding the privacy aspect when it comes to smart home devices, but it appears that the concerns are not unwarranted. Experts at Security Research Labs have uncovered vulnerabilities associated with Alexa and Google Assistant voice app backend systems that can be exploited to eavesdrop on users and for phishing out a password with ease. The security experts demonstrated the vulnerabilities in proof-of-concept videos and revealed how easy it is trick users into giving up sensitive information such as passwords and account details.

Security Research Labs explained in its report that malicious parties can use non-readable characters like a “�” in the code of voice apps for Amazon’s Alexa assistant called Skills, or Actions in the case of Google Assistant. When such a character is encountered in the course of an ongoing interaction between users and the virtual assistant, it prompts a long pause, which tricks users into believing that the app has malfunctioned.

In such a scenario, users might think that the interaction has stopped and they need again to say a hotword like “Ok Google” or “Hey Alexa” to initiate an action. But in reality, the malicious party can use this pause to listen to whatever the user has said in the meanwhile, and can send the voice transcript of everything they said in a short duration to a dedicated server belonging to hackers.

Similarly, when the unreadable “�” character induces a short pause, say for 30 seconds to trick users into believing that something has malfunctioned, the malicious party can follow that up in their voice app with a code that reads a fake update message. In such cases, the false update voice prompt may ask users to say their password to install the update, and might also ask for more information such as the linked account. With this info, one can take control of an unsuspecting user’s Amazon or Google account.

The eavesdropping and phishing vulnerabilities can be exploited via the backend that Google and Amazon provide to developers of Alexa skills and Google Assistant actions. And in the absence of stringent vetting protocols, malicious parties can gain access to functions that provide them access to critical commands and subsequently control how the virtual assistants behave. Security Research Labs reported the vulnerability to Google and Amazon months ago, but they are yet to be patched. Moreover, since Amazon and Google do not vet the code of app updates, malicious parties have a free hand here.

“All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies. We have review processes to detect the type of behaviour described in this report, and we removed the Actions that we found from these researchers”, a Google spokesperson was quoted as saying by ZDNet regarding the issue, but Amazon is yet to issue a statement. Google also wants to spread awareness that the Google Assistant won’t ask them for sensitive information such as a password via a voice skill, with the intention of keeping them aware of such deception.

Comments are closed.