I don’t know whether to categorise this under STT or SN, but I guess Steve Gibson will probably pick up on it for SN anyway. So I’m posting under STT because listeners could be interested in knowing when Amazon and Google release fixes.
Researchers have found possible ways of getting some Amazon and Google speakers to keep listening after you think they’ve stopped, generate credible messages asking for personal info like passwords, and extract any responses back to the attacker. Here’s an article with details:
It does appear that a developer has to create a malicious voice app that purports to add Skills/Actions to the speaker, so a bit of caution regarding third-party apps may be all that’s needed to avoid this.
I wasn’t really aware their were third party apps for Google speakers in the same way there are for Amazon Echos. Google speakers always seemed to be heavily controlled, where users couldn’t really activate apps/skills.
Could well be that it’s not a real risk with Google devices. It’s not an area I know a lot about, I was just calling attention to the report in case someone who knows more cares to assess it, after the link popped up in my Twitter feed from a security news source.