I don’t know whether to categorise this under STT or SN, but I expect Steve Gibson will pick it up for SN anyway. I’m posting under STT because listeners may want to know when Amazon and Google release fixes.
Researchers have found possible ways of getting some Amazon and Google smart speakers to keep listening after you think they’ve stopped, to play convincing voice prompts asking for personal info such as passwords, and to relay any responses back to the attacker. Here’s an article with details:
It does appear that an attacker has to get you to install a malicious third-party voice app (a rogue Skill on Alexa or Action on Google Home), so a bit of caution about installing third-party voice apps may be all that’s needed to avoid this.