That is nothing new; voice recognition has always been very bad with regional accents and dialects in England. I expect it won’t be any better once Apple Intelligence is launched in Germany, with its various regional accents and dialects.
Mrs Littlejohn told BBC News: "Initially I was shocked - astonished - but then I thought that is so funny. The text was obviously quite inappropriate.
“The garage is trying to sell cars, and instead of that they are leaving insulting messages without even being aware of it. It is not their fault at all.”
Well, the garage isn’t leaving insulting messages, is it? Sorry, but that is a stupid thing to say. This is just a typical voice recognition failure and has nothing to do with the garage.
Several factors made this recording particularly hard to transcribe:

- The call is over the telephone and therefore harder to hear
- There is background noise on the call
- The garage worker sounds as if he is reading a prepared script rather than speaking naturally
"All of those factors contribute to the system doing badly, " he added. "The bigger question is why it outputs that kind of content.
“If you are producing a speech-to-text system that is being used by the public, you would think you would have safeguards for that kind of thing.”
This is exactly the point. If there is a lot of noise in the background, if it detects what it thinks are inappropriate words, or if it is unsure because of the accent, it should simply say that it was unable to interpret the message.
This is a perfect example of what is wrong with AI at the moment: instead of producing a message saying the source material is too unclear to process, it makes up a garbled answer full of mistakes rather than admitting defeat. If the voicemail was so hard to hear, it should apologise, say the message is partially inaudible, and suggest the recipient listen to it themselves or call back to confirm what was said.
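As a rough illustration, here is a minimal sketch in Python of what that kind of safeguard could look like. The Segment class, the threshold values and the fallback wording are all my own assumptions rather than anything Apple actually does, but many speech-to-text engines already expose some form of per-segment confidence score that could drive exactly this kind of check.

```python
from dataclasses import dataclass

# Hypothetical data structure: many speech-to-text engines return some form of
# per-segment confidence score alongside the recognised text.
@dataclass
class Segment:
    text: str
    confidence: float  # 0.0 (pure guess) to 1.0 (certain)

# Illustrative thresholds, not taken from any real product.
MIN_SEGMENT_CONFIDENCE = 0.6   # below this, treat a segment as inaudible
MIN_USABLE_FRACTION = 0.8      # below this, refuse to transcribe at all

def render_voicemail_transcript(segments: list[Segment]) -> str:
    """Return a transcript only if enough of it was recognised confidently;
    otherwise admit defeat instead of guessing."""
    if not segments:
        return "The message could not be transcribed. Please listen to the voicemail."

    usable = [s for s in segments if s.confidence >= MIN_SEGMENT_CONFIDENCE]
    usable_fraction = len(usable) / len(segments)

    if usable_fraction < MIN_USABLE_FRACTION:
        # Too much of the audio is unclear: say so rather than inventing words.
        return ("Sorry, parts of this message were inaudible and could not be "
                "transcribed reliably. Please listen to the voicemail or call back.")

    # Mark the remaining low-confidence gaps explicitly instead of filling them in.
    return " ".join(
        s.text if s.confidence >= MIN_SEGMENT_CONFIDENCE else "[inaudible]"
        for s in segments
    )
```

The key design choice is that low confidence always produces an explicit "[inaudible]" marker or an outright refusal, never a guess.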
This is why I find AI so infuriating. There is nothing wrong with admitting you can’t complete a task if you genuinely can’t. In fact, that is far more acceptable than making things up! That is what the companies behind these AI engines need to work on. Let the AI be fallible, and when it is, it should simply admit it and let the user decide for themselves what to do next.