Beep boop - this is a robot. A new show has been posted to TWiT…
What are your thoughts about today’s show? We’d love to hear from you!
I think Musk is bitter that he didn't get his own way with OpenAI - and in any case, would we really trust someone who has nosedived one of the big social media platforms into the ground?
No, but the rest of the people on the list are generally very intelligent and they do make some good points.
I don’t want to go all Butlerian Jihad, but taking a moment to sit back and think about exactly what we are doing with AI, how it works, and how to control it, isn’t a bad idea. The creators don’t even know how to test the platforms they are creating to make sure they are sensible and safe, because they don’t know how the models reach the conclusions they do.
We have AIs which can confidently lie to us, and most people will not be able to tell whether it is the truth or a lie. This sort of thing needs to be sorted out before AI can be used in the mainstream.
Using it for discrete tasks where lies or badly produced results can be quickly verified, such as telling it to perform an analysis on a table of figures and format the output, is the sort of baby step we should be taking whilst they work out how to stop AI lying.
There have been some genuinely groundbreaking steps forward in the last 6-9 months, but we are still a long way from things like ChatGPT being ready for prime-time use. The problem is, these companies need to recoup their investments for short-term-thinking investors, which means the products are incomplete and not ready for public consumption (outside of testing by people who know what they are getting into and who provide good feedback), but are being thrown at the general public anyway.
I’m always thinking about accessibility when some legal or moral issue with technology comes up. To bring two current topics together, what if a blind person buys a book, digitizes it, and wants to run it through an AI to generate audio narration of it for themself? Few books are available in Braille, only some are available as official audiobooks, and both of those options are expensive when they exist.
Most likely that AI would be some kind of online service. Even if that person is the only one who has access to the results for all ordinary intents and purposes, they’d be “transmitting” the text of the book to the service’s servers, and that might be considered illegal.
If they have to lend the physical book to a friend to get it digitized in the first place, it’s changing hands twice.
The simplest solution is purchasing ebooks on a platform that has its own AI text-to-speech service built in. That brings us to the current controversies over AI voices. I’ve seen a lot of people (mainly on Twitter) “criticizing” the very existence of AI voices due to the potential for misuse. The “criticisms” tend to be profanity-laden screeds and incredibly aggressive dogpiling on anyone who points out any possible benefit of AI, including accessibility uses.
Yet at the same time, politicians, big business and, dare I say it, journalists lie to us on a regular basis. Iteration is the key to moving forward, and the greatest journeys start with the first step.
I would also say that if the West doesn’t move forward, TikTok will be the least of our worries about China!
Paul Smith-Keitley
Adobe Creative Educator
Just because we have corrupt and immoral entities around us doesn’t mean we should make new ones in their image… We should push to better ourselves, not sink our standards ever lower.
I don’t see that we are. I think many of those people are afraid of what they don’t understand, while others are afraid they are being left behind.
Hey @Leo, have a great holiday. We are off to the home of stroopwafels tomorrow for a long weekend, will post pictures.