TWIT 964: No One Talks to the Faucet Anymore

Beep boop - this is a robot. A new show has been posted to TWiT…

What are your thoughts about today’s show? We’d love to hear from you!

1 Like

When it comes to AI ingesting stuff from creators, I don’t see why it matters if a for-profit company runs it. I don’t see the difference between a human doing an impression and a machine doing an impression.
Just as the US government classified encryption as a munition in the late 90s and prevented the export of anything stronger than 40 bits, which set encryption back, we run the risk of doing something similar to AI if we're not careful.
As for training data being poisoned and us not being able to get back to a clean slate, I feel that’s what all the sci-fi movies and the fatalists are warning against, so maybe.
The problem with letting AI spot the fakes is how often it’ll get it wrong. We, as humans, can probably spot the fakes more reliably than a bot can. Those are my thoughts anyway.
The only way I can see the App Store being good for security is that Apple can, in theory, scan the apps being let into the store; those that don't meet basic standards for usability, security, and the common design language get rejected.

1 Like

I am not against AIs having access to information, but nobody has yet answered the question I have been asking on these forums for weeks now: how is it going to be sustainable?

At the moment, AI is a parasite. It gobbles up information from everywhere, usually with no compensation. A few news organisations have signed licensing deals. Give it historical information and I'm fine with that. But if the AI gobbles up the information from reporters and people get their news from the AI, who is going to pay the journalists to keep reporting the news? Without the reporting, the AI will become less and less useful, but by that time the news organisations will have been driven out of business and no reporters will be reporting…

Yes, I know @Leo says nobody will read the NYT on ChatGPT, but people will ask it for summaries of what is going on in the world, and for many that will be enough; they will never go to the source material.

@gigastacey did touch on it (the AI companies having to employ the reporters themselves, I think), but she barely got a sentence out before the topic moved on, and the point wasn't really discussed. I felt that was a shame; I'd have liked to hear Stacey's view on it in more detail.

Don’t get me wrong, I’m not for protecting old media, per se. But AI has to come up with a new model for getting its information, if it is going to destroy old media. And it needs to come up with that new model, before it destroys the old one, otherwise it is destroying itself in the process.

As I said, AI is a parasite, and it needs to learn to become a symbiont.

3 Likes

I have a feeling that at some point we will see diminishing returns from using AI for general-purpose reporting. The reason is that AI will consume AI-generated content, which won't be original at all, so we will be moving in an endless circle, disrupted only by AI hallucinations that get accepted as fact.
Compounding this, most publications will try to optimise for cost and fire their most expensive journalists and content creators. That is a recipe for disaster that the news and creative industries will realise too late.

3 Likes

Wow, his future AI is way more conservative than I'd have thought.

Best laugh I’ve had in a long time listening to TWiT. Thanks Stacey!

The discussion about stiff penalties for distributing deepfake porn reminded me of an episode of Jack Rhysider’s podcast “Darknet Diaries”, in which a Florida woman tried to get police to charge a man who had been posting pornographic photos and videos of her for years, only to be mostly ignored despite a “revenge porn” statute. Had the woman’s sister not been an attorney, there’s a good chance no action would have been taken. Revenge Bytes – Darknet Diaries

1 Like

I said hello to these folks at CES

5 Likes

Amen. I wish @gigastacey had been able to build that out a little more.

1 Like

I feel like Leo left 2023 with a nuanced, skeptical view of machine learning software. Then he fired into 2024 as a pure shill for anything labeled “AI.” Prison time for users who employ a tool like Nightshade? Really? That’s nuts. If you’re gonna have your ML model run amok knocking on every door in the world, you’ve got to expect stuff like this.

I wonder if there’s a business model behind providing curated datasets for ML training. That could be a solution to the sustainability problem that @big_D keeps mentioning. I think the public internet will eventually become so poisonous to ML training that it won’t be effective to simply let it loose on the WAN as we do today. People who want a usable ML model would pay for access to datasets that are vetted to contain reliable content, and folks who have content within said datasets are fairly compensated.

1 Like

Yeah, you are right: if you restrict ML to a reliable training set, you are mostly OK, but your model is quite restricted. It becomes a glorified autocomplete bot. In a lot of cases this is fine, but it won’t be the transformative tool many people think it is.
It will just be a great productivity enhancer.

I think most businesses are hoping that ML will allow them to fire people, and in this restrictive form it won’t be able to do that.

Our boss is over 80. He has decades’ worth of research, and he says his scientists keep coming back with repetitions of work that was done over those decades, because no one has the time to go through the copious notes he has assembled.

He wants an AI that could look at a customer’s requirements and come up with the best chemical solution for them, based on the existing research. Often a customer will come wanting a new paper coating, and if anyone had looked back through the notes, they would have found an experiment from the 80s that covers it. (We research and make macro-molecular polymers.)

But that is most definitely an AI solution that could never be put on a cloud service! That would be over 50 years of trade secrets that could easily escape. Of course, we would first have to get those 50 years of research into computer-readable form, or at least the first 30 years or so.

Such a project would make the company more effective. The research scientists in R&D could concentrate on really new products, instead of continually reinventing the wheel for customers, when an old product could simply be dug out and used, or used as a basis for further refinement.
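The retrieval step at the heart of an idea like that can be sketched in a few lines: given digitised note snippets, rank them against a customer's requirement text by bag-of-words cosine similarity. Everything here is hypothetical (the note IDs, the snippet text, the `best_matches` helper are all made up for illustration); a real system would use proper embeddings and, as noted above, run entirely on-premises so the trade secrets stay in-house.

```python
import math
from collections import Counter

def vectorize(text):
    # Naive bag-of-words vector: token -> count
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical digitised note snippets (stand-ins for real archive entries)
notes = {
    "1984-exp-112": "paper coating experiment barrier polymer dispersion",
    "1991-exp-347": "adhesive formulation pressure sensitive tape",
    "2003-exp-889": "coating for glossy paper pigment binder polymer",
}

def best_matches(query, notes, top_n=2):
    # Score every note against the query and return the best non-zero hits
    qv = vectorize(query)
    scored = [(cosine(qv, vectorize(txt)), key) for key, txt in notes.items()]
    return [key for score, key in sorted(scored, reverse=True) if score > 0][:top_n]

print(best_matches("customer wants a new paper coating polymer", notes))
# -> ['1984-exp-112', '2003-exp-889']
```

This is only the lookup half, of course; the hard part would still be digitising 50 years of lab notebooks into snippets worth searching.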

2 Likes