MBW 952: Everything Smells Like Fresh Paint

It seems to be conflating completely separate stories. There was a Brazilian tennis player who came out this week, but there are no stories on Nadal that I can find – the last article was back in October, when he retired :man_shrugging:t2:


The subtext is that this is all a passive operation – that Apple Intelligence is doing these things and there’s nothing anybody can do about it. My point was that there is a simple thing to do with a headline: verify it through another source. In fact, “Apple Intelligence” could algorithmically verify its stories through another service – like ChatGPT – or even a whole raft of services before publishing them. Or maybe Apple Intelligence would bump any story that triggered an algorithmic “controversial” flag to a human – an intelligence operator – before publishing it.
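The cross-checking idea above could be sketched roughly like this – a minimal Python sketch where the verification services are stand-in stubs (none of these functions correspond to any real Apple or OpenAI API; the names, verdict fields, and keyword checks are all hypothetical):

```python
# Hypothetical sketch: check a generated headline against several independent
# services before publishing, and bump anything flagged "controversial" to a
# human operator. The service functions are stubs, not real APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    confirmed: bool       # did this service corroborate the headline?
    controversial: bool   # did it flag the topic as sensitive?

# Stub verifiers standing in for external services (another LLM, a wire
# service, etc.). A real implementation would make network calls here.
def service_a(headline: str) -> Verdict:
    return Verdict(confirmed="retired" in headline, controversial=False)

def service_b(headline: str) -> Verdict:
    return Verdict(confirmed="retired" in headline,
                   controversial="came out" in headline)

def review_headline(headline: str,
                    services: list[Callable[[str], Verdict]]) -> str:
    verdicts = [svc(headline) for svc in services]
    if any(v.controversial for v in verdicts):
        return "escalate-to-human"   # an intelligence operator decides
    if all(v.confirmed for v in verdicts):
        return "publish"             # every service corroborates it
    return "hold"                    # unverified: don't publish

print(review_headline("Nadal retired in October", [service_a, service_b]))
# prints "publish" with these stub verifiers
```

The point isn’t the stubs themselves but the shape of the policy: unanimous corroboration is required to publish, and a single “controversial” flag routes the story to a human instead of out the door.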

As you have noted, the media needs to get on top of this story. Many of us count on MacRumors for their news, but they haven’t been on top of this particular fake-news story. Tom’s Hardware has. If iMore were still around, they would be all over it.

It’s a good bet that some Apple personnel got pulled from their holiday vacation to scrutinize and fix this #!$$ problem. Tim Apple knows what’s at stake. Besides any potential liability for the Apple UnIntelligence, Apple has bet its reputation on being the most rock-solid, conservative, and privacy-focused consumer AI operation in the business. It could cost them billions to have this failure reveal their marketing campaign as a facade.

I think it’s a good thing when we have a healthy lack of respect for AIs. My come-to-Jesus moment was when I realized that Google Gemini reacted differently to the names “Gerald Pollack” and “G H Pollack”. The really fun part was when the AI explained to me that “G H” was actually “Gerald H”:

Yes, Gerald H. Pollack is a professor at the University of Washington. He is a professor of bioengineering and is well-known for his research on water and its role in biological systems.

Contrast that with what Gemini told me about “Gerald H. Pollack”:

I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited.

That was definitely a “HAL 9000” moment for me. Steve Gibson had a similar experience with MASM, where ChatGPT gleefully generated assembler language code that was completely wrong – fake code. That experience will alter him for the rest of his life – for the better. Whether or not he swears off 8086 assembler development remains to be seen. :grin:

I never said that AIs delivering fake news wasn’t a problem. I noted there are ways for individuals to deal with that problem, and for corporations to codify and automate their “skepticism”. FWIW, I think that anything that raises our societal level of awareness of the limitations of what LLMs are doing is A Good Thing.
