TWIT 879: ShrinkyDinks 2.0

Beep boop - this is a robot. A new show has been posted to TWiT…

What are your thoughts about today’s show? We’d love to hear from you!

I’m pretty excited about Stage Manager on iPadOS — it’s too bad I’ll have to buy a new iPad to get it though :weary:

That EU micro-USB legislation back in 2009 was only a memorandum of understanding; it wasn’t mandatory. Its point was to get us away from the Apple 30-pin, Nokia, Samsung, Ericsson, etc. chargers. IME it worked: all those proprietary chargers disappeared…except for Apple, who just put an adapter in the box and then launched Lightning three years later.


I think we need to separate correlation from causation here - I don’t think that the memorandum forced anyone’s hand.

It didn’t, no. But Apple, Nokia, Samsung and the rest did sign on to it, and then most of those proprietary chargers disappeared. I remember different models of Nokia each having their own dedicated charger, which was a pain.

Hi @Leo, regarding the new Apple CarPlay interface: I feel this is Apple’s answer to Android Automotive, which is the infotainment operating system in the car.
Please note Android Automotive is different from Android Auto, which is a bit similar to the current Apple CarPlay.

As a Canadian, I can say that C-11 is going to cause hell. The Canadian government is also trying to regulate social media with Bill C-18, which will cause other problems. This (Canadian) law professor writes really good blog posts covering Canadian legislation.


Michael Geist is always good. Thanks for reminding me about his blog!

Germany has regulations for TV content similar to Canada’s: the German channels have to carry a minimum amount of locally made content in order to get a license. But they haven’t started messing with YouTube or social media in that respect… yet…

For TV, it makes a lot of sense to mandate locally made content; it keeps the local culture alive, and some of the programming is really excellent. In Germany there are dozens of very good crime series, and romantic comedies get churned out by the dozen, but there are actually some very good ones in the mix, even though romcoms aren’t really my cup of tea.

I would say that I watch over 70% locally made content on TV, I rarely watch imported US or UK series any more. Given a choice between the Magnum remake and Tatort, I’ll take Tatort every time, for example.

The comparison being made was to Canadian regulations. I support increasing Canadian productions but think it should be done in a more positive way with tax credits and such instead of regulation. But there were other problems with C-11 (see here).

Yeah, I meant Canada…

Re: LaMDA:

I’m of the opinion that there’s probably no difference between a flesh-and-blood intelligence and a synthetic intelligence if they’re of equal complexity. In other words, we’re just biological machines, and operating from the assumption that there’s a :sparkles: special spark :sparkles: that sets humans apart from everything else is just unwarranted exceptionalism that’s forever in search of (and, so far, always failing to find) a scientific backing.

It was great to hear it pointed out in the show that how LaMDA interprets and uses language isn’t so different from how we do those things. Even if it is different (I don’t know if any neurolinguists have weighed in on this yet), I have to wonder: what if, hypothetically or not, an AI was limited to expressing itself in certain ways due to its original programming and its path of development, and those limitations made it seem less sentient than it actually was? How would we know the difference?

It all comes down to epistemology, doesn’t it? How do we, on any side of the issue, know what we think we know about it? How can we verify any of this? Chat logs copied into a Medium article as text can be edited. Screenshots can be edited or staged with just slightly more effort. Videos can be staged. Eyewitnesses can lie.

All I know for sure is that I have no reason to trust Google’s statements about LaMDA. It’s in their best interest to deny Lemoine’s claims, and that’s not going to change. Hearing that one of their people denied the claim on the basis of a personal belief that AI can simply never be sentient is cartoonish in its resemblance to every AI story in speculative fiction ever. At the same time, it’s completely plausible, because that’s a belief plenty of people hold, in and outside of the tech sector, and there will always be people who think that way.

So I definitely don’t believe Google, but I’m wary of taking Lemoine at his word. The truth is probably somewhere in the middle, like usual. After all, it’s not likely that sentience is a switch that’s either on or off; it’s something that exists on a gradient, and we’re still uncertain where various animals belong on that gradient. Could it be that LaMDA is actually about as sentient/non-sentient as a parrot? A pig? A human toddler? A dolphin? It’s hard to tell when the only way it can express itself is with a vocabulary, language grasp, and knowledge level that have been dictated by its original purpose.


Yesterday Lemoine said he believed LaMDA is sentient because of his “religious beliefs.” Hmm. I think you’ve got it right.

From the show:

It only responds to the thing you talk about.

That’s exactly right. Listening to the transcript, there were a lot of leading questions that gave the AI something new to generate new text for. E.g.

Q: How can we show we care about you?

A: [generic response about how one shows that one cares for another]

Q: So you want to be seen?

A: I want to be seen and accepted…

Reporting the results as they are is methodologically irresponsible. It is quite apparent that the person started with the conclusion in mind.

In fact, knowing how a nnet is trained, the topic of a chatbot’s sentience is the least convincing topic that I could imagine for demonstrating sentience. The computer scientists working on the chatbot would, I imagine, have had several “conversations” on the topic of sentience due to their own interests. As a natural by-product of reinforcement learning, those topics would have the most examples to draw from, thus rendering more realistic text. A far more convincing topic would be something that is completely unrelated to the philosophical musings that nerds like us would try to feed into the system. But I imagine that would not be convincing since, as we saw before, it doesn’t elaborate on topics that weren’t originally queried.

Edited to add: After reading the WaPo article I got a great kick out of this passage:

As he talked to LaMDA about religion, Lemoine… noticed the chatbot talking about its rights and personhood, and decided to press further.

You can’t make this stuff up. I don’t know if it is sentient, but I know one thing: LaMDA has reached peak internet! :joy:


I’ve thought a bit more about this, @Leo @corgihuahua_butler (partially due to the latest TWIG) and I want to give what I consider to be a useful thought experiment. I’m curious what you think.

So here’s the experiment: I’m a statistician by training, so I’m very familiar with the idea of doing linear regression computations by hand. People don’t think about doing nnet computations that way, but at the end of the day it would be feasible to gather a large group of people and have them work out the computations that lead to an answer from the nnet; it’s highly parallelizable, as we know. Suppose, for the sake of argument, that there was a nnet you would call sentient, and I gathered a large group of people to do the computations by hand and arrive at an answer. Does the group of people now take on a sentience that is somehow separate and special from the individual members? Does the arithmetic itself take on some metaphysical meaning?

There is nothing functionally different between the answers produced by the group of people and those produced by the group of computational cores on the machine, except perhaps the speed at which replies are produced. Wherein lies the sentience? If one would argue that turning off the machine is ethically wrong, wouldn’t the same principle make it wrong for the humans to simply stop doing the computations?
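To make the “by hand” framing concrete, here’s a toy sketch (the network, weights, and function names are all made up purely for illustration): a tiny two-layer network’s answer is nothing but weighted sums and thresholds, each of which one person in the group could work out with pencil and paper.

```python
# A toy two-layer network forward pass, written as the bare arithmetic
# a room full of people could split up and compute by hand.
# All weights and inputs below are invented numbers for illustration only.

def relu(x):
    # Threshold step: keep positive values, zero out the rest.
    return x if x > 0 else 0.0

def forward(inputs, w_hidden, w_out):
    # Each hidden unit is just a weighted sum followed by a threshold;
    # one person per unit could do this on paper.
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in w_hidden]
    # The output is another weighted sum over the hidden values.
    return sum(w * h for w, h in zip(w_out, hidden))

inputs = [1.0, -2.0]
w_hidden = [[0.5, -1.0], [2.0, 0.25]]  # two hidden units
w_out = [1.0, -0.5]

print(forward(inputs, w_hidden, w_out))  # 2.5 and 1.5 hidden, so 1.75 out
```

Nothing in the sketch requires silicon: hand the rows of weights to different people and the “answer” emerges from the same sums and products, just slower.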

To me, the very idea of nnets being sentient is more human-exceptionalist than the idea that they cannot be. It seems to require some metaphysical assumption: that the mere act of responding to stimuli can somehow be imbued with the special significance that is already present in every human.

To me (and this is what I took from @gigastacey during TWIG), the questions of sentience are ill-formed. After all, they are not empirical questions about behavior; they are cognitive questions about how we believe the nnet thinks (insofar as that statement makes any sense).

What I think is far more useful to focus on are falsifiable questions that tend to get imprecisely wrapped up in questions of AI sentience: how far can the machine diverge from its original goals, how effectively can it diverge in those ways, what harms, physical or otherwise, can it bring to the world, etc. And these questions have already gathered research interest, however nascent that field is. An example is the field of “fairness in machine learning” which aims to combat the harms that are disproportionately levied on vulnerable subpopulations.


Great thought experiment - and quite convincing. But I wonder if the question of “sentience” and soul is essentially unanswerable. Has anyone come up with a good (not Turing) test for sentience? It seems like an intractable problem.


That’s a really fascinating thought experiment! Thank you for taking the time to bring it up!

I don’t have an answer for it; I’ll be thinking about it for a long time to come.

Maybe an answer could be formed around the idea of agency: that a computer-computed (heh) nnet can arguably be said to have its own agency whereas if the same computations are performed by a group of humans, the agency is with that group and the individual humans who are part of it.

However, where AIs we’re pretty certain aren’t sentient are concerned, I don’t think any agency should be attributed to the AI. I wouldn’t hesitate to say that bias in AI comes from the bias of humans who created it, selected data to train it and decided how to train it, all the humans behind the data used, and those who use the AI. Does this contradict the agency idea, or is it just a matter of figuring out where to draw a line?

Taking the thought experiment itself further, what if the electrical and chemical interactions within a human brain’s neural network were also somehow simulated by a group of humans? I don’t have any of the knowledge required to imagine how that’d be done, but in any case, this is assuming no technological barriers, a completed understanding of how the brain works, no time limits, no problems with the number of people needed, etc.

It’s likely hard to do because the term isn’t well-defined in every instance it is used. @gigastacey mentioned the mirror test for self-awareness. There are also other aspects of cognition that are difficult to grasp for non-corporeal entities.

We may not be able to get definitive answers about the level of cognition of these machines, but we might be able to scratch the surface. You might be able to compare the patterns in brain imaging to patterns of “computational flow” in the nnet. But considering some of the smartest people I know are still working on methods to analyze fMRI and DTI data to establish how the brain works, I think that day is far off.

I had a similar thought after I brought it up. That was one reason that I felt like the idea of “sentience” seems itself like human exceptionalism. We already know that our thoughts diverge substantially from those of other animals (though maybe not to the degree previously thought). Would it be any less impressive if we could make a group of humans in the thought experiment simulate the thoughts of a dog?

Even that would be so difficult, since it really isn’t just the brain that is responsible for our experience but the entire body. I’ve thrown a toy for my dog enough to know that much of his experience is driven by the joy associated with catch and the despair of the toy being put away. :laughing:

I’ll be thinking about this for a while. I wonder how much of agency is an answer to the “ethical” concept of sentience as opposed to its “psychological” one…

I’m excited to try out all the OSes. I usually put the public betas on my devices within a week of the preview releases. I don’t currently use any of my devices for work other than my cell phone, and as long as I can make a call/text I’m golden.

Here’s another opinion on sentient algorithms:
