TWIT 792: Get Out of My Grocery Aisle

Beep boop - this is a robot. A new show has been posted to TWiT…

What are your thoughts about today’s show? We’d love to hear from you!

After TWIG during the week, I was surprised how much @Leo had changed his point of view. I was also very surprised that Denise couldn’t find anything that Big Tech is doing as grotesque and in need of investigation…


2 cents time:

The more I learn about Section 230, the more it sounds like an industry whining about being required to introduce quality measures and minimum standards.

Certainly, it would require more work and more clever solutions. And maybe - just maybe - some clever legislation. But maybe the path of the internet is not only to grow to reach my toaster and every inch of the planet, but to grow in quality, too.

There’s been so much hand-wringing about how certain platforms would become a “dumpster fire” - but god forbid platforms be made liable to moderate content and increase quality. In short: to care for and tend to the partly hazardous raw material of their profit generation.

Right now, platforms behave a bit like mines that simply let their unfiltered waste water run into the ground and poison the well - at least until they were told not to. :confused:


I agree up to a point. For small forums like this one, it would be difficult to implement; you need the resources.

But Facebook and, for example, YouTube knew about these problems from the beginning, yet ignored them when they could have implemented a solution and scaled it up as they grew. Now they are too big to implement anything without causing a huge fall in profits. But profits > law, therefore the law can go get lost.


Let’s start by moderating all the world’s phone calls and text messages, and all the email, and every public speech… and every book… and every university lecture and school newspaper, and every radio and television station… and why stop there… let’s enforce thought police too… and moderate every thought of everyone… surely it’s just a matter of perspective and the will to enforce it…

/sarcasm mode off

Private conversations are not covered.

TV, radio and newspapers are already highly regulated in what they can and can’t say. In fact, if they published some of the stuff that Facebook & Co. get away with, they would be fined and possibly shut down or face other censure.

That is part of the criticism: normal media outlets have to follow the law; Facebook & Co. just ignore it until the fines and lawyers cost more than actually complying. That is not the way you should do business.


You either want anyone, anywhere in the world (Facebook), to be able to have something to say publicly, or you don’t. With all the world’s governments having a say on who can say what, it’s questionable if you’d even be able to say “hello” without some censor somewhere getting their back up. (After all, some countries don’t have friendly relations with other countries.)

So as soon as you decide to moderate anything, you have to moderate everything, from everyone’s (or at least every country’s) perspective. It’s a nightmare and I think the cost is not worth the benefit, so just outlaw anything like Facebook and be done with it… allow no product that crosses an international border, and thereby make the rules for each product specific to the country it is in. Basically break Facebook up into {# of countries in the world} different smaller companies.


I’m all in for great hyperbole. That said: I don’t think I follow your reasoning.

Take church - manners suggest you behave a certain way there. Take being around children - ideally no swearing. Take talking to the administration or the police: sure, you can act like a hooligan, but where’s that going to get you? Take the dinner table. Take any social context. Everywhere, there are certain codes - enforced to one degree or another - that incentivize certain behaviour. Just not online. Except for when platforms throw you out if you did not behave.

If I understand correctly, you are arguing along the lines of “if you start using weed killer to tend to your garden, you technically must poison every living organism on earth”. In my mind: no, you don’t. You need to develop a certain sense of balance and appropriateness, and follow that route. It’s not that hard, if only the tech industry would not just throw its hands in the air and say: na-ah, too hard, not gonna do it.

Big tech can land rockets on end but not moderate a forum? I am more optimistic than that.

The problem I do see is that many countries do not seem to have the governmental and political circumstances in which particularly well-weighed and appropriate solutions can be expected to see the light of day…


I was a bit surprised at that as well. I was also disappointed that the antitrust conversation didn’t include Apple and their behavior.

Let’s look at the accusations against Microsoft about illegally tying an application (in this case, the browser) to the operating system, and then apply that to Apple. I would respectfully submit that Apple’s restrictions on the App Store are tantamount to the same behavior: an app is tied to the operating system, and the user has no means to remove or circumvent it.

For my part, I find the defense of curation and security that Apple and their partisans use unconvincing. It effectively implies that the only company that knows how to do security and curation is Apple. And neither of those is a defense against antitrust concerns.

I also take the point of view that developers are customers, because they too are purchasing a service or good from Apple: access to the developer program and access to the platform. The framing of customers (non-developers) versus developers is a false dichotomy: both purchase something from Apple and, therefore, both are customers.


Context is king - in your message, and in moderation. Imagine someone less skilled with words than yourself trying to get medical help, missing the best medical words for hemorrhoids, and writing something about their butt in such a way that the AI flags it as swearing or in need of moderation. Until we reach an AI of GENERAL intelligence - which, so far, is unproven to ever happen - it may be near impossible for said AI to understand the nuance needed to properly moderate that message, or any of many other missed contexts. Remember that Facebook has already had problems with pics of breastfeeding or the naked napalm children… among other missed contexts.
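To make the false-positive problem concrete, here is a minimal sketch of context-free keyword moderation. The word list and messages are purely hypothetical; the point is only that a token filter cannot tell a medical plea from abuse.

```python
# Hypothetical blocklist - a stand-in for whatever list a platform might use.
FLAGGED_WORDS = {"butt", "ass", "crap"}

def naive_moderate(message: str) -> bool:
    """Return True if the message would be flagged for human review."""
    # Split on whitespace, strip punctuation, lowercase - no context at all.
    tokens = {w.strip(".,!?").lower() for w in message.split()}
    return bool(tokens & FLAGGED_WORDS)

medical_plea = "My butt hurts and bleeds, what could this be?"
actual_abuse = "You are a giant butt and everyone hates you"

print(naive_moderate(medical_plea))  # True - a false positive
print(naive_moderate(actual_abuse))  # True - correct, but for the wrong reason
```

Both messages trip the filter identically, which is exactly the missed-context problem described above.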

TL;DR AI or computer moderation WILL NOT work for many years, until we achieve GENERAL AI, if we ever do.


Certainly an interesting perspective. And it’s perfectly fine at this point to agree to disagree. I’m just continuing since it’s interesting, not to try and win an argument. :handshake:

Still, it makes me think that by that logic, we should stop driving cars until we’ve mastered flawless autonomous driving. Guns that may hurt the innocent should be outlawed. Substances that don’t contribute to health should be forbidden.

There are regulations for all of those, and none of them are outlawed. I don’t want any of them outlawed, but I’d like to see them regulated. It’s not impossible to regulate responsibly. Conversely, not being able to regulate responsibly might mean an incapacitated society - which might be closer to the truth than we’d like to assume.

Sure, you might say that driving without regulations likely ends in people getting hurt or killed, while platforms without regulations might only end up hurting people’s feelings. However, there is a slowly increasing number of cases in which social media was used not only to whip up hate but to actively or passively broadcast hate crimes live to the net (e.g., Christchurch) and amplify them - and that is the kind of thing that stirs up regulation. In these cases there is even self-regulation: platforms take down those videos.

One might say that the least common denominator, “broadcasting hate crimes is an offense”, is a plausible starting point for regulation of platforms. You can build code to discover that; it’s an isolated phenomenon. The next step might be something like “information that goes contrary to evidenced information should be flagged and contextualized” (which is what Twitter and FB are doing). Same thing here: you can code that. There might be a library. It could be open source. That might be part of the regulation, too.
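The “flag and contextualize” idea above could be sketched roughly as follows. The claim database, URLs, and the naive substring matcher are all hypothetical stand-ins for what a shared, possibly open-source, library might provide - real systems would need far more sophisticated matching.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    claim: str
    context_url: str  # link to the evidenced information

# Hypothetical shared database of debunked claims (illustrative entries only).
DEBUNKED = {
    "the earth is flat": "https://example.org/earth-shape-evidence",
    "5g spreads viruses": "https://example.org/5g-facts",
}

def contextualize(post: str) -> list[Annotation]:
    """Return annotations for any debunked claim the post repeats.

    The post is not removed - it is annotated with context, mirroring
    the flag-and-contextualize approach described above.
    """
    text = post.lower()
    return [Annotation(claim, url)
            for claim, url in DEBUNKED.items()
            if claim in text]

notes = contextualize("I read that 5G spreads viruses, is that true?")
for note in notes:
    print(f"Flagged claim: {note.claim!r} -> see {note.context_url}")
```

The design choice worth noting: the post stays up and gets context attached, rather than being deleted - which is closer to what the big platforms actually rolled out.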

A mere mention of a butt or a boob is, truly, something that ought to go unnoticed anyway.

It goes without saying that I’m no expert in this subject. I’ve merely thought about this a couple of times while listening to Leo and the gang. I admit that I am not certain where such an approach would lead. But that’s another facet of my point: let’s do it, take it step by step, and learn from the process. You might be right in calling me optimistic there. I think the process would be beneficial, but the chances of it unfolding as I hope it would are… oh well - a man can dream.


Who mentioned AI? AI isn’t a solution to very much in its current state. That is the problem with big tech: they automatically think that tech makes everything better. It is their hammer, and everything they do must be a nail. The reality is that for some things they still need to fall back on wetware.

Again, this is another part of the problem. The platforms expand while ignoring moral and legal responsibilities until they are too big to do anything sensible, so they throw half baked solutions at the issue, which cause even more problems.

We need to drive it into startups that they need to do things right from the beginning, not wait until it is too late, because it would mean lower income and slower growth. They shouldn’t sacrifice society for a couple of dollars.


Okay, well, let’s start with basic facts. YouTube has videos going up at a rate of more than one per second. There is not enough time in the world to watch all of those videos - it’s not humanly possible, even if everyone on the planet were assigned to do only that for all their waking hours. Now add in all the posts to Facebook. The problem is not that the companies don’t want to moderate; it’s that it is not humanly possible. So I mentioned AI, because a machine assist is all that will make it possible. The scale of the problem as it already exists - before we bring on even more people from currently under-served places in the world - is immense. It’s easy to suggest it should be done, but at the current scale it just can’t be done. So now you have to decide what you want to prevent, so that the volume can be limited enough to be moderated by humans.
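The scale argument can be put in back-of-envelope numbers. The ~500 hours of video uploaded to YouTube per minute is an often-cited public figure from around this period; the reviewer-shift assumption is mine and purely illustrative.

```python
# Often-cited figure for YouTube uploads, circa 2020.
UPLOAD_HOURS_PER_MINUTE = 500
MINUTES_PER_DAY = 24 * 60

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
# 500 * 1440 = 720,000 hours of new video every single day.

# Illustrative assumption: one moderator watches video for a full 8-hour shift.
REVIEWER_HOURS_PER_DAY = 8
reviewers_needed = hours_uploaded_per_day / REVIEWER_HOURS_PER_DAY

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"{reviewers_needed:,.0f} moderators needed just to watch everything once")
```

Even under these simple assumptions that is 90,000 people doing nothing but watching, before anyone deliberates a single borderline case - which is the point about needing a machine assist.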

Let’s start by disallowing anyone in Europe from posting, ever. /sarcasm But you see my point… no one wants to be the target of an unfair limit… so again, to make it fair, it’s going to need a computer, and computers… just… can’t… do… it… right… now…

YouTube has explicitly chosen to optimize for ease of upload and ease of publication. The fact that there are hundreds of videos uploaded every second is a direct result of their choices; it is not some simple fact of nature that they are contending with. They could do all kinds of things to limit the rate of publication and to limit who can or cannot publish on their platform. Those things just run counter to their business model. There’s a term I’m forgetting that captures this kind of flaw in argumentation: they are creating a condition and then arguing they cannot do anything because of the condition they created. Come on, logic nerds, help me out here!




I think digital regulation in the US and Europe is chronically underfunded, by a factor of 100 or even 1,000.

What we need is lawmakers with the time and resources to run a war of attrition and not a bunch of techno-noddies looking for a “quick win” or a “big bang”.

Digital is not a subset of the physical market - it’s a mirror. Digital reflects physical. Digital regulation will need to be as sophisticated and voluminous as all the physical-market regulation we have in place. Just think of the pages and pages of regulation we have in the US and EU on banana imports alone.

We are just living in a time of digital under-regulation.

Every feature and practice will need to come under the microscope, a correct path debated, and its deployment ensured for every market participant. Take CAN-SPAM - that’s a law that applies to a single protocol and feature (sending email) - while big tech comprises thousands upon thousands of features. Each feature, from photo tagging to image cropping (see Twitter’s latest issues on this), needs market regulation.

This won’t happen overnight - we need lawmakers prepped and ready for a long war of attrition - dealing with the iterations of the digital economy as they occur.

The only country that seems to be anywhere near the level of digital scrutiny required is China. The US, Europe and the rest are a long way behind - and that’s a failure of political will and funding.

Fix the funding, get in it for the long haul, then get to work and fix the digital economy.


Companies know that politicians have an expiry date, and regulators need guidance from politicians. Until you invent a politician who plans long-term instead of focusing on “short-term winning”, you’ll never get any working regulation against anyone willing to stonewall a bit.