TWIG 632: It's No Sudoku

I think about my childhood and how I was laughed at and picked on for being “goofy” or “weird,” then one day something clicked for me and I just stopped giving a sh** about the kids my age. Sports equalized things a little for me, but I was still “out of the club.” I had only a few instances of physical attacks. I stood my ground and it didn’t happen anymore. Just chatter. But as I said, I eventually stopped caring. I know everyone isn’t wired the way I am, but at the same time I wish and urge kids not to give bullies any power. My middle boy struggled with it a little. But I had some practices in place to help get him through that stuff. (Stuff that makes most of society call me a horrible/mean parent.)

Smartphones were eased into their hands because of the crap kids say/do online. Yeah they got laughed at for not having a phone, but I had to do it that way as part of preparing them for the idiocy of social media fakers. I’m fortunate they’re fairly grounded on the platforms and haven’t made any bad decisions yet. I’m sure those bad decisions are coming one day. They’re human. I know I made a ton of 'em.

Dadgumit I LOVE this show and the community around it. We can passionately disagree and still have tremendous respect for each other.

5 Likes

Yuuuup. The discussion with Mr. James Vasile was fascinating.

2 Likes

The social network discussion was interesting, but what piqued my interest was when Jeff said that he doesn’t get Chrome tablets and @Leo replied “neither do I.” @Leo, can you expand on that?

2 Likes

First, I don’t have any doubt that Facebook, intentionally or otherwise, propagates angry content foremost. I’m not sure any other social network is better in this regard, but I never really used some of the newer social networks (Snapchat, TikTok, or Finsta).

IMO there are a few simple steps that would dramatically improve the Facebook experience.

  1. Severely downweight content from pages you do not follow. I don’t follow any news outlets on Facebook, and I do not generally wish to see posts from them - if I did, I’d like their page. Just because my college roommate commented on a photo from The Young Turks’ fan page doesn’t make it any more relevant to me.
  2. Do away with infinite scrolling. Some of my relatives, mostly older ones, don’t realize when they have reached “the end”, because there is no end: no matter how far you scroll, Facebook never stops. It keeps serving up posts of lesser and lesser quality until you end up reading the crime blotter from Forks, Washington.
  3. Understand that not all friends are created equal. Just because I like sparring over politics with old friends doesn’t mean I want to see every post that mentions a politician or political subject. There are certain people I’ve talked politics with for over 20 years, but my drunk uncle who is like “Hillary is Coming for your Guns!!!” isn’t one of them.
  4. Emphasize events and groups. Most of my positive experiences from Facebook have been because of actual in-person events I learned about on the platform, like bands touring through my city or a simple notification like “Robert is going to JuJuan’s 40th Birthday Party”. Obviously I understand that these things are in direct conflict with raising ad revenue from the dive bar where the band is touring, but that’s not my problem to solve.
  5. Add a “dislike” or “hide” button, grouped with the “like” and “share” buttons. Facebook does have a “show me less content like this” button, but it is buried pretty deep in the UI, and I imagine few of their users know it even exists.
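The first four suggestions amount to re-weighting a relevance score, and the fifth to honoring an explicit negative signal. A minimal sketch of that shape in Python (every name, weight, and field here is hypothetical, invented purely to illustrate the idea, not how Facebook’s ranking actually works):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    kind: str             # "friend", "page", "event", or "group"
    followed: bool        # does the viewer follow this author/page?
    engagement: float     # raw engagement score from the platform
    hidden_by_user: bool = False

def score(post: Post, friend_weights: dict[str, float]) -> float:
    """Toy relevance score implementing the five suggestions above."""
    if post.hidden_by_user:                 # 5. a real "hide" signal wins outright
        return 0.0
    s = post.engagement
    if post.kind == "page" and not post.followed:
        s *= 0.05                           # 1. severely downweight unfollowed pages
    if post.kind in ("event", "group"):
        s *= 2.0                            # 4. emphasize events and groups
    s *= friend_weights.get(post.author, 1.0)  # 3. not all friends are equal
    return s

def build_feed(posts: list[Post], friend_weights: dict[str, float],
               max_len: int = 20) -> list[Post]:
    """Rank posts, drop hidden ones, and stop at max_len (2. no infinite scroll)."""
    ranked = sorted(posts, key=lambda p: score(p, friend_weights), reverse=True)
    ranked = [p for p in ranked if score(p, friend_weights) > 0]
    return ranked[:max_len]
```

A post from an unfollowed news page needs twenty times the engagement of a friend’s post just to break even, and the feed simply ends after `max_len` items.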
3 Likes

bullet point FIVE.
Smh. Sad truth

1 Like

TWiT is the best podcast network I know. TWiG is the best podcast on TWiT, at least for me, even though I use no social networks or Google products. This is because the subject matter is mostly about how technology affects society and because the panel is truly outstanding even on a network with so many stellar contributors (I’m looking at you Father Robert and you Mike Elgan). And this was the best TWiG I’ve heard in all the many years I’ve been following.
I generally disagree with Jeff’s stand on social media but I respect the quality of his argument and wish I could share his optimistic view of humanity.

3 Likes

I found myself agreeing with @JeffJarvis that most of us could benefit significantly from some algorithmic help picking what content we want to see from our friends, news outlets, favorite band, etc… We rely on algorithmic search to find results that are actually relevant, rather than keyword-stuffed. We rely on algorithmic filters to keep spam out of our email. Many of us rely on algorithmic importance ratings to decide whether an email should make our phones buzz. Most of us receive too much information to deal with it ourselves, or even with manually-configured filters.
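As a toy illustration of the kind of importance rating described above, here is a crude sketch of deciding whether an email should make a phone buzz (the sender list, keywords, and threshold are all invented for the example, not any real mail client’s logic):

```python
def importance(subject: str, sender: str, vip_senders: set[str]) -> float:
    """Crude importance score for an incoming message."""
    noisy_words = ("sale", "newsletter", "unsubscribe")  # hypothetical bulk-mail markers
    score = 1.0
    if sender in vip_senders:
        score += 2.0          # mail from people you actually care about
    if any(w in subject.lower() for w in noisy_words):
        score -= 1.5          # likely promotional noise
    return score

def should_buzz(subject: str, sender: str, vip_senders: set[str],
                threshold: float = 1.5) -> bool:
    """Only messages scoring above the threshold interrupt the user."""
    return importance(subject, sender, vip_senders) >= threshold
```

Real importance filters are learned rather than hand-written, but the shape is the same: a score, a threshold, and a decision the user never has to make by hand.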

On the other hand, @Leo is absolutely right that Facebook cannot be trusted to build algorithms that are good for users, society, or the world. Facebook, and anyone else with a similar business model and incentives will optimize for whatever gets them the most ad revenue. Most of the time, that will be bad for users, society, and the world.

I don’t pretend to have a solution here, but a more nuanced problem statement may be a good first step toward formulating one.

4 Likes

Any algorithms need to be 100% transparent. The last three decades have shown that these large-scale algorithms, whether for social media feeds, stock-exchange movements, custodial sentencing, facial recognition, insurance, etc., are inherently biased, do a poor job, and have critical flaws in their implementation that are detrimental to society, even though they are heralded as making things better and fairer.

Not one of these algorithms has so far been proven to be robust or fair, or to have positive benefits for all those that they control.

1 Like

Agreed, but to play Devil’s advocate, how does Google make their search algorithm transparent without revealing proprietary IP?

how does Google make their search algorithm transparent without revealing proprietary IP?

They don’t. There’s an argument to be made that web search algorithms are too important to society to be allowed to be proprietary. I don’t currently hold that position, but I could imagine conditions evolving to the point where I would.

2 Likes

Android tablets aren’t great because the Android software is almost entirely designed for phones. I like ChromeOS but it’s not a tablet OS - it needs a keyboard - so ChromeOS tablets are also not so hot.

4 Likes

Twitter has added a “Downvote” button with its update today, although it appears to be a test rolled out only to a portion of the user base.

1 Like

I hesitate to bring this up but the TWIT community is usually fairly civilized. We’ll see.

First, I don’t like Facebook and only have an account so I can see things when my family sends me something on Facebook and tells me it’s there. Also, I don’t care for their business model.

That being said, most of the panel seemed to accept as already known or proven by the Facebook leak that Facebook and Instagram do enormous damage to the mental health of people, and especially to teenage girls.

This is not established by the research on media use and mental health. The Facebook research consists of self-reported responses from small sample sizes and is not peer-reviewed. That should at least give one pause before assuming it is entirely accurate. Other, peer-reviewed studies have found that two to three times as many teens report that social media makes them feel better about themselves than feel worse.

Does the influence of social media on individuals and society merit more research? Absolutely.

Should Facebook help with that research? I think so.

Would I prefer it if Facebook

Do we know that Facebook is doing serious harm to large numbers of children and teens? It certainly doesn’t look like we are there yet from a scientific perspective.

One can’t help but notice a strong similarity to what some people are saying about social media to what has previously been said about comic books, television, video games and rap music.

Facebook could vanish from the world tomorrow and I wouldn’t bat an eye. That said, I think the issues are a lot more complex than much of the media coverage would lead one to believe.

There are several places to go for reviews of the research on social media, but many are behind paywalls. One place to start is here:

Ok, you nailed this one!!! Well done

Interesting points and thank you for sharing them! :slight_smile: While I agree with some points, I wanted to add to some facets you brought up:

Really! Really, though? I was surprised so I did a double-check on that.

I just entered “detrimental effects facebook” into the academic publication database of my university, picked “journal articles” and “peer-reviewed”, and found 5,971 hits. Granted, I did not thoroughly look through the list, but the titles appear pretty damning at first sight. I’d be surprised if all 5,971 of them were duds.

Just pointing to “peer-reviewed research” (or the lack thereof) is a fig leaf nowadays. Way too many doctoral students and assistant professors need to publish something, anything, to advance. Also: way too many companies spend way too much money to conduct studies that may get skewed in one direction or the other. Due to the acceptance of the Anglo-Saxon model (publish or perish) almost everywhere, there has been an influx of publications for at least thirty years. Any popular trend will have ample research for it and some against. Much of this is driven by the careers of the researchers and not necessarily by valid, let alone valuable, outcomes. (Another driver: scraping data from Facebook, or asking people about Facebook, is rather easily accessible data…) Also: “peer reviewed” does not really mean much today. Too much interference and too many tightly knit circles.

So the call for “more research” … always sounded like a delay tactic to me. In my opinion, it’s a question of quality: of who gets tasked with the research (ideally recognised national or international research agencies that have a reputation to lose and whose business model is independent of the outcome), of what kind of research gets done (ideally “the expensive kind”: long-term, longitudinal, broad-scope, mixed-method, with several independently managed but cross-examining projects), and of what question gets asked (and I wonder if we are far enough along here yet, because the question cannot be “is Facebook bad”; that is a policy question, and I am sad to say that I doubt the US is politically in a position to determine it). Why not set up a branch within something like the FDA which exclusively focuses on the psychological and societal effects of “informational nutrition”? The FDA appears rather independent and pretty tough. The topical stretch might be the least of the worries.

Although:

The FDA is responsible for protecting and promoting public health through the control and supervision of food safety, tobacco products, dietary supplements, prescription and over-the-counter pharmaceutical drugs (medications), vaccines, biopharmaceuticals, blood transfusions, medical devices, electromagnetic radiation emitting devices (ERED), cosmetics, animal foods & feed and veterinary products. (Wikipedia)

Just add social media to the list. Protecting and promoting public health is already there and psychology is a part of health.

Nah, you are right. No one at Facebook or Facebook altogether could or should be tried for that. They just don’t really care that much. The mission is profitability and success for the network and the individual career. It is really not their mission to improve society.

I do also agree that this “children and teens” melody always seems to me like a media trope in and of itself. The lowest common denominator is to rally people around “protecting” “children and teens”. That said, a matchstick can be dangerous (say, one bully at school), a lighter (widespread schoolyard bullying) even more so, but Facebook (worldwide social media bullying that never goes away) might be the flamethrower here, and Facebook’s the one making money if everything’s lit on fire.

This is like asking if car manufacturers harm passengers. But only if car manufacturers also ran the much more profitable hospital. Car manufacturers learned to care. First about safety, now, rudimentarily, about the environment (or at least the look of it). There were big class-action lawsuits. This is what might be useful with Facebook. Wonder if that could come to be.

If anyone is interested and wants to look beyond what journalists write, you can actually use research databases. Several are, sadly, paywalled, but some are not - at least not up to reading the abstract. If you want to look into the actual journals, there may be libraries that have access and you can use for free.

https://www.proquest.com

https://www.jstor.org

https://www.sciencedirect.com/

3 Likes

Thanks very much for the thoughtful reply. Perhaps I was unclear: I did not mean there had been no research. I meant that when you look at the results, especially those with large, well-selected samples, you see very little indication that social media use has a negative effect on the mental health of very many people.

It may very well adversely affect some people and positively affect others. If so, who decides what the balancing point is?

1 Like

I know this is a bit of a necro, but I am catching up on some old casts here, and I don’t think any of these points have substantially changed.

I had a big post typed up that I made an honest attempt to edit, but the intersection of the control that tech exercises over our lives along with the difficulties of proper study in this area made me a bit wordy.

Indeed, this is exactly the type of research that reviewers in the sciences place high value upon. However, there is a fundamental problem that partially prevents the “easy” answer of the FDA stepping in on this kind of study: if it is true that using social media can cause real harm to individuals, then it is unethical to randomly assign people to use social media as part of a study. You may not be very familiar with the FDA’s processes, but it typically places extremely high weight on the gold standard of causal research: the double-blinded, randomized controlled trial. Also, though exceptions exist, it typically does not deal with public health, but rather protecting consumers from new medical products. Given the politicization of scientific research recently, it seems unlikely that the FDA will see their mission enlarged to incorporate this aspect of public health.

On the other hand, the mission of population monitoring of health crises would IMO more naturally fall under the purview of the CDC, which has in the past monitored rates of gun violence, self harm, and other detriments to public health. The expertise needed for population monitoring is also more readily found there. I actually think that we could go one step further and, if we can establish the detriments to public health of different online systems, form an additional institute under the NIH focusing on the study of digital mental health; but that’s a bit beyond the pale for now.

So I will agree with you that it is quality over quantity when it comes to academic research, and a big issue is the quality of the data. Since we can’t do the randomized trials to establish these effects, we would need high-quality details in large sample sizes. These studies are unlikely to be completed without an industry-wide data sharing agreement (I won’t hold my breath) or significant public health funding to study this issue.

I agree with this personally, but don’t think that requiring openness, which essentially is one side effect of “timeline only,” is the answer.

Absolutely this. Most consumers do not want choice, but rather some form of curation (what are the desktop Linux figures currently at? :sob:). Yet companies cannot be trusted to do this in a vacuum.

My estimation of the issue is that consumers do not have enough choice: e.g. if your friends are using Messenger, you need some kind of account there just to keep in contact. What I would love to see is an open standard behind these platforms. The platforms would still offer value in the same way that browsers do, even though the web itself is open. Part of that value could be the way the content is curated via algorithms and design. We see inklings of this in FLOSS software with connections between services like Mattermost, Matrix and IRC.

Maybe more details around the idea were discussed on FLOSS weekly 650. I’ll look forward to catching up on that episode next!

That may be true for most consumers, though I think many would like some knobs they can turn when the curation isn’t providing results to their liking. I want a mix, myself: algorithmic curation for a lot of things, but full control over the algorithms.