11 votes

Shows of support for racial justice from social media giants like Facebook, Twitter and YouTube don’t address the way the platforms have been weaponized by racists and partisan provocateurs

3 comments

  1. Douglas
    • Last time I watched a Joe Rogan clip while logged into YouTube, it immediately put me on the YouTube radicalization path towards Ben Shapiro, Crowder, and uhhh... that other guy with the stache who started Proud Boys.

    • Then on Twitter, I found that if I read the comment section of any progressive political figure, the top replies are sorted by anger-inducing counter-opinions from users with little to no followers or likes, followed by tweets from people who agree and have lots of likes/support. By contrast, if I visit a conservative or Republican tweet, their comment section seems to be sorted by which replies got the most likes (as it should be; that's a sorting I can understand).

    I've heard it said on a podcast or two, and I'm convinced these platforms are knowingly using anger as an engagement tool.

    8 votes
    1. rogue_cricket

      I made note of a similar experience just today!

      I've known for a while that Facebook and its ilk sort posts by engagement, and I thought that was bad enough: posts and comments that are more controversial generate more replies because of arguments, which counts as "engagement," so outrage-bait gets bubbled up to the top. It sucks, but it at least made some sense to me: high-engagement things are more likely to engage me, so it showed them to me.

      The thing is, I figured Facebook was measuring current engagement to sort for relevance. I don't want to say I thought the outrage was incidental; I just thought the controversy caused a post to get placed higher, which created a feedback loop that eventually put it at the top. I didn't think too hard about it.

      I found my counterexample today. I follow a jokey left-leaning group on Facebook that is open to public comment, and the only comment on a meme post about cops that wasn't hidden behind an additional click was something like "try it, antifa, you'll get shot". No likes, no replies, it wasn't even the newest comment. And that's the one Facebook wanted me to see first.

      So Facebook isn't just showing me stuff that has already caused an argument; it seems like they're showing me stuff with the intention of manipulating me into starting an argument. I have no idea whether they guessed it would make me mad because that guy had different politics from the rest of the group he was posting in, or from me personally, or whether they did some kind of sentiment-analysis ML bullshit, but it felt very creepy.

      We really need to stop with this experimental reinforcement-learning approach to recommendation engines on social media; between this and the filter bubble, it results in some truly awful feedback loops. And of course it will only get worse as it goes on, because the system shapes the very environment it runs in to be more and more extreme.
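      The sorting dynamic described above can be sketched as a toy model. To be clear, this is a hypothetical illustration, not any platform's actual algorithm: the names, the score formula, and the `reply_weight` parameter are all made up. It just shows how a ranking that weighs predicted replies (arguments included) over likes can put a zero-like provocative comment above a popular one.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Comment:
          text: str
          likes: int
          predicted_replies: float  # hypothetical model guess at replies it will provoke

      def rank_by_likes(comments):
          # The sorting most people expect: popularity first.
          return sorted(comments, key=lambda c: c.likes, reverse=True)

      def rank_by_engagement(comments, reply_weight=5.0):
          # Toy engagement score: replies count far more than likes,
          # so comments likely to start arguments bubble to the top.
          return sorted(
              comments,
              key=lambda c: c.likes + reply_weight * c.predicted_replies,
              reverse=True,
          )

      comments = [
          Comment("great point, love this", likes=120, predicted_replies=2.0),
          Comment("try it, you'll regret it", likes=0, predicted_replies=40.0),
      ]

      print(rank_by_likes(comments)[0].text)       # the popular comment
      print(rank_by_engagement(comments)[0].text)  # the provocative comment
      ```

      Under the like-based sort the popular comment wins (120 vs. 0), but under the engagement score the provocative one wins (0 + 5×40 = 200 vs. 120 + 5×2 = 130), which matches the experience described above.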

      6 votes
    2. viridian

      Twitter sorting isn't even reproducible in many cases, but the consensus is that the black box algorithm in place is aiming to maximize engagement. Here's Twitter's sparse official policy on it:

      You may notice that some replies in a conversation are not shown in chronological order. Replies are grouped by sub-conversations because we strive to show you the best content first, and what we think you’d be most interested in. For example, when ranking a reply higher, we consider factors such as if the original Tweet author has replied, or if a reply is from someone you follow.

      Now obviously they are leaving out some important factors that feed into this, but the overall goal is transparent: the algorithm is, as you guessed, fishing for any possible avenue of engagement. Anger (or, more specifically, outrage) seems to be a common way to optimize for time on site. That particular idea isn't new, though; newspapers have a long and storied history of running front-page headlines intended to provoke the same response.

      4 votes