47 votes

US judge rules YouTube, Facebook and Reddit must face lawsuits claiming they helped radicalize a mass shooter

11 comments

  1. [10]
    C-Cab
    Link

    I feel like this could be a pretty big case, though my background is admittedly naïve of the law. I'm probably wrong about Section 230 of the Communications Decency Act here, but it seems like this is less about moderation and more about the algorithms themselves that promote user engagement. It's hard to argue that a website isn't liable for an algorithm that leads someone down a rabbit hole like that, even if the site isn't generating the content itself - is that the gist of the plaintiffs' legal argument?

    Edit: Certainly one could argue that individuals are responsible for how they engage with the platform, but I feel like there's a case to be made that when you intentionally design something that seeks to keep people on it for as long as possible by playing into our psychology, maybe you're not quite so blameless in the process.

    29 votes
    1. [7]
      DavesWorld
      Link Parent
      • Exemplary

      Look, I hate what companies do with engagement algorithms as much as the next guy, but this is a dangerous area, given what this lawsuit is trying to allege.

      Section 230 states pretty explicitly that companies (i.e., the internet platforms) aren't responsible for user content. The reason the law came into being is pretty clear: everyone wants to sue the big bank account. Why would you sue a nobody, who has just as much money as you do, namely little to none?

      You don't, unless you're on a crusade and you're that angry at them. And, of course, you have the money to pay for your crusade. You sue the deep pockets, because if you can get the win, they have the funds to pay the award you seek.

      Congress realized this, and further concluded companies would have zero incentive to allow user content that the platform would be liable for. If you ran anything, from a bulletin board at the bar to a newsletter to a full-on YouTube competitor, and you could be held accountable for whatever some random asshat put up on it, would you allow anything to go up unless you put it up?

      No, of course not. Just about no one would. It's one thing to be held accountable for your own speech, but someone else's? We've all seen what random asshats say online. You want that to be your fault?

      The lawsuit's lawyers are trying to end-run around that (to reach the deep pockets) by saying the algorithm is responsible, and the platforms control their own algorithm. Except, for one thing, it could be argued any kind of sorting is an algorithm. Even alphabetical, since the old phone book trick of "AAAAA Plumbing" and "AAAAAAAAAAA Advantage Plumbing" would come back and become the site's fault for not seeing through it when bad actors used that system.

      It's a Free Speech issue at its heart. The plaintiffs find the speech objectionable, but can't sue on First Amendment grounds. They want the platforms to have basically banned such speech, but the platforms won't, since that would put them in the position of having to morally judge each individual piece of speech on the site.

      Not only are they not required to, it's pretty much impossible for sites the size of Facebook and YouTube. No matter how many moderators they assemble, even thousands upon tens of thousands of them, something would slip through. Which makes the demand kind of impossible, and thus unreasonable. People forget just how much stuff goes up on YouTube and Facebook and so on each minute.

      What's the end result if these 230 Algorithm suits succeed? That any host (which is anyone, not just a Big Company) is now liable for applying sorting. And when any list can become "algorithmic", that would cripple the net and destroy social media. Search is an algorithm. If you search for "garden pottery", there has to be some sort of algorithmic ranking to figure out what order the results display in.

      That's what they're trying to use as their lever to make the platforms liable. They're trying to claim the algorithm should have been "moral", or more moral. And that it wasn't means the deep pockets should pay.

      Engagement algorithms are pretty shitty. They're programmed to twist and corrupt attention just to make the company more money. Those companies don't use engagement algorithms for "Good", but in fact Evil. They could set up an algorithm to do something like "oh, you like garden pottery; you might like regular pottery, gardening, farming, painting, etc" and throw those options at you. Figuring out adjacent interests and so on.

      Instead, they set them up to throw the most "engaging" content at you.

      What's engaging? Whatever gets people active on the site. That's what they're selling: you. Your attention. Advertisers value a site that has a million eyeballs every minute more than one that has ten thousand every minute. And if there are ways to keep any given set of eyeballs on a site longer, that's the more valuable way to go, since that's what they're selling. You. Millions of you.

      If you get upset and put up replies and reaction videos, link back and forth, link to it from off-site trying to build a crusade, and so on ... that's engagement. Negative engagement is actually more valuable than positive engagement.

      What's the quickest way to get an answer online? State a wrong one, preferably confidently, because people fall all over themselves to correct you. But if you just ask like a normal person, far fewer people feel compelled to take the time to give an answer. They only feel that urgency if you can be wrong while they get to be right. Or if you piss them off.

      Then there's urgency, and they jump to it. That's engagement, and that's more valuable than casual users who scroll by without becoming engaged.
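
      To make the contrast concrete, here's a toy Python sketch (purely illustrative; the field names and scoring are my own assumptions, not anything these platforms actually use) of the difference between an "adjacent interests" recommender and an engagement-maximizing one:

      # Illustrative toy ranking functions, not any real platform's code.

      def rank_by_adjacent_interests(items, user_interests):
          # Surface items whose topics overlap the user's stated interests,
          # e.g. "garden pottery" -> pottery, gardening, farming, painting.
          return sorted(
              items,
              key=lambda item: len(set(item["topics"]) & set(user_interests)),
              reverse=True,
          )

      def rank_by_engagement(items):
          # Surface whatever historically keeps eyeballs on the site longest,
          # regardless of topic; outrage counts the same as delight.
          return sorted(
              items,
              key=lambda item: item["avg_watch_seconds"] * item["reply_rate"],
              reverse=True,
          )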

      The people who posted all this stuff the plaintiffs claim weaponized the shooter are the proper target. They're the ones who said the speech. If they said something actionable, they're liable. If they directly called for violence, they're liable. If they gave step-by-step instructions on how to commit violence, they're probably liable.

      It should be noted that it's not like Google actively had an employee sitting there going "oh, more of (this), we'll click-click that into prominence." It happened algorithmically. Automated. Computers, until we get semi-sentient AI, can't judge morals. And you can't just have the algorithm scan and exclude, having it decide "oh, they said (insert a bad word) here; ban!"

      Because what about games or movies that might use words like "guns" or "shoot them" or "blow it up" or "I will kill you"?

      If they win, social media will probably shut down. There are roughly eight billion people on the planet, and it's pretty rare to find someone without a computer or smartphone. So there are probably billions of social media users.

      Too many to morally judge each and every item they post. At least, for a human to do so. Comments, videos, whatever; no company could employ or harness enough people to keep up with the entire planet's social media activity. So that activity, in the event of a 230 Algorithm loss by the Big Dogs, becomes a source of liability.

      Logical answer? Shut it all down. It would end up costing them more than they could make from it. Social media wouldn't exist. The internet would revert to those cute little dancing gopher backgrounds on "Welcome to my page, all about (insert hobby here)" sites individuals used to put up and wait for Yahoo to find and list.

      Which Yahoo mostly did by having a human eyeball the site before listing it. The internet used to be that small.

      It isn't anymore.

      And anyway, some PETA adherent would probably object to the digital gophers. Claim it "normalizes" gopher abuse. So we wouldn't even have that.

      14 votes
      1. [2]
        ackables
        Link Parent

        I've thought about this because we obviously wouldn't sue the internet for a harmful website or the city for someone yelling hate speech on the street corner, but social media is not just a collection of voices the same way the internet as a whole is. Social media sites actively boost or hide certain content based on an algorithm designed to increase user engagement and please advertising customers.

        Social media sites will actively boost the visibility of harmful fringe ideas if it increases the amount of time people use their website. I agree that social media companies should not be liable for what people say on their sites, but at what point does their algorithm promoting certain posts become an endorsement of what that user is saying? If social media companies editorialize their content using an algorithm instead of a real person, why would they have less liability for their speech than a publication with a human editor?

        19 votes
        1. roo1ster
          Link Parent

          IMHO, if you present stuff strictly in chronological order or by user-determined popularity, your platform should be safe (up to the point the entire site or sections of the site are deemed socially unacceptable, however that's defined).

          Once you start tweaking what individual users are presented with to drive individual user engagement, it gets a lot greyer to me.

          Modern social platforms appear to me to have significant insight into post content and can determine what an individual user will see according to that individual user's proclivities. On its face that seemed pretty innocuous too, but now we've got examples of this increasing radicalization, and that (to me) is a problem.
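
          As a toy illustration of where that line sits (the field names here are made up for the example, not anything a real platform exposes): a chronological feed is the same for everyone, while an engagement-tuned feed is computed per user from what that user has lingered on before.

          # Toy example of the distinction, not real platform code.

          def chronological_feed(posts):
              # Everyone sees the same feed: newest first, no per-user tailoring.
              return sorted(posts, key=lambda p: p["posted_at"], reverse=True)

          def personalized_feed(posts, user_topic_affinity):
              # The feed is tuned to the individual: posts on topics this user
              # has previously lingered on are pushed to the top.
              def score(post):
                  return sum(user_topic_affinity.get(t, 0.0) for t in post["topics"])
              return sorted(posts, key=score, reverse=True)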

          Modern corporations have demonstrated that their only moral is "increasing shareholder value at all costs". If you want to change their behavior, you'll need to be able to impact shareholder value... The beauty of the US legal landscape is that it grows to meet the needs of the people it serves. Again, imho, it's past time for it to catch up with the tech it purports to govern.

          26 votes
      2. [2]
        sparksbet
        Link Parent

        I largely agree with your First Amendment analysis and I don't think the plaintiffs here have much shot at winning this as a result. However, I do have some contention with this portion:

        Except, for one thing, it could be argued any kind of sorting is an algorithm. Even alphabetical, since the old phone book trick of "AAAAA Plumbing" and "AAAAAAAAAAA Advantage Plumbing" would come back and become the site's fault for not seeing through it when bad actors used that system.

        While sure, you could argue any sorting is "an algorithm", it's perfectly possible for a judge to distinguish between different types of algorithms -- and I think it's likely they would. Even a non-technical person like most judges can tell there are huge differences between displaying something chronologically or alphabetically and using an engagement-based algorithm, and I think the judge will meaningfully consider how much that difference matters here. They could absolutely distinguish between these in a decision, even on a non-technical level.

        I'm not convinced it's likely they will rule in that way, but I think it's faulty logic to treat these types of algorithms as legally indistinguishable from any other sorting.

        16 votes
        1. Protected
          Link Parent

          I'm not going to join the pile-on, since anything I would have said has already been said (and very well!), but I'd like to extend what you said by pointing out that it shouldn't just be about which algorithm or which type of algorithm, but about whether whatever engagement-oriented algorithm was in use at the time was responsibly designed to prevent this type of situation. I don't know what the law currently looks like in the US, but I'd love it if the designers had a legal responsibility not to be negligent in their designs (accidents will still and always happen, of course).

          8 votes
      3. raze2012
        Link Parent

        If you ran anything, from a bulletin board at the bar to a newsletter to a full-on YouTube competitor, and you could be held accountable for whatever some random asshat put up on it, would you allow anything to go up unless you put it up?

        That's where the subtleties come from here. You're not responsible if someone uploads, say, "women are inferior" on your platform and someone wants to sue the platform (not the video creator) for libel (yeah, it's a stretch, but don't read too deeply into this part of the metaphor). But if you make curations that start off with "feminist gets OWNED" and then go on to recommend "women are inferior" down the chain, is that something you're responsible for? It's a very interesting question. Heck, even a pro-feminist video can lead down that chain based on user habits. As the lawsuit states, they are maximizing for engagement, not necessarily what you like or find interesting.

        The resulting issue here wouldn't necessarily lead to platforms banning certain content (although I imagine that's how most would handle it), but it could lead to them becoming more hands-off on curation.

        Reddit would have been an interesting edge case here some 12 years ago, when most of the weight of the curation came from the users themselves. But they clearly recommend subs and email you various posts these days as well.

        The lawsuit's lawyers are trying to end-run around that (to reach the deep pockets) by saying the algorithm is responsible, and the platforms control their own algorithm. Except, for one thing, it could be argued any kind of sorting is an algorithm.

        Sure, this is why I hope "curation" is a more important part of the argument here, as opposed to objective sorts like most liked, most viewed, alphabetical order, etc. If there's any part of the pipeline where the platform can insert its own recommended videos, that should be something the platform takes responsibility for. Because platforms can then be financially incentivized to put stuff into their curation.

        But I can see a less nuanced argument popping up in court.

        It's a Free Speech issue at its heart.

        These days, I'm more ambivalent. The issue here is that Section 230 was intended to allow platforms to more or less remain a neutral actor, so that they can't be held liable for content they did not create. But these days, I think we can all agree that these megacorps are anything but neutral. They have multimillion-dollar incentives to nudge people towards watching certain kinds of content, and similar incentives to suppress objectionable content. At that point, are they really a neutral actor?

        I know this lawsuit has different intentions, but this is why I'm not necessarily going to be as staunch a defender today as I woulda been some 8-10 years ago. I don't necessarily hope they get dinged here as indirectly responsible for a shooting, but these companies definitely need a solid smack somewhere.

        14 votes
      4. Gaywallet
        Link Parent

        First, I'd like to say that I think, for reasons similar to yours, this won't stand up to scrutiny in court, especially if it makes its way up to higher courts like the Fifth Circuit or the Supreme Court. It will ultimately be reduced to an issue of free speech, and therefore will not get ruled on in a way that ends up changing anything.

        But I think it's completely unfair to take a multidimensional issue like this one and simplify it the way that you are. This is lazy legal interpretation. You could use the same argument to reduce racially motivated hate crimes to issues of free speech. Yet we have law that distinguishes between murdering someone and murdering someone because they are Black. We could easily make similar context-based deductions or arguments here to draw a distinction between types of engagement, as you rightfully did by discerning between content that is served because it is "similar" and content served just because it keeps eyes on your platform. We can also draw a distinction between engaging content and engaging hateful, divisive, or polarizing content. We also have laws which protect certain individuals on the internet from certain content, requiring people to be of a certain age to access it, and laws restricting what content can be shown based on people's ages - the same reasoning could be applied towards protective rails for end users (although that's more an argument for the creation of law than for rendering a judgment on existing law).

        Ultimately, the US loves the concept of the text of the law - an attempt to limit a law's reach to the precise words on the page and nothing more. Even abstract concepts such as harassment, incitement, and violence have been defined through countless court rulings to have very specific definitions, generated through creative interpretations of the words themselves and the biases of the judges of various courts. The companies at risk of suit here are so massive that there is no way they will allow any other possible ruling than to hold up the protections of 230 and twist this into an issue of free speech.

        But I wish people wouldn't make so many slippery slope arguments against legal hypotheticals. Facebook is one of the most massive companies on the internet; monoliths on this level don't just "probably shut down". Apple is facing billions of dollars in fines from the EU and is basically scoffing at them, still refusing to change its practices, because these fines don't meaningfully hurt them. Even if Facebook or any of these companies were held liable, I bet the judgment would hardly be noticed. I wish that instead of framing this as an "everyone loses" scenario, or a slippery slope into an internet where no one operates, we gave these legal issues more time of day, so that we could envision a world that's a bit less dystopian. A world where we might have some actual regulation on the companies which have already cost us so much through their endless need to maximize profits at the expense of human life.

        11 votes
    2. Deely
      (edited )
      Link Parent

      Oh, that's actually an interesting question. If someone builds a blogging platform aimed at maximizing "engagement", and it results in a lot of toxic behaviour, and that someone also doesn't allow users to block certain blogs from the news feed (like Facebook and Reddit), then I think it makes sense to punish that someone.

      (I still have a grudge against Reddit for blocking the alternative clients that allowed you to block certain subs from appearing anywhere.)

      12 votes
    3. Weldawadyathink
      Link Parent

      I am not a lawyer, but this feels like a really important distinction. These platforms clearly do not create the content they host. That has been legally protected in the past. But their algorithms are different. They produce the algorithm. I think one could reasonably argue that the algorithm is what is being consumed (in addition to the content). If these platforms want to host a Mastodon-esque, no-algorithm timeline, they can argue for that protection. But they are building a product, and that product is radicalizing people. The fact that their product uses other people's content doesn't even seem important.

      8 votes
  2. GunnarRunnar
    Link

    In her decision, the judge said that the plaintiffs may proceed with their lawsuit, which claims social media companies — like Meta, Alphabet, Reddit and 4chan — “profit from the racist, antisemitic, and violent material displayed on their platforms to maximize user engagement,” including the time then 18-year-old Payton Gendron spent on their platforms viewing that material.

    22 votes