17 votes

Biden wants to get rid of law that shields companies like Facebook from liability for what their users post

10 comments

  1. [3]
    DanBC
    Link

    You should read the Backpage documents before commenting on this, because their behaviour is mostly what's caused this change of mind among US legislators.

    Children were being kidnapped, drugged, and raped, and they were being offered for sale in Backpage ads. Backpage (i) knew this, (ii) did not try to stop it, (iii) did not help law enforcement, and (iv) made changes that encouraged the ads and hindered law enforcement.

    Children who had been rescued would find images of themselves being raped posted to Backpage, and they would contact Backpage and ask for those images to be taken down. Backpage always refused, and continued to refuse all the way through the court cases.

    https://www.hsgac.senate.gov/imo/media/doc/Backpage%20Report%202017.01.10%20FINAL.pdf

    This report contains three principal findings.

    First, Backpage has knowingly concealed evidence of criminality by systematically editing its “adult” ads. As early as 2006, Backpage executives began instructing staff responsible for screening ads (known as “moderators”) to edit the text of adult ads to conceal the true nature of the underlying transaction. By October 2010, Backpage executives formalized a process of both manual and automated deletion of incriminating words and phrases, primarily through a feature called the “Strip Term From Ad Filter.” At the direction of CEO Carl Ferrer, the company programmed this electronic filter to “strip”—that is, delete—hundreds of words indicative of sex trafficking (including child sex trafficking) or prostitution from ads before their publication. The terms that Backpage has automatically deleted from ads before publication include “lolita,” “teenage,” “rape,” “young,” “amber alert,” “little girl,” “teen,” “fresh,” “innocent,” and “school girl.” When a user submitted an adult ad containing one of these “stripped” words, Backpage’s Strip Term From Ad filter would immediately delete the discrete word and the remainder of the ad would be published. While the Strip Term From Ad filter changed nothing about the true nature of the advertised transaction or the real age of the person being sold for sex, thanks to the filter, Backpage’s adult ads looked “cleaner than ever.”

    Manual editing entailed the deletion of language similar to the words and phrases that the Strip Term From Ad filter automatically deleted—including terms indicative of criminality. By Backpage’s own internal estimate, by late 2010, the company was editing “70 to 80% of ads” in the adult section either manually or automatically. It is unclear whether and to what extent Backpage still uses the Strip Term From Ad filter, but internal company emails indicate that the company used the filter to some extent as of April 25, 2014. Manual editing appears to have largely ended in late 2012.

    Over time, Backpage reprogrammed its electronic filters to reject an ad in its entirety if it contained certain egregious words suggestive of sex trafficking. But the company implemented this change by coaching its customers on how to post “clean” ads for illegal transactions. When a user attempted to post an ad with a forbidden word, the user would receive an error message identifying the problematic word choice to “help” the user, as Ferrer put it. For example, in 2012, a user advertising sex with a “teen” would get the error message: “Sorry, ‘teen’ is a banned term.” Through simply redrafting the ad, the user would be permitted to post a sanitized version. Documents from as recently as 2014 confirm the continued use of these error messages. Backpage employed a similarly helpful error message in its “age verification” process for adult ads. In October 2011, Ferrer directed his technology consultant to create an error message when a user supplied an age under 18. He stated that, “An error could pop up on the page: ‘Oops! Sorry, the ad poster must be over 18 years of age.’” With a quick adjustment to the poster’s putative age, the ad would post.

    Second, Backpage knows that it facilitates prostitution and child sex trafficking. In addition to the evidence of systematic editing described above, additional evidence shows that Backpage is aware that its website facilitates prostitution and child sex trafficking. Backpage moderators told the Subcommittee that everyone at the company knew the adult-section ads were for prostitution and that their job was to “put[] lipstick on a pig” by sanitizing them. Backpage also knows that advertisers use its site extensively for child sex trafficking, but the company has often refused to act swiftly in response to complaints about particular underage users—preferring in some cases to interpret these complaints as the tactics of a competing escort. Backpage may also have tried to manipulate the number of child-exploitation reports it forwards to the National Center for Missing and Exploited Children.

    16 votes
    1. NaraVara
      Link Parent

      Yeeesh I didn't realize Backpage's behavior was so bad. I generally am pro-legalizing (and heavily regulating) prostitution, so I was generally sympathetic to the idea that they were providing a safe platform for it. I kind of just assumed they knew they were on thin ice and would have taken some basic care to try and keep the non-victimless (is there a word that's like "victimfull"?) variants, like child abuse and sex-trafficking, under control.

      That said, I think there is a trend in law enforcement now where expansive powers are being given to curtail "sex trafficking," but this basically results in them defining the term overly broadly to catch lots of stuff that probably doesn't deserve that level of enforcement. It's kind of like how anti-terrorism powers are largely just used to wage the drug war now.

      15 votes
    2. spacecowboy
      Link Parent

      Sex trafficking is already exempted from Section 230 coverage by a newish law called FOSTA (the Fight Online Sex Trafficking Act). So Backpage is no longer a good argument for abolishing Section 230.

      You can find a lot of articles about FOSTA and Section 230 on Eric Goldman's blog: https://blog.ericgoldman.org/archives/2019/06/latest-linkwrap-on-fostas-aftermath.htm (lots of related links at the bottom).

      5 votes
  2. [5]
    Comment deleted by author
    Link
    1. NaraVara
      (edited )
      Link Parent

      I wonder at times if this might actually produce some good in the end. Many of us enjoy commenting and linking online, but is it an inherent good?

      I think there is a basic difference in how platforms like Facebook, Twitter, Reddit, and even Tildes work vs. how socialization used to work on the pre-social-media internet through Usenet groups, bulletin boards, IRC chats, and blogs with commentariats. I suspect the former model is inherently flawed, but I'm not really sure what to do about it since there is no putting the genie back in the bottle.

      I talked about what I think of as the basic flaw in social media here. After rereading that post, I find I still agree with the argument a year later. The 1-to-many paradigm might just be fundamentally broken because it creates no incentive to treat people like people; instead, the primary thing I interact with is atomized bits of "content" bereft of all context and humanity.

      If you'll allow me a bit of a digression, there is a neurological disorder called "Capgras Syndrome" where, after a traumatic brain injury, one becomes convinced that a person they see and know is actually an imposter. One of the theories about what's happening is that the brain actually recognizes people on two levels. One level is the straight-up facial recognition, but on an emotional level that facial recognition is accompanied by a memory trigger that "reminds" you of your general emotional orientation towards that person. So I see my wife and I know she's my wife not just because this human has my wife's face, but because I associate that face with all the memories and feelings and things I know about her. The face is anchored to a whole suite of emotions and thoughts.

      Some traumatic brain injuries leave intact the part of your brain that holds a memory of the person's face, but they break your brain's ability to pull in all the associated resources it expects upon detection of the face. Your brain expects this bevy of anchored thoughts and feelings to come rushing in, and when they don't come, it assumes this is not the person you know. The fact that they have the same face and claim to be this person means they must be an imposter!

      All of that is a roundabout way of saying there is more to human interpersonal interaction than merely knowing you're talking to a person. You can interact with someone on an intellectual level, but you can't really trust or be open with them unless you can anchor this to an emotional connection built on shared memories, experiences, and background knowledge about them. The Twitter paradigm that strips all that context out of the posts you interact with might be hampering your brain's ability to interact with people as humans.

      9 votes
    2. [2]
      onyxleopard
      Link Parent

      I suppose part of me being on Tildes is the hope that a clever enough tech wizard could solve the problem since I do trust Deimos, but we're also perhaps pinning too much hope now on tech wizards to solve the problems they have collectively created.

      It is all a matter of scale. When human communities are small-scale, they can self-police. When communities grow too large, the economic incentives for moderation break down. If moderation becomes onerous, people give up and stop doing it unless the platform steps in and incentivizes it. And the platform operators’ motives are unlikely to align with the community’s once the community grows too large and attracts the interest of the wider world.

      6 votes
      1. [2]
        Comment deleted by author
        Link Parent
        1. onyxleopard
          Link Parent

          And if all tech companies are made to hire tons of people to police content, then in a circuitous way you're still strangling the internet because only the biggest players could afford to have user engagement.

          I think the tech companies will try to automate the moderation, but the moderation will be circumvented enough that the human users will be drowned out by the bots. I don’t think it’s fundamentally feasible to run a large platform (millions of legitimate, human users) and not sell out in some way. Either you are selling out by allowing third parties to exploit your legitimate, human users, or you are selling out by paying your human users to moderate themselves. And since the latter option eats into profits, it’s highly unlikely either to scale well or to actually be successful compared to platforms that opt for the former.
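
          To illustrate why naive automated moderation is so easy to route around, here's a minimal sketch of a banned-phrase filter (the phrase list and function name are made up for the example, not any real platform's implementation):

          ```python
          import re

          # Hypothetical banned-phrase list for a toy spam filter.
          BANNED_PHRASES = {"free crypto", "buy followers", "miracle cure"}

          def rejects(post_text: str) -> bool:
              """Reject a post outright if it contains any banned phrase (whole-word match)."""
              lowered = post_text.lower()
              return any(re.search(rf"\b{re.escape(p)}\b", lowered) for p in BANNED_PHRASES)

          print(rejects("Get your free crypto today"))   # True  -- caught by the filter
          print(rejects("Get your fr33 crypto today"))   # False -- trivial obfuscation slips through
          ```

          Even with bigger phrase lists or statistical classifiers, determined posters and bots keep adapting, which is part of why moderation at this scale stays so expensive.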

          2 votes
    3. Leonidas
      Link Parent

      Such a law could create a significantly more fragmented internet where small social groups police themselves with one or a few leaders, which would more resemble how our real life social networks are.

      In a way, this is what's already happening with Mastodon instances and platforms that use ActivityPub in general. Most are geared towards specific interests and have more stringent moderation (in practice as well as policy) than large sites like Twitter, and since users are able to migrate to different instances, they're able to collectively exert more pressure on admins' decisions. On the flip side, though, moderation isn't really democratized per se; it effectively replaces the corporate hierarchy of traditional social media with benevolent despots who control smaller fiefdoms. The ability to defederate from other instances protects most users from places rife with bad actors and nonexistent moderation, such as Gab, but it can also be overused due to spats with another instance's admin. Overall, the power structures are very different.

      2 votes
  3. [3]
    dihedral
    Link

    I think that this is a poor solution to the problem at hand. Section 230 was implemented to protect ISPs from having to moderate their users. Do you really want a manual process that decides whether or not your speech online is protected? Unmoderated free speech is a core feature of the web. Companies are allowed to moderate, but should not be penalized if they don't.

    If I yell fire in a movie theater, should I be prosecuted for my action? Or should the theater be prosecuted because they allowed me on the premises without ensuring that I would not yell fire first?

    8 votes
    1. NaraVara
      Link Parent

      If I yell fire in a movie theater, should I be prosecuted for my action? Or should the theater be prosecuted because they allowed me on the premises without ensuring that I would not yell fire first?

      If the theater is aware of the existence of groups dedicated to wreaking havoc in movie theaters by provoking stampedes, and the theater then takes active measures to monetize these people's activities despite the harm to its patrons overall, then I'd say yeah, it should probably bear some blame for it. At that point it's negligence on the part of the venue.

      8 votes
    2. [2]
      Comment deleted by author
      Link Parent
      1. onyxleopard
        Link Parent

        The distinction to draw I think is whether a platform is a "dumb pipe" or whether it curates the data somehow (via humans, algorithms, or some combination).

        6 votes