23 votes

Neo-nazis are gaming Twitter’s new rules, exactly like people warned

9 comments

  1. [6]
    Seven

    The problem here is that Twitter is trying to apply its rules universally where a moral equivalence doesn't exist. An online hate mob doxxing a games journalist for being a feminist is wrong; an antifascist group doxxing a white supremacist trying to organize a neo-nazi rally is good. There's no real way for Twitter's rules to differentiate between "good" doxxing and "bad" doxxing without taking an ideological stance itself. It's the struggle that all online spaces have to deal with, and creating rules that prevent the alt-right from abusing them is made all the harder by the fact that Twitter doesn't really have a problem with nazis on their platform in the first place.

    17 votes
    1. onyxleopard

      An online hate mob doxxing a games journalist for being a feminist is wrong; an antifascist group doxxing a white supremacist trying to organize a neo-nazi rally is good.

      This is tricky. I'd say that "preventing a neo-nazi rally" is good in the macro sense, and doxxing a white supremacist might be a step that has the effect of preventing a neo-nazi rally, so it can be deemed good in the sense of the end justifying the means. But, I'd still argue that doxxing, per se, is not good. It opens Twitter up to arguments about morality and how they determine what is good or bad.

      I think there are plenty of other steps that have nothing to do with Twitter that could be effective at preventing the neo-nazi rally from occurring. As such, I don't think it's Twitter's responsibility to insert itself into moral litigation like this when that can be handled externally (such as by laws). Neither do I think we should place that mantle of responsibility on Twitter's shoulders. Twitter policies (or corporate policies generally) shouldn't be dictated by moral principles. I think fairness and equity are much better guiding principles for policies like this, rather than moral principles.

      15 votes
    2. [3]
      skybrian

      I think this is true of laws and regulations in general (they are supposed to be unbiased). While this case is about private regulation, it is basically something that isn’t well-handled by making rules.

      An alternative or supplement to rules is relying on human judgement. But the armies of human moderators these large sites rely on are themselves pretty limited by whatever cultural knowledge they started with (they might be from a different country) and what you can train people to do at scale.

      Consider how you would moderate discussion in a different country where you don’t know the culture. If you don’t get the jokes or the cultural references, you’d probably screw up a lot, right? Heck, I’m sufficiently out of touch that I’d probably screw up a lot if there are a lot of memes going around.

      A local moderator (of a small site or discussion group) can do better if they come from the same community and know its ways.

      5 votes
      1. [2]
        Gaywallet

        they are supposed to be unbiased

        I would like to point out that this is inherently flawed. Humans cannot be unbiased. I personally believe that the societal move in America away from the spirit of the law to the word of the law is perhaps one of America's greatest failings. To ignore contextual information when interpreting law ignores the whole point of law, which is to enforce societal norms - societies cannot exist without people and people do not exist in a context vacuum. Furthermore, it creates an inherent imbalance between those that have and those that do not, because those that have are able to hire expensive people who do a good job interpreting the word of the law in frankly absurd ways (or on purpose, as legal loopholes are sometimes designed). The have nots simply do not have this same access.

        I find it particularly odd how people are able to hold on to the cognitive dissonance necessary to think this is okay. For example, the idea of a 'good guy with a gun', or laws on self defense that justify people both in defending themselves against harm and in actively causing harm to individuals - the same laws can be used to justify defending yourself from an armed assailant and a cop killing a minority whom he has been taught to fear. There are many laws for which the interpretation of the word of the law has led to absurd dichotomies precisely because circumstances do not matter. Also, somehow the judge has some leniency for this and can often decide within parameters how harshly to punish someone, but not whether the law should apply in the first place?

        I think most humans are compassionate towards the hungry stealing a loaf of bread to survive, so why did our legal system decide to ignore the person and instead focus on the 'stealing'? The implications of a system founded on this interpretation of word, rather than spirit, are even more dire when you forecast a future where automation is only increasing.

        11 votes
        1. skybrian

          This argument is basically about trusted people having more or less discretion. In the limit, some rules can be entirely automated (particularly for a website). The other limit would be to have trusted people make judgements arbitrarily with no rules whatsoever. But consistency has a lot to do with justice (usually people find inconsistent decisions unfair), so usually we rely on a combination of trusted human judgement and some rules or guidelines for consistency and predictability. Also, though rules-lawyering is a risk, it’s hard to expect people to abide by rules if you don’t write them down.

          I don’t think context should be ignored, and that’s why I argue in favor of having moderators who are part of the community, so they understand the context. A major problem with Twitter is lack of structure. Just inherently as part of how it works, tweets are often taken out of context, and since Twitter is global, context collapse is ever-present. Reddit, for all its problems, at least has community moderators.

          (Fortunately, online we usually don’t have to worry about stealing. We could talk about the rise in shoplifting but I think that should be a separate discussion.)

          4 votes
    3. nothis

      It would require acknowledging that some objectively correct decisions are political. This used to be less relevant but now "believing in science" is a political stance so you gotta pick sides.

      5 votes
  2. [3]
    dootdoot

    Is this “gaming” the rules? For example, the neo-nazis reported the Atlanta Antifascists’ doxxing of a white student union organizer. This seems in line with the rule’s intent. Should Twitter allow doxxing people with certain profiles (race, ideology, gender, etc)? No doxxing is a standard rule for online platforms. I feel it is a good rule overall (despite decreasing visibility into malicious actors) and, like all rules, it should be applied equally.

    10 votes
    1. [2]
      dubteedub

      Here is the statement from Atlanta Antifa on this - https://twitter.com/afainatl/status/1466463241425588232

      The tweet in question exposed an organizer for a racist "White Student Union", which had littered Auburn University with anti-Semitic flyers and harassed members of the campus community. Despite the post's information being newsworthy and in the public interest, Twitter denied our appeal and forced us to delete it.

      And their blog post here which includes a screenshot of the post in question which just names the individual - https://atlantaantifa.org/2021/12/01/caleb-petersen-2017-2018-president-of-auburn-universitys-white-student-union/

      Petersen was featured in a 2017 documentary on Richard Spencer and the Alt-Right, produced by Channel 4 in the UK. Although he did not provide a name and his face was blurred, the “White Student Union” leader talking about “the death of our [white] race” is clearly Petersen, who will be recognizable by voice to anyone who has met him.

      This is from Twitter's new personal information rules:

      However, we recognize that there are instances where users may share images or videos of private individuals, who are not public figures, as part of a newsworthy event or to further public discourse on issues or events of public interest. In such cases, we may allow the media to remain on the platform.

      I personally think that there is a strong argument that Caleb is a public figure and that this information is newsworthy.

      In addition, all Atlanta Antifa did was share this name, which is specifically listed as one of the things allowed under Twitter's rules:

      sharing information that we don’t consider to be private, including: name; birthdate or age; place of education or employment; location information related to commercial property or places of business, where this information is publicly available; descriptions of physical appearance; gossip, rumours, accusations, and allegations; and screenshots of text messages or messages from other platforms (unless they contain private information e.g., phone number).

      What these various antifa accounts do is name white supremacists, neo-nazis, and other alt-right individuals and at times state their place of employment. These two items are allowed under these rules and should not have been actioned.

      So it does appear to me to be a clear case of these nazis gaming the system by mass reporting these accounts to get them banned. This is a tactic we have seen employed very often on Reddit, where teams of reactionaries will organize on Discord and mass report posts/comments of users they are harassing to get them banned.

      20 votes
      1. dootdoot

        Thank you for your thoughtful response. I think you have mostly convinced me. One thing to note, however, is that it is only allowed “where this information” is publicly available. In this case I feel we fall back to whether they are public figures. That is an unfortunately ambiguous classification so it is hard to ‘prove’ at the edges.

        5 votes