15 votes

Topic deleted by author

21 comments

  1. [4]
    cfabbro
    (edited )
    Link

    Edit: In retrospect, I regret making this comment. It was overly cynical and hypocritical of me, considering I was bitching about online low-effort contrarianism not too long ago. Sorry for posting this and sidetracking this discussion with my negativity, @emmanuelle.

    Much like reddit's sudden new stances, where they claim to want to combat hate speech, I'll believe it when I see it with Discord as well:

    https://disboard.org/servers/tag/8chan
    https://disboard.org/servers/tag/all-lives-matter
    https://disboard.org/servers/tag/offensive-memes
    https://disboard.org/servers/tag/free-speech
    https://disboard.org/servers/tag/toxic
    https://disboard.org/servers/tag/pol
    https://disboard.org/servers/tag/no-rules
    https://disboard.org/servers/tag/boogaloo

    14 votes
    1. [2]
      Adys
      Link Parent

      Nearly all those servers have 0-50 online. Mid-sized discord servers routinely have 2000+ concurrent online.

      This list is kinda like taking a list of a few dozen toxic subreddits with <400 subscribers and saying "reddit has a problem". Reddit does have a problem, but it's not with the 400 subs communities. Similarly, discord does have a problem, but it's not really in those links here.

      7 votes
      1. Crocodile
        Link Parent

        Yep, @cfabbro selectively looked for those specific discords. Not to say that they should not be banned, but as you said, they are not the main issue. Instead, it is within medium discords without such an overt name or, more importantly, larger somewhat unrelated discords with plenty of hate. I would consider myself a heavy discord user, and I frequently see blatantly racist and anti-Semitic content. Mods usually take care of it, but the issue is indubitably pervasive across the platform.

        8 votes
    2. [2]
      Comment deleted by author
      Link Parent
      1. cfabbro
        (edited )
        Link Parent

        For sure. It's definitely a first step in the right direction... I am just hesitant to dole out any praise until they actually do more than just write a blog post about their plans to change things and create a "taskforce", since I have seen similar words and efforts bear absolutely no fruit way too many times before to have faith in them. But if/when they actually start implementing meaningful changes that have the desired effect, I will happily eat crow for my cynicism.

        6 votes
  2. [16]
    zptc
    Link

    Not too long ago I wanted to report a user posting racist content in a non-racist community. The reporting guidelines actively discouraged reporting anything short of actually inciting violence or threats. The impression it gave me was that they simply didn't want to be bothered with handling reports of casual racism; if somebody posts stuff of that nature, you're encouraged to remove them from your community and/or put them on ignore. There was zero indication that Discord would do anything against such users on their own. I don't see anything here to suggest that will change, unless I missed something.

    12 votes
    1. [6]
      Adys
      Link Parent

      Why do you think that should change?

      If my Racist Uncle says one thing out of place in a messenger group, should he lose access to messenger altogether? If not, why does that rule apply to discord?

      Platform wide bans are reserved for abusers of the platform.

      3 votes
      1. [2]
        thundergolfer
        Link Parent

        should he lose access to messenger altogether?

        You've leapt to that outcome all on your own. u/zptc was simply talking about being able to report instances of racism, and having those reports being taken seriously by the platform.

        8 votes
      2. [3]
        zptc
        Link Parent

        If they have a history of posting racist content for years, but have never crossed the line to actual threats, should they be immune from consequence? Why does Discord allow such racist speech to continue as long as it doesn't cross that line?

        3 votes
        1. [2]
          Adys
          (edited )
          Link Parent

          My bag was stolen a few months ago. Why did the police allow that to happen?

          Well, it's not the role of the police to prevent my bag from being stolen but rather to act when the deed does happen. And had I not talked to the police, they wouldn't have been able to do anything there. Were it the role of the police, what would that look like? When authorities move from reactive to preventative, things can get extremely invasive.

          I'm being technical here but I feel it's important not to ask messaging platforms to proactively look at their users' chatlogs and police them, whether it's "for the children" or not. Furthermore, remember that in E2E-encrypted platforms (which is what we should ask messaging platforms to be, and sadly discord is not), that is an impossibility.

          @thundergolfer mentioned you're only talking about the ability to report such content, which, well, is a decent ask. But I also feel Discord isn't in the wrong to demand that community moderators moderate. You run a Discord community, you keep it clean. If you don't keep it clean, that is where Discord themselves ought to step in. Real world parallel: If a member of my board game group is toxic, I'm going to talk to the board game group organizer about it… not run to the police right away. If the group itself is toxic, I may report it to whoever hosts that group. If the host allows that to happen, I may go a step up. And so on.

          Edit: For context, I am / have been a moderator of several 10k+ Discord communities. And obviously, zero tolerance for racism.

          8 votes
          1. zptc
            Link Parent

            At no time did I mention Discord being proactive or filtering etc.

            What I'm not okay with is Discord acting like casual racism - anything short of threats or an entire racist community - simply doesn't exist. Or that they can't be bothered with it, I guess.

            Their statement says

            Racism is not only something you consciously do. It is not just vicious comments and violent acts. It’s insidious.

            "... but as long as you don't threaten anyone, casual racism on our platform is totally cool and if your community mods are fine with it, so are we" is the impression I got.

            If you don't keep it clean, that is where Discord themselves ought to step in.

            But they won't, because they don't even allow themselves to know about it in an organised fashion. That's the problem I was trying to point out. So yes, there should be the ability to report users like that, and Discord shouldn't tolerate that behavior even if the mods are okay with it.

            6 votes
    2. [9]
      Crocodile
      Link Parent

      I know what you mean. Really, the only solution here is to be scanning every message and have some damn good filters. Not all racist content is people spamming the n-word. Even straight-up racism might not have "racist" terms in it, but rather is racist because of the context or other meanings. Therefore, this method is pretty much impossible to achieve with good accuracy. There would be too many false positives and negatives. That over-censorship, especially the accidental kind such a hypothetical filter would inevitably produce, would drive many, many users off the platform. Regardless, it simply is not feasible.
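
      To make the false-positive/false-negative point concrete, here's a minimal Python sketch of the naive keyword approach (the blocklist terms and example messages are placeholders I made up, not anything Discord actually uses):

      ```python
      # Naive keyword filter: flags any message containing a blocklisted term.
      BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholder terms

      def is_flagged(message: str) -> bool:
          words = message.lower().split()
          return any(term in words for term in BLOCKLIST)

      # False positive: quoting or reporting a slur gets flagged too.
      print(is_flagged("he called me slur1 and the mods did nothing"))   # True

      # False negative: coded language sails through, because the racism
      # lives in the context, not in any single word.
      print(is_flagged("you know who really controls the banks, right?"))  # False
      ```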

      While it is somewhat nice to see from Discord, I do not see what else they can do besides crackdowns on servers that are openly racist (such as one called White Pride) or those that are widely reported. Unfortunately, almost any large platform (especially those which allow users to have their own communities) will have this problem. Those open and self-moderating communities are awesome, but there will always be that element of bigoted communities popping up as well.

      1. [8]
        thundergolfer
        Link Parent

        Therefore, this method is pretty much impossible to achieve with good accuracy.

        We could certainly do it with enough investment. It's a matter of prioritisation. If you've become intimately familiar with racists and white nationalists online, it is quite easy to pick out even their really subtle stuff.

        It's impossible with mere computers, but with active human involvement it's quite doable, and has already been done in certain online communities.

        Facebook and Reddit and Twitter just don't care enough about suppressing racism to spend (or lose) a shitload of money rooting it out of their platforms.

        2 votes
        1. aethicglass
          Link Parent

          This would be kinda awesome to see happen. But with discord, I feel like it also comes down to the mod teams on individual servers. I mod on a couple public servers (both around 500 users), a couple private servers that aren't problematic whatsoever, and assist with community stuff in a few 500-1k servers. Basically, it's a team effort to keep the garbage at bay. But it's also pretty doable for communities of that size.

          For sub 1k servers, I'd say a mod per 100 users is a pretty decent ratio. Larger than that, you don't necessarily need more people with high permissions, just a sort of community watch. People who are active who have community standards in mind, who are willing to help out people who need it, and issue warnings to those in violation of standards.
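
          As a rough Python sketch of that staffing heuristic (the community-watch ratio above 1k is my own guess for illustration, not something stated above):

          ```python
          import math

          def suggested_team(users: int) -> dict:
              """~1 mod per 100 users up to 1k; past that, cap the number of
              high-permission mods and add 'community watch' members instead
              (the 1-per-200 watch ratio is a made-up placeholder)."""
              if users <= 1000:
                  return {"mods": math.ceil(users / 100), "watch": 0}
              return {"mods": 10, "watch": math.ceil((users - 1000) / 200)}

          print(suggested_team(500))    # {'mods': 5, 'watch': 0}
          print(suggested_team(10000))  # {'mods': 10, 'watch': 45}
          ```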

          One thing I've noticed is that the bigots/sexists/racists tend to dip their toes in first. Sure, there are always edgy trolls who come in guns blazing, but they're simple to deal with (except in cases where they persist by bouncing around a VPN and creating multiple accounts, but those are rare). The ones that are most difficult to deal with walk and talk like everyone else for the most part. They're just slightly off. Like you said, "If you've become intimately familiar with racists and white nationalists online, it is quite easy to pick out even their really subtle stuff." It's almost like they dog whistle for each other so they know they aren't alone before they start really letting loose.

          I've also been noticing lately that it works quite well to make them feel alone in that regard. Calling out the dog whistles for what they are, in combination with letting them know that "this is not the place you're looking for", really takes the wind out of their sails. As a side note, I don't feel like this used to do the trick. I think recent events have begun to illustrate the sheer number of people who are not willing to tolerate their shit anymore, and how few in number the bigots are in comparison. That's just my running theory, though.

          8 votes
        2. [6]
          Crocodile
          Link Parent

          Just curious, which large online communities has that been successful in? I could see how human involvement would work in smaller ones, but not for something as large as Discord.

          1 vote
          1. [5]
            thundergolfer
            Link Parent

            I've not seen it work in large communities, but I think the techniques that work in smaller communities can easily be scaled up with enough investment in human capital and economic capital.

            If we stop insisting that solutions must avoid significantly hurting profits, then things get a lot easier. Facebook spends $4.015B on R&D in a single quarter. Imagine prioritising the removal of racism from the platform such that it got $1,000,000,000 of investment per quarter. Reckon we could eventually get racism off Facebook with that?

            1. [2]
              DrStone
              Link Parent

              Let's say they get human moderators and pay the US minimum wage, which was $7.25 as of 2019. A year has ~8766 hours, so that's ~$64k per year per 24/7 "moderator" (actually a few individuals' shifts). With a proposed quarterly budget of $1B, so $4B annually, that pays for ~62,500 24/7 mods (i.e. at any given time, you'll have 62.5k mods working). According to this there are ~2.6 billion monthly active Facebook users as of Q1 2020 and rising. That means each mod could be responsible for ~41,600 individuals per month, assuming even distribution.
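
              The arithmetic above, worked through in Python (the paragraph rounds $63.5k up to $64k, giving the ~62,500 mods and ~41,600 users each):

              ```python
              MIN_WAGE = 7.25        # USD/hour, US federal minimum as of 2019
              HOURS_PER_YEAR = 8766  # ~365.25 days * 24 hours
              ANNUAL_BUDGET = 4e9    # $1B per quarter, as proposed above
              MAU = 2.6e9            # monthly active Facebook users, Q1 2020

              cost_per_seat = MIN_WAGE * HOURS_PER_YEAR  # one 24/7 "moderator" seat
              seats = ANNUAL_BUDGET / cost_per_seat      # mods on duty at any given time
              users_per_seat = MAU / seats

              print(f"${cost_per_seat:,.0f}/year per seat")  # $63,554/year per seat
              print(f"{seats:,.0f} concurrent mods")         # 62,939 concurrent mods
              print(f"{users_per_seat:,.0f} users per mod")  # 41,310 users per mod
              ```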

              This isn't going to be entirely accurate with real-life factors. For example, not all of the users are going to be consistently active, so the effective responsibility could be lower. On the other hand, getting paid minimum wage for a full-time job is going to get you bottom-barrel mod quality/effort, probably not worth the mental burnout; you'd probably have to pay a fair bit more (and maybe for counseling too), meaning fewer mods with the same budget, so more responsibility per mod.

              Having never moderated personally, I don't know what a reasonable responsibility range is. I also have no idea what the immediate and knock-on consequences would be of repurposing such a large portion of the R&D budget for this (e.g. direct pay for the human mods, plus probable lost profits, plus maybe less desirable dev jobs with a smaller R&D budget, possible boosts if the user community does indeed change, etc.).

              1 vote
              1. thundergolfer
                (edited )
                Link Parent

                Facebook currently employs minimum wage workers to do a lot of their moderating, but I'd attack the problem without just scaling up their numbers.

                Of course the system can't just brute-force the moderator-to-user ratio, but so many other options exist and Facebook already uses some of them.

                With $1 billion a quarter, I'd focus investment on very highly paid employees ($150k+/year). I think that kind of allocation between low-paid and high-paid human capital is roughly what Facebook actually does, but with far less money, and Facebook crucially also hampers its teams' ability to do their jobs by having C-suite Facebookers come in and explicitly allow racism on the platform when it's the speech of the powerful (e.g. Trump).
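
                For scale, here's that allocation in Python, under my simplifying assumption that the whole $1B/quarter went to $150k/year salaries:

                ```python
                ANNUAL_BUDGET = 4e9  # $1B per quarter
                SALARY = 150_000     # per well-paid trust & safety employee, per year

                print(f"{ANNUAL_BUDGET / SALARY:,.0f} employees")  # 26,667 employees
                ```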

                1 vote
            2. [2]
              Crocodile
              Link Parent

              I'm all for removing racism (duh), and $1 billion of R&D would make significant changes; I sincerely hope they invest in strategies to spot and remove it. It could potentially open up new doors for spotting and removing harmful content that we cannot see yet. However, I do not see an outcome where people are content with the moderation.

              For instance, this new automatic system (maybe combined with human intervention and spotting) could end up removing too much: content that is non-racist but unfortunately resembles a fascist dogwhistle, or content about racism and its problems, gets removed. Meme example: Pepe the frog. No doubt it is used in many vile communities, but I still see it used as a general meme (and it sucks that Nazis started appropriating it, as they do with other symbols). A system could potentially pick up on this being "harmful" and thus remove it in instances where it is not. Two bad outcomes follow from that. The first is obvious: removal of non-harmful content/wrongful censorship, which would make many users angry (and rightfully so). The second is that racists, trolls, or both (as usual) could game the system by purposefully appropriating more symbols to mess with the censorship.

              The other way this automated/human system could go is the opposite. Sites like Facebook understand the previously mentioned risks of wrongful removal, so the system is weakened, which then still allows racist content to remain, and perhaps to adapt into being stealthier. Users are still mad because it looks like the system did not do anything (even if it greatly reduced the problem).

              1. thundergolfer
                Link Parent

                Meme example: Pepe the frog. No doubt it is used in many vile communities, but I still see it used as a general meme (and it sucks that Nazis started appropriating it, as they do with other symbols). A system could potentially pick up on this being "harmful" and thus remove it in instances where it is not.

                I can tell the difference most of the time. We still have the 'Pepe the frog' image as a Slack icon at work. In really borderline cases there doesn't need to be a remove/keep dichotomy. In a sophisticated system multiple avenues of action could be taken in the grey areas.
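
                A sketch of what those multiple avenues could look like, assuming some upstream classifier emits a confidence score (the thresholds and action names here are illustrative, not any platform's real policy):

                ```python
                def triage(confidence: float) -> str:
                    """Map a 'this is hateful' confidence score to a graded
                    response instead of a binary remove/keep decision."""
                    if confidence >= 0.95:
                        return "remove"        # clear-cut: take it down
                    if confidence >= 0.75:
                        return "human_review"  # grey area: queue for a paid moderator
                    if confidence >= 0.50:
                        return "reduce_reach"  # softer: stop amplifying, don't delete
                    return "keep"

                for score in (0.97, 0.80, 0.60, 0.10):
                    print(score, "->", triage(score))
                ```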

                The most difficult parts are empowering well-paid moderators to take a very strong stance against racism, and getting the community to accept and participate in the anti-racist system.

                In our current reality, on really big platforms this would piss off literally millions of racist users, and so Facebook and Discord could lose a shitload of money and open themselves up to attacks by the authoritarian Right.

                2 votes
  3. freddy
    Link

    As much as I dislike Discord as a product, I can really get behind this message. Well done Discord (for once).

    3 votes