22 votes

Masnick's Impossibility Theorem: Content moderation at scale is impossible to do well

23 comments

  1. [15]
    nacho

    Here's a short rebuttal to these lines of argumentation that I think is very persuasive:

    1. Moderation based on rules and judgement for specific cases is just like any other legal system.
    2. Legal systems and societies don't collapse at scale.
    3. Legal systems demonstrably handle the cases where they get things wrong in robust, constructive ways.
    4. There's no reason moderation systems can't handle these things analogously within their framework and scope.

    This leads to a number of conclusions:

    • Predictably, the issue isn't moderation at scale, or rules-based behavior at scale, but how social media sites perform that task: how they review mistakes, their processes of appeal and judgement, and so on. It's the moderation that's not good enough, not that moderation at scale is impossible.
    • A key issue here is how large a proportion of the platform's operations should go to governing and moderating what happens on the platform. Social media platforms tend to assume it shouldn't be the vast majority of their operating costs, although it probably should be.
    • The stakes are much lower with forum moderation than in a legal system, so the system can be way simpler. Lower stakes also means that getting things wrong is way less important. You can err on the side of caution, and assume reasonable users will agree to that.
    • Platforms get in social trouble when they don't err on the side of caution and remove too much rather than too little. That lesson goes all the way back to forums in the early 1990s: "When in doubt, remove, watch your community improve."
    • Platforms do too little to assert the legitimacy of their apparatus of governance. That trust is one of the most essential things in other systems of control: be it mechanisms of political governance, legal reckoning, the running of public services, the court of public opinion (and the press), and so on. Why in the world don't social media sites spend more effort on this necessary trust in the system, the trust that lets a society of individual liberties and rights function? If the system running a social site is to work, trust and good faith from users toward that system is imperative.

    Who in the world trusts youtube's, reddit's, twitter's or facebook's moderation systems? What kind of effort do those platforms make to win us over in good faith as users?


    I think the problems lie there, not in some mistaken view that moderation at scale is impossible. Such a view accepts way too many premises these social media companies wish weren't true, because those premises mean running a site is harder and more expensive than their investors surely hope.

    17 votes
    1. [8]
      NaraVara

      Moderation based on rules and judgement for specific cases is just like any other legal system.
      Legal systems and societies don't collapse at scale.

      I don’t think these premises hold. The legal system does not regulate speech and one of the tenets of liberal democracy is that it should not. Cases where it does enter into regulating speech are extremely fraught and complicated, often resulting in unintended outcomes and overly harsh rulings.

      Porn is a good example. It’s technically allowed but can also result in arrests for indecent exposure or behavior depending on how it’s recorded. There is a pretty hard line on pedophilia, but then there’s a gray area around technically adult performers pretending to be underage and a pretty dark gray area around artistic/cartoon depictions of underage people.

      It’s also quite likely that whatever rules there are happen to be very weakly and seldom enforced, meaning the system doesn’t actually operate that well at scale outside of a handful of high-profile incidents, which is functionally how moderation on a site like Reddit works.

      You can see additional difficulties at scale in non-liberal democracies. China literally has a firewall that blocks all unapproved content from entering the country and enables Stasi-like oversight of people’s online behaviors at random. And even then, they have trouble keeping up when people spontaneously decide to all flout a specific rule, especially if they’re being sly about it.

      7 votes
      1. [7]
        nacho

        The legal system does not regulate speech and one of the tenets of liberal democracy is that it should not.

        I don't believe that's true.

        There are all sorts of rules for speech in law in every single liberal democracy in the world. Speech law is a foundation required for liberal democracy to function.

        Libel, slander, national secrets, incitement, child pornography, fraud, rules for advertising, copyright and trademarks, fighting words, threats, the right to one's own image, rules for commercial speech, speech restrictions as part of a job/role (be that public or private, say as a doctor or lawyer). These are just a tiny sample of the speech laws ubiquitous to all democracies (and most other societies too).

        There are also rules under which speech can be compelled, and situations where speech is binding. We're legally responsible for things we say in a host of other contexts too.

        Every developed country in the world bar one (USA) has a law on the books regarding hate speech in some form or other.

        Speech is highly, highly regulated. Those who say otherwise are drawing a line in the sand suggesting that all speech regulations we have are somehow "natural" while all others would somehow be against free speech.


        Speech rules are strong, ruthlessly enforced and litigated. There's an absolute ton of jurisprudence and case law on all aspects of speech in every country based on their laws and specific wordings.

        Speech cases end up in both civil and criminal courts all the time. Speech law strongly influences speech behaviors and how speech is used, norms for speaking, norms for what's acceptable and unacceptable speech.


        I could see the issues regarding enforcement of speech removal and gray areas if the supposition were that allowing as much speech as possible will necessarily create the most open arena with the greatest degree of freedom of expression, both in theory and in practice.

        However, that supposition is wrong. Allowing people to intimidate, shock and harass others into silence, or scare them away by allowing too much speech of the wrong kind is a good recipe for reducing the diversity and experienced freedom of expression by those who participate in an arena for speech.

        14 votes
        1. [6]
          vord

          I disagree with almost every example you provided. With very few exceptions, there shouldn't be a criminalization of speech.

          Hate speech and threats can be classed as violence, and while they shouldn't be criminalized, they should certainly open that person to being beaten to within an inch of their life. But not by the state. By people.

          There can be rules surrounding punishments for being deceitful, which covers libel/slander/fraud/impersonation etc. Those are not restrictions on speech. Those are restrictions on being deceptive. It's a subtle but important difference.

          National secrets should not exist. Disclosing them should not be a crime. If they do need to exist, then only hire people you trust to not disclose them. Snowden is a hero.

          Companies are not people. They can and should be regulated appropriately. Restricting disclosure in the course of performing job duties is also far different, as that is merely a restriction on performing a job properly, like OSHA for doctors/lawyers.

          Treat people like adults, and punish actions, not words. Fighting words and incitement? Let them fight and riot.

          Child pornography is by far the most complicated problem. It's used as a weapon: all sorts of invasive things get justified by the need to stop it. Pictures of my naked newborn are technically child pornography according to the US government. So are the nudes 17-year-olds share with each other. Creation of genuine child pornography is an abuse and abuse-of-power problem (depending on the age of the child, as 17-year-olds are 'children' as well). Possession perpetuates it by creating demand. So my stance is that, if someone is found with alleged child porn, full disclosure of how it was attained should be mandatory, and it should open a proper abuse investigation.

          1 vote
          1. [2]
            nacho

            You can think these systems aren't the best way of solving things, but in this case western legal systems do not work the way you suggest.


            Those are not restrictions on speech. Those are restrictions on being deceptive.

            This is incorrect. You can be as deceptive or misleading as you like, but unless you speak falsehoods, you cannot be convicted for doing so. That's because it's so incredibly hard to prove intent unless that intent is actually expressed.

            Example: You can advertise something as having no added sugar while adding massive amounts of honey. You've got a grossly misleading advertisement, but you haven't spoken a falsehood. Your advertisement gets to stay in most western jurisdictions.


            Hate speech and threats can be classed as violence

            That is not true. In almost all western democracies, the burden of proof and the level of punishment when violence is enacted depend on the degree of physical harm caused.

            Violent crimes are things like murder, assault, rape, robbery, negligence, endangerment, kidnapping, harassment, extortion and the like.

            A criminal threat is specifically speech that threatens others with physical harm. Hate speech in no way has to threaten someone with harm based on their innate characteristics; it can also demean, stigmatize, or simply spread hate.


            National secrets should not exist.

            Society can't function without them. There's way more that goes into state secrets than nuclear launch codes or national security systems. No national secrets means no individual who interacts with the state would have a right to any degree of privacy. Any foreign entity could abuse any territory that doesn't protect itself by keeping things out of public view.

            Get leave for medical treatment as a state employee? The records of why you, a random individual, took that leave would be public unless the state could keep things secret.

            Have one water main that supplies an entire metropolitan area, and someone discovers a flaw in it? That terror target, threatening the lives of many, would have to be public info. Weaknesses in the power grid and all other sensitive infrastructure would have to be out in the open for people with bad intentions to access.

            No state secrets? The military wouldn't be allowed to use encrypted communications, or have password protected servers.

            I don't think you know what your belief would actually entail.


            Treat people like adults, and punish actions, not words. Fighting words and incitement? Let them fight and riot.

            I believe in the power of speech. Very strongly in fact. Speech can be used to manipulate, brainwash, break someone down.

            If we didn't believe in the power of speech, we wouldn't hold people accountable for the predictable and foreseeable consequences of their speech.

            But why in the world shouldn't we? Why should people be culpable for the predictable consequences of their other actions, like driving drunk or hitting someone, but not be culpable for their words and their effects?


            It's my view that child porn gets way too much attention when people discuss the supposedly few tricky areas of speech law.

            There are other, way more common, much more important and impactful areas that are way harder to regulate. That's why these areas of law have different solutions in similar countries. The solution for child porn is simple: Look at any child porn law in any western country, and you'll find they amount to pretty much the same thing with small variations.

            If you look at the laws governing a consumer's right to protection from predatory corporate behavior, and how that right intersects with a company's or person's right to market a product or service or to present an argument, you'll find way more variation in what is and isn't allowed from jurisdiction to jurisdiction. That's because balancing these intersecting rights is way harder. It's much more a matter of complex judgement and balance.

            It's exactly the same for the rules governing all sorts of criminal speech behavior, from which terms in a contract are legally enforceable to the intellectual rights to the ideas I put on the page.


            Legally, speaking is almost always considered an action like any other. I believe acts of speech are among the very most powerful actions most of us regular people can perform, and that they are the way we as individual humans can impact and change the world the most.

            Freedom of thought is absolute: I can think whatever I want and am not culpable for it.

            But the instant I change that thought to an act of speech, I'm just as legally culpable as if I were to silently punch someone. Because all societies recognize that the pen is at least as powerful as the sword, and that speech can force behaviors and reactions.

            6 votes
            1. vord

              I wasn't meaning to suggest that's how they function, rather how they should, because our systems are broken. Hence why hate speech and its ilk should be classified as violence, IMO, even though they currently are not.

              Speech is very powerful, I agree. It is the oldest, most direct form of expression of thought. This makes it fundamentally different from other actions and is why it should never be criminalized (punished by the state). Restricting speech is akin to punishing thought crimes.

              You can be as deceptive or misleading as you like, but unless you speak falsehoods, you cannot be convicted for doing so.

              And that is a failing of the system. Being deceptive to manipulate or impersonate is effectively a falsehood, even if not technically.

              A criminal threat is specifically speech that threatens others with physical harm. Hate speech does in no way have to threaten someone with harm based on their innate characteristics, it can also demean, stigmatize, or simply spread hate.

              This is a distinction without a difference. It is violence, but neither should be criminalized without action on the speech. But it should strip the speaker of protection from violence. A person preaching on a soapbox about the evil and abominable nature of homosexuality should not be able to prosecute someone who chucks a rock at their face. And demeaning and stigmatizing has its place: it's how societal norms formed in the first place. For example, walking around naked should not be illegal, but perhaps may be stigmatized. Demean, stigmatize, and shun the haters.

              Ending with a semi-tangential aside: rigidity of law is a massive part of the problem. Humans are complex and flexible. Trying to make rigid laws to constrain flexibility is a losing proposition and is the only reason we need lawyers at all. Rigidity makes loopholes possible.

              Have basic, flexible laws which punish actions, not thoughts.

              Edit: Fixed some important missing words.

              1 vote
          2. [3]
            justcool393

            should certainly open that person to being beaten to within an inch of their life. But not by the state. By people.

            that's a pretty screwed up viewpoint if you think about it for more than half a second. speech, even speech that may be considered hateful, should not at all do that.

            otherwise you'd just push people who could actually learn from their mistakes to either resent you or enact violence towards you.

            3 votes
            1. [2]
              vord

              How exactly does restriction of hate speech work?

              In my mind, what I've proposed is just a de-abstraction of the violence by not using the state as the intermediary.

              1 vote
              1. justcool393

                don't restrict hate speech on a governmental level. still don't allow violence.

                1 vote
    2. [5]
      mrnd

      This sounds spot on.

      Also:

      First, the most obvious one: any moderation is likely to end up pissing off those who are moderated. After all, they posted their content in the first place, and thus thought it belonged wherever it was posted -- so will almost certainly disagree with the decision to moderate it.

      This doesn't sound right at all. I've been on the internet for a long time, and I've been on the receiving end of moderation many times. I don't think there's been a single instance where I felt it was unjustified: usually I didn't really think something through or was mistaken about what kind of discussion the community wanted. Like you said, it is about trust and legitimacy.

      5 votes
      1. [4]
        Deimos

        It's definitely been my experience that unless the reason for moderating something is extremely obvious and binary (e.g. "don't copy-paste entire articles into a comment"), people almost always try to argue about it. This is even true here on Tildes, where the overall community is very supportive of moderation. On the relatively rare times I need to moderate, I fully expect part of it to involve having an argument with someone about it afterwards. It legitimately surprises me when someone says they understand why their post was removed, and it's absolutely less common than someone trying to debate it. I'd even say that attacking or insulting me is more common than agreeing.

        There have even been multiple times where me removing some comments or locking a thread caused one of the involved users to immediately tell me to delete their account and all of their posts. Some of these users used Tildes regularly and made hundreds of posts over months of activity on the site. If you had asked them the day before, I'm sure they would have said they were supportive of the way the site was run. But as soon as it's one of their posts getting moderated, they want nothing more to do with the site.

        10 votes
        1. unknown user

          This reminds me:

          I want to thank you for how you moderated my comments, specifically.

          In the moment, sure, I'm all steam about how my stuff got deleted but not all the rest.

          Looking back? Yeah, I get it. I've been an angry son of a bitch with some frequency here. Sometimes I get too involved with my side of the argument to see it through peacefully. I'd rather I didn't get this agitated, myself. I have things to be angry about. Not an excuse: just some reasoning, soil for understanding.

          So, yeah: thank you.

          4 votes
        2. daturkel

          The issue of "people respond negatively to being moderated" made me think about the fact that in some communities, this may be deliberate. Maybe a community is fine pissing off the people it moderates (for, lets say, the worst infractions) because it wants those people to leave.

          This is perhaps not an option for Facebook, where the target audience is "everyone in the world" (except for, I suppose, people who want to use Facebook to commit crimes). But for smaller, more selective communities, the fact that removing content sometimes leads to removing people (explicitly with bans, or de facto by irritating people enough that they leave) may be a feature, not a bug. This is maybe a logical extension of the author's "theorem": part of maintaining good content moderation is limiting the scale.

          Of course this is dicey because the most well-intentioned efforts to keep a forum/community "high-quality" can easily become exclusionary and/or elitist. Another potential side-effect is creating an echo-chamber because debate often turns confrontational online, so a peaceful community might be a community with no diversity of opinion.


          One thing that comes to mind is, ~10 years ago, the state of public vs private torrent trackers. I think the advent of music and video streaming services has likely taken a huge bite out of people's interest in building and maintaining high-quality private music-and-video-sharing communities, but at the time, communities like what.cd and Pass the Popcorn put a ton of effort into maintaining the quality of their offerings and their communities. They did so by being exclusive, by introducing high barriers to entry, and by making it nontrivial to stay in good standing (maintaining a good seed/leech ratio). Obviously the breadth of the selections required allowing the communities to grow, but the quality of the community required stemming and directing that growth. I don't want to say that making it hard to join and stay in communities is a surefire way to keep a community healthy, but it seemed to work in those cases. (Quick edit: this of course did fall into the trap I mentioned above, which is the inherent elitism and allure of exclusivity of these communities.)
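
          For a sense of how mechanical that barrier was, here is a minimal sketch of the kind of ratio rule such trackers enforced. The threshold and grace allowance are invented for illustration, not what.cd's actual policy:

            # Sketch of a private-tracker "good standing" rule.
            # Threshold and grace allowance are illustrative, not any site's real policy.

            def in_good_standing(uploaded_bytes: int, downloaded_bytes: int,
                                 min_ratio: float = 0.6) -> bool:
                """Members must upload enough relative to what they download;
                new members typically get a small grace allowance."""
                grace = 5 * 1024**3  # pretend the first 5 GiB downloaded are "free"
                if downloaded_bytes <= grace:
                    return True
                return uploaded_bytes / downloaded_bytes >= min_ratio

            # Someone who grabbed 50 GiB but only seeded 10 GiB back is at risk:
            print(in_good_standing(10 * 1024**3, 50 * 1024**3))  # False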

          3 votes
        3. justcool393

          It's kinda interesting to see it from both sides, both as a moderator and as a user.

          The mute feature is the most aggravating thing I've found as an end-user (the same goes for any ban that doesn't at least come with a short description of what led to it). I've actually adjusted how I moderate because I know that it feels sucky to be on the receiving end of a moderator action that is, or at least seems, unexplained.

          I've stopped using muting almost entirely when it isn't outright spam just because it never seems justified when weighed against how agitating it might be.

          I also pretty much always tend to give at least a short explanation, and a link if applicable, if it's a ban. I've found that people won't read wall-of-text rules but may read

          banned conduct

          https://example.com/id/whatever/id

          It's a bit more work though, and it's sometimes difficult to do on mobile, but I don't tend to do much moderation work of any kind from there anyway.
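
          For what it's worth, this short-reason-plus-link pattern is easy to script. Here's a minimal sketch using PRAW, assuming credentials configured in praw.ini; the subreddit, username, and reason text are placeholders:

            # Minimal sketch of a ban that always carries a short reason and a link.
            # Assumes PRAW with credentials in praw.ini; all names are placeholders.
            import praw

            reddit = praw.Reddit("mod_bot")

            def ban_with_context(subreddit_name: str, username: str,
                                 short_reason: str, permalink: str) -> None:
                """Ban a user with a one-line reason plus a link to the offending
                content, instead of pointing them at wall-of-text rules."""
                reddit.subreddit(subreddit_name).banned.add(
                    username,
                    ban_reason=short_reason,  # visible to mods only
                    ban_message=f"{short_reason}\n\n{permalink}",  # sent to the user
                )

            ban_with_context("examplesub", "some_user",
                             "banned conduct", "https://example.com/id/whatever/id")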

          3 votes
    3. creesch

      Moderation based on rules and judgement for specific cases is just like any other legal system.

      It is and it isn't at the same time; much of it depends on context (in the current situation), and there is one big difference. If every online community were actually governed with the same amount of legal scripture as offline legal systems, the premise would automatically hold up; but you cannot realistically expect people to go through pages upon pages of legal documents before joining a community. You'd end up in a situation similar to what you already see with the ToS agreements out there that people skip.

      You can argue that it doesn't need to be as extensive as offline (real-world) law, but because this is the internet, if we went all the way to being strict I think it would actually end up being more extensive.

      You also touch on operating costs; that is exactly another reason why it is difficult to do this to a satisfactory degree. Speaking purely from personal experience as a reddit mod: moderation there is run by volunteers. As soon as you want things to go much further (even for more serious subreddits like /r/askhistorians), you are asking for an incredible time investment that you can't demand from volunteers. At that point it might not even be profitable anymore to run something like reddit, but maybe even more importantly, you won't have the same communities anymore, because communities run by volunteers will be different from those that are just moderated by people on the clock.

      This is just a long ramble from my perspective to say that this is all fairly complex and there isn't an easy solution like saying "just operate it as a legal system": while in theory that is the right call, in practice it isn't a solution, and it's just one more argument against moderation at scale...

      4 votes
  2. [4]
    Kuromantis

    A neat meta-article Deimos linked me to as an overview of why moderation at scale is impossible.

    This implies some really far-reaching consequences. If we can't moderate something like Facebook, Twitter or reddit, can we have anything like Facebook, Twitter and reddit at all?

    Do we have to turn the social media landscape into thousands or tens of thousands of platforms doing the same thing because that's the only way people can moderate each one?

    Replace social media with IRC?
    BBS?
    Federation?
    Nothing?

    Because even if a lot of people here would probably love it if any of the above happened to social media, that sounds Malthusian.

    4 votes
    1. [2]
      soks_n_sandals

      I would argue that Facebook, Reddit, and Twitter are all moderated, but each one has its own unique failures.

      Facebook, in my opinion, through the views of Zuckerberg, is far too idealistic. Their approach of creating a single set of rules that they apply uniformly in every locality is simply not working. I think part of the issue, especially in the US, is a lack of digital literacy on the part of the users. But the same can be said about places where posts lead to violence.

      Twitter fails by not uniformly applying its rules to all users. An anecdotal example would be users copying President Trump's tweets and having their accounts banned while Trump's stay up. I understand their reasoning, but I think it's flawed. Why would a user with no followers be barred for those tweets when someone with such a big following wouldn't be?

      Reddit's failures I ascribe to the moderation of individual subs. Some are brazenly hateful or call for violence, yet it takes months for them to be removed!

      The small community of Tildes is one of its strongest features. I think you are correct that a lot of small-scale social media platforms would eventually fail to work. I think part of the solution is changing to a system where users pay to support a platform that exists without targeted ads and sponsored posts. The "free-ness" of social media is part of the problem, I think. There are a lot of things that could be done in an attempt to rectify the state of the current internet, so I'm not trying to imply that a premium model is the best or only solution.

      Edit: sourced Twitter anecdote

      7 votes
      1. RNG

        Facebook, in my opinion, through the views of Zuckerberg, is far too idealistic.
        Twitter fails by not uniformly applying its rules to all users.
        Reddit's failures I prescribe to the moderation of individual subs. Some are brazenly hateful or call for violence yet it takes months for them to be removed!

        I think in each of these cases, there are invisible pressures influencing these companies' behaviors in less-than-ideal ways, if one is prioritizing effective moderation. These are businesses that aim to avoid alienating any particular user base and will focus on maximizing the number of engaged users on the platform. To maximize value, they must tolerate hate speech at least until it reaches a threshold where it causes a net loss of users, engagement, or revenue through loss of advertisers.
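
        That incentive is really just an inequality: keep the content while its engagement value exceeds its expected cost. A toy model of the tradeoff, with every number invented purely for illustration:

          # Toy model of the engagement-vs-backlash tradeoff described above.
          # All values are invented for illustration.

          def platform_removes(engagement_value: float,
                               advertiser_loss_risk: float,
                               user_exodus_risk: float) -> bool:
              """A profit-maximizing platform removes content only when the
              expected cost of keeping it exceeds the engagement it generates."""
              return advertiser_loss_risk + user_exodus_risk > engagement_value

          # Borderline hateful content that drives engagement but hasn't (yet)
          # triggered an advertiser backlash stays up:
          print(platform_removes(engagement_value=100.0,
                                 advertiser_loss_risk=20.0,
                                 user_exodus_risk=10.0))  # False -> it stays up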

        The other critical component here is regulation. Facebook has been brought before Congress and regulators numerous times, and the POTUS has openly discussed regulating Twitter. There is an effort to satiate concerns regarding perceived bias in order to forestall potential regulations.

        These pressures suggest to me that any for-profit business focused on building an online platform will, when large enough, face pressures antagonistic to healthy moderation strategies.

        3 votes
    2. NaraVara

      This implies some really far-reaching consequences. If we can't moderate something like Facebook, Twitter or reddit, can we have anything like Facebook, Twitter and reddit at all?

      The thing to keep in mind is that the majority of the time, norms don’t hold because of enforcement. They hold because of a broad consensus among the people involved as to what the rule means and should be. All enforcement does is punish the occasional breaking of the rules. But if everyone breaks them, there isn’t much any authority figure can do about it without things getting ugly.

      The reason you can’t content-moderate platforms like Facebook or Reddit is that they’re designed in a way that promotes context collapse. Without a shared context as to what the norms and customary rules are or should be, the rule might as well not exist. A great example is the Reddit downvote. It’s technically supposed to be a “doesn’t add to discussion” button. Instead it is, by default, used as an “I don’t like this” or “I disagree” button. It’s impossible to enforce using it as intended, and no amount of complaining about it can solve the problem. This is because there is no socialization process before you can start posting to Reddit. The “Eternal September” issue means you can never reliably count on people to self-police in these things.

      2 votes
  3. skybrian

    This is not a theorem, it’s just an opinion. Dressing it up with mathematical terms doesn’t make it math. Name-checking Kenneth Arrow doesn’t make it economics, either.

    In particular, what “do well” means is nebulous, and no precise definition is given. Obviously you can’t do it perfectly, but how well is good enough?
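
    Even a precise-sounding target runs straight into scale arithmetic. A back-of-the-envelope sketch, where the volume and error rate are assumptions rather than any platform’s real numbers:

      # Back of the envelope: why "99.9% accurate" still means a flood of mistakes.
      # The volume and error rate are illustrative assumptions.

      decisions_per_day = 3_000_000  # moderation calls a large platform might make daily
      error_rate = 0.001             # a generously optimistic 0.1% error rate

      mistakes_per_day = decisions_per_day * error_rate
      print(f"{mistakes_per_day:,.0f} wrong calls per day")  # 3,000 wrong calls per day

    Whether three thousand visible mistakes every single day counts as doing it “well” is exactly the nebulous part.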

    4 votes
  4. [2]
    post_below

    I think it's easy to misunderstand what the author is trying to say, probably because there's a lot of bluster.

    The way I read it, he's saying that moderation at scale can be done better but that it's not as easy as a lot of people (especially some journalists and politicians) seem to think.

    The reason I think this is an important point (if imperfectly made) is that eventually we're going to arrive at legislation.

    When that happens, the argument that big companies can do unimpeachable moderation because they can afford to throw employee hours at the problem also implies something else which often gets missed: smaller companies can't.

    I hope we can avoid public support for legislation that will make the barrier to entry for new platforms prohibitively high. We've seen this happen in industry after industry. They fight regulation but when it's clear it can't be avoided they get in the room and make sure that said regulation blocks out everyone else.

    I'm not saying they shouldn't do better. They're shooting themselves in the foot by not working a lot harder to address problems like radicalization, hate and misinformation. But if we're not careful they're still going to win when it's all said and done.

    Side note about comparisons to other industries or social structures: there is no comparison. Nothing that has come before looks anything like the volume of interaction that happens on even a medium-sized digital platform.

    4 votes
    1. soks_n_sandals

      I think your concern about legislation blocking out new platforms is already happening to a degree. The anti-competitive nature of major tech companies is already leaving smaller platforms with the choice of being bought out or being crushed. But you are right: without anti-trust legislation in place first, the tech giants will happily sit at the table and cement their dominance.

      3 votes
  5. Rocket_Man

    I may be a bit slow today; I've only had a single cup of coffee. But I don't understand the article. I think part of my issue is that a lot of the terms aren't defined. What do we mean by moderation and scale, and what do we define as successful moderation?

    Also, a lot of the issues with social media sites' moderation seem to stem from poor policy rather than scale. Nobody is really that annoyed that Facebook misses some porn or shock photographs. They're annoyed that Facebook allows communities to form that spread hateful propaganda.

    1 vote