33 votes

Hey Elon: Let me help you speed run the content moderation learning curve

8 comments

  1. onyxleopard

    In this Techdirt piece, editor Mike Masnick explains why content moderation is a hard problem, especially for high profile social media platforms that span diverse geo-political domains.

    Through examples, Masnick illustrates the fallacious tech-bro mindset that everyone else must be incompetent or stupid, when the reality is that organizationally mature platforms have to put a lot of effort into a complex moderation policy that balances the sometimes conflicting concerns of users, corporations, politicians, and laws.

    12 votes
    1. Amarok

      Nice piece, file it right alongside A Group Is Its Own Worst Enemy from '03 and If Your Website Is Full Of Assholes, It's Your Fault from '11. All this time and so little progress...

      7 votes
      1. Liru

        I also keep a bookmark to "On a technicality" because it's a good summary.

        7 votes
    2. noble_pleb

      At a very simplistic or non-technical level, content moderation is a hard problem simply because public opinion is varied: people come from varied backgrounds, narratives, philosophies, ideologies, etc. A moderator is also a human, with the same prejudices, vulnerabilities, and fallibilities as most other humans. Once you filter out the basic abuses in content by automation or a content filter, what then remains is just somebody's opinion, which, however toxic, however insensitive, however partisan, is still highly subjective and hence a very difficult problem to solve.
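      To make that two-stage picture concrete, here's a minimal sketch of what the automated first pass might look like (a hypothetical keyword filter in Python; the patterns and names are made up for illustration, not any platform's actual system):

      ```python
      import re

      # Stand-in patterns for the "basic abuses" -- spam, slurs, etc.
      # A real platform's list would be far larger and partly ML-driven.
      BLOCKLIST = [r"\bbuy cheap \w+\b", r"\bfree crypto\b"]

      def first_pass(comment: str) -> str:
          """Remove obvious abuse; punt everything else to a human."""
          for pattern in BLOCKLIST:
              if re.search(pattern, comment, re.IGNORECASE):
                  return "remove"
          # Whatever survives this pass is "just somebody's opinion" --
          # subjective, and no regex can rule on that.
          return "needs_human_review"

      print(first_pass("FREE CRYPTO here!"))        # remove
      print(first_pass("your policy idea is bad"))  # needs_human_review
      ```

      Everything the patterns don't catch falls through to a human, which is exactly where the subjective part begins.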

      2 votes
      1. JakeTheDog

        > what then remains is just somebody's opinion, which, however toxic, however insensitive, however partisan, is still highly subjective and hence a very difficult problem to solve.

        Once we reach that point, why is it a 'problem' to solve? This doesn't seem like a viable technical objective, outside of marketing PR, precisely because of the inherent ambiguity. Who decides what is "toxic, insensitive, partisan"? Partisan is easy to identify, but I'm troubled by why it would be worthy of censorship (beyond just being a poor mindset to hold). Good-faith conversations sometimes require detours into the badlands.

        At worst, I think this framing misrepresents discourse or promotes an unsustainable top-down approach to it. I'm all for content moderation, but at what point do our increasingly sophisticated censorship tools impede legitimate, albeit difficult, conversations? We already have built-in mechanisms, e.g. shame, which at the societal level has led to resignations and cancellations, and continues to do so.

        I'm being triggered here only by your proposition of this 'final hard problem', particularly the censorship of subjectively indecent content, which can suffocate discourse. And no, I'm not a free speech absolutist; I bring this up because there is a distinction to be made between censorship and the effective guidance of productive content/thought exploration, and I don't see that being discussed.

        2 votes
        1. FlippantGod

          That was more thoughtful than most arguments I've seen from proponents of unrestricted free speech.

          Unfortunately I think understanding where to draw the line on removing illegal user content versus removing undesirable user content is not a solved problem.

          Because I don't have the answers, and you seem to want a discussion, I'll take the opportunity to list some of the challenges as I see them, and maybe we can see where the two of us stand on some of them.

          Challenges to online discourse:

          1. Representation
            Given a site's statistical population, it's tough to get a representative sample of opinions.
          2. Imbalance between posters and viewers
            Most people will read but not respond to a comment. Those who do post tend to post more, exacerbating issue #1.
          3. Hostile environments
            Underrepresented opinions are just one of many things that can contribute to an environment that dissuades some people from contributing. I suspect moderation has the biggest role to play here, potentially alleviating pressure from content that is harmful but not necessarily illegal.

          Removing harmful content is certainly censorship, so how can we justify it?

          Discourse online is a one-to-one affair while also being one-to-many. Someone just reading comments on a site will be overexposed to certain opinions through mechanisms 1, 2, and 3. That may not be healthy. However, at any time, a comment can also be inundated with responses. If someone posts a dissenting minority opinion, they may well be engaged at a scale they cannot respond to, not unlike a DDoS attack. For each individual, and in every scenario, this is going to be a fuzzy boundary.

          I believe this culminates in ecosystems where the quality of representation will suffer without moderation that can alleviate the pressures created by harmful content. And precisely because no two sites' populations will arrive at the same conclusions about what is or is not harmful, it is important for sites to be able to decide for themselves.

          Ultimately, removing harmful content is censorship, but creating an environment that dissuades other users from contributing effectively forces self-censorship. In my eyes, only one of these is likely to result in a healthy site with the capacity for lively and productive discourse.

          As for whether developing moderation tools will increase moderation, I don't think there is any issue with the tools themselves. The issues will always stem from improper application, and thus there should be transparency and accountability in moderation.
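          As a rough sketch of what that transparency could look like in practice, here is one possible shape for a public moderation log entry (the field names and schema are my own invention, not any real site's):

          ```python
          from dataclasses import dataclass, field, asdict
          from datetime import datetime, timezone
          import json

          # A hypothetical public audit-log record for a moderation action.
          @dataclass
          class ModAction:
              action: str     # e.g. "remove_comment"
              rule: str       # the written site rule being applied
              moderator: str  # who acted (could be a pseudonymous mod ID)
              reason: str     # free-text justification, shown publicly
              timestamp: str = field(
                  default_factory=lambda: datetime.now(timezone.utc).isoformat())

          # Publishing the log (e.g. as JSON) lets users audit decisions.
          log = [ModAction("remove_comment", "no-spam", "mod_42",
                           "repeated promotional links")]
          print(json.dumps([asdict(a) for a in log], indent=2))
          ```

          Whether the log names moderators or only pseudonymous IDs is itself a policy choice, but either way the decisions become auditable.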

          To conclude my thoughts on the boundary between guiding content and unnecessarily stifling discourse: I think large social sites will always tend to over-moderate or become increasingly less usable. Thus, I think the small (indie) web model, in which every person possesses their own platform (usually a personal website/blog), is the way to go.

          Rather than censorship at the individual-site level, I take issue with censorship at the infrastructure level. Should Cloudflare control who can use their infrastructure? Should ISPs? I think these are more pressing concerns.

          5 votes
    3. Protected

      Some of the problems here aren't really about content moderation or the lack thereof, but about things like elderly, unelected EU politicians pushing through laws they had no business making in the first place, or just the terrible state of copyright law everywhere... How about politicians make spam illegal instead? We can do a global treaty; everyone in the world's invited. Spammers can be punished by having their faces glued to a computer monitor for a year and having to watch all the garbage other spammers are posting, sixteen hours a day.

      1 vote