19 votes

Is content moderation a dead end?

17 comments

  1. [11]
    kfwyre
    (edited )
    Link

    The teaching equivalent of content moderation is called “classroom management”, and it basically refers to how we handle student behaviors.

    It can be split into two halves: preventative and reactive measures. Preventative measures help eliminate misbehavior before it starts. For example, a seating chart puts students in predetermined locations, which can help head off disruptions by keeping friends or enemies away from one another and thus making them much less likely to chat or argue during class. Allow students to seat themselves, however, and you haven’t laid as strong a preventative groundwork.

    Reactive measures are for when students do break the rules/norms of the class. This is not an if — it is absolutely a when. Even the most well-thought-out and well-implemented preventative measures will not stop 100% of misbehaviors. The reactive measures in schools are intended to be instructive. The student needs to experience a consequence for their actions, and hopefully learn for next time what not to do. Ideally they learn the why behind that, but more often than not that understanding doesn’t happen until way down the road. In the interim it’s enough that the consequence has enough impact that they avoid the targeted behavior in the future simply because they want to avoid the consequence associated with it.

    I think any platform will have to have both, and I think any community that only tries one will fail at managing their userbase. This is what we are seeing with many tech platforms right now that have played fast and loose with little/no preventative measures in place (some even have outright escalation measures). These platforms now look at the shitstorms they’ve created and try to go all in on reactive measures, trying to close the barn doors after the horses have left and also the whole barn is currently on fire.

    I think Tildes has attracted a lot of people who are much more interested in laying out preventative measures, which is a breath of fresh air online, as so many other platforms deliberately ~~turn a blind eye to~~ ignore it. I do think, however, that there’s an alluring siren song here that if we structure things perfectly and put the exact right preventative measures in place then we won’t need reactive moderation. This is false, but if the paradigm is believed then it can make it look like our community is failing any time rules do get broken or misbehavior does get surfaced. The common response is “what can we do to prevent this next time?” (which is a valuable question), but sometimes the answer is “nothing, this is an instance where we need to make sure our reactive measures kick in and are effective.”

    There definitely are differences between online platforms and classrooms, but I think the basic split of paradigms still holds. It’s also worth noting that there is a tension between preventative measures and freedom. Sometimes it’s worth allowing more freedom even if it allows for more potential misbehavior. Some classes can get away with not having seating charts, for example — either because there are really solid preventative measures elsewhere (e.g. a strong class culture), or because there’s an understanding that reactive measures simply will have to be used at some point on account of the freedoms offered.

    32 votes
    1. [7]
      NaraVara
      Link Parent

      Your analogy is good and I think it illustrates the problem, which is one of scale. Norms don't scale, and most modern social media is designed to scale. You can't establish or maintain group norms at that sort of massive scale, where people can keep coming and going. This is the /r/All problem on Reddit: you can have a perfectly good community, but if anything it does goes viral, it'll go to shit in short order.

      When you do classroom management you don't lecture the entire school at once. Each teacher maintains a classroom environment with its own norms, which exist within the broader norms of the school. The students understand, as they go from one class period to the next, which teacher maintains which norms, which classes they can crack jokes in, which ones are hardasses, etc. But all that granular control disappears on your Facebooks and Reddits. It's a free-for-all. Even the moderated groups on Facebook are simply too big for amateur, volunteer moderators to moderate.

      20 votes
      1. [6]
        kfwyre
        Link Parent

        This is a really great point, NaraVara.

        I hadn't thought about it until now, but in the event that someone does attempt to lecture the entire school at once (e.g. an assembly in the gym), that's when we ramp up the severity of reactive measures significantly. The principal is usually the one leading it, acting as the heavy, and admins and teachers shut down any misbehavior immediately on account of the larger audience and the greater potential for disruption.

        I was a habitual rule-follower when I was a student in school, and I still remember when I was kicked out of an all-school assembly for joking quietly with a friend. No warning, no second chances, no accounting for past behavior -- it was 100% the hammer, given at the earliest provocation. At the time it felt wildly unfair to me, but in hindsight I now see its purpose: make an example of the bad behavior as a way of setting the terms for everyone watching.

        That works in school, but definitely doesn't work online, especially because online people tend to side with the banned and escalate things themselves. It's the online equivalent of the whole gym rioting the moment the teachers kick out the one kid who was shooting spitballs.

        18 votes
        1. [2]
          Amarok
          Link Parent

          Crowdsourcing the moderation may flip the expectations there a bit, as there's no 'central authority' to attack with an us-vs-them mentality - instead it's up to the people on the bleachers to boot the trouble off of their row. If they have the sense that it's 'their bleacher' or 'their community', that might be enough.

          10 votes
          1. kfwyre
            Link Parent

            Also a great point, Amarok.

            And we are really stretching this metaphor as far as it will take us, and I am here for it!

            6 votes
        2. [2]
          whispersilk
          Link Parent

          That works in school, but definitely doesn't work online, especially because online people tend to side with the banned and escalate things themselves. It's the online equivalent of the whole gym rioting the moment the teachers kick out the one kid who was shooting spitballs.

          There's also the fact that online spaces aren't seen with the same gravity that physical ones are, and their moderators are consequently seen as not having the same degree of authority.

          If you're a kid in a school you may not necessarily respect the staff's decisions but you typically acknowledge that they do in fact have power over you and can exercise it in punishment. In an online space, that's not necessarily true. Moderators are seen by many as "just people"—which they are, yes, but which teachers are, too; what's important is the position they have been given. But this means that any hard application of their power is received not as use of authority they have been rightly given but as abuse of special privileges they don't deserve.

          Maybe this ties into the fact that online spaces are entirely voluntary while school or a workplace is to some degree compulsory? You don't necessarily have to go to school, or get a job, but neither is something you can engage with or not engage with entirely on your own terms. Online communities are, and the absolute (or near-absolute) freedom to join, take a hiatus from, and leave an online space as you please might give people the idea that since they can interact with the space whenever they want, they can also do so however they want.

          If that's the case, maybe systems like Tildes where you have to be invited to join will tend to naturally suppress the riot impulse? Not by much, maybe, but it lends membership in the group a small degree of value while also reframing it as something that isn't there for you on your own terms.

          8 votes
          1. skybrian
            Link Parent

            I know a lot of people here are refugees from Reddit and have problems with it, but let’s at least acknowledge that there are well-moderated subreddits and it’s good that they have moderators who are part of the communities they moderate. There can be bad mods, but it’s an improvement on moderation by a faceless horde of anonymous, interchangeable workers that we see elsewhere.

            11 votes
        3. NaraVara
          Link Parent

          Furthermore, social media sites are incentivized to keep user counts and engagement high. So there is a strong disincentive in place to actually ban people. And worse yet, people being shitheads actually improves engagement in the short term, since everyone will want to correct and dogpile on the shithead, further disincentivizing the excision of toxic elements.

          7 votes
    2. [3]
      raze2012
      Link Parent

      I think Tildes has attracted a lot of people who are much more interested in laying out preventative measures, which is a breath of fresh air online, as so many other platforms deliberately turn a blind eye to it.

      I don't see it as a "blind eye" so much as a cultural rejection due to norms set since the very beginning. And it helps (and hurts) that the majority of the internet is "opt-in". I see it as a public park more than a classroom for that reason; a good park can be almost perfectly self-maintaining, while a bad one can make you wish you were snoozing off in a classroom.

      But yes, a strong culture is key. That tends to be why the more advanced classes I took had fewer preventative measures while seeing almost zero conflict outside of the occasional tardiness. At that point, it was a class of a few dozen individuals who (for the most part) sought to learn and go beyond what was needed to get a diploma (read: were likely under extreme cultural pressure to get into the top colleges or receive scholarships, and felt any little mishap or road bump could cost them their future).

      10 votes
      1. [2]
        kfwyre
        Link Parent

        I don't see it as a "blind eye" so much as a cultural rejection due to norms sets since the very beginning.

        Good point. Some of it is a deliberate ignorance, but a lot of it is cultural inertia as well.

        Also you unintentionally helped me realize I used a phrase I'm trying to get out of the habit of using ("blind eye") so thanks for highlighting that for me. I'm going to edit that out of my original post.

        7 votes
        1. monarda
          Link Parent

          *Noise*
          I appreciate that you did a strike-through instead of a delete. Not only do I think edits in general should be dealt with that way (unless it's within minutes of being posted), but editing that way allows us all to learn. Thank you!

          7 votes
  2. [4]
    daturkel
    Link

    Submission statement:

    I saw this essay posted on Hacker News (comments section here if you dare). I'm not sure I agree with the author's premise that content moderation can be side-stepped by "changing the model"—but maybe I'd find it a little more credible if we had even a few vague examples of what that might look like.

    I know the Tildes community has historically been interested in issues of content moderation (especially as it relates to this site and others like it), so I thought it might be good fodder for conversation.

    12 votes
    1. [3]
      joplin
      Link Parent

      Here are some changes to the model that I think would have the intended effect, though they would have other effects as well, which may not be desirable:

      • Don't make everything automatically public. The upside is that if someone does or says something inappropriate, it only affects others who have opted in to the conversation. Unlike Twitter, which I can read without an account, I'll never see content that isn't public. The downside is that you can't just search to find content, because you need an account on any site that isn't public by default.
      • Don't allow the size of any group to exceed some small, fixed number (like 100 members). The downside is that you have fewer people giving input and responding, but the upside is that everyone can know each other and learn and understand the culture of the group. You'll probably also need to have multiple groups for discussing the same thing, which would be confusing. You could opt to only show users the most recently created one that's still under 100 users, but that also comes with issues (a rough sketch of how that assignment could work follows this list).
      • Stop making ad-supported sites. They attract the lowest-effort, highest-controversy postings because posting costs nothing and reaches a large audience. If it weren't free, it would be far less attractive to the bad actors. The downside is that things we're used to not paying for would cost money.
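
      A minimal sketch of what that cap-and-overflow assignment might look like, purely as an illustration (the cap value, names, and data structures are made up, not any platform's actual design):

      ```python
      # Illustrative only: place each new member in the newest group for a topic
      # that still has room, and open a fresh group once every existing one is full.
      GROUP_CAP = 100  # hypothetical size limit

      class Group:
          def __init__(self, topic, index):
              self.topic = topic
              self.index = index          # 1 for the first group on a topic, 2 for the next, ...
              self.members = set()

          def is_full(self):
              return len(self.members) >= GROUP_CAP

      def join_topic(groups, topic, user):
          """Add `user` to the newest non-full group for `topic`, creating one if needed."""
          topic_groups = [g for g in groups if g.topic == topic]
          open_groups = [g for g in topic_groups if not g.is_full()]
          if open_groups:
              target = max(open_groups, key=lambda g: g.index)   # most recently created
          else:
              target = Group(topic, len(topic_groups) + 1)
              groups.append(target)
          target.members.add(user)
          return target

      # The 101st person to join "gardening" lands in a fresh second group.
      groups = []
      for i in range(101):
          g = join_topic(groups, "gardening", f"user{i}")
      print(g.index, len(g.members))  # 2 1
      ```

      The confusing part mentioned above shows up here too: people interested in the same topic end up split across parallel groups, so discovery across those groups would need its own solution.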
      10 votes
      1. [2]
        vord
        (edited )
        Link Parent

        Don't allow the size of any group to exceed some set small number (like 100 members). The down side is that you have fewer people giving input and responding, but the upside is that everyone can know each other and learn and understand the culture of the group.

        I think this model works incredibly well in chat-like systems (Discord/IRC/etc.), but I'm not sure how it would scale to general commenting threads, especially when any given topic could have thousands of interested users. Heck, you likely need several hundred just to keep the conversations going.

        Don't make everything automatically public. The up side is that if someone does or says something inappropriate, it only affects others who have opted in to the conversation.

        I like this, especially on an invite-only site. Conversations wouldn't become visible to non-members for 24 hours or more. Possibly have member-only sections, and thresholds of locking (member, trusted member, etc.).
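
        A minimal sketch of how those tiered visibility delays could work (the tier names and delay values here are made up for illustration):

        ```python
        # Illustrative only: a post becomes visible to each viewer tier after a
        # per-tier delay, so conversations stay member-only at first.
        from datetime import datetime, timedelta, timezone

        VISIBILITY_DELAY = {
            "trusted_member": timedelta(0),      # sees new conversations immediately
            "member": timedelta(hours=1),
            "non_member": timedelta(hours=24),   # public visibility lags by a day
        }

        def visible_to(posted_at, viewer_tier, now=None):
            """Return True if a post created at `posted_at` is visible to `viewer_tier`."""
            now = now or datetime.now(timezone.utc)
            return now - posted_at >= VISIBILITY_DELAY[viewer_tier]

        # A 2-hour-old post is visible to members but not yet to the public.
        posted = datetime.now(timezone.utc) - timedelta(hours=2)
        print(visible_to(posted, "member"))      # True
        print(visible_to(posted, "non_member"))  # False
        ```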

        7 votes
        1. NaraVara
          Link Parent

          Kinja did something kind of like this with their "star" system. The site's writers could give you a gold star that made your posts visible by default, while non-starred people's contributions were collapsed by default and greyed out when expanded. A comment could be "promoted" to visibility if either a site writer promoted it or a starred commenter responded to it. I believe there was a feature request (that never got implemented) to allow replies without promotion as well, to discourage engagement with trolls.

          It did have the advantage of making it so you could ignore most of the strangers responding to you since they weren't really visible by default. There was no temptation to save face by having to debunk bad faith attacks as long as the trolls were all in the greys.
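
          For what it's worth, the promotion rules described above fit in a few lines; this is just a sketch of the logic as I understand it, not Kinja's actual data model (all field names are invented):

          ```python
          # Illustrative only: a comment is shown expanded rather than "in the greys"
          # if its author is starred or a writer, if a writer promoted it, or if a
          # starred commenter has replied to it.
          def is_promoted(comment, users, replies):
              author = users[comment["author"]]
              if author.get("starred") or author.get("writer"):
                  return True                      # starred/staff authors visible by default
              if comment.get("promoted_by_writer"):
                  return True                      # explicit promotion by a site writer
              # Promotion by reply: any starred user responding lifts it out of the greys.
              return any(users[r["author"]].get("starred")
                         for r in replies.get(comment["id"], []))

          users = {"alice": {"starred": True}, "bob": {}, "carol": {"writer": True}}
          comment = {"id": 1, "author": "bob"}
          replies = {1: [{"author": "alice"}]}
          print(is_promoted(comment, users, replies))  # True: a starred user replied
          ```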

          6 votes
  3. AreaDev
    Link

    This is a very interesting topic. And the comments here are great.

    There's no objective definition of right or wrong in content moderation. Right and wrong is subjective, especially across cultures, and moderation should be subjective too. - comment from Hacker News

    As noted above, to simplify things, a lot depends on the people running the community. What are their goals? Is there a conflict of interests? Do they want more people, or do they want quality? The thing is, if you need money, everything changes. Money dictates behavior.

    I am familiar with some communities where nothing can be done without strict punitive measures, but no one will take them. They need advertising, they need reach, they need money. The administration stands for high-quality content on the one hand, but on the other wants more and more participants. Two goals that are difficult to combine.

    3 votes
  4. nothis
    Link

    Hence, I wonder how far the answers to our problems with social media are not more moderators, just as the answer to PC security was not virus scanners, but to change the model - to remove whole layers of mechanics that enable abuse. So, for example, Instagram doesn’t have links, and Clubhouse doesn’t have replies, quotes or screenshots. Email newsletters don’t seem to have virality. Some people argue that the problem is ads, or algorithmic feeds (both of which ideas I disagree with pretty strongly - I wrote about newsfeeds here), but this gets at the same underlying point: instead of looking for bad stuff, perhaps we should change the paths that bad stuff can abuse.

    This seems to be the core of it. It's an interesting suggestion. I'm just skeptical about where to draw the line. If removing viral feed algorithms and advertising isn't the solution, then what should be removed? YouTube, for example, became overrun by conspiracy theorists because of its suggestion algorithms, which he seems to specifically exclude here (supposedly because there's just too much crap out there to search through manually).

    3 votes