13 votes

How will moderation on existing groups work?

So far, I haven’t seen much moderation aside from bans and the like dealt out by the admins (please correct me if I’m wrong and a moderation system is already in place), but how will this work once Tildes is fully released to the public? Will people who show interest in a certain community be approached and asked?

14 comments

  1. [6]
    Comment deleted by author
    Link
    1. [5]
      wow
      Link Parent

      Some parts of that first link worry me a bit:

      One of the most common ways that communities defend themselves is by appointing moderators—people entrusted with defining and enforcing the norms of behavior for the community. This is an effective system, but has its own weaknesses, including difficult decisions about which users should be made (and allowed to remain) moderators.

      In my experience, it's always been the best approach to select new moderators from the people known as active, high-quality members of the community.

I agree wholeheartedly with the above. People who are trusted as senior members of groups and are given permissions accordingly should be selected based on their commitment and drive towards the community. It must be known that these people have great intentions and are willing to put in the work to maintain high levels of quality in their respective sections.

      This next part, however, troubles me:

      My goal with the trust system on Tildes is to turn this process of discovering the best members and granting them more influence into a natural, automatic one.

      It's worth noting that the process does not need to be entirely automatic. The trust system won't necessarily be a complete replacement for manually promoting users, and a combination of both systems may end up working best.

Any process that automatically promotes users based on, for example, their reporting ‘stats’ and contributions to a group could cause problems. This system may be extremely easy to game and take advantage of, regardless of whether there is some sort of manual aspect. Personally, I believe the entire process should be manual and people should be judged on a case-by-case basis by pre-existing moderators and not by a bot.

      3 votes
      1. [3]
        Comment deleted by author
        Link Parent
        1. [2]
          Kiloku
          Link Parent

          A good way to do that is giving filtered data to whoever's responsible for appointing mods, making their job easier

          1 vote
          1. Amarok
            Link Parent

            A lot of this depends on doing this more openly than for example reddit.

            I've been the top/old grumpy mod on /r/listentothis on reddit for almost a decade. The hardest thing we have to do there is find new moderators. Most people don't want to do the job, especially if it's all janitor work. We usually approach top submitters, but reddit doesn't even make it easy to figure out who those people are so we can talk to them about it. Tildes should be able to provide a list of people with a thorough overview, which would make selecting new mods and growing the team very easy.
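Just to make that concrete, here's a rough sketch of what that kind of candidate overview could look like; the fields and weights are completely made up for illustration, not anything Tildes actually tracks:

```python
# Purely illustrative: ranking mod candidates from an activity overview.
# The fields and weights are assumptions, not actual Tildes data or plans.
from dataclasses import dataclass

@dataclass
class MemberActivity:
    username: str
    posts: int
    comments: int
    reports_upheld: int      # reports they filed that mods agreed with
    account_age_days: int

def candidate_score(m: MemberActivity) -> float:
    """Favor sustained, high-quality participation over raw volume."""
    if m.account_age_days < 180:          # require some history first
        return 0.0
    return 1.0 * m.posts + 0.5 * m.comments + 2.0 * m.reports_upheld

def top_candidates(members: list[MemberActivity], n: int = 10) -> list[MemberActivity]:
    return sorted(members, key=candidate_score, reverse=True)[:n]
```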

            This process is simple/basic for the most part, most groups will never have any problems. Some groups however (such as politics) tend to bring out the absolute worst behavior in people, and things stop being so simple. It's much harder to guard against biases and cliques when dealing with controversial, rage-inducing subjects.

            6 votes
      2. [2]
        pseudolobster
        Link Parent

        Presumably mod actions done in bad faith can be reverted by other users who've been appointed as mods.

If we work off the assumption that most people are generally good, and there are restrictions on new accounts that prevent bots and brigades from overwhelming the userbase, this should work. Wikipedia doesn't even have that barrier to entry: it gives editing powers to everyone by default, without an account. That has caused a ton of issues, but for the most part, 99% of articles are unvandalized and the place isn't a complete shit-show.

        4 votes
        1. Amarok
          Link Parent

          Right. Just a few hot-topic areas that end up as flash points for groups attacking each other.

          I'm not sure how to solve those problems in the form of code - I think in those cases it's going to require a much more careful selection of mods, people who can set their own biases aside and remain impartial. I think that has to be solved by the people, not the technology. I look at /r/changemyview as a model of doing this well.

          1 vote
  2. [9]
    Amarok
    (edited )
    Link

    The basic idea is that as you participate (even just by lurking/reading) you begin to build up reputation in the groups where you participate. Think of this like levels in a game. As you gain rep, your votes and actions are more powerful. Each level grants access to more moderator and editor tools/features. Eventually higher up the chain you graduate into a full moderator.

    The goal is to turn several thousand of a group's core members into the moderation team, rather than having some small group of a dozen people dictating everything. We also want to have some democratic/governance systems in place so that mod teams can have reliable polls and get people to vote on issues facing the community with a quorum of the subscribers.

    We're still discussing what those features will be. Here's an incomplete list...

    • access to reporting systems
    • access to comment tagging (and some tags may require more rep than others to use)
    • ability to edit titles/content (for corrections, addenda, etc.)
    • ability to mute or remove users from the community (for abuse cases)
    • ability to make anonymous posts/comments from your real account
    • ability to remove posts
    • ability to merge posts, manage megathreads
    • access to build/edit the wiki and permissions (yes, we want wiki for each group)
    • access to a moderator backroom (if we have ~music.metal, it'd be ~music.metal.gov or somesuch)
    • ability to promote posts up to the parent group
    • ability to review posts being promoted into your group from child groups
    • ability to change active group features (comment/thread modes, widgets, etc)
    • ability to make changes to the group styling, edit the sidebar and all other content
    • access to special content classes with different ranking rules (AMAs, current event threads, etc)
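
    As a purely illustrative sketch of how level-gated access to those features might look (the levels and groupings below are invented for the example, not actual plans):

```python
# Illustrative only: gating the kinds of features listed above behind
# trust levels. The levels and groupings are invented for the example.
FEATURES_BY_LEVEL = {
    1: {"report", "basic_comment_tags"},
    2: {"edit_titles", "anonymous_posting", "wiki_edit"},
    3: {"remove_posts", "merge_posts", "mute_users"},
    4: {"group_settings", "promote_posts", "review_promotions"},
}

def allowed_features(trust_level: int) -> set[str]:
    """Everything unlocked at or below the user's current level."""
    unlocked = set()
    for level, features in FEATURES_BY_LEVEL.items():
        if level <= trust_level:
            unlocked |= features
    return unlocked

# e.g. allowed_features(2) includes "report" and "edit_titles",
# but not "remove_posts"
```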

    That topic log over there ---> will maintain a record of whatever's being done to the threads, by whom, and why - to keep things honest. We're still debating what the tools and features will be, but there will probably be a hell of a lot of them once the site is a few years old.

    There might be another 'tier' of moderation once there are a few million users here - people who manage entire group trees, rather than individual groups. These people would be more concerned with the flow of content between groups and the overall health of their group tree than with specific groups. That might be the sort of position the best mods can eventually graduate into from the groups in those trees.

    Another point that's worth making - moderation on sites like reddit and twitter is overly focused on dealing with abuse, using ancient, very poor/basic tools that don't make it easy. I think we'd like that to change. Less janitor/cop, more editor/promoter/curator activity.

    Earning high level access isn't going to be easy (think years, not weeks). Each level of trust is likely to be progressively harder to earn than the prior ones, with the most abuse-risky features reserved for the top of the trust tree. Simpler stuff like basic comment tags will be much easier to earn, and how those are used will be reflected in the person's rep as well.

    I should also point out that this isn't reddit. If/when a mod team goes rogue, it'll be dealt with immediately and fully by the tildes admins (which, right now, is just deimos).

    The main benefit of this system is that bans have teeth. If you earn your way into a position of power, it takes a lot of time, and if you get the banhammer because you abuse that power, it's going to be a very long time before it can be earned back. Activity that on other sites relies on easy account turnover (like spamming and trolling) will be much harder here, since new accounts start out as dirt with no access to anything beyond the basics we have here now.

    I think there will probably be three trust systems at work... one measuring you in each group, another measuring you in each tree, and another that watches your sitewide behavior. Access to anonymous commenting and basic comment tagging, for example, is the kind of thing you should probably earn on a sitewide basis rather than in each group.

    This rep will decay over time, as well, so inactive accounts will lose access to a lot of these features. That will stop people from wielding large armies of alt and new accounts to influence the content.
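
    To make the shape of that concrete, here's a minimal sketch assuming per-group, per-tree, and sitewide scores with a simple inactivity half-life; all of the numbers and names are invented:

```python
from dataclasses import dataclass, field

# Rough sketch of the three scopes described above (per-group, per-tree,
# sitewide) plus inactivity decay. The half-life and names are invented.
HALF_LIFE_DAYS = 180.0

def decayed(rep: float, days_inactive: float) -> float:
    """Reputation halves for every half-life of inactivity."""
    return rep * 0.5 ** (days_inactive / HALF_LIFE_DAYS)

@dataclass
class UserTrust:
    per_group: dict[str, float] = field(default_factory=dict)   # e.g. "~music.metal"
    per_tree: dict[str, float] = field(default_factory=dict)    # e.g. "~music"
    sitewide: float = 0.0

    def effective_group_rep(self, group: str, days_inactive: float) -> float:
        return decayed(self.per_group.get(group, 0.0), days_inactive)
```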

    6 votes
    1. [8]
      wow
      Link Parent

      I feel like that promotion system (reputation that is automatically gained) can be taken advantage of and abused by someone with bad intentions. It also doesn’t ensure that a person is genuine in their actions, and it could end up promoting someone who is looking to cause harm to a group or its users.

      Is there any way to have it contain aspects of both automatic reputation and manual review and promotion? I feel like that would be best as it would better screen potential users and prevent situations of abuse from occurring. Maybe users with high reputation are picked from a pool by reviewers and promotions are handled from there?

      1 vote
      1. [3]
        Amarok
        Link Parent

        For a person to gain promotion capability, they've got to participate well in a group for a long time. Most trolls and spammers haven't got the patience for that - spending many months behaving like a model contributor only to turn around and spam once they earn access. That said, it will happen because a lot of people are paid to do this, which is why people found to be abusing a specific feature will lose access to it. Keeping things open/transparent increases the chance that users will notice when something like this happens, and they can report it to the mods/admins who will deal with it. Having hundreds or even thousands of mods (who are also the group's top posters/voters/lurkers/commenters) makes it nearly impossible to corrupt an entire team.
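
        A rough sketch of what per-feature revocation could look like, layered on top of whatever access someone has earned (names and data are made up for illustration):

```python
# Hypothetical sketch: abusing one tool revokes that tool without wiping
# the rest of someone's access. All names here are invented.
granted: dict[str, set[str]] = {"alice": {"report", "edit_titles", "remove_posts"}}
revoked: dict[str, set[str]] = {}

def revoke(user: str, feature: str) -> None:
    revoked.setdefault(user, set()).add(feature)

def may_use(user: str, feature: str) -> bool:
    return (feature in granted.get(user, set())
            and feature not in revoked.get(user, set()))

revoke("alice", "remove_posts")
assert may_use("alice", "edit_titles") and not may_use("alice", "remove_posts")
```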

        We do expect human-driven mod teams to take over from the automation after a point, likely the point where someone graduates to full mod access. That's why we want a mod backroom, so they can interview each other, talk about issues and develop consensus, etc. We just need to be careful that teams don't develop into small cliques who try to lock everyone else out.

        4 votes
        1. [2]
          wow
          Link Parent

          Having hundreds or even thousands of mods (who are also the group's top posters/voters/lurkers/commenters)

          Is that a good thing though? I understand having many moderators increases the chances of a bad apple being weeded from the team, but what is the point of giving many users moderation privileges if everyone else who contributes is also given these abilities? It washes out what honest users are truly able to accomplish. How will big decisions be discussed when the opinions of hundreds/thousands must be collected first?

          1. Amarok
            Link Parent

            It's how the best mod teams form anywhere - the group within the group. The people who invest the most effort in the group are also the people who will take those moderation tasks the most seriously. If you want a good overview of the challenges of self-governance (and how bad we are at it) read Shirky's essay on a group being its own worst enemy. It's basically a blueprint for how to build trust/teams properly.

            Over on reddit, /r/science has more than 1500 mods. They kick ass at moderation with a team that size. It can work.

            3 votes
      2. dredmorbius
        Link Parent

        The vetting problem is an ancient one. Any tracked-trust system should largely correspond to reality:

        • It accrues slowly.
        • It's based on relationships to actions, interactions, and others.
        • It's reasonably tolerant of honest error.
        • It's intolerant of deliberate malice, and responds quickly to same.

        Karma-whoring is effective if it is cheap to accrue, and enduring. It's not if the dynamics are reversed.
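
        One way to make that asymmetry concrete, as a sketch with invented numbers:

```python
# Sketch of the asymmetry described above: trust accrues slowly, honest
# mistakes cost a little, deliberate malice costs a lot. Numbers invented.
GAIN_PER_GOOD_ACTION = 1.0
HONEST_ERROR_PENALTY = 2.0     # roughly a couple of good actions
MALICE_PENALTY = 200.0         # months of accrual wiped out at once

def update_trust(trust: float, event: str) -> float:
    deltas = {
        "good_action": +GAIN_PER_GOOD_ACTION,
        "honest_error": -HONEST_ERROR_PENALTY,
        "malice": -MALICE_PENALTY,
    }
    return max(0.0, trust + deltas[event])
```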

        A problem on Reddit is that low-effort posts to popular subs accrue karma. High-effort self-posts for a long time accrued none (because, ironically, of earlier abuse). Circlejerks are another low-effort gaming mechanism. Account-creation at scale provides another element to reputation-gaming attacks. See /r/hailcorporate for examples, or view the sub's own sidebar for a delicious irony salad: the mods are now shilling cryptocurrency.

        1 vote
      3. [3]
        Account
        Link Parent

        I feel like that promotion system (reputation that is automatically gained) can be taken advantage of and abused by someone with bad intentions.

        I agree with this 100%. Someone can just grind themselves into a position where they can ban a user they don't like.

        1. Amarok
          Link Parent

          Then that user complains, or other users notice, mods investigate it (or admins, if mods won't), and the ban is reversed, and the abuser is locked out from any mod actions forever, or outright banned. Problem solved.

          Prevention of abuse is impossible. Detecting and reacting quickly to abuse is possible, provided things are all done out in the open. Honestly, I'm more worried about false narratives and witch-hunting than I am about rogue mods. Hopefully, with the trust system giving most of the power to the 'og' users of a group, that won't happen, because you'd be witch-hunting and attacking the top contributors of the group, people with whom, by then, most group members should have some level of trust/rapport.

          4 votes
        2. Algernon_Asimov
          Link Parent

          Someone can just grind themselves into a position where they can ban a user they don't like.

          It will take months for someone to work their way up from brand-new user, to good user, to trusted user with low-level moderation abilities, to reliable moderator with mid-level moderation abilities, to respected moderator with high-level moderation abilities. I assume that banning users won't be a low-level or even mid-level moderation ability. That'll be reserved for the moderators at the top of the pyramid. The privilege of banning people will have to be earned.

          Who's got the time and patience to be a good contributor and moderator for that long if their intentions are to be an evil troll mod? Most trolls I know give up after a few days at most, or a couple of weeks if they're exceptionally obsessed. And, remember - if they reveal their evil side too early, they won't make it up to the next level of the moderation pyramid. They might even be pushed down the pyramid!

          This system won't be easy to game.

          3 votes