16 votes

Community moderators?

I think I read all of the documentation and at least skimmed most threads here and haven't found much on how moderation will be handled (I'm sorry if I missed a post where this is mentioned, it's very likely!).

Basically, will it be like on reddit, with users volunteering? Or will there be more of an "admin" caste handling that on a higher level? Also I know Deimos made AutoModerator so...

Just wondering, since I've long thought that good moderation is all that really keeps subreddits high quality.

48 comments

  1. [15]
    Amarok

    It's a good question and there are no easy answers. I think we'd like to avoid duplicating reddit's moderation system - the 'god-king' model has problems. I wrote a blurb on this a while back; I'll repost the relevant content here so people can weigh in on it.


    Ask yourself this... why does a sub with 1000 subscribers self-moderate near-flawlessly using nothing but the most basic up/down voting system... while a sub with over 250k subscribers inevitably becomes a cesspool of group infighting, recycled content, and lowest common denominator trash? Why can't we maintain the quality of the original, smaller sub as it grows without resorting to power moderators, complex rules, and bots - which eventually create their own problems?


    Well, what's the real difference between these two places?

    When the sub starts, it is invisible, only attracting subscribers that actively search it out. The people subscribing in the youth of the sub are the people with a direct interest in that sub's topic matter and the intelligence to find it. The sub develops culture and a common behavior due to these early adopters. They make the place interesting enough to attract more users. This core set of users has far more good actors than bad actors.

    As the sub grows, however... (1k to 250k)

    • the rate of submissions rises quickly (5 a day vs 200 a day)
    • more new users (who don't know or care about the culture) find it more and more easily
    • the ratio of moderators to subscribers goes way out of balance (500 to 1 vs 125,000 to 1)
    • old mods are tired; vastly more mod work is required, leading to burnout or laziness
    • new mods are less keen on the subject than the old mods who created the place; good mods are hard to find
    • cliques of users form that want different things, fighting with each other over what's on topic
    • spammers are drawn in by the popularity and begin aggressively promoting crap content
    • bursts of new users and drive-by voting are common due to high profile submissions
    • people begin sharing links to the subreddit in conversations all over reddit causing more of the same
    • content quality slides, driving away more discerning contributors and original members
    • original culture is lost, alternative subreddits spring up as older members move elsewhere
    • these smaller communities are better content stewards than the larger, older subreddit
    • leading to this cycle repeating itself over, and over, and over

    We've all seen this play out on reddit a couple thousand times. Frankly, I don't think this problem can be solved by traditional human moderation. If we don't try something new, then this situation is not going to improve. Basic up/down voting has conclusively proven itself inadequate to the task of large scale moderation.


    We usually turn to large moderation teams at this point.

    This can go either way; we've seen spectacularly good teams and spectacularly bad teams on reddit, with most falling in the middle. The original creators don't necessarily know how to run a large team, or want to. Power moderators end up on many teams, raising concerns about the website's integrity. I know of precious few moderation teams that have managed to grow large and maintain their quality, and the few that did tend to have subs with an incredibly narrow topic focus and heavy-handed censorship.

    The solution is to find a way to somehow, organically, turn those original 5000 people into pseudo-moderators by the time that crowd of 250k people has shown up. This is a different kind of moderation, where the original members quietly acquire distributed power. The key question here is, how do we grant them greater power, and how do we keep it distributed fairly, with power coming from the aggregate in a democratic fashion, rather than from a chosen few?


    In the end, this is all about identifying "good faith actors" in your community and handing them a small increase in power. The hard part is getting that power in the hands of the right people. Who are the right people? There are a few good bets we can place...

    • early subscribers who were there when the place was young and helped build it
    • people who've earned a lot of karma from within the sub itself, both link and comment
    • 'approved submitter' types, often distinguished by flair for their contributions (like askscience experts)
    • people who are actively visiting the place and voting often, even if they don't comment (hardcore lurkers)
    • generally, older users are more likely to know the rules and the culture than new arrivals
    • special 'calibration posts' can be used to identify good faith actors, explained further below
    • perhaps look at how many votes a user has cast within that subreddit as another metric

    So now that we have a way to identify the good faith actors, how do we give them the power?

    It's incredibly simple to implement, really... all you have to do is make sure that their votes count more than everyone else's.

    Know that this is playing with fire. Make their votes too heavy, and you will create power user cliques and turn every subreddit into an echo chamber that buries new ideas. That was part of digg's problem. Make the votes too light, and you'll end up with reddit's problem, where all uniqueness and quality is washed away by a sea of lazy new users crashing the party. This is why so many defaults 'go to shit' and you see users saying that subs with more than 10k people aren't worth visiting. They aren't wrong, this is a very real effect and we need to solve this problem.

    There must be a balance between these two extremes. Once that balance is struck, the voting system itself should be much better at moderating content. Normal moderators become people who merely need to tend to the CSS, or set a few dials and fiddles on subreddit options. The mods won't need to police content as rabidly as they do now, because thousands of subscribers who have been automatically identified as acting in the sub's interests are empowered to do that job.

    I think the number one most important thing we can do for the long term health of this website is to experiment until we find that balance. This is as democratic as moderation can be, drafting thousands of original subscribers to become the moderation team.


    There are five critical points I want to make about any implementation of this idea.

    First, this system must act uniquely in each sub, rather than as one large sitewide system. Let every sub run their own experiments, and we will quickly find out what works and what doesn't. Mods are free to leave this system off as well - I don't expect most to feel the need to use it until they get past 50k subscribers.

    Second, the differences in the weights must be small to safeguard against creating a power user clique. If someone has a weight of 10, that's giving them ten times the power of a new user which is very extreme. I'm not convinced that is a wise choice, though I could be wrong - only by experimenting will we know for sure. We should start small and if necessary move it up slowly until we hit the right balance. I'd start at two or three points maximum. I suspect larger, older subs might need higher totals than newer, smaller subs to achieve the same effect. Something like the sub's age in years might be a scaling cap, max 3 points for a 3 year old sub, max 7 points for a 7 year old sub - or base it on subscriber numbers.

    Third, we need to see both the total number of votes cast and the total weight of those votes as separate numbers. We don't want to lose sight of how many real people voted on something, and we need to see the weighted total to see if this system is having an effect, and if that effect is positive or negative. This is also to help with transparency. Perhaps only the moderators get to see the weights and those aren't made public to the users to avoid people gaming the system - or perhaps the mods can choose to make that information visible to the users if they so desire.
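
    A minimal sketch of how those two totals could be tracked side by side - the (direction, weight) vote representation here is an assumption for illustration, not how any of this would actually be stored:

    ```python
    def tally(votes):
        """votes: iterable of (direction, weight) pairs, direction +1 or -1.

        Returns the raw headcount and the weighted total as separate numbers."""
        raw = sum(direction for direction, _ in votes)
        weighted = sum(direction * weight for direction, weight in votes)
        return raw, weighted

    # e.g. three weighted upvotes and one new-user downvote:
    print(tally([(+1, 3.0), (+1, 1.5), (+1, 1.0), (-1, 1.0)]))  # -> (2, 4.5)
    ```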

    Fourth, and arguably most important - new submissions all start at the same weight, regardless of who is submitting them. We are not trying to create power submitters, we want empowered voters. If someone's submission starts at 3 points vs a new user's 1 point we will end up recreating digg, where power submitters dominate the content. We don't want to go there. At least if they all start at the same weight, it takes a minimum of two people to bump something. Reddit already does some detection of and compensation for vote manipulation; this protection needs to be applied here as well.

    Fifth, these weights only exist on and only affect submissions and comments (submissions being the more important of the two). One's karma is still calculated at one point per person per vote, no weights, exactly as it is now, and I am not in favor of ever changing that. Weights are for helping the ranking algorithms to moderate content and nothing more.

    Seem like reasonable, safe limitations for a starting point?


    Now, let's get specific. I'm going to use a large/default sub as an example.

    Imagine in the mod control panel, we see something like this...

    • [ enable ] vote weights
    • cap vote weight totals at a maximum of [ 5 ] points for this subreddit
    • user has been subscribed more than [ 365 ] days, add [ 1 ] points to vote weight
    • user has been subscribed more than [ 730 ] days, add [ 2 ] points to vote weight
    • user has been subscribed more than [ 1095 ] days, add [ 3 ] points to vote weight
    • user has acquired more than [ 100 ] comment karma from this sub, add [ 0.5 ] points to vote weight
    • user has acquired more than [ 1000 ] comment karma from this sub, add [ 1.5 ] points to vote weight
    • user has acquired more than [ 100 ] link karma from this sub, add [ 0.5 ] points to vote weight
    • user has acquired more than [ 1000 ] link karma from this sub, add [ 1.5 ] points to vote weight
    • user has acquired more than [ 100 ] karma from sister subs [ list ], add [ 0.5 ] points to vote weight
    • user has acquired more than [ 1000 ] karma from sister subs [ list ], add [ 1 ] points to vote weight
    • user has voted well on calibration posts, add [ 1 ] points to vote weight (explained below)
    • user has voted poorly on calibration posts, subtract [ 3 ] points from vote weight (explained below)

    No matter what happens, the max here is 5 points, the minimum is 1 point. The user's actual weight is calculated based on the other metrics adding or subtracting from his total. Anything within [ brackets ] in those examples is set by the moderators however they want it to be set.
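
    To make the mockup concrete, here's a rough sketch of how those dial settings could turn into an actual weight. This is purely illustrative - the metric names are invented, and the numbers just mirror the bracketed values above:

    ```python
    RULES = [
        # (metric, threshold, bonus) - bonuses stack, then the total is clamped
        ("days_subscribed", 365, 1.0),
        ("days_subscribed", 730, 2.0),
        ("days_subscribed", 1095, 3.0),
        ("comment_karma_here", 100, 0.5),
        ("comment_karma_here", 1000, 1.5),
        ("link_karma_here", 100, 0.5),
        ("link_karma_here", 1000, 1.5),
        ("sister_sub_karma", 100, 0.5),
        ("sister_sub_karma", 1000, 1.0),
    ]
    CALIBRATION_BONUS, CALIBRATION_PENALTY = 1.0, -3.0
    CAP, FLOOR = 5.0, 1.0

    def vote_weight(stats):
        """Start at 1 point, add every matching bonus, clamp to [FLOOR, CAP]."""
        weight = 1.0
        for metric, threshold, bonus in RULES:
            if stats.get(metric, 0) > threshold:
                weight += bonus
        net = stats.get("calibration_net", 0)  # see calibration posts below
        if net > 0:
            weight += CALIBRATION_BONUS
        elif net < 0:
            weight += CALIBRATION_PENALTY
        return max(FLOOR, min(CAP, weight))

    # Two-year subscriber with 1500 comment karma in this sub:
    # 1 + 1 + 2 + 0.5 + 1.5 = 6, clamped down to the 5-point cap.
    print(vote_weight({"days_subscribed": 800, "comment_karma_here": 1500}))
    ```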

    Here's what we're trying to achieve with this system.

    • around 10,000 top contributors, oldest subscribers, and moderators have effectively a 5 point vote
    • next tier of 100,000 users are mature subscribers and regular contributors, between a 3-4 point vote
    • next tier of 1,000,000 users have been around long enough to learn the ropes, 2-3 point vote
    • remaining 2,000,000 users aren't even a year old yet, between 1-2 point vote
    • the 150,000 new subscribers this month have a 1 point vote at best
    • bans now have real teeth, earning one's way back in is not easy and takes time
    • brigading voters have a 1 point vote since they don't subscribe or contribute, blunting brigades
    • spammers are less effective at bumping content using new accounts

    In theory this will result in more on-topic and good faith moderation of content using the voting system.


    The drawback, of course, is that the algorithms for the front page and /all and any multis now have a dramatically harder time ranking the content when they compare subreddits with different weights and subreddits with no weights. If the weight is stored as a separate number, and only used when looking at the sub directly - not on aggregate views - this problem can be worked around for the time being. Ideally we'd have a much better ranking algorithm that takes cues from vote weights and the median and mean votes posts get in any given subreddit, but that's a topic for another discussion. ;)

    This means each user's vote weights are unique to every single sub on the site, based on their subscription date, level of participation, and quality of content submitted to each sub. What you earn in one community will not help you in another community - with the exception of that 'sister subreddit' feature. I think that's important to have to promote groups of subs that all form a single community, such as gaming, television, sports, music, images, etc. The NFL subreddit will want to reward participation in the team subreddits, and vice versa.

    A user's effective weight does not need to be calculated in anything near real time, as the vote weight, once calculated, does not change often. It's best suited to some intermittent background processing task.


    Now, let's talk about a way to calibrate the vote weights if age and karma aren't doing a proper job. This is a tricky solution because if it's used improperly it could make quite a mess out of the weights. If it's used wisely, however, it'll guarantee the weights go to the people who vote with the sub's best interests in mind.

    A 'calibration post' is something that no one interested in the sub's topic matter could ever downvote in good faith. Examples of this would be the genre roundups and yearly best of lists put together in listentothis. Anyone downvoting those is either ignorant of the rules or not interested in the sub's topic matter and shouldn't be subscribed in the first place. Moderators choose what to flag as a calibration post. Regular users remain unaware of them.

    The way these work is simple. Reddit tracks these posts in some special way, and keeps track of the users - specifically, did this user upvote more calibration posts than he downvoted, yes or no? Based on the answer to that question, the mods can choose to boost and/or penalize the user's vote weight. The mods might also need a system to 'reset' this record and wipe all subscribers' slates clean.
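
    A sketch of that bookkeeping, assuming votes are available as (post_id, direction) pairs while they're still stored - the only thing worth persisting per user is the yes/no answer:

    ```python
    def calibration_net(user_votes, calibration_ids):
        """Net direction (+1 up, -1 down) across flagged calibration posts only."""
        return sum(d for post_id, d in user_votes if post_id in calibration_ids)

    def voted_well(user_votes, calibration_ids):
        # True: the user upvoted more calibration posts than they downvoted
        return calibration_net(user_votes, calibration_ids) > 0
    ```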


    All of these options, taken together and used wisely, should be able to at least blunt if not outright reverse the effects of Eternal September. It is a bit pathetic that, as of 2015, the internet still hasn't solved a simple social problem from 1993. Most sites don't even try.

    21 votes
    1. [2]
      cfabbro

      Excellent post and well thought out, as usual, @amarok.

      However, one thing you forgot to mention, which we have talked about in the past and which I think is absolutely key to all this vote weighting and choosing moderators by various "good faith" metrics (aka Rep), is Rep decay.

      Power moderators are a problem whether they can acknowledge it to themselves or not. No one person should be a top moderator for several dozen or even hundreds of communities. Too much personal power can be an issue, but that's not even the main reason power mods are a bad idea; it's purely a matter of time and attention. Someone who moderates several hundred communities is not able to give any one community enough time or attention to justify being given such extraordinary power over them.

      Not only that, but we have seen time and time again on reddit moderators who disappeared from a community yet kept their top mod spot, then came back months or even years later and caused serious issues in the community because they, personally, didn't like the direction the community had taken.

      Rep decay is needed to counteract this problem. If someone stops participating in a community, their reputation (for the sake of vote weight and mod tool availability) should decay rather rapidly. I am not talking about someone taking a vacation and then coming back with no power... but 3-6 months without doing absolutely anything in that community? They should be basically back to square one. IMO you should only be able to moderate a couple of top-tier communities at most, plus several sub-communities under them in that same hierarchy. Ideally rep decay should handle that for us.
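
      One simple way to implement that decay, as a sketch: an exponential half-life applied lazily whenever rep is read. The 30-day half-life and 14-day grace period are assumptions - "3-6 months back to square one" implies something in that neighborhood, and the grace period protects ordinary vacations:

      ```python
      from datetime import datetime, timezone

      HALF_LIFE_DAYS = 30.0   # assumed; tune per community
      GRACE_DAYS = 14.0       # assumed; no decay for short absences

      def decayed_rep(rep, last_activity, now=None):
          """Rep halves for every HALF_LIFE_DAYS of inactivity past the grace period."""
          now = now or datetime.now(timezone.utc)
          idle_days = (now - last_activity).total_seconds() / 86400 - GRACE_DAYS
          if idle_days <= 0:
              return rep
          return rep * 0.5 ** (idle_days / HALF_LIFE_DAYS)

      # Six idle months is ~5.5 half-lives past the grace period: rep falls to
      # roughly 2% of its peak, effectively back to square one.
      ```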


      p.s. All this reasoning is why, when I quit using /u/cfabbro actively on reddit, I removed myself as moderator from 99% of the communities I moderated and only kept my position in a few that I genuinely cared about, e.g. /r/suicidewatch, /r/reddithax, /r/ideasfortheadmins, and /r/help, which I continued to participate in and assist moderating in the background until I finally deleted my account entirely.

      The only reason I was given the vast majority of those mod positions was to assist with CSS stuff anyways... but immediately after I was done I should have been removed from them, since it was way, way too many subreddits for one person to make any meaningful ongoing contribution to as a moderator. However, it does bring up a good point: we should perhaps consider allowing actual active moderators to give temporary rights to users who are brought in to help them with stuff like CSS and whatnot.

      And for reference, for those of you who don't know me, and wonder what possible "authority" I might have to speak on this... IIRC @deimos said that at one point, other than the admins, I was the top mod on reddit in terms of the sheer number of communities (many of them defaults) I was on the mod team of. That is not a good thing, BTW!

      14 votes
      1. Amarok

        Yes, we definitely need some sort of decay metric to prevent the system from becoming calcified - something that promotes turnover and participation.

        5 votes
    2. [9]
      arghdos

      One thing missing from this very good discussion (which I'm glad to see has migrated from our wiki to this site) is the context of storing all this information balanced against the maxim of user privacy on ~.

      First, and obviously -- the information that goes into users' vote-weight calculations shouldn't be visible to anyone. I'm even torn on whether the moderators should be able to see a user's vote weight; to be frank, if you're active on a ~group, you'll know who the quality contributors are.

      Second, the stored information should be anonymized, and perhaps encrypted if that doesn't add too much computational effort to the vote calculation. Additionally, with the exception of the calibration posts, no user votes on individual content should ever be stored -- we'll have the karma / vote aggregates, and that should be enough. At most, a time-series representation of karma accumulation over time -- which might be useful to turn down the vote weight for long-time users who go inactive for a while, or who...

      The opt-in model for user privacy on tildes -- which, I assume, we're following -- might conflict with this somewhat. One potential compromise: users who do not opt in simply never gain additional vote weight.

      8 votes
      1. [3]
        Deimos

        Additionally, with the exception of the calibration posts no user votes on individual content should ever be stored

        Maybe I'm misunderstanding, but the site definitely has to store who's voted on what. Otherwise, it wouldn't be possible to prevent the same user from just voting repeatedly on any post.

        However, what I would like to do is make this data temporary. For example, let's say that we close voting on a post after it's 30 days old. At that point, we could delete all the individual vote data and just keep the final total—it's no longer important which individual users voted on the post. But before doing this, the voting behavior definitely could have been used to affect that user's "trust" or voting weight.
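
        As a sketch of how that could look with a PostgreSQL-style schema (the table and column names are hypothetical, and a psycopg2-style connection is assumed):

        ```python
        def close_old_voting(conn, days=30):
            """Fold per-user votes into a stored total, then delete the raw votes."""
            with conn, conn.cursor() as cur:
                cur.execute("""
                    UPDATE posts
                       SET final_total = (SELECT count(*) FROM votes
                                           WHERE votes.post_id = posts.id),
                           voting_closed = TRUE
                     WHERE created < now() - make_interval(days => %s)
                       AND NOT voting_closed
                """, (days,))
                cur.execute("""
                    DELETE FROM votes USING posts
                     WHERE votes.post_id = posts.id
                       AND posts.voting_closed
                """)
        ```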

        10 votes
        1. [2]
          arghdos

          Nope, I'm the one definitely in the wrong here -- I guess I was talking about long-term storage of votes, etc. The 30-day solution sounds good to me.

          6 votes
          1. Amarok

            I kinda like that as a default goal - anything retained for functionality ought to get wiped after a month, with only generalized anonymized stats remaining behind.

            3 votes
      2. [5]
        Amarok

        We don't necessarily need to show the numbers to the users or track any of this data on a per-user basis. In fact I think that might be a mistake - it incentivizes point-chasing behavior and karma farming. Someone, however, does need to keep an eye on the 'real vote' vs 'weighted vote' numbers; otherwise we won't know if the system is working or if we've struck a good balance in the weighting mechanism.

        6 votes
        1. [2]
          BuckeyeSundae

          So we're basically talking about looking at group numbers? I would think we'd be able to get access to the group numbers, abstracted from any particular user. So say one tier has 150,000 members, and we can get a breakdown for that group that shows that this group has:

          • 94% actively voting on a submission on average more than once per day (and would it even make sense to track submission voting and comment voting power separately?)
          • 63% commenting on average more than once per day
          • 36% submitting something on average more than once per day
          • 97% active in commenting, voting, or submitting in a given week

          Being able to get access to group-wide data like that should allow us to validate the system is working without needing to know who is in what group.

          8 votes
          1. Amarok

            I'd be happy with that sort of large, generalized, anonymized style of reporting.

            5 votes
        2. [2]
          arghdos

          track any of this data on a per-user basis

          Hmm, how would that work if all the information that feeds into vote-weight is per ~group? It would seem you would need a record of karma totals / subscribe date per user?

          I guess you could build it like an SQL view, but that might (or might not, now that I think about it) add to the computational cost of updating the vote-weight calculations.

          6 votes
          1. Amarok

            I was thinking perhaps we have 'tiers' for users sitewide, a basic ranking system of some kind. Think of it like guild ranks in an online mmo.

            We can basically bump the users up and down these ranks without needing to keep more than a short-term log of their behavior. The site uses the rank to determine what kind of power the user wields, and we can have all kinds of different systems watching user behavior and pushing them up and down these ranks, without having everything tied to a score or vote total.

            The ranks remain, but all the other data is transient and short-lived.


            Edit: The more I think about this the more I like it. It's a real problem - considering user privacy in the application of these site-wide moderation systems is a new element that doesn't get discussed anywhere else.

            User reports need anon-IDs so mods can give feedback on the reports without knowing who those users actually are.

            Mod actions need anon-IDs so users can give feedback on the actions of moderators without the users knowing who those moderators actually are.

            We talked about letting users in good standing make anonymized comments where only the admins can determine who they are (and for a short-term time window only, to prevent abuse).

            If we want to have this kind of support for anonymization (and I think we need it if we're serious about privacy) then we're going to need a better way to track all of this than just a bunch of scores.

            Even basic ranks (say tier 1 through tier 10) aren't going to be enough, because we're tracking multiple metrics for multiple purposes across multiple axes. We'd need multiple ranks... something like this.

            • User ranks (per-community) for weighting and gating feature access (reporting, assigning or creating flairs, curation and editorial activity, tagging and sorting, etc)

            • Moderator ranks (per-community) for gating access to moderator abilities (removals, editing titles or links, making announcements/polls/stickies, editing stylesheets, turning features on and off, tuning features)

            • Sitewide ranks? Perhaps some general ranking method that really isn't anything more than Santa's naughty-or-nice list, to get an at-a-glance view of a user's participation in tildes as a whole. I'm not sure what we'd use this for but I'm pretty sure it will have valid use cases.

            Users validating mod actions, mods validating user reports - it has a certain symmetry. As long as it's kept pseudo-anonymous I think it might work.

            7 votes
    3. huadpe

      So this is definitely interesting. I definitely see what you're saying about growth challenges changing culture.

      I've been with NP's mod team since about 50k subs and we're closing in on 250k now. It's been an adjustment, and we have to deal with a lot more bad faith content than we used to.

      One option I'd propose, separate from the prioritized vote system, would be automatic removal-on-report for users with a history of constructive reports. So if, e.g., 90%+ of the time a user reports a comment or submission a mod does end up removing it, then it might make sense for their reports to result in removal, with a mod/admin then coming along later to confirm the action or not.

      Not logistically possible on Reddit, but it should be possible here I suppose.
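
      A hedged sketch of that rule - the 90% figure is from above, while the minimum-report guard is an added assumption so one lucky report can't unlock auto-removal:

      ```python
      def handle_report(confirmed, total, min_reports=20, threshold=0.90):
          """confirmed/total: this reporter's past reports that mods acted on."""
          accuracy = confirmed / total if total else 0.0
          if total >= min_reports and accuracy >= threshold:
              return "remove_pending_review"   # act now, a mod confirms later
          return "queue_for_mods"              # normal report queue

      print(handle_report(confirmed=46, total=50))  # 92% -> "remove_pending_review"
      ```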

      4 votes
    4. [2]
      nothis

      I like that this suggestion jumps straight to the core of the problem: careless voting in large subs. Ideally, moderation should only be necessary for things like spam or obvious trolling, but the real difficulty is moderating more subtle content policies, which in theory should be community-curated.

      I don't know how complex this system can be made before it becomes too convoluted or unpredictable, but it seems reasonable that, if subscriber count is directly related to a lack of voting quality, there should be a function that weighs against that.

      4 votes
      1. Amarok

        One of the biggest problems we're going to face is our desire to do too much, too fast. If we fire a scattershot of ten solutions at a problem, how are we going to know which ones are helping, which ones are doing the best job, or even which ones are truly necessary? After spending a decade on reddit bitching about reddit we've all got a mountain of ideas about how things 'should' work, but not one goddamn drop of valid, rigorous scientific data to tell us if we're right.

        We've obviously got to ease into these systems, take it slow, gather user feedback, and gather data about how the systems are functioning with some level of rigor. It's the only way to test things in a dynamic, complex environment like a social media site. Some of these systems might solve the problem on their own right out of the gate, some might exacerbate problems or create new ones, and some features may end up working against each other.

        One good thing, though - knowing the general shape of all of these potential solutions now does help us plan to code for these features in a way that will help us avoid major rewrites and false starts, even if we don't rush to implement all of them.

        5 votes
  2. [7]
    Deimos

    Thanks for posting this, it's definitely one of the most important topics that I haven't yet written up properly in the docs (I have a "Future Mechanics" page started with this on it, but haven't finished it enough to post yet). Let's make this the "Daily discussion" for today, since I think it's a great topic.

    Some of the others in here (especially @Amarok) already went into some of the aspects of what sort of system I'm hoping to have, and I'll try to add a bit more. The crux of it is that I want to try to find some way to turn "choose new moderators from the quality members of the community" into a natural, automatic process. That was always my approach to recruiting mods on reddit, and I feel pretty strongly that it's the right way to do it.

    Massive growth has pretty much always been the death of online communities, and I think a lot of the reason is that we just haven't yet found a good way for them to be able to "defend" themselves as they grow. So overall, the goal is to try to find a way of determining "trusted users", who show through their behavior (both posting and voting/tagging/etc.) that they have a strong understanding of what "good content" is in that group's culture.

    Trusted users should both have stronger votes as well as access to stronger functions/tools. For example, the comment-tagging system that's already implemented (tagging comments as Joke/Offtopic/etc.) will most likely not always be completely "open" like it is now—a user should probably need some level of trust in a particular group to be able to tag comments in it, or maybe only to use particular tags. And then, like @huadpe mentioned in the context of reports, we look at how the user actually uses that tool, and decide if we can trust them more or less with it. Again with reports as an example, maybe it takes 3 reports from less-trusted users to remove a post, but only 1 from someone with high trust.
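
    As a sketch, that report threshold might look like this - the 3-vs-1 numbers are from the paragraph above, but the linear trust-to-weight mapping is just an assumption:

    ```python
    REMOVAL_THRESHOLD = 3.0

    def report_weight(trust):
        """Map a 0..1 per-group trust score to a report weight of 1..3."""
        return 1.0 + 2.0 * trust

    def should_remove(reporter_trusts):
        return sum(report_weight(t) for t in reporter_trusts) >= REMOVAL_THRESHOLD

    print(should_remove([1.0]))            # True: one report from full trust
    print(should_remove([0.0, 0.0]))       # False: two reports from new users
    print(should_remove([0.0, 0.0, 0.0]))  # True: three reports from new users
    ```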

    One point that was mentioned as well that's extremely important is that by needing to build up "trust" like this, getting banned can actually really hurt. On reddit, unless you're a moderator, getting banned has almost zero impact—a brand new account will have access to all the same capabilities/voting strength/etc. as your old one did. Because of that, other than attachment to a particular username, people don't really have any fear of being banned, because it doesn't do anything. If we can change that, it makes it much harder for people to consistently behave badly, but will have little effect on good users.

    7 votes
    1. [6]
      arghdos

      One aspect that hasn't been mentioned yet is the question of account selling. We're proposing a large number of ways to empower users, which I definitely agree with, but conversely we absolutely have to consider that this creates a much larger incentive for power-users to sell their accounts.

      I don't know how much of an issue it will actually be -- as far as I can tell, the types of people who would sell their accounts probably wouldn't be power-users. In addition, with a rep-decay system in place there is a fixed time-window in which this would be an issue, but the window is still there and presumably damage could still be done.

      Something to consider, at least.

      7 votes
      1. [3]
        cfabbro

        With rep decay and -rep mechanics (for misuse of powers), that won't benefit people all that much. Even if someone bought an account with a decent amount of rep in several communities, if they don't maintain it by being active, or if they begin abusing it, they would quickly find themselves in possession of an account no different from a newly created one with zeroed-out rep, other than perhaps the name recognition.

        As for -rep mechanics, that is where action auditing comes into play. If we keep a temporary record of actions (e.g. tag, URL, and link changes), we can easily identify someone misusing the tools available to them, undo their changes, and punish them accordingly with -rep.

        5 votes
        1. [2]
          Amarok

          Rep decay won't help at all with account selling. The accounts that are the most valuable will be the ones with high rep, and they'll get used immediately by whoever is purchasing them, ostensibly to bypass whatever mechanisms we have in place to keep bad actors out of the systems. Sold accounts should be perma-banned as soon as they are detected.

          Rep decay will help a lot, on the other hand, with people who create a lot of alts and leave them sitting around like a quiet army so they can manipulate content. Those accounts are traditionally idle most of the time, which means with rep decay they'll be no different than a new user, and therefore largely ineffective at brigading or changing the conversation or whatever people want to use them for.

          5 votes
          1. cfabbro

            I think you responded while I was still editing my comment. Sorry, that is a bad habit of mine... I really should completely formulate my response and reread it a bunch before hitting reply.

            Added a section on -rep mechanisms, which most definitely will help with sold accounts. I do agree they should be banned as soon as they are detected, though.

            4 votes
      2. [2]
        Amarok

        I was wondering about that. The one thing that always changes when an account is bought/sold is the writing style of the person using it. That sort of thing already has good narrow-AI systems for analysis. We could have a watchdog of some kind that homes in on the writing style of each user over time and raises an alert if that abruptly changes. Of course, that'd require us to be tracking the user's writing style over time, which is probably something we don't want to be doing if we're concerned with privacy.

        Then again, we are tracking each user's posts and what they write - that's part of their posting history, which is as permanent as anything on the site gets - so these tools could just analyze the posting history in an on-demand fashion. It'd hammer on the database, but it can be run on the side against a read-only copy kept just for these sorts of analyses.

        2 votes
        1. arghdos

          Possibly it could be a script that gets triggered if a user's Rep starts falling precipitously? Or if a mod in good standing notices a suspicious link or pattern of submissions (e.g., sock-puppeting comments)?

          We wouldn't necessarily have to store data on the user's writing style at all -- their comment history will still be on the site; the script could randomly select comments from their history and scan them for a change?
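
          A deliberately crude sketch of that on-demand check - real stylometry would use far richer features than this toy fingerprint, and every threshold would need tuning:

          ```python
          import random
          import re

          def fingerprint(texts):
              """Average word length - a toy stand-in for real style features."""
              words = [w for t in texts for w in re.findall(r"[A-Za-z']+", t)]
              return sum(len(w) for w in words) / len(words) if words else 0.0

          def style_shift(old_comments, recent_comments, sample_size=50):
              sampled = random.sample(old_comments, min(sample_size, len(old_comments)))
              return abs(fingerprint(sampled) - fingerprint(recent_comments))

          # alert a human reviewer if style_shift(...) exceeds some tuned cutoff
          ```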

          2 votes
  3. [26]
    Amarok

    A few other thoughts... now that we're on tildes and this discussion isn't within the context of reddit anymore.

    I can't see any way to run things without someone being a real, honest-to-god moderator with expanded access. Someone has to tweak community settings, answer reports, answer questions, fiddle with the styling, and turn community features on and off. We don't need the reddit hierarchy but perhaps we could do something akin to vote-kick systems in video games - where the mods themselves reach a quorum of votes to enable a feature or invite (or remove) someone.

    One of the largest problems with reddit honestly isn't the mod teams - it's the admins' unwillingness to actively engage with their moderation teams, both in designing and implementing new features and in stepping in to handle moderator drama when it does break out... and make no mistake, that'll happen here at some point; in a human-powered system the drama is simply inevitable. If tildes' admins take a more active role that'll help a great deal. It might be worth paying a few people to do this full-time if/when tildes ever has something approaching a payroll. People like krispy and cupcake - only with actual power to help, rather than being limited to lip-service and tied to a corporate agenda.

    Also I think there's a lot to be said in favor of public moderation logs. Sure, we hear all the time about how this will turn into a 'moderator witch hunt' on reddit - but we already have those moderator witch hunts on reddit anyway. Might as well have the transparency since we're going to pay the price either way. All the moderator actions should be logged in some way for the users to review - though perhaps the data is obscured slightly so that the actual name of the moderator doesn't appear, instead it's some anonymized-ID that only a site admin can correlate back to the moderator who performed the action. Maybe it's a different ID every time, maybe each mod has their own ID and it's always the same for them in the logs. We could even potentially enable some sort of feedback mechanism where users can weigh in by reviewing mod actions and voting on them. Slashdot tried that long ago with their meta-moderation review system. I'm not sure how that worked out for them, or if they still use it.

    That also goes for the reporting system. I don't like reports in a vacuum, they are meaningless. I want to see the users who are doing the reporting - again, not necessarily their user names, just some anon-ID that can be used to track all the reports from a specific user, with the goal being to determine if we assign more value or less value to any given user's reports - or ignore their reports entirely. You've all seen what happens on reddit to the report queue when some idiot gets a bug up their ass and goes on a report-rampage - people who do that should lose access to the reporting system.

    I also think we're going to want some kind of mod-backroom feature. All of the mod teams on reddit have either a private sub, slack, discord, IRC channel, or something like that so they can talk about issues in private and come to a consensus decision on whatever issues they are dealing with. I think baking that right into the community as a feature is a good idea... think of it like Wikipedia's talk pages. They have the article, and then a lot of discussion about the what/how/why of the article, but it's not visible unless you go looking for it. Perhaps the content in these backrooms is private for a time, then becomes publicly visible after a 30-day wait, so the users can stay involved. /shrug

    6 votes
    1. [4]
      BuckeyeSundae

      Toward the end of my time actively moderating any large subreddit, I wrote up a small essay on how to build and maintain trust between an authority like moderators and the larger served community. It included a small response to the idea of publicly posting moderator actions.

      The basic idea is that the trade-off isn't witch hunting specific mods vs. not witch hunting specific mods. As you said, if that's the fear, the problem of witch hunting or something similar already exists and happens. The challenge is that moving to a transparent solution increases the stress on a moderator/team, and that potentially chills how active the moderators are willing to be in moderating content. In some cases, that pause can be a good thing (because it means that a person who knows an action may be unpopular will consider why it would be unpopular more carefully). But in other cases, where actions would be good for the community and unpopular anyway, it could decrease the amount of healthy actions that happen.

      When it comes to tildes, I think we can make sure there is enough moderator support that the darker sides of increased transparency, the heightened stress and frozen moderator teams, can be mitigated. I don't think this is an argument against posting moderator actions publicly. It's an argument in favor of making sure the overall balance of the tools and support in the moderator's toolbox lets them feel comfortable engaging their communities in a healthy way.

      But the fact is, if a moderator action reporting system exists, the most likely people using it will be people whose content was removed. Most community members will see the mod logs and think "Oh, okay good. That's a boring thing I don't need to look at ever."


      When it comes to reporting issues with a comment or post, I think it would make sense to surface something like a correlation value between a user's reports and subsequent moderator action (it should be a simple calculation, but with a time limit, going back maybe a month or so). The only information a moderator should be able to see is the correlation value, not anything about the account itself.

      That could later be looped into a moderator-promotion system, where if a user's correlation value stays above 0.8 or something for X amount of time (and above Y total reports), they could get some moderating power.
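
      A sketch of both pieces - the 0.8 bar and the one-month window are from above, while X and Y stay as open parameters with placeholder defaults:

      ```python
      from datetime import datetime, timedelta, timezone

      def correlation_value(reports, window_days=30, now=None):
          """reports: (timestamp, mod_acted) pairs for one user's reports."""
          now = now or datetime.now(timezone.utc)
          cutoff = now - timedelta(days=window_days)
          recent = [acted for when, acted in reports if when >= cutoff]
          return sum(recent) / len(recent) if recent else None

      def promotion_eligible(daily_values, total_reports, bar=0.8,
                             min_days=90, min_reports=100):  # X and Y, assumed
          """daily_values: the user's correlation value, sampled once per day."""
          recent = daily_values[-min_days:]
          return (len(recent) == min_days and total_reports >= min_reports
                  and all(v is not None and v >= bar for v in recent))
      ```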

      8 votes
      1. [3]
        insubstantial

        I think it cannot be stated enough that community members will not like mods, no matter what. Mods will always have to remove rule-breaking posts, so community members will find it unfair.

        Public mod logs add more fuel to the fire. "Look at how much stuff they remove! Hey, I liked that! Rabble rabble!". It's not really a witch hunt, but it does sow discontent.

        3 votes
        1. [2]
          Amarok

          Juvenile community members chanting the 'free speech' mantra don't like mods. They can go to Voat until they wise up; we don't want them. They bring little or no value to the forum in the first place, so they won't be missed. We're not trying to get to 5 billion users. We're trying to get better quality content than the rest of the internet.

          Frankly, those people are not going to like tildes, because it isn't built with them in mind - it's built for a more mature class of user. They'll have low rep until they shape up - we will probably be doing things like analyzing the English grade reading level of all comments to inform the ranking algorithm in comment threads. "Lol" goes straight to the bottom, a five-paragraph post goes straight to the top, and that's before the votes kick in. That's going to punish juvenile behavior and reward adult behavior... and it won't be the only system doing this sort of thing. This forum will be internet-hard-mode by design.

          The more mature users understand the necessity of rules in an online forum, and they don't have problems with them in general, provided there's accountability, no selective enforcement, and the rules make sense. These are also more often the best content submitters, and more likely to become mods themselves. Our heavy-handed mod threads in listentothis generate plenty of praise, far more than the people who bitch about 'restrictive rules'. I think it's partly a matter of making sure the community is on board with the rules and the reasons for those rules. Some subs are rather arbitrary and do a poor job of explaining themselves, or of maintaining consistency - those definitely get more heat on reddit.

          The 'children' (and adults exhibiting childish behavior) have overrun every single corner of the internet today, reducing it to a bile driven memetic hurricane. That's not going to happen here, and heavy moderation is the only way to make it work.

          6 votes
          1. insubstantial

            In r/leagueoflegends, we have a lot of mature people who also do this, mostly because of the inconsistencies of our team in general. But I think that providing a better framework for mods to get started, and to know what's expected of them as community leaders, can help lower the amount of rabble rabble that happens.

            The community doesn't always have to be on board with the rules, but you do have to enforce them consistently, and have good reasoning for why each rule exists, even if they don't like it. For example, putting images and short videos into self posts isn't a popular idea, but it has to be done or else that's all you get.

            Giving that support in guide form to moderators of ~groups may be the difference between chaos and community.

            3 votes
    2. [6]
      arghdos

      That also goes for the reporting system. I don't like reports in a vacuum, they are meaningless. I want to see the users who are doing the reporting - again, not necessarily their user names, just some anon-ID that can be used to track all the reports from a specific user, with the goal being to determine if we assign more value or less value to any given user's reports - or ignore their reports entirely. You've all seen what happens on reddit to the report queue when some idiot gets a bug up their ass and goes on a report-rampage - people who do that should lose access to the reporting system.

      I really like /u/huadpe's idea here -- no need for any identifiers at all, but a moderator's actions on the report (removal or ignoring) will tie back into the value of the user's reports, potentially resulting in auto-removal of user-reported posts (if their record is good) or simply silencing them (if the record is bad).

      6 votes
      1. [5]
        Amarok

        We can give users report-weights as well, and set report-weight thresholds for automoderator activities. If enough users with enough weight report something, action gets taken, or at the least the mod team is notified.

        I don't like the idea of one report being able to remove something, though. Whatever the system is, it's gotta have distributed power rather than concentrated power.

        4 votes
        1. [4]
          huadpe

          I think this really depends on scale. Even at pretty large scales, multiple reports on a comment are just not very common.

          E.g. at 500k+ subs and with hyper-strict rules, it's quite uncommon for a CMV comment to get multiple user reports, and that's usually only for like a controversial top comment on a high-upvote post, and usually they're frivolous.

            Even without any report weight system, we enacted a rule where 2 comment reports cause automod to remove the comment for mod review, and that is a pretty uncommon occurrence (maybe 1-2 a day, and they're usually right).

          I also wouldn't necessarily tell users that their reports are powerful enough to remove comments.

            Alternatively, you could combine report weight with something like a flame-score on the comment based on sentiment analysis, and have high-report-weight reports only trigger removal when combined with a low sentiment score.
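
            Something like this, as a sketch - the sentiment model itself is out of scope here; assume it returns a score in [-1, 1] where lower means more hostile:

            ```python
            def should_auto_remove(report_weight_total, sentiment,
                                   weight_threshold=3.0, sentiment_floor=-0.5):
                """Both gates must trip: heavy reports AND a hostile-reading comment."""
                return (report_weight_total >= weight_threshold
                        and sentiment <= sentiment_floor)
            ```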

          6 votes
          1. [3]
            Amarok

            Reddit's report system is an unmitigated clusterfuck. It's impossible to find, hard to use, offers no accountability, and until recently didn't even let people register the reasons for their reports. That's the reason nobody uses it, even in large subs.

            Also, tildes has no downvote. I think that's the silver bullet that's going to drive orders of magnitude more users into tildes' reporting systems (and also to use the tagging systems). It's the only way they can register their displeasure with content.

            I agree with not telling the users how powerful their reports are. The last thing we need is a cabal running around with a chip on its collective shoulders. We'll have enough problems with ego without feeding it. ;)

            4 votes
            1. [2]
              cfabbro

              and until recently didn't even let people register the reasons for their reports

              The new system isn't much of a change for the better though. The custom user input option is buried at the bottom of the report popup under "breaks the rules - other" and not the main "other" (or even right on the main report page) where it logically should be.

              Having default/static report options is good, but for any nuanced issue (which most reports are, IMO, especially in subs lacking a ton of specific rules) those are useless.

              3 votes
              1. Parliament

                The new system isn't much of a change for the better though. The custom user input option is buried at the bottom of the report popup under "breaks the rules - other" and not the main "other" (or even right on the main report page) where it logically should be.

                I'm pretty sure some subreddits have the custom report reason disabled anyway. I've noticed it missing on occasion.

                3 votes
    3. [15]
      insubstantial

      I don't like the idea of public moderation logs. Users deserve their privacy too. I think a good moderation team can be taught to share information when appropriate, but not everyone's dirty laundry needs to be public. It's not a matter of "mods will get witch hunted"; it's a matter of privacy for all, and not everyone needs to know everything. The culture of "information I don't know exists, so even if it's not my business I need to know it" is part of the problem.

      Most of the issue Reddit had with keeping communities under control was the lack of backup from admins. Users were allowed to think that because they were "the community" they could do what they wanted. Admins never stepped up to reinforce that users could do what they wanted... within the rules the mods set for the community. And then little to no help or communication was ever provided to back that up. And some admins directly undermined mods, fueling that fire.

      3 votes
      1. huadpe
        Link Parent

        NeutralPolitics has a public mod logs system that one of our mods built; it keeps a lot of information private and is pretty customizable.

        https://modlogs.fyi/r/NeutralPolitics

        5 votes
      2. [4]
        Amarok
        Link Parent

        I wouldn't say I like the idea either - nobody likes an anonymous horde sitting on their shoulders second-guessing and constantly misinterpreting their actions. If mods just sit around being judged all the time, there won't be many mods.

        There has to be some level of public accountability, though. We expect no less of anyone working in government, and that's what moderators are - a form of public service self-governance. I like the idea of open moderation logs with anonymized moderator IDs rather than the mod's usernames, because it shuts down the witch hunting. If users can give feedback on the mod's actions while viewing the log that's probably enough.

        As for backroom discussions (such as handling a self-harm situation like /r/suicidewatch), things get a lot more complicated. I'd still like to see some of that stuff become public, though it doesn't have to be immediate; there should probably be some minimum delay (months, most likely). That said, if the mods aren't pleased with how it works, they'll just go to slack/discord and we'll lose that level of accountability and engagement. The backrooms will just turn into ghost towns.

        3 votes
        1. [3]
          insubstantial
          Link Parent

          Or we can encourage respectful questioning and expect mod teams to give real answers. We have a chance to build from the ground up and teach potential moderators best leadership practice.

          3 votes
          1. [2]
            Amarok
            Link Parent

            That we most definitely can do. Reddit has never even attempted to offer any kind of real, useful 'moderator guidance'.

            1 vote
            1. insubstantial
              Link Parent

              There was a botched set of video guides once, but they weren't useful at all. Perhaps some of us should get together and brainstorm what best practice would be, and build that resource before it becomes an issue?

              3 votes
      3. [9]
        cfabbro
        (edited )
        Link Parent

        Auditing is very important though. You can have public moderator logs for auditing that are also anonymized, so there is no witch-hunting. You just assign a temporary ID to the mod behind each action, and if an action is later determined to have been taken in bad faith, the system can still punish the mod who took it accordingly (with -rep)... all without anyone necessarily knowing specifically who took that action.

        Although I would argue that full anonymization is not a good thing either, since it prevents you from identifying habitual abusers. And with @deimos actually willing to do the work required to protect mods from witch-hunts (unlike reddit) and willing to step in when he needs to (also unlike reddit), I think you may be overreacting to the negative potential of user-identifiable public mod logs.

        Another thing to consider is that, ideally, the rep required to get access to mod tools will be low enough that there will be a lot more mods in every community than on reddit. So if one person takes an action but 500 others back it up... witch-hunts against any one particular mod become far less likely.
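
        To make that concrete, here's a minimal sketch of how an anonymized-but-auditable mod log could work (a sketch under my own assumptions - the class, names, and penalty value are invented for illustration, not an actual tildes design): every action is published under a pseudonym derived from the mod's account plus a salt, and only the system can map a pseudonym back to an account to apply the -rep penalty.

        ```python
        import hashlib
        import secrets

        class AnonymizedModLog:
            """Hypothetical public mod log with temporary actor IDs."""

            def __init__(self):
                # One salt per audit window; issuing a new salt later gives
                # every mod a fresh pseudonym, so long-term activity can't
                # be tied to a single account by outside observers.
                self.salt = secrets.token_hex(8)
                self.public_entries = []      # what everyone can see
                self._pseudonym_to_mod = {}   # system-only mapping

            def _pseudonym(self, mod: str) -> str:
                digest = hashlib.sha256((self.salt + mod).encode()).hexdigest()
                return "mod-" + digest[:8]

            def record(self, mod: str, action: str, target: str) -> None:
                actor = self._pseudonym(mod)
                self._pseudonym_to_mod[actor] = mod
                self.public_entries.append(
                    {"actor": actor, "action": action, "target": target})

            def punish_bad_faith(self, actor: str, rep: dict, penalty: int = 5) -> None:
                # Only the system resolves the pseudonym; users never learn who.
                mod = self._pseudonym_to_mod[actor]
                rep[mod] = rep.get(mod, 0) - penalty
        ```

        The point is that both halves survive: every action is public and repeat patterns within a salt window are visible for auditing, while the username behind any given action stays hidden from users.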

        2 votes
        1. [7]
          Parliament
          Link Parent

          I think the mod team itself should be anonymized too. What benefit is there to having a list of individual names in the sidebar of a ~group? In my experience, it only facilitates witch hunts and misdirected mod mail.

          If there's a problem with a mod's behavior, let the mods be addressed as a group when doing so publicly, then take action against the individual privately. No one should face the wrath of the masses for abusing their power or breaking site-wide rules in an online forum.

          2 votes
          1. [5]
            cfabbro
            (edited )
            Link Parent

            I disagree there as well. If nobody outside the mod team can identify the mods in a community, then there is no way non-mod users can complain about them and their actions, even with legitimate reasons to do so.

            Mods are not above being shitheads. There is a perfect example of this in the powermod /u/davidreiss666 on reddit. He and I have butted heads a ton of times over the years over his vindictive and abusive behaviour. Someone would reply to something he said elsewhere in a way he didn't like, and he would then abuse his powers in the subs he mods to punish them. He did that all the time back when I moderated with him, and he probably still does. How the hell he is still a mod anywhere on reddit is a mystery to me, since he has proven time and again that he is willing to abuse his authority.

            So how can people like that be called out for such abuses if you cannot identify which communities they mod, what actions they took, and where they are potentially abusing their authority?

            2 votes
            1. [4]
              Amarok
              Link Parent

              You can visit the mod logs and downvote/report his actions. He won't be a mod for long. Also, if mods are anon, he can't get internet famous and inflate his ego with mod positions. Plus, I don't think we're going to be allowing one person to be a mod of a lot of places on tildes.

              I think refocusing on 'mod actions' rather than individual mods has some real potential.
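
              As a toy illustration of "he won't be a mod for long" (the function, threshold, and ratio below are all made up for the sake of example - nothing like this has actually been settled on): if reports on a mod's logged actions consistently outweigh endorsements, the system could revoke access automatically.

              ```python
              def should_demote(reports_per_action, endorsements_per_action,
                                min_actions=20, max_report_ratio=0.5):
                  """Hypothetical auto-demotion rule fed by public mod-log feedback."""
                  if len(reports_per_action) < min_actions:
                      return False  # too little history to judge fairly
                  reports = sum(reports_per_action)
                  endorsements = sum(endorsements_per_action)
                  total = reports + endorsements
                  return total > 0 and reports / total > max_report_ratio
              ```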

              1 vote
              1. [3]
                insubstantial
                Link Parent

                But I think it's a good thing to have visible mods. People are more likely to follow those that have built a rapport with their community. Anonymizing mods removes the ability to do that, and may backfire in the long run. "Who are our mods? Are they just bots? Do they even know our community?"

                I think people are more likely to be disgruntled with anonymous mods than they are to witch-hunt and abuse non-anonymous mods.

                5 votes
                1. [2]
                  Amarok
                  Link Parent

                  You have a point about community leadership. Maybe that can become a step in the mod rankings.

                  Imagine a new user hits the threshold for moderator access, and it's their first time modding anything. We can PM them the mod guidelines and ask if they'd like to try it. If they opt in, we give them a day or two to play at moderator where the controls are visible, then it goes dark again. The other mods can review their actions. If that review goes well, they get to mod more and more often. They are still an 'anonymous' mod at this point.

                  If they do well enough, perhaps that's when we hit 'official invite' territory. Rather than quiet modding from time to time, they get to become a full time moderator. At that point, though, they lose their anonymity and become an official mod of their community or hierarchy. Again they have to opt-in to this leadership role.

                  Perhaps that's when they get access to the mod backroom and join the 'team' - and it becomes a human team driven system, with some kind of mod-only voting mechanism to help the mods self-govern.
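
                  Sketched as code, that ladder might look something like this (the stage names and opt-in/review gates are just my reading of the idea, not a settled design):

                  ```python
                  from enum import Enum, auto

                  class ModStage(Enum):
                      ELIGIBLE = auto()    # hit the access threshold, hasn't modded yet
                      ANON_TRIAL = auto()  # opted in; a day or two with visible controls
                      ANON_MOD = auto()    # trial reviewed well; mods quietly, anonymously
                      OFFICIAL = auto()    # opted in again; named member of the mod team

                  def advance(stage, opted_in, review_passed=False):
                      """One rung of the hypothetical mod-ranking ladder."""
                      if not opted_in:
                          return stage  # every step up is voluntary
                      if stage is ModStage.ELIGIBLE:
                          return ModStage.ANON_TRIAL
                      if stage is ModStage.ANON_TRIAL and review_passed:
                          return ModStage.ANON_MOD
                      if stage is ModStage.ANON_MOD and review_passed:
                          return ModStage.OFFICIAL  # anonymity is given up only here
                      return stage
                  ```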

                  1 vote
                  1. insubstantial
                    Link Parent

                    I think that by the point you make someone a mod, you need to know what you're getting. It's not fair to have the community see a new mod, even if that mod is anonymous, and then take them away after a couple of days of live training. Then you have to explain why they're not there anymore.

                    Humans don't like rapid change. They get used to things being a certain way, so if there's a new person, and then they're gone, that's something they have to process, which leads to confusion, which leads to hijinks.

                    Plus, this new mod might take actions that aren't within the rules we've brought them on to model and enforce. Mods leaving up posts that break rules, or interacting in posts that break rules, can really be a detriment, as it's a passive condoning of the behaviour and undermines the whole team. It takes a long time to undo damage like that. I wouldn't trust people to "play at moderating", especially with a larger community. That's why in the subreddits I moderate, there's an observation period before any new mods can take actions. We have to know, as a team, that anyone we bring aboard will be able to learn how things are done in our community.

                    2 votes
          2. Amarok
            Link Parent

            You know... going from where we're already headed to fully anonymous mod teams isn't much more of a stretch. That could really put this bullshit 'mods vs users' issue to bed once and for all.

            As someone who is in the top mod spot of a default sub on reddit, I could do without the constant stream of PMs that are supposed to be going to modmail, as well. :P

            1 vote
        2. insubstantial
          Link Parent

          I don't know just how important outside mod log audits are. Our communities aren't stupid; they can tell when weird things are happening, or when someone on a mod team makes a mistake. We should encourage users to bring it up respectfully, and train mods to respond in kind.

          Perhaps make top mods take a short training on how to lead before they can create a community?

          1 vote