9 votes

Topic deleted by author

5 comments

  1. Flashynuff

    Unfortunately, if you have public comments, it will fall on someone to moderate them. Make it easy on yourself. I would try to avoid re-inventing the wheel, because you'll just end up having to re-invent all the moderation tools as well. I'm not sure what format you're going for, but you could always use something like Disqus to get a lot of common features right out of the box. That will leave you free to focus on building a quality community and developing other features.

    10 votes
  2. DataWraith
    (edited)

    I'd say you can't really get away with not implementing a moderation queue.

    Apologies if this is too obvious, but to ease the burden, you can implement a variety of auto-moderation rules (Akismet, phrase blacklist, phrase whitelist, maybe even a trainable spam/hate filter).

    My suggestion, though, would be to simply not display a comment at all until it has been approved by a moderator.
    This really kills the joy of posting hateful comments -- the troll writes their comment, and then... nobody ever sees it, so they stop doing it.

    You can manually or automatically whitelist commenters after they contribute a good comment. This doesn't necessitate a full-blown login system: you can ask the user to provide their email address (and then don't display or use it in any way other than as an ID) or a code phrase that identifies them uniquely.
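    A minimal sketch of that flow, tying together the phrase blacklist, the hold-until-approved queue, and the auto-whitelisting (all names here are illustrative, not a prescribed design):

    ```python
    # Hypothetical moderation pipeline: comments are hidden until approved,
    # and approving a comment whitelists its author for next time.
    import hashlib
    from dataclasses import dataclass

    BLOCKED_PHRASES = {"buy cheap", "free money"}   # manual phrase blacklist
    APPROVED_IDS: set[str] = set()                  # whitelisted commenter IDs
    MODERATION_QUEUE: list["Comment"] = []

    @dataclass
    class Comment:
        author_email: str
        body: str
        visible: bool = False

    def commenter_id(email: str) -> str:
        """Opaque ID so the email itself is never displayed or reused."""
        return hashlib.sha256(email.lower().encode()).hexdigest()

    def submit(comment: Comment) -> None:
        body = comment.body.lower()
        if any(phrase in body for phrase in BLOCKED_PHRASES):
            return  # silently drop obvious spam
        if commenter_id(comment.author_email) in APPROVED_IDS:
            comment.visible = True  # known-good commenter: publish immediately
        else:
            MODERATION_QUEUE.append(comment)  # held until a moderator approves

    def approve(comment: Comment) -> None:
        comment.visible = True
        APPROVED_IDS.add(commenter_id(comment.author_email))  # whitelist author
    ```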


    Edit: I wanted to expand a bit more on what I imagine.

    First, I didn't make it clear enough that I think you mostly have to moderate content manually -- no filter is perfect, especially with regard to more-or-less subtle trolling.
    However, that doesn't mean that technology can't help expedite the process.

    For example, in addition to a manual blacklist/whitelist, a statistical spam filter can be a line of defense.

    The classification can include a "From:" header that specifies the identifier of the commenter (e.g. an HMAC of their name and email).
    Users who comment repeatedly can thus automatically acquire a reputation (as far as the spam filter is concerned), but that reputation will not necessarily override what they are actually saying.
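    As a sketch, that identifier could be built with Python's standard hmac module; the secret, the header format, and how the filter consumes it are all assumptions here:

    ```python
    # Illustrative only: a stable, opaque commenter token prepended as a
    # synthetic "From:" header for a token-based spam filter to learn from.
    import hashlib
    import hmac

    SERVER_SECRET = b"replace-with-a-long-random-secret"

    def commenter_token(name: str, email: str) -> str:
        msg = f"{name}\x00{email}".encode()
        return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

    def classifier_input(name: str, email: str, body: str) -> str:
        # The filter sees the token as just another feature, so repeat
        # commenters build up a reputation without it overriding the
        # evidence from the comment text itself.
        return f"From: {commenter_token(name, email)}\n\n{body}"
    ```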

    You can sort the moderation queue by spam score and quickly approve or reject the lowest- and highest-scoring comments first. Another advantage of statistical filters is that they generally come with a way to display the top/bottom 10 words or phrases that triggered them. You could highlight those in red/green in the moderation-queue message view, so that it becomes easier to see at a glance whether a message is malicious.
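    A rough sketch of that sorting and highlighting, assuming a naive-Bayes-style filter that exposes per-word log-probabilities (the dictionaries here are placeholders, not any real filter's API):

    ```python
    # Per-word log-odds: positive leans spam (highlight red), negative
    # leans ham (highlight green); the total orders the moderation queue.
    def word_scores(body: str, spam_logprob: dict[str, float],
                    ham_logprob: dict[str, float]) -> list[tuple[str, float]]:
        return [(w, spam_logprob.get(w, 0.0) - ham_logprob.get(w, 0.0))
                for w in set(body.lower().split())]

    def annotate(body: str, spam_logprob: dict[str, float],
                 ham_logprob: dict[str, float]):
        scores = word_scores(body, spam_logprob, ham_logprob)
        total = sum(s for _, s in scores)
        top10 = sorted(scores, key=lambda x: x[1], reverse=True)[:10]
        bottom10 = sorted(scores, key=lambda x: x[1])[:10]
        return total, top10, bottom10

    # e.g. queue.sort(key=lambda c: annotate(c.body, spam_lp, ham_lp)[0])
    ```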

    5 votes
  3. Amarok

    I think the simplest, quick-and-dirty way to get there is to focus on vetting users. What you can do is have a trusted/non-trusted switch of some kind. Comments from people in the non-trusted group are displayed (highlighted) only to mod/admin types, under the assumption that if a comment is any good, they'll just approve it. After some number of approvals, the user is auto-elevated to trusted status.
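    A minimal sketch of that switch, with the approval threshold and field names as assumptions:

    ```python
    from dataclasses import dataclass

    AUTO_TRUST_AFTER = 3  # illustrative: approvals before auto-elevation

    @dataclass
    class User:
        name: str
        trusted: bool = False
        approvals: int = 0

    def comment_visible_to(author: User, viewer_is_mod: bool) -> bool:
        # Non-trusted users' comments show (highlighted) only to mods/admins.
        return author.trusted or viewer_is_mod

    def approve_comment(author: User) -> None:
        author.approvals += 1
        if author.approvals >= AUTO_TRUST_AFTER:
            author.trusted = True  # auto-elevate to the trusted group
    ```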

    This sort of simple system can go a remarkably long way towards keeping a community healthy. It's easy to get distracted by all sorts of ideas, tweaks, and tricks to extend the trust model (as I often do in ~tildes threads), but the simple fact remains that this gatekeeping method has always worked for smaller communities, and most communities never get big enough to need anything more sophisticated. You should be able to get into the 250k-user range before it starts to break down.

    5 votes
  4. vord

    Moderation is inevitable. I personally like sites with community moderation and a meta-moderation mechanic. I'd pair that with an invite-only scheme that swiftly bans up and down the invite chains of bad actors. That should minimize the burden a bit, especially if invites are capped at a manageable threshold.
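    One possible shape for that scheme, sketched with an illustrative invite cap and a ban that sweeps down the chain (structure and numbers are assumptions):

    ```python
    INVITE_CAP = 5  # illustrative per-user invite threshold

    invited_by: dict[str, str] = {}        # child -> inviting parent
    invites_of: dict[str, list[str]] = {}  # parent -> invited children
    banned: set[str] = set()

    def invite(parent: str, child: str) -> bool:
        if parent in banned or len(invites_of.get(parent, [])) >= INVITE_CAP:
            return False
        invited_by[child] = parent
        invites_of.setdefault(parent, []).append(child)
        return True

    def ban_down_chain(user: str) -> None:
        """Ban a bad actor and everyone downstream in their invite chain."""
        banned.add(user)
        for child in invites_of.get(user, []):
            ban_down_chain(child)
    ```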

    Perhaps allow questions to be submitted publicly, but only display them once someone replies, and allow the question submitter to post only in that question thread.

    3 votes
    1. Comment deleted by author
      1. vord

        Possibly, but not necessarily.

        Suppose there's a chain of 1 grandparent, 10 parents, 200 children split evenly among parents.

        20 children bad from 1 parent? Ban parent. Grandparent has a recurring pattern of inviting parents that get banned? Ban Grandparent, but not necessarily all the parents.

        It gets hairier when the 'bad actors' (think bot networks) mix in good actors to hide. Having a measure that makes it easier to distinguish them helps, like requiring X quality posts over a rolling 3 months and pruning invites from accounts that don't meet that metric.
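        A sketch of those heuristics; every threshold here is illustrative, not a recommendation:

        ```python
        BAD_CHILD_RATIO = 0.5    # e.g. 10 of 20 children banned -> ban parent
        BAD_PARENT_LIMIT = 3     # banned parents before grandparent is banned
        MIN_QUALITY_POSTS = 5    # quality posts per rolling 3 months

        def should_ban_parent(children: list[str], banned: set[str]) -> bool:
            bad = sum(1 for c in children if c in banned)
            return bool(children) and bad / len(children) >= BAD_CHILD_RATIO

        def should_ban_grandparent(parents: list[str], banned: set[str]) -> bool:
            # A recurring pattern of banned parents implicates the grandparent,
            # without necessarily banning the remaining parents.
            return sum(1 for p in parents if p in banned) >= BAD_PARENT_LIMIT

        def keeps_invite_privileges(quality_posts_last_3_months: int) -> bool:
            # Prune invites from accounts below the activity bar, making it
            # harder for bot networks to hide among good actors.
            return quality_posts_last_3_months >= MIN_QUALITY_POSTS
        ```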

        4 votes