Daily Tildes discussion - more details about handling removed posts
Sorry, I've been busy with open-source-related things and have been bad about the daily discussions for the last couple of days (late today, and completely forgot about doing one yesterday).
Today I want to ask for opinions about some specific details of how removed posts should be handled. To be clear, this is related to posts that are removed manually by me (and maybe someday by other users, in response to reports, etc.). This is not related to posts that have been deleted by their author.
Specifically, I'd like to answer these questions:
- Should the author of a removed post always know that it's been removed?
- When informing the author that a post was removed, should it be a "passive" notification (like an indicator on the comment noting that it's been removed), or should they get an actual separate notification telling them? The difference is mostly that "passive" ones may never be seen if the author doesn't look back at the comment after it's been removed.
- Should the removed comments/topics still be visible on the user's profile page, when other users look at it? That is, is the comment/topic only removed from its "context" but still visible from their profile, or is it completely removed and no longer visible anywhere?
Please let me know what you think for those specific questions, as well as any other suggestions or concerns you have about removed posts in general.
I can't think of any negatives that would occur if we did notify the user that their post was removed.
Bringing attention to each individual removal action makes removals feel like a much bigger deal, angering more users and taking more of moderators' time when angry users argue against removals.
A community manager I talked to at a certain Internet community said they tested this approach and it backfired: they hadn't anticipated the additional flak from users, and lacked the moderation bandwidth to deal with the volume of complaints. They had thought users would appreciate the transparency, but they rolled the policy back pretty quickly.
This also has to do with the volume of removals, how hard-assed the mods are, and how complex the rules are. If the group doesn't have clear mod policies defined, or if those policies are more than half a page long in brief, people aren't going to bother learning them, and lots of otherwise good users get painted with the 'rule breaking' brush just because they didn't know.
Educating users on what's expected of them is part of this equation. Most sites don't even try to do it, reddit certainly doesn't - hell, they've done their best to make sure no one ever sees a sidebar again. Mods on reddit literally have no reliable way to communicate the rules to their communities.
If the rules are simple, clear, and hard to miss, it should make a lot less work for moderators. Also, let's not forget that here on tildes, there will be editing options (even link and title) and tags, so tildes mods will have a larger toolbox for dealing with problems than mods on reddit. Dealing with a bad title on reddit requires removal and reposting (including communicating back and forth with that user), whereas here it will just take a moment to edit.
That internet community you mentioned - how did they handle bans? Was it easy to get around them?
As someone who's moderated large subs before this is in line with my experience.
I prefer transparency and letting users know, but this is going to piss off a lot of people. From the user's point of view, you are removing a very useful feature they used to have and you are being less transparent than you used to be.
It's better to not have something now and add it later than to have it now and then have it taken away.
Thank you for sharing
Are you THE talklittle? As in, the creator of Reddit is Fun?
It's likely that it's him.
You've obviously never been a moderator on Reddit! :)
I have received actual death threats just for removing people's questions on /r/AskHistorians.
For context: these were questions that were asked in order for the supposed asker to deliver an agenda. The two most common topics for this tactic were the Holocaust, and African civilisation. People would ask questions about the Holocaust, only so that they could argue with our historians to say the Holocaust is an unproven story made up by the Jews. People would ask questions about civilisation in Africa, only so that they could argue with our historians to say that Africans are less intelligent than other people.
When you remove those questions and stop those people spreading their agendas, you get nasty comments in the forum, you get nasty posts in other forums about your forum, you get nasty messages sent to the moderation team, and, if you're really lucky, you get your own private messages containing death threats.
Luckily, I've always been a very private person on this account. However, some Reddit moderators have been doxxed by people like this. The death threats get a lot more serious when they include your real name or your address.
Correct! Nor do I want to be.
I've been on reddit for maybe 4 or 5 years now, but mostly just lurk. I'm thinking with Tildes I'll actually be a part of the community this time, and contribute to it. It was nice talking with you again Asimov, thank you for sharing ;)
Please call me "Algernon". Everyone here keeps addressing me by my surname and it makes me uncomfortable. In my culture, addressing someone by their surname to their face is impersonal and sometimes rude ("Mr Asimov" would be respectful, but just "Asimov" is rude in my culture). You wouldn't call me "Asimov" to my face in conversation, you'd call me "Algernon". I'm used to being "Algernon". Please... everyone... I'm Algernon.
I was under the impression that this was your username, not your real name. I've met people on here called ducks, for instance. Obviously, if I were to ever meet you in person, I would refer to you as either of what you've already mentioned. Nevertheless, this is the internet; if you have an issue with people referring to you as "Asimov" online, I would suggest picking a username that wouldn't offend you when people use it. I hold no ill will towards you, Algernon; have a great day/night. I look forward to the next time we chat.
This is not my real name, but I deliberately constructed this username like a real name: with a first name and a surname. Because it's constructed like a real name, and because I connect strongly to this name, and because I've been using it frequently for 7 years, it is a kind of alternative name for me - above and beyond just a random throwaway user account. I have a strong connection to this name. It has become part of my identity. If someone called out "Algernon" on the street, I would probably turn around to see if they were talking to me (silly, I know!).
I'm also puzzled that, in seven years of using this name on Reddit, everyone referred to me as "Algernon", but in only a month on Tildes, four different people have referred to me as "Asimov" and no one calls me "Algernon". I got used to being "Algernon". I'm not used to being "Asimov". I like being "Algernon".
And, as I said, referring to someone by their surname is rude in my culture, so every time someone calls me "Asimov" instead of "Algernon", it feels like they're being rude (even though they might not be).
I never said or implied that you did. I merely explained, very politely, my preference about how people address me.
I tend to think of it like weeding a garden. Some users are flowers, some users are weeds.
If someone responds to a removal like that, they should be banned from the site completely, no warnings, no second chances. Take a hardline policy like that when people respond to removals like rabid animals. Once they grow up they can earn their way back in.
I think the ease of circumventing bans is the main reason why people feel like they can get away with acting like animals. If there is no easy way for them to evade the ban, there's pressure on all of the users to behave. Those that still misbehave get the axe, preferably in a nice public exposure of their bad behavior as a warning to other users not to act that way.
When an account gets banned, I'd like to be able to visit that user's page and see the cause of the ban - in full. Their comments/submissions, relevant warnings and mod discussions, whatever it was that they did to deserve the ban. Consider it a 'tombstone' stamped on top of the dead account for all to see.
#1 Shadowbanning has often been thought of as a "cop out" or "cheap" move by reddit, with people calling it a way for them to shut people up without having to deal with them. That's exactly what it is, and by design, so people don't go creating more accounts to get around the ban. As soon as they find out, though - which isn't really that hard, since many subreddit mods will tell them - that's exactly what they'll do, and the shadowban becomes largely useless. It's just a deterrent.
#2 This ties to #1 and depends on the style of moderation and the goal of bans. If the goal is for people to know, then there's no sense in hiding it. If the goal is for people to not know, then there's no sense in telling them anything.
#3 Depends on what the comment was removed for. If it was sharing personal information / doxxing, then it wouldn't be good to have it up anywhere. I'm split on things like blatant and continued harassment, though: removing it prevents people from seeing it, but also lets the user scrape by on a sort of "you can't prove I've done anything wrong" vibe.
I don't really have any specific ideas unfortunately. Dealing with everything on a case-by-case basis sounds like a great idea until volume increases and then you have review queues like on StackOverflow that never empty.
Edit: spelling
I largely agree with your assessment. I'm just going to add my opinions: shadowbanning and opaque modding seem ineffective, but they also are... mean. It's really easy to second-guess yourself and think you've been shadowbanned when your comments are just uninteresting. It's also hard to know when you're actually doing things wrong and being moderated. Have some faith in humanity, and dare to treat people with dignity. If you do that and people exploit the site, you can always revert that decision.
Regarding 3), I agree with you as well. My conclusion would be to differentiate. We're deleting a comment, after all, and we do that so the comment can do no more harm. What shape does that harm take, and can we neutralize it without complete removal?
But apart from deleting comments, my idea to deal with offensive, but not directly harmful, comments would be to let moderators attach a sticky comment (always sorted to the top) where the mod can deliver a verbal slap. Then take the original comment, lock it from being edited or voted on, autocollapse it, and when expanded distinguish it graphically (red background, smaller font, idk). This way, viewers of a conversation don't feel as patronized by the removal of comments as tends to happen on reddit, where a whole thread could be nuked and you don't know why, or a comment that was deleted had later sparked an interesting discussion (usually about why what the commenter said was bad). This should solve the issue of comments that are bad when read in good faith, but are actually a learning opportunity if you know you're reading misleading nonsense. (There's a rough sketch of this mechanism below.)
Edit: "lock it from being edited or voted on and distinguish it graphically" -> "lock it from being edited or voted on, autocollapse it and when expanded distinguish it graphically"
I'm gonna be the devil's advocate here, and say that if Tildes wants to maintain our current standards of quality, we will eventually need it. Once Tildes becomes public, there will someday be troll / spam accounts on here. It's not a matter of if, it's a matter of when. If you just ban these users, they'll simply keep creating new accounts; and while the trust system will certainly help mitigate and contain the damage, I don't think it's going to be enough if Tildes takes off. And as long as it's applied conservatively, I don't think it's bad policy; I trust /u/Deimos that the only people to receive this would be assholes.
But shadowbanning seems to be completely ineffective. I'll just create two accounts and stop using account A if, from account B, I notice A's comments aren't showing. Hell, I could just use as many accounts as I like.
As celeo correctly replied, the mere presence of shadowbanning is enough to basically poison the well.
So, you're planning to highlight the offensive comments to draw attention to them? When I open a thread, the comment saying "all faggots should be sent to an island where they can die of AIDS" will be highlighted in red to make sure it draws my eye and catches my attention. Yay! I certainly wouldn't want to miss that literary gem.
Yes, I'm being sarcastic, but there's a serious point under there. If a comment is offensive, we should do the opposite of making it stand out by giving it a bright background or any different formatting which would draw attention to it. If we're going to keep an offensive comment so that people can still see it, we should reduce its visibility. Grey out the text, for instance.
But I don't see why we would keep offensive comments visible at all. "All faggots must die" is still offensive to me, whether it's greyed out or not. Why should I have to see that here on Tildes? Why should it remain visible?
Transparency, education, and context.
Transparency - If a comment is removed completely, the community has no way to determine whether it was a fair action. Even a reason placeholder like "Removed due to offensive slur" leaves questions as to how bad it was, if at all, and whether punishment is being meted out consistently and equally elsewhere.
Education - Concrete examples of what will get locked and, after repeated offense, banned. It's one thing to have the vague and encompassing language of rules, it's another to have actual infractions to refer to.
Context - It's a nightmare to sift through a comment graveyard on sites like Reddit, particularly when non-removed child comments are involved (e.g. [removed]-reply-[removed]-reply, etc.)
I do agree that it should be demoted instead of highlighted. Lock the comment and the thread. Collapse it by default, which significantly reduces visibility (single line of light grey text). Mark the collapsed line with a text tag like "Locked", or perhaps a slightly more specific reason like "Offensive" or "Troll", so a viewer understands why it's collapsed and can decide for themselves whether to view it. Include additional information (e.g. why it was locked) with the comment when expanded.
Unless the tag says "this comment is homophobic", there's no way for me to know that it's offensive and directed at me. Now, I'm a tough old bastard, and I can handle seeing "all faggots must die" on the internet (I've seen a lot worse in my six years moderating on Reddit). But I know that not everyone can handle seeing that.
And, simply putting a generic tag like "offensive" on a comment doesn't tell the poor unsuspecting young gay reader that it's an attack directed at them. Humans are curious animals; we are driven by curiosity. If you put up a sign that says "Wet Paint", people will touch the wall to see if the paint is wet. A collapsed comment with a tag saying "offensive" is going to trigger that curiosity. Someone opens it, and - hey, presto! - they've been subjected to hate speech anyway. Sure, you can say you warned them, but is that really good enough? I can't know in advance how a particular removed comment is offensive, or to what degree. It might be as gentle as "gays are silly" or as harsh as "I wish there were more nightclub shootings like at Pulse to wipe out all faggots".
I simply can't see a justification for retaining offensive material or hate speech in a thread just so that people can double-check that moderators are doing the job to their expectations, and to learn how not to be offensive.
Just a moderator telling our young gay man that what they just read is not acceptable should be enough for most people not to be bothered. It's instant validation that the comment is out of line. And besides, that comment would be the kind I specifically suggested removing from public visibility (but not from visibility on the user's own account).
With all due respect... no, it's not enough. It is not good enough to say to some poor young gay person (or any other persecuted minority) that the offensive and hateful thing they just read, and which has upset them, is "not acceptable". It's too late then. The damage is done.
And, if that is all it takes... if all we need to do is tell people after they've read an offensive comment that it's "not acceptable", what's the point of a moderator? Anyone can tell people that an offensive comment is not acceptable. As per your other reply to me, moderators don't even need to collapse the offensive comment. You, yourself, could just post a reply saying "That's not acceptable", and all the people who might be upset by the offensive comment are now magically not going to be upset. No one else needs to do anything. No removal, no hiding, no disciplining of the writer. Just say "that's not acceptable", and the problem disappears.
As you might tell from my tone, I strongly disagree with your approach. The point of moderation is to get rid of that shit, not allow it to stay there with a pointless "that's not acceptable" tag.
I keep agreeing with what you say, which is incredibly handy, because I'm lazy and it saves me typing out things you've already very eloquently stated.
I think Algernon is kind of misrepresenting my proposal. Keep in mind I want to only use this technique for comments which are not harmful and present a teaching opportunity. Most comments won't fit this.
Dammit, I read that Algernon is a /r/askHistorians mod or so. He must be familiar with the nuking of comment chains in /r/askscience for not providing adequate evidence. Those are the kinds of comments this applies really well to. Remove from visibility, give a short comment about why the evidence is insufficient (or nonexistent), but allow people to read it. It won't kill anyone. I actually find it patronizing to be denied reading a terrible comment, like you're not smart enough to handle a bullshitter. I reckon a lot of readers on there are undergrads. Allow them to learn what makes bad science.
Ah! I understand your approach now. Yes, we encountered a lot of people like you when I was moderating /r/AskHistorians. "Leave the bad answers up so we can learn from them!" Never mind that the bad answers outnumbered the good answers, and often got upvoted more than the good answers, so no one could find the good answers. The bad answers were learning opportunities, so we moderators shouldn't remove them.
Now that I know what you mean, I won't waste any more of your or my time discussing this. Suffice to say that I strongly disagree - especially when those "bad answers" are actually offensive comments (by your own admission).
I was going to presume you're misreading me intentionally, but I'll explain anyways:
Your quote ("all faggots must die") is specifically something I would remove from public view, for the exact reason you outlined. It's quite possible this comment will hurt someone, thus remove. This is indicated by me writing "Beyond the fact that that comment would be the kind I specifically suggested to remove from public visibility."
As I originally wrote ("to deal with offensive, but not directly harmful comments") - this "leave intact, but publicly shame that opinion" approach is reserved for a narrow range of comments. This area is full of judgement calls, but I reckon your "gays are silly" wouldn't qualify either, there's just nothing to shame there except for an unsubstantiated opinion. The kind of comments that this would really apply to are those that have sparked good discussion that is only valuable in context. I'm thinking maybe a racist claiming that blacks are stupid because one study found they have lower IQ, but then people in sub-comments discussing that study's flaws like not controlling for parental wealth etc. Here, a mod could give an evidence-based smackdown and everyone would know that that's just a nutter with a fringe opinion. And they just learned why that fringe opinion is nuts.
I'm sorry, I seem to have a short fuse today, it took me way too long to write this response to be dignified and reasonably reasonable, and I can't really read your response in a way that does not suggest you're intentionally misreading me. I'm out.
A) I seem to have forgotten to mention that the comment should be auto-collapsed. That'll do the job of reducing visibility. Then, if you decide to open it up, you'll know what's up, because the backdrop tells you you're gonna wade into bullshit.
B) I distinctly remember limiting that section of my comment to less harmful comments. That, I think, does not apply to your example, considering it borders on criminal at best.
You wrote "offensive, but not directly harmful comments". What's an example of a "directly harmful comment"?
Yeah, I've never been a fan of shadowbanning. It feels like a low-effort dodge of a problem, and the system even existing on a platform results in people worrying about it.
I like your breakdown of comment type. I'd also really like to steer clear of nuked threads, agreed.
When the mod makes the sticky comment, would you have the comment being addressed still show who posted it? It could lead to the poster being called out for being reprimanded, and the poster would know who they were anyways.
Of course leaving who posted a comment visible only ever makes sense if mods call out the user.
Edit: Or rather, only makes sense if you're not obfuscating bans and deletions.
Dealing with things on a case-by-case basis makes the most sense to me. If people are posting with good intentions but unknowingly breaking a rule, they should be notified. Which is the situation the other comments here have been talking about so far.
But posters with bad intentions, like trolls intentionally picking fights with mods, that's a tough one. Sending them reminders about their past successful shitposts might not result in a positive outcome.
And spambots etc. I don't think it's worth the storage space to send them reminders about each of their spam posts.
I agree. Banning should be a last resort; people should be given the benefit of the doubt and told how they can improve or align with the goals of the community / site. Trolls will troll, and will eventually have to be shut down. And yeah, spambots are trash. :)
Yes. There could be some exceptions to this eventually, but for now I'm thinking it's for the best people are notified. I'm seeing this a lot on reddit recently, where automoderator is being used to "shadowban" people or delete their comments without notification. I find myself pretty often spending 5-15 minutes researching sources to back up a post, only to find out 6 hours later that my comment has zero votes. If someone is doing something that warrants their comments be deleted, in almost all cases they should be told about it and given an opportunity to self-correct their behavior.
Could be tied to the trust/reputation system. Deleted posts are visible only to high-trust users. This would allow users with mod powers to moderate moderation actions taken by other users. If some new user games the system enough to gain delete powers, and other people keep reverting their deletions, that should count as a negative rep mark, ideally eventually removing their ability to delete comments. (A rough sketch of this loop follows.)
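A quick sketch of how that feedback loop might work - the thresholds, names, and penalty amount are all made up for illustration:

```python
# Invented numbers and names throughout - just illustrating the
# "moderate the moderators" feedback loop, not a real Tildes system.
TRUST_TO_SEE_DELETED = 50   # deleted posts visible above this score
TRUST_TO_DELETE = 75        # may remove other users' posts
REVERT_PENALTY = 10         # rep cost when your deletion is overturned

trust = {"newbie": 80, "veteran": 90}   # username -> trust score

def can_see_deleted(user: str) -> bool:
    return trust.get(user, 0) >= TRUST_TO_SEE_DELETED

def revert_deletion(reverter: str, deleter: str) -> None:
    """A high-trust user overturns another user's deletion;
    the original deleter pays a reputation penalty."""
    if trust.get(reverter, 0) < TRUST_TO_DELETE:
        return
    trust[deleter] = trust.get(deleter, 0) - REVERT_PENALTY
    if trust[deleter] < TRUST_TO_DELETE:
        print(f"{deleter} loses the ability to delete comments")

revert_deletion("veteran", "newbie")   # newbie: 80 -> 70, loses delete
```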
Exactly. We need those sorts of feedback mechanisms to keep things honest, but we don't have to give every single user on the site access to these kinds of systems on their first day. Moderation-level activity should be reserved for the top tiers of trust, long term users with records of good behavior. Those users will be fully acclimated to the site and its culture/norms, having built those norms themselves. They are the most fit to be given the ability to judge issues as they arise. Over time those who have this access will become a large group of people, possibly tens of thousands.
I was reading about the whole @hypnotoad fiasco, and I think removed posts should stay on the account. I also think out of common courtesy that you would notify the person via pm (not sure if this is an option yet) much like the way reddit does it.
So long as the reason it was removed isn't because of doxxing or because they included any other information that really shouldn't be public, I agree.
Not just common courtesy, being notified is the only way people can learn from their mistakes. Not being notified (and the users at large not being able to see the reasoning behind removals) also leads to conspiracies and distrust, IMO.
Why? If a post is bad enough to be removed from one part of Tildes, why isn't it bad enough to be removed from all of Tildes? Contrarily, if it's okay to stay on a user's profile, why isn't it okay to stay in the group it was posted to?
Yes, the author of a removed post should always know that it's been removed. That's just polite.
The author should get an actual separate notification telling them their post was removed. In a perfect world, this notification would tell them why the post was removed and how to make more suitable posts in the future. This notification would most likely be in the form of a reply to the removed post. Because the post is removed, no one else will see it or the reply; only the author will see the reply.
The removed post should not be visible on the user's profile page when other users look at it. It should still be visible to the user themself, but if it has been removed from the site for good reason then it should be removed everywhere.
You start out asking about posts, and then include comments. My answers above apply only to posts, not to comments. It's possible to notify someone their post has been removed without disrupting anyone else: just post a reply to the removed post, where only the user can see it.
Things are different for comments. If every removed comment gets a reply from the person who removed it, that will clutter up threads. There would have to be some sort of private notification for removed comments, like a direct message to the user. However, that can very quickly become quite time-consuming. This is a harder problem to solve. There has to be a balance between notifying users that their comments have been removed and keeping threads free of mod-clutter.
Of course. They may just not know. Also, that is so frustrating. Reddit has its little tricks: it will remove the body of a post so the person posting still sees the body while everyone else sees "Removed". Or certain posts just won't show up in new, but that same person can post something else and it will appear in new.
Plus it is just going to make someone angry. Remove it and tell them, but don't be a jerk about it. Reddit mods are so bad about immediately coming back to someone asking why it was removed with, "OH you're arguing, are you! Don't think the rules apply to you, do you!" - when all they did was ask, "Could you please tell me why my post was removed?"
Maybe there should be some interface making it easier for group mods to provide removal reasons (basically something like Reddit's reporting system, including standardised messages like "by sitewide rules" and "by group rule #n"). From that interface the mods could choose to send a separate notification. Something like the sketch below, perhaps.
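Here's a guess at what a standardised removal-reason picker could look like - the reason strings and template are invented, not Reddit's or Tildes' actual wording:

```python
# Hypothetical standardised removal reasons with a templated
# "group rule #n" message, as described above.
from enum import Enum
from typing import Optional

class RemovalReason(Enum):
    SITEWIDE = "Removed: violates sitewide rules"
    GROUP_RULE = "Removed: violates group rule #{n}"
    SPAM = "Removed: spam"

def removal_message(reason: RemovalReason, n: Optional[int] = None) -> str:
    """Render the canned message, filling in the rule number if given."""
    msg = reason.value
    return msg.format(n=n) if n is not None else msg

print(removal_message(RemovalReason.GROUP_RULE, n=3))
# -> "Removed: violates group rule #3"
```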
As several others have already stated, doxxing/illegal stuff should not be visible. For other stuff deleted by mods, I think it should be visible, and moreover, it should be visible to all users that it was deleted (however, the user can still completely delete their own stuff, which would make it invisible). I believe this would improve the quality of Tildes and its user community. Users will know that if they systematically violate the site's rules, it will be observed by others. On the other hand, mods will know that if they moderate in the "bad" Reddit style, that will be observed as well.
There is a lot of interesting discussion and viewpoints here. Ideally some kind of transparent system would work, but this is the real world and sometimes we have to make compromises.
Is there some way to be scientific about it? Maybe some kind of a/b testing?
Of course setting up testing if you have limited resources can be time consuming, and if the user base is small the resulting data might be limited. Plus designing tests and interpreting results can be hard - I'm not sure what metrics you could capture ;) But maybe it would be worth it?
a/b testing is cruel and evil. Lots of people on Reddit get confused when they open up the website/app and it suddenly behaves differently to yesterday - and, not only that, it's behaving differently to everyone else's experience, so when they go to /r/Help to ask what changed, no one can help them.
Precisely right. A/B testing only makes sense in certain circumstances, and I wouldn't consider an online forum to be one of those places. I don't like having 'two versions' of a site active at once, and it creates a host of problems and challenges. It's far simpler and imo better to just have beta features people can choose to test when something comes along that's a big enough change to warrant it.
A more likely approach here is to implement a system and then take the time to actually study and refine it. There will be differences between how we think something will be used and how it really gets used. If we collect usage metrics and feedback from users, we should be able to iterate any system to the point where it serves the purpose it was designed for. That might even require removing one system that's failing and replacing it with a better one.
Most sites don't bother doing that. They stick with whatever vision they originally had for a feature and if it's not working quite right or causing other problems, they just leave it in and move on to the next thing they think will make money. Meanwhile the users are telling them how to fix it every day in meta threads and it seems like nobody listens to the feedback or learns from it.
Those sites, and some software projects I've been aware of, get caught up in the sunk cost fallacy: they've already devoted a lot of time, effort, and money to their particular development approach and they're not going to just throw that all away and start over, no matter how much the users complain.
"[x] Opt-in to shadowbanning beta feature" ;)
Agreed.
The problem here is the users can be wrong too - some popular ideas might sound great but reality and the internet are complicated and sometimes counter-intuitive. This is one possible benefit of some kind of scientific approach.
I think anything that increases moderation transparency is a good idea. To that end, users should be actively notified in some fashion, and any removed posts that can remain should stay on the user's profile. They should be marked as removed from the thread on the profile, and I think it would be best if the reason for removal were displayed next to the comment or post.
I probably have the most removed posts on Tildes, so I have some feedback here.
In most situations, the author should know their post was removed. However in some situations, like with a spammer, you may not want them to know.
Maybe a notification in the form of a private message. I've always realized pretty quickly that my posts were removed; however, it's not always clear why.
I like the way Reddit does this - on a user's profile, a removed comment is only visible to the mods of the appropriate subreddit. If it is removed, it should not continue to be easily visible just by looking at a user's profile page.
I thought about this some more and I also have answers to the questions:
Yes, ideally. It seems like the right thing, and the simple/straightforward thing.
Separate notification. This seems like it would fit with the existing notification system (e.g. a reply displays in your notification feed). So you could see a notification like "Your comment 'XYZ' in 'SOMEGROUP' was removed because REASONS", with the comment displayed inline, and maybe an appeal button? (There's a rough sketch after the next answer.)
No. If e.g. a message was removed because it contains spam links, then it should not still be visible anywhere (except possibly to the user/admins/mods); otherwise it would be possible to host spam/illegal content visible on user pages. They could still be visible to the user, maybe? (That way they act as a place to find and appeal wrongful removals.)
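For what it's worth, here's a rough sketch of the notification shape I have in mind - the field names and the appeal flow are purely hypothetical, not Tildes' actual data model:

```python
# Purely hypothetical shape for the removal notification described
# above, rendered as the feed line "Your comment ... was removed".
from dataclasses import dataclass

@dataclass
class RemovalNotice:
    comment_excerpt: str    # e.g. "XYZ"
    group: str              # e.g. "SOMEGROUP"
    reason: str             # e.g. "REASONS"
    appealable: bool = True

    def render(self) -> str:
        text = (f"Your comment '{self.comment_excerpt}' in '{self.group}' "
                f"was removed because: {self.reason}")
        return text + (" [Appeal]" if self.appealable else "")

print(RemovalNotice("XYZ", "~somegroup", "off-topic").render())
```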