How will Tildes work with bans?
Basically, with groups being admin controlled rather than the reddit system of mods, will all bans be site wide or should group bans be possible? Let's say somebody is harassing or being an asshole on ~LGBT, should they be banned from ~LGBT or from the website?
If you steal an apple at the grocery store you're getting kicked out of the building, not the produce section.
With all the difficulties real-life analogies for digital concepts usually have, I think this is a good one. On the other hand, following your analogy, it's much easier to keep someone out of the building than to keep them out of one section. That said: since it's easy to revoke someone's rights to post in or read a specific group, does the extreme step of banning the user from Tildes entirely need to be taken?
Yes. Someone who is an arsehole in ~LGBT is someone who deserves a sitewide ban because they've been an arsehole in ~LGBT.
You say this is an extreme. I'd suggest the reasonable way to avoid a sitewide ban is "Don't be an arsehole anywhere on the site". This is not an onerous requirement. It's not burdensome.
I saw the posts of the person that was banned. It wasn't because they were debating what it means to be trans or if it was right or wrong. They were banned for going into an intro thread and telling people they have an illness. It was not the right place for such a discussion and this was repeatedly pointed out to them in the thread. Their choice of language was also far from neutral.
I saw their post on Reddit too, and let's just say it was one-sided.
Neither of these features - temporary bans, thread-specific bans - exist on Tildes yet. It's an all-or-nothing deal: you get banned from the whole site forever, or... nothing. There will probably be more nuanced moderation tools available here in the future but, for now, a full permanent ban is it. Deimos has to use a sledgehammer because it's the only tool in the toolshed so far.
EDIT: Inserted "whole" in "the whole site".
A person's inability to choose a suitable group to post that discussion in would make that person an arsehole. The behaviour - trolling a group of people getting support - makes the person an arsehole.
What if someone posts something that is an unpopular opinion for that subgroup and subsequently gets banned for it? Should it still be sitewide?
Nobody will get banned for having an unpopular opinion. Hate speech is against the rules, not unpopular opinions. If you can't express your opinion without using hate speech, there's a problem.
Unless Tildes is 100% algorithm-driven, you can't say this as a fact. Moderators are humans. And even if Tildes were 100% algorithm-driven, there are humans creating those algorithms. Every community starts off saying "No, of course not..." A lot of subreddits did too.
I'd suggest that holding an unpopular opinion isn't the same as behaving like an arsehole.
Just like @alexandre9099 said, it makes sense that they're first banned from the group and then from the site if they keep it up.
That's true, but a website isn't the real world and how something is handled online isn't necessarily going to be how something is handled in real life.
If someone has proved beyond a shadow of a doubt that he is intolerable in a given group, why do we want him in any other groups? Makes zero sense to me, either online or IRL.
That is true, but what if someone is always derailing discussion on ~LGBT because of politics but they're fine on the rest of the site? Do they get a site ban?
For the record, I don't believe people will be banned for derailing discussions. It's being disrespectful that will earn bans.
My two cents: we're a community, and I don't believe you can disrespect me in one group and not do it in another group. I'm personally in favour of either a full ban or nothing. I don't really see the point of a group ban.
Agreed yeah, group bans just don't make sense when working with so few groups that are also admin controlled. What about temp bans?
I can see the merit, but personally don't think it's worth the effort.
For things that toe the line, I would say send a warning, and if they still don't correct the offending behaviour, ban them. For obvious trolling and such, straight up ban. Why waste anyone's time?
If they're intentionally derailing discussion anywhere, for any reason, shove their ass out the door and never look back at it. If they are justified in deserving a ban on any group they are justified in getting banned on the entire website.
Now if they are banned unjustifiably from one of the groups for any reason, that's an entirely different problem and we need to address it; however, that's not how Tildes works at the moment.
We haven't really discussed this topic with the users here before. It'd be nice to get everyone's opinions on how they'd like to see this system work.
I like the 4chan model, believe it or not: warnings, temporary bans of varying lengths, and permanent bans.
These aren't levels that need to be worked up to, any can be applied at any time. It's based on level of infraction, and likelihood of learning from your actions. You're also given specific reasons behind your ban.
While the rules and allowed conduct on 4chan are certainly more fast and loose than what we'd like here, their system has levels of nuance which I think are important to have. Not all infractions deserve a permanent ban, and not all those who receive permanent bans are incapable of learning from their mistakes and growing as a person.
Such a system would certainly need a robust team of trusted moderators though.
I think a ban in a group shouldn't, under any circumstances, automatically affect the groups above it (the root of the group where the person got banned); it should just kick a notification upwards for review. It should, however, affect that group's subgroups. So if someone did something that doesn't mesh well with a group, it only affects that group, but if someone did something REALLY bad, you can keep kicking it up the chain.
First, the user is banned from the group; if it's repeated somewhere else, the user should be banned from the website, IMO.
EDIT: or maybe a temporary ban, like 1 day or 1 week depending on the offense
I hope the default ban is really a ban on commenting and making posts. Voting, as it's only positive, does not really need to be banned. It also makes sense if it's gradual - tied into the same system that decentralises mod duties. Maybe something like a cap on comments per day before a full-fledged ban, also group-specific, seems appropriate.
Voting can also be negative: circle-jerking, for example. Someone could be banned, make 5 different accounts, and still circle-jerk their other posts 5 times.
Banned for me means banned. Your account is gone, you done fucked up good, we don't want you here, bye.
I don't think I've ever heard of a site where voting is banned, but either way I agree.
Depending on the offense, I think there should be site wide bans. If someone is being disruptive and spewing hate at a group, they're awful people and I don't want to see them on any other group. They should be held accountable for their actions. If someone does something harmless like flood a specific group with memes that no one wants to see, I think they should only be banned from that group.
I always wonder about bans: how do you enforce that a banned user doesn't just create another account and get back onto the site? Right now it's invite-only, but at some point it'll be open to everybody. Does there need to be a cost for creating an account to stop people from smurfing?
There are options to explore here.
While the site is invite only, banned users have to get a new invite code to re-join. It's not hard, it just takes time (days, possibly weeks). It's a small barrier to entry. Even that is more than most trolls care to invest.
Once it's open, though, that presumes we've got the mature trust system in place. That means a new user basically has only three things they can do - comment, submit stuff, and vote. That's it. No tagging, no editing/modding, no access to any real power. Earning that will take months and earning the highest levels might take years. Also, those new users will have the lowest vote weight and no access to special stuff like the 'exemplary upvote' we've discussed. In order for them to earn any of this stuff, they have to participate well... for months at a time. Inactivity will cause rep to decay. Even when it's earned, it's only earned in the groups where they participate.
So, basically, we're making new users into powerless gimps. This won't bother most people, I doubt they'll even notice. For most people, they aren't here to be jackasses, they are just here to join in on the conversations. That means they'll only have to go through this earning of trust once.
For abusers, they'll have to do this every single time they get caught and outed. We won't do IP-bans because they are useless, but there are some fascinating possibilities for fingerprinting users by ip/geolocation, times they are active, browser, os, screen size - basically all of the horrible things facebook is tracking about all of its users right now. These could be used to detect ban evaders, put them on mute or limited posting.
We'd have to discuss how comfortable people here are with us using tactics like that, it's not a very privacy-focused thing to do. If the data is short-term (like a week) and then deleted, it can reduce the privacy concerns somewhat. We could even obfuscate this info like the current email account recovery system - store a hash of the fingerprint without storing any of the actual information about the user, because all we want to do is compare the fingerprint, we don't care about the actual details.
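For what it's worth, here's a minimal sketch of what that hashed-fingerprint comparison could look like. Everything in it (the function names, the exact signals, the one-week expiry) is made up for illustration; it's not real Tildes code:

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical sketch only; none of these names exist in the Tildes codebase.
FINGERPRINT_TTL = timedelta(days=7)  # keep fingerprints short-term, then treat them as expired

def fingerprint_hash(ip_subnet, user_agent, timezone, screen_size):
    """Hash a few coarse signals together so the raw details are never stored."""
    raw = "|".join([ip_subnet, user_agent, timezone, screen_size])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def looks_like_ban_evasion(new_hash, banned_hashes, now=None):
    """banned_hashes maps fingerprint hash -> datetime it was recorded for a banned user."""
    now = now or datetime.utcnow()
    recorded_at = banned_hashes.get(new_hash)
    if recorded_at is None:
        return False
    return (now - recorded_at) <= FINGERPRINT_TTL
```

The trade-off with storing only the hash is that you can only check for exact matches; if any one signal changes, the fingerprint no longer lines up.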
Some groups might limit the posting capabilities of new accounts as well. That'll depend on what mod tools we use. Comments from new accounts could, for example, be invisible to all but a certain tier of trusted users, marked in some way for them to review as they read the thread. A simple unhide or unhide-on-upvote system could let most users here easily approve of that new user's content, while spammers and trolls won't get the approval. (Rough sketch below.)
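Here's a rough sketch of that unhide-on-upvote idea, again with made-up names and thresholds (TRUSTED_TIER and NEW_ACCOUNT_DAYS aren't real settings):

```python
from dataclasses import dataclass

# Hypothetical sketch of "hidden until approved" comments; not actual Tildes code.
TRUSTED_TIER = 3        # made-up threshold for a "trusted" user
NEW_ACCOUNT_DAYS = 14   # made-up definition of a "new" account

@dataclass
class User:
    trust_tier: int
    account_age_days: int

@dataclass
class Comment:
    author: User
    votes: int = 0
    approved: bool = False

def comment_visible_to(comment: Comment, viewer: User) -> bool:
    """Comments from new accounts start hidden; trusted users still see them for review."""
    if comment.approved or comment.author.account_age_days > NEW_ACCOUNT_DAYS:
        return True
    return viewer.trust_tier >= TRUSTED_TIER

def record_vote(comment: Comment, voter: User) -> None:
    """An upvote from a trusted user doubles as approval ('unhide-on-upvote')."""
    comment.votes += 1
    if voter.trust_tier >= TRUSTED_TIER:
        comment.approved = True
```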
I don't think we have a firm plan for this sort of thing yet. We've talked about all kinds of ideas/methods, but all of them do end up punishing new users in some way, even if it's a light touch. I'm not sure there's a way to do this without treating new users as a 'risk' or 'untrusted' in some way.
We could also do something like charge a buck a year for membership here. Fuck up, we keep the buck, nuke the account, and you get to pay to spam/troll every time. A lot of sites use this, The Well and SomethingAwful for example. It does work. That could become free depending on your level of trust, so that trusted users don't have to pay a thing, your contributions are your payment in a way.
How will we do it? I don't know. Lots of possibilities to explore.
Normally IP bans would be the way to go, but even those have major pitfalls.
IP bans are just dumb, even dumber if the IP is dynamic. With my previous ISP (and my current ISP, with some tinkering) the IP could be changed just by restarting the router, so if anyone else got my banned previous IP and decided to use the service, he/she would be banned for absolutely no reason.
Maybe this is possible with IPv6 where each IP should correspond to one device
Yep, IP bans have serious issues.
That is true, right now I don't really know how it should be dealt with. Does tildes even ban users currently?
Deimos has banned three people that we know of:
Someone who abused the comment tags (leading to those comment tags being disabled for a while).
Someone who posted transphobic comments in ~lgbt.
Someone who repeatedly made bad faith posts.
(FYI: @dubteedub).
There are possibly a few more bans we don't know of. Deimos announced the first few bans, so he could tell us what people were being banned for. There haven't been any announcements for a month or so, but that doesn't necessarily mean he hasn't banned anyone recently.
All bans here are currently site-wide, because the feature to ban users from a single group has not been built yet. I mean, the ability for anyone apart from Deimos to ban users hasn't been built yet! Remember that Tildes is still in "alpha testing"; that means that not all features have been built. It's very much a work in progress. What you see now is most definitely not what you will get.
Thanks for mentioning this! I feel like a lot of people are treating this closer to a beta or even a finished product (for that, I blame the tech industry's current trend of releasing everything as a "beta"). I don't think there's much use in constant navel-gazing over how every little issue like bans, tags, or UI implementations is going to work when we don't even truly know how the big issues that those depend on are going to work yet!
I think people also don't necessarily consider what it means to have one person working on almost everything. When I'm doing something—no matter what it is—that's the only thing that's happening. If I spend an hour reading, writing, and replying to meta threads like this, that's an hour of development that doesn't happen, or an hour of reviewing open-source contributions, or... anything else.
Some of this will probably improve over time, and eventually users will be able to help with a lot of the management of the site itself (and multiple are already helping with development), but it will always be a factor. This will probably never be a large operation, and it may never even get past this one-person state.
There have been a few more, nothing interesting though. A couple blatant trolls. A couple of people that I sent a warning to that replied back telling me to go fuck myself. A strange SEO spammer the other day.
I love it when they make it so easy. Anyone who does that removes all doubt. :)
I assumed as much. I figured there was no way this site could get to nearly 6,000 subscribers and attract only 3 troublemakers - even if we are operating on an invite-only basis.
And, while the first few bans are newsworthy, after a while they just become business as usual: not worth announcing.
And just to add a bit to that, for anyone who wasn't around for the known bans - they've been pretty cut and dry, and those users all showed a strong pattern of behaviour. They were not banned over one wrong comment or for accidentally letting a discussion get heated.
That doesn't necessarily mean the user was banned, just that the comment was removed. I have noticed that as well, though.
Not necessarily true. I was reading a few of the threads when the comments were removed and have definitely seen the users active since, so they're definitely not banned. They may have been sent a private warning, though. Generally, a lot of the time, it seems to be to stop certain arguments that are already derailing and becoming hostile from continuing.
I've noticed we've been getting a mini-wave of new users and this seems to happen a bit whenever that happens.
Yep. There's another 300ish waiting in the wings in the latest /r/tildes invite thread, the majority of which came in from an /r/outoftheloop post about reddit gold becoming reddit premium. Reddit's basically selling super-upvotes now. I'm sure that will end well for them. :P
oh wow this is the first I'm hearing about reddit premium, lmao they're actually calling the super upvotes 'super gold'
Lol, that's crazy.
Nah, most of those removals haven't been related to bans. A decent number of the removed comments lately have been two people having some kind of argument back and forth about something that has little to do with the topic, and usually starting to escalate into becoming more personal.
Wait, comments have tags already? I thought that was planned for the future. Anyways, good to see that the site has a system in place for pesky users.
We did, and then they were abused and are currently disabled.
They are on the list to be reimplemented eventually.
Okay, that makes sense. Yeah, comment tags are one of those things where I don't see how you can prevent abusing the system.
When they are reenabled they will probably be limited to trusted users.
That makes sense, but how do you define a trusted user? Votes or something? Still seems easy to troll just by... not trolling for a few months.
Trust will be earned through consistent positive contribution to the group (both through posting and other actions like correctly reporting, tagging, voting, etc.). Reputation decays over time, so users will have to keep participating, and it can be lost quickly if abused.
The site docs go into more detail
See also this comment by u/evilnight
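To illustrate the decay part, purely as a sketch (the actual formula, if there ever is one, hasn't been decided), something as simple as an exponential half-life on per-group reputation would do it:

```python
# Hypothetical sketch of reputation decay; the real mechanics aren't built yet.
HALF_LIFE_DAYS = 90  # made-up: inactivity halves reputation roughly every three months

def decayed_reputation(reputation: float, days_inactive: float) -> float:
    """Reputation earned in a group decays exponentially while the user is inactive."""
    return reputation * 0.5 ** (days_inactive / HALF_LIFE_DAYS)

# Example: 100 points of reputation left idle for 30 days -> about 79 points remain.
print(round(decayed_reputation(100, 30), 1))
```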
Decay, now that's a genius idea I hadn't even thought of. Only time will tell how this system will pan out but I have faith in Deimos.
That's "admin", singular. There is currently only one administrator here: Deimos. He has some advisers who he discusses ideas with, and there are some outside developers contributing small parts of code (it's an open source project), but Deimos is The One True God. This is his project, and his alone.
Eventually, he'll have some moderators, and maybe some other admins. But, for now, it's all him. He is our benevolent dictator.
Ah, thanks. Fixed.
It'll be a combination of things. Partly votes on their post, partly how they treat the low-level moderation powers they get.
Doesn't anyone read the documentation before they jump into this group to ask questions?
I understand that; what I mean is, who is really trusted, and who is pretending to be trustworthy in order to unlock tools that can potentially be used for trolling?
Anyone pretending to be trustworthy is going to have to play a loooong game. I reckon it'll take many months for someone to progress from brand-new user to trusted user to low-level moderator to high-level moderator. In my experience, trolls don't have that much patience. They'll slip up somewhere. For example, if they start using their low-level moderation powers to put silly tags on other people's posts, they're certainly not going to get the next-level power to move posts from one group to another. And so on. Each new moderator-power will only be available to people who've demonstrated they can use their lower-level moderator-powers responsibly.
That is true, most trolls don't want to wait and would probably just spend their time leaving edgy comments rather than trying to work up trust.
I used to write docs (manuals and stuff) for a living and I can honestly say nobody reads anything, which I find really funny on Tildes, because the answers still need to be read. Anyway, I think it's a great idea to point newcomers to the docs, but still answer their questions as they come up.
Same here, on both counts!
You get a link to them (to the mechanics section) in your inbox in a message entitled Welcome to the Tildes alpha the first time you load the front page. No one really has any excuse for not at least skimming the docs.
"You can lead a horse to water, but you can't make it drink."
I've learned from hard experience as someone who has written training documentation for software, and who has trained people in how to use that software, that most people will not read documentation unless they absolutely have to.
There's a reason that "RTFM" became an abbreviation - it gets a lot of use!
Agree; it just seems like people ask questions they know or should know would be in the documentation they were linked to in their welcome message. I don't want this to become a "RTFM" kind of place, but linking to relevant sections of the docs is often the best way to handle people who haven't bothered to read them at all. I get that the site is new, and people assume because it's alpha they can ask basic questions, but when most of them are already covered, it becomes repetitive to answer the same questions.
meh...asking people to read docs in order to use a fairly basic site is a bit much.
I'm all about reading docs, but also believe intuitive design is more important than docs. I've also been around since May, so when someone brings up a reposted topic, I generally can recognize it, but don't expect newcomers to really go looking for them, especially since we don't have an integrated search engine.
Having said that, don't get me wrong, I do find myself engaging less and less with each repeat. And honestly, some of them are repeated topics within a couple days...
I know they don't: that was a rhetorical question.
You mean, like the one they already get? ;)
I made my account on 1st June, and I got that introduction message, so you must have just missed out! :P
I think @dubteedub did just miss it. I joined in May too and don't think I got a message either. Of course even if we did, the docs were pretty sparse then. :P
Comments had tags, but they were limited to 'joke', 'flame', 'troll', and 'noise', and were mostly intended for flagging off-topic comments. Some folks were using this to spam other users whose comments were on topic but whom they disagreed with. One of those users was banned, as mentioned above, and the tags have since been removed until a better implementation is found that is harder to abuse.