Daily Tildes discussion - A Group is Its Own Worst Enemy
It's another busy day - Tildes is getting a fair amount of attention in /r/TrueReddit, so I've been trying to reply to questions in there (and have ended up inviting quite a few more people as well). We've also already got multiple good discussions going on in ~tildes covering a whole bunch of topics, so for the "official" daily discussion today I'm going to try something a little different.
I read a lot about online communities—there are all sorts of articles, books, talks, research, etc. that I've read that have helped me figure out what the major issues were, and how we could try to do things differently with Tildes. So instead of writing something myself today, I'm going to try linking to one of these, and encourage people to talk about the topics that it covers. If this works out well, this might be an interesting thing to do once a week or so: I'll post a link to something that's influenced the plans for Tildes and see what you all think of it.
I'll start it off today with what I would probably consider my single favorite talk about online communities of all time. It's worth noting that this is from 2003, which is before almost all of the major platforms we have today were even started:
This is definitely something to bear in mind during Tildes' development. So far it's been going pretty well, with everyone behaving "correctly", with one (or is it two?) exceptions. But the more invites that get sent out, the more chance there is for the site to get overrun by those negative behaviors.
Edit: Having now finished the article, I have to say I really enjoy the last part, specifically the idea of a kind of segregation of users with different capabilities. Sounds a lot like the trust system that's been talked about a bit.
Oh I saw that guy when that thread was new. I had an inkling he'd get the banhammer.
Yeah, I just read through that. He doesn't see how what he did was inflammatory or even derailing the thread.
Good riddance.
Yeah, they really didn't understand that there's a time and place for all discussion. In most any other thread I don't think they'd have been banned for saying what they said, but to quote someone from the reddit thread:
was it removed? can't seem to find his comments here on ~ or /r/tildes
He deleted the thread... probably because he realized he was not getting any sympathy and just digging his hole deeper with every reply.
https://old.reddit.com/r/tildes/comments/8nm9wb/banned_from_tildes/
thank you, my drama llama is slaked.
If that's only a small piece of what you've read in preparation for creating this community, then I'm impressed. Bravo.
A lot goes into making a healthy community. I think the one driving force here is that we're all looking for quality content and discussion. For now, I have high hopes for this community.
This is incredibly dense. There are tonnes and tonnes of different things in it, and I can see many different reasons you would want this community to read it, Deimos. Some things that stood out a lot to me (that aren't addressed numerically):
Groups defending themselves incorrectly. This occurs in company teams too: people can end up leaning towards negative behaviours that prevent progress from being made. The task a group was set up to do gets sabotaged by group behaviours that work against actually doing that job. This is why you techie types have so often heard the word "actionable" in team management. Marshalling a team towards what is actionable as part of the group's goal matters, because taking no action is not acceptable. Do things that move forwards; don't just sandbag and argue that nothing should change on account of the many issues surrounding the change. Recognising what is actionable and moving forwards with it is important. If the net benefit of doing something one way or another is questionable, then run a test to produce data that resolves the question.
Not everything is programmable. Groups exhibit certain behaviours and sometimes you cannot fight this through programming. Whatever you do will be fought against by the users, or even the core. It is extremely important for community moderators and community designers to understand this. Systems must sometimes firmly accept that certain behaviour can't be stopped, and so you must build around the acceptance that this behaviour will occur. It will always occur. A comparative (but not exact) example that springs to my mind is users who tell anyone asking a question in a community to "search first". They act under the mindset that they can fight users who ask questions without searching, when in fact that is an impossibility. People will always ask without searching, and fighting it is useless. Working with the knowledge that this will occur, and making the community a better place with that understanding (those duplicates can't be left unanswered, because they show up in Google unanswered and harm future users), is better than fighting it uselessly or, worse, wasting resources trying to magically stop users from doing it. You won't stop them. Ever.
No service is permanently scalable. This comes from a quote I feel I need to add to explain it:
What this brings out for me is something I have noticed in attempting to grow a Discord community I operate that receives several hundred thousand chat messages per month. A chat channel has a ceiling for communication. There is a maximum number of people who can participate in conversation at any single point in time before the chat is moving too quickly for further participation. This causes two issues in a chat community. The first is that new users do not participate as easily as they did before the chat became so large and active; they are put off by the high activity level, and it is easier to just watch. The second is that your existing users don't participate when things move too quickly either; they lurk and watch the conversation instead. You want everyone participating, spread out across your entire community simultaneously, but instead you find the whole community concentrated in a small space focused on one thing.
This occurs on ALL communication platforms in one way or another. It occurs on forums, but at a different speed. It occurs in email groups, at a different speed. It occurs in voice chat groups, at a different threshold. It occurs on reddit in threads, on /r/all, and at a smaller scale in subreddits; it occurs everywhere. Super-threads with 5000 responses in just hours have 4500 comments that are nearly pointless - they aren't being seen by anybody and they aren't being heard. Smaller subreddit communities and threads show similar effects. I don't have the data, but I want to say it's in the 200-400 comment range where further comment activity drops off and new people to the thread may just read the existing comments. I'm speculating a bit based on general feeling from experience.
I'm not saying any of the above is a bad or good thing. I am saying that it is a fact, and one worth keeping in mind for the marshalling of user behaviour and the building of a quality design. It will occur on all communication platforms in one way or another; I do not know of a platform where it does not.
Whew. So now that those are out of the way, on to the things that are actually covered numerically:
This ties into the "You can't solve everything with programming" point. You must accept that social behaviours will occur and build programming to work WITH the behaviours, not AGAINST them. People will exhibit behaviours that you are foolish to try and fight. You can't fight them. Please god, learn from history - learn the same lesson hundreds of community creators have been forced to learn time and time again.
Allow your core to differentiate themselves visibly from the rest of the community. They are going to see themselves differently from the rest of the community anyway. They are going to define themselves if you, as a service, do not define them. Together they will act as a pressure group on the service, and you need to work with them to keep them happy while simultaneously working to prevent the group from becoming the earlier point - "a group is its own worst enemy" - working against themselves and bringing a halt to any change and improvement, or worse, harming the service. The ideal is the core and the service working together, with the core under the influence and marshalling of the service (and liking the service while knowing full well they are being marshalled). This requires a great deal of trust and, in my opinion, good communication between the service and the core.
I also think it's important to add that (in the context of an online community) the core is not the mods. The core is your most active and socially aware userbase, the people with interlocking relationships with other core members. They might include some mods, but they are far, far larger than just the service's mod team. In community management this group (the core) is usually covered by the golden rule - that 10% of the users participate while 90% lurk.
Covered already in my response to 2 - understand that "the core" is a pressure group on the service, and not listening to the core will cause a relationship breakdown between the service and the core. Communication between core and service is important to prevent this, but the core understanding the concept is also highly beneficial. The core don't want to see their service fall apart either, but they don't always recognise when they're performing the earlier "the group is its own worst enemy". Juggling all of these concepts is hard. Being open to their existence internally and juggling them all (from both sides, the service and the core) is key to preventing them in the long term.
I think it's important to understand that, unfortunately, ensuring that the core is educated on all of these concepts, for the benefit of preventing "the group is its own worst enemy" from occurring, becomes futile as the group (users and core) scales in size. Communicating with the whole to educate them (tens of thousands, hundreds of thousands, millions - as growth occurs over time) just is not possible.
A juggling act will occur. Mistakes will occur. Back and forth will occur. Working to prevent it is an ongoing permanent issue that only gets harder and harder the larger and larger things become.
Numerical things to design for
The point here, in part, was that the group (users and core) will police itself via reputation if identity is recognisable. People generally try to conform to the rules of a group, the community culture, the behaviours and etiquette that the group has set for itself. They do this if you can define the group well, make users feel part of the group, and make them aware that they are RECOGNISABLE within the group.
Doing this is hard. I see it at high speed in chat communities. A new user joins a chat community and may break half the rules (written and unwritten) of the chat/group on day one, but within a few days of participation, if they feel like they are of the group, you will see them settle into the group behaviour. Over time you may in fact see them change their ENTIRE personality. I operate (with many other people) a Discord community where lgbt people have surprisingly come out in droves despite us not being an lgbt community - far above the average of other communities I have experience operating. I believe this is occurring because of the friendliness of the group that we have. It is so friendly and so nice that more people are discovering themselves introspectively than in other communities I have experience moderating and marshalling. Another instance of this happening is that initially toxic individuals have, over time, changed significantly in personality through long-term exposure to the nice culture of this particular chat community. I have had many users come to me personally through private messaging to say that this particular community has been a positive influence on them, that it has fundamentally made their lives better, that they have changed from a toxic individual who sought out negativity or conflict into a person who actively promotes positivity.
They learned from the group.
I'm NOT saying that negatively behaved communities are bad either. Don't interpret the above in that way, it should include zero judgement of community behaviour at all. There are spaces where people thoroughly enjoy highly toxic behaviour, 4chan for example, and exactly the same concept of people learning and mimicking the group happens on 4chan too. People join and become a part of the group they participate in frequently, they then start to take on significant portions of that group's mindset.
The examples above are not judgements of right or wrong. They are simply things that occur in group behaviour. Understanding that they occur and will happen (and already have happened in some ways) here on Tildes is valuable. Deimos wants us, the early-adopter community, to understand this concept because it is fundamental to what he wants to build up. He needs to build a group that is a specific type of group for the service he wants to grow, and he needs to make that group one that outsiders and new people WANT to be part of. Retaining new users is absolutely and positively essential to the growth of any service.
What's important here also is that a toxic person on one service may not be a toxic person on another service. Behaviour differs depending upon the group that a user is participating in.
The group established here will establish the future behaviour of the whole community. New users will form into the existing group and take on the community culture established here. These new users will take on the behaviour of the group, whether they are toxic elsewhere or not (in some cases requiring a ban, for the protection of the group that is wanted, because they absolutely refuse to join the group at all).
Over on reddit the intended method of solving this problem is the downvote. It worked well, and in a way it still works well. It allows a "group" to police the behaviour of the group by punishing one another. It's somewhat negative, it creates a lot of conflict, and it causes other issues, but it does empower the group to defend itself. Personally, some (but not all) of the issues I have with this system are that it eliminates 1-to-1 discussion: people on reddit aren't talking to the person they are responding to, they are talking to the group, the people who will vote. There are adversarial issues, with people trying to "win" a thread by getting the group (the voters) to downvote the other person. For obvious reasons this isn't necessarily what you want. Good, honest 1-to-1 conversations happen because two people are talking to each other alone - the only person to gain respect from is the other person - whereas when a user is trying to gain respect from the group instead of the person they are responding to, they will be less open to that person and will aim more to "win" with the voting group instead. This, in my opinion, causes a huge amount of conflict on reddit, especially within the political subreddits, but it happens all over reddit in every single thread.
I'm not posing solutions here. I don't necessarily have them; coming up with them would take a huge amount of time digesting the issue in the background of my head. I hope and assume Deimos has done so for years and years, as it has been his responsibility having worked at reddit. I can't say I've considered solutions myself, because I never thought I'd be in a place where we would discuss them or shape the possibility of a site that solves them. I am excited to see whether success happens, and I'm here for the ride and the chance to learn by watching. I've moderated communities for 15 years - from scratch, joining teams in already well-established ones, as a full-time paid member of staff, and more. I've seen all types, from forums to chats to voice to subreddits to email groups. Hence my interest and participation. It's a passion.
Don't even get me started on this one. Do I have a solution? Fuck no. I don't have a solution. I don't even know where to start with one. It is such a visible and clearly problematic issue on all platforms. Yes, you can do things to solve identifiability and make core users clear and visible and empower them and blah blah blah - but the ceiling where the maximum level of activity prevents or harms participation across your community? That is a problem I do not have a solution for. The problem differs per platform (chat/forum/twitter/fb/reddit/etc), with different thresholds, but it persists. Rarely have I seen anything work as more than a bandaid. It's a HARD issue. (If you have any suggestions for solving the chat ceiling, I am super, super interested, because it's one we've actively tried to solve for several months now in one of our communities with very little effect.)
Missing things
I don't want to drag this out too much further - I suspect I'm really testing the size of a post that can be made here. But one thing I do want to touch on (by no means comprehensive of the other missing things) regarding groups and the "core" is something not mentioned in this article. The core of online communities, and to a lesser extent the tech sector, is often full of, hmm, let's say "quirks", to put it nicely. I in no way want to be offensive, but the people who gravitate to the core of communities often have significant social differences from the rest of society. It takes certain personality types to become the highly active people who stand out in an online community enough to be considered the "core". I've seen this inside tech companies too: it takes certain personality types for children/teenagers to gravitate towards spending ridiculous amounts of time in their rooms tinkering with computers, learning programming, or living in online communities instead of being social. I'm speaking anecdotally of course and don't know if evidence exists, but it has been my experience in tech companies that there are more people with things like aspergers, depression, social disorders, bipolar - and not just mental health issues, but odd quirks of behaviour too. This is not a judgement of those people, just a thing that is often useful to be aware of; I've worked with many very, very good people with them. Learning to work with and around the quirks, oddities and issues inherent to the industry, and I believe inherent to the "core" of communities, is good. Not everyone will have these and it's not an exhaustive list; I'm generalising, lots of people are without such traits, and it's neither a good nor a bad thing, merely something helpful to be cognizant of.
Big read, and a big ask to try and discuss it, Deimos. Maybe it's not possible to actively "discuss" something that's so dense, but it's still highly useful to try to get people to digest it and take it on board, regardless of whether discussion about it is easy or not.
Some I got a lot out of, some I've seen in other writing, some is obvious but putting words to it is still useful. Thanks.
Open to discussing or debating any part of this, should anyone get through it all. Apologies for errors or any oddly worded parts, I'm not re-reading to edit when this has taken as long as it has. It'll do.
A wall of text? You spoil me. Good read too! :D
I might take some issue with this. Frankly, I've never seen anyone try to build a system that might allow for that level of communication. The basic systems being used by the groups are, I agree, incapable of this - like the threads on reddit where, even though they are massive, still only a tiny fraction of the users ever see them. Sticky posts, polls - still most people won't see them. There are other options, though, to reach large blocks of users (or even all users) to gather feedback or inform.
I think there's a value in exploring the options available - certainly in letting mod teams here reach all of their subscribers to talk about important issues, and letting the site admins here reach all of the users to do the same thing. I see the mod teams as the representatives of the 'core' groups, and the admins as the representatives of all of those mod teams.
If we put something together there that can facilitate that sort of large scale communication and feedback-gathering, I think we can expect those systems to create a larger-scale style of group governance (a sort of meta-group of the other groups representatives). We can't know how that will all play out, but I think if we build the option into the system, the groups will discover how to use it, and suggest improvements for us. I think the value here is helping to keep everyone 'on the same page' as far as goals and culture, and giving the larger group some confidence that they indeed have self-governance. It's as if the ~groups here all become people in the larger overall group of the entire website.
The proposed solution here is the weighted votes and trust-based moderation. The weights guarantee that the group will be able to self-select its best/favorite content without interference from outside groups, since they can easily out-vote visitors. The trust angle goes back to our topic tagging, reporting, curating/editing/modding system. In any given group, only the groups' own users will have access to these content management, feedback, and removal systems - and that access should be far, far wider than reddit with just a couple mods. Over a year of average participation, all group members should be able to earn access to at least some of these systems. That should help them all self-police, and it's better than the basic downvote, because on reddit, anyone can downvote, while here, only the group itself can do the tagging and moderation meant to replace downvoting.
I think one of the chief benefits of this is that it creates a significant time-lag between the time a new user shows up and joins the group, and when that user finally acquires influence over that group. During that time lag, they 'lurk moar' and learn the group's culture, get a sense for how it operates and what the norms are. By the time they have access to the tools to influence those norms, they have probably gotten past their initial biases and desires, and what remains (hopefully) are their suggestions for improving things. That should allow for evolution, but without disruption.
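To make that a bit more concrete, here's a rough sketch of how per-group vote weighting and a trust ramp-up might fit together. To be clear, this is purely illustrative: the field names, weights, and the one-year threshold are assumptions I'm making up for the example, not anything from Tildes' actual design or code.

```python
# Purely illustrative sketch of per-group vote weighting. The TrustRecord
# fields, the weight values, and the one-year threshold are made-up
# assumptions for this example, not Tildes' actual design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustRecord:
    days_active_in_group: int  # rough measure of sustained participation in this group
    removals_upheld: int       # times this user's content here was removed and upheld

def vote_weight(trust: Optional[TrustRecord]) -> float:
    """Visitors count for little; established group members count for more."""
    if trust is None:                      # not a member of this group at all
        return 0.25
    if trust.removals_upheld > 3:          # repeatedly moderated: reduced influence
        return 0.5
    if trust.days_active_in_group >= 365:  # roughly a year of average participation
        return 2.0
    return 1.0

def topic_score(voters: list[Optional[TrustRecord]]) -> float:
    """Weighted total: the group's own members dominate the ranking over outside visitors."""
    return sum(vote_weight(t) for t in voters)
```

The exact numbers don't matter; the point is just that a wave of outside voters can't easily out-weigh the group's established members, and that influence only arrives after a stretch of participation.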
Reddit inadvertently discovered a solution when they created subreddits. When groups failed there, they took the lessons learned with them and created a new subreddit, which had a higher chance of succeeding. That might not be the solution, but it did work and slow the decay - so it's a start we can build on.
Groups will live and die naturally - you can't save them all, nor should you want to. What we can do here, though, is facilitate the life-cycle we've seen on reddit. Code for it, rather than fight it. When a topic takes over a group, spin it off into a subgroup, but maintain the relationship between that group and the parent and use it to help vet content for everyone. When a group dies, just roll it back into the parent until its time comes along again, when someone figures it out. (A rough sketch of that life-cycle follows below.)
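Just to sketch the shape of that spin-off / roll-back life-cycle in code - a toy model only, with names and structure I'm inventing for illustration rather than anything that exists on Tildes:

```python
# Toy sketch of the group life-cycle described above: a topic that takes over
# a group gets spun off into a child group, and a child that dies gets rolled
# back into its parent. Names and structure are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    topics: list[str] = field(default_factory=list)
    children: dict[str, "Group"] = field(default_factory=dict)

    def spin_off(self, topic: str, child_name: str) -> "Group":
        """Move a dominant topic's content into a new child group, keeping the parent link."""
        child = Group(child_name, topics=[t for t in self.topics if topic in t])
        self.topics = [t for t in self.topics if topic not in t]
        self.children[child_name] = child
        return child

    def roll_back(self, child_name: str) -> None:
        """Fold a dead child group's content back into the parent until its time comes again."""
        child = self.children.pop(child_name)
        self.topics.extend(child.topics)
```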
The chief challenge here is what happens to the highest-level groups (the equivalent of 'defaults') once they have 2500 children. The solution for that is, as that group grows, the moderation becomes tighter, and more and more of the submissions come from subgroups rather than from the root group, until eventually that root group actually dies as a group and becomes, instead, a gateway community to all of the content below it. The users and mods there are focused more on curation of the sub-group content than on making their own submissions - though high-profile stuff such as an AMA might still be submitted to the root group - and it might also serve as a forum to help that hierarchy self-govern, a sort of town hall for all the subgroups.
That stuff has never been tried anywhere, by anyone. It's simply trying to code for the behavior we saw take place on reddit.
I think some of the issue I have with this is - what would it look like?
A major problem you have is that some users just don't log in very often. Once a week, once a month - these are not uncommon. It's fine if a user is a daily user; they're going to see things daily. But a weekly user? A monthly user? They're going to have an absolutely massive stack of communication from each community mod team that wants to get a message to ALL users in the community, and when there's too much to read, it won't get read. What's too much? Sometimes anything more than a few lines of text. Reaching ALL of your userbase and actually getting them to digest something important is..... impossible. I think.
You can try. And I'm not averse to finding better ways. But I just think accepting that you can't educate anything other than your most exceptionally engaged users is realistic.
Yep. I have no issue with the proposed methods over here. I'm intrigued and eager to see them in action as the site grows. They feel good in theory.
I don't think it was ever a solution. I think it was the equivalent of a discord adding a new channel. It moved a small amount of activity elsewhere.
The activity within the subreddit itself has its own threshold, as do the comments of each thread in that subreddit.
There's a well-understood "quality threshold" for subreddit communities too, a size at which you start seeing significant community-behaviour changes that coincide with complaints of "low quality" and the community slipping. I personally believe this is around the 40k subs mark. I've seen other mods put it between 20k and 80k though. It likely differs depending on subject matter/engagement, hence the oddly large range, but it happens.
I don't actually think it's a dip in quality that occurs, but just a change in personality. It stops being easy to be close-knit, so you lose some of the personal-nature of the community that was there when smaller.
We're likely to see Tildes go through quite a large number of personality changes as size increases. And it's important that the core users understand that this isn't a thing that can be fought, it will simply happen, and we must not see it as a quality dip just because the personality of our clubhouse changes. Change isn't all bad.
It's usually a progression - 5k, 10k, 25k, 50k, 100k, 250k - and after 250k, it's the same as a default for the most part. Those are the user targets where the group behavior tends to show signs of breaking down, typically for the worse each time. Another part of it is how fast it grows - if you double the userbase in a short period, it's going to shift immediately regardless of the numbers.
The goal with large-scale communication is to reach a quorum - ideally on a site like this, half of the group's users. Naturally that's going to be the most active users, for the most part. Reddit, however, won't even let you get to a couple percent of the users, and that's not enough for good self-governance. We can certainly improve on that. The trust system will know who the most active/involved users are in any given group.
I wish I could bookmark this. Your commentary and analysis was really helpful, and it gave me new insights on the original article.
Thanks!
I honestly don't think I digested the whole thing fully. Will need to re-read it a few weeks down the road again to pick up on parts I missed or glossed over because other parts stood out more and occupied the mind-space. It's tough to take in all of something so dense, form ideas, relate those ideas to anecdotal community experiences, form those ideas into actionable bites, and then digest them in some sort of bullet-point format.
After a few weeks of it stewing in the back of the brain there's usually other stuff you start noticing on re-read because then you skip over the parts you have already understood/digested properly.
Can we mention other users here? Another user posted this today: https://www.ted.com/talks/yuval_noah_harari_what_explains_the_rise_of_humans/up-next
It's a TED talk about how humans are only set apart from animals because we can work together due to a common belief. He mentions the obvious, religion, but he also mentions other beliefs that we subscribe to, such as the rule of law.
I mention this to relate it to this article. This community needs rules to control it and rights to maintain people's activity.
I know this isn't the best description, but I think it would help to have "10 commandments" and a "Bill of Rights". I mean this metaphorically, but if we have rules and rights that a large group of people can follow and have faith in then it can grow to be a very large and positive group.
Edit: changed to Bill of Rights instead of Constitution.
What do you think of something like this?
Is that sort of like what you're talking about?
Yes something like that, but I have no faith in voat. Granted I never stayed at voat, but I don't think those rules are enforced. Step 1) make rules. Step 2) enforce them. Making rules is easy. Enforcing rules is hard.
Ah yeah, disregard the connection to voat - it's just something I was trying to put together back in the day, before the FPH/coontown bans/influxes happened. The idea is what I was trying to ask about - i.e. did you mean something like a pseudoformal constitution as referred to in the OP talk, but more concrete/less ambiguous than the content policy, and independent of the actual software/mechanics of the site?
In my mind, I think something like this might be necessary.
Yes I think something like that would be necessary
That link didn't work for me. Wayback machine being brilliant as normal though. Archived
I came over half an hour ago and am really liking what I'm seeing so far.
That was interesting.
Some of the parts I found particularly important:
So...figure out a system for this and implement it, or someone will and it may not be how you like it
Names (handles) are a start, but there may be more options for this. (Pictures/avatars, traditionally. I was also thinking of how in the military folks have their own unique pattern of awards/commendations they wear - perhaps this could work for ~ too: communities have their own mini-flag/band of assorted decoration, and someone with distinction in each community wears a mini version of that band next to their name.)
This got me thinking in particular about the debates ~ has seen already around tags (on people/comments) and votes, especially the weighting of votes and reputation. Votes and reputation will differ between the smaller sub-communities and groups, especially weighting depending on contribution. For example, someone who regularly posts high quality content might be valued more in a community than, say, someone who gives great and insightful comments. Each subgroup will value different things, in addition to the OVERALL ~ values. (But those global values should perhaps be the only defining factors in determining rep for the TLGs.)
The author notes the Tibet issue on Usenet, and that (in addition to the three points above) led me to think a successful system here might be to ensure the most reputable members of the tiniest groups have control over those tiniest groups, and the reputable members who span multiple groups gain reputation over the larger parent groups as well. (Similar to how Deimos was talking in another thread about how the smaller communities' posts would bubble up to their parent groups once the posts had enough engagement.)
I have mentioned this before, but I'm still concerned about the differences in vote weight. I understand the concept, and it makes sense and, as the article mentions, it helps protect the members from the users. But I also think power users are pretty dangerous, and they can have a different reputation with different people. For example, long ago I used to be a mod in a forum. There was a user who showed up in almost all threads; she was usually polite, but more often than not she offered very unsubstantiated information. Most users didn't realize that and would probably have upvoted her if votes were a thing in that forum. I think she ended up being banned by the admin at some point.
Another example could be Wikipedia. The Spanish version is heavily criticized for its excessively strict librarians. I haven't had a problem myself but I've read some pretty unsettling stuff (it might be the vocal disgruntled trolls though, I didn't dig enough to say, so this is anecdotal at best).
Another aggregator I use has a karma system that shows individual karma for every user on every comment they make, and the differences can be pretty high. Their karma also counts towards submissions. They are repeatedly accused of manipulating the frontpage because most of them have similar ideologies, so their votes easily promote or bury submissions.
In any case, I hope the system here will be able to cope with this kind of thing and the power imbalance, although necessary and coded in the software, will prevent excessive bias.
I think part of solving that problem is making it easy to lose trust. If you make one angry attack, that could cost you the trust gained from 10 or even 100 civil, high-quality interactions. The penalty could also scale with how good a reputation you have, so that those who abuse their high reputation hurt themselves more with their abuse.
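To sketch what that asymmetry could look like (again, just an illustration - the constants and the scaling rule are my own made-up assumptions, not anything from the Tildes docs):

```python
# Illustrative only: trust accrues slowly through civil interactions and is
# lost quickly through abuse, with the penalty scaling with the trust the
# user already holds. All constants here are made-up assumptions.
def apply_interaction(trust: float, abusive: bool) -> float:
    GAIN_PER_GOOD_INTERACTION = 1.0
    ABUSE_MULTIPLIER = 50  # one attack wipes out roughly 50 good interactions

    if abusive:
        # The penalty grows with existing trust, so high-reputation users
        # have proportionally more to lose when they abuse it.
        penalty = GAIN_PER_GOOD_INTERACTION * ABUSE_MULTIPLIER * (1 + trust / 100)
        return max(0.0, trust - penalty)
    return trust + GAIN_PER_GOOD_INTERACTION
```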
Yes, I think I agree. Some balance should be introduced there either way.
All in all, it seems to me to be an overly complicated system. I guess one mistake by a high-rep user should not be very important, and I guess the effects of the mistake/angry attack would be affected by the rep of the user(s) that reported it, right? If you have to measure each reporter's rep against the rep of the reported user and do some obscure calculations... I find it messy. I understand the rationale, but I prefer things to be as simple as possible. (I don't have a better suggestion at this time, but maybe pointing out what I see as pitfalls will help someone think of something...)
Another thing I've noticed: users on most social platforms (unless they are trolls) are usually nice to each other most of the time. They can actually be considered exemplary users. Until they're "triggered". They may very well be super civil people all the time, but when talking about specific topics to which they are especially sensitive, things can go downhill quickly if they get an aggressive comment, for example.
I guess it's fair that they lose their trust for that as well, and probably it's ok not to have highly trusted users like that. But tbh, I think most people will get these knee-jerk reactions at some point.
I feel like I'm rambling.
I think for reports there is a simpler way. To have a comment reported, there is a fixed amount of user trust required, maybe determined by the mods of that sub. To make up some numbers, if you had reports from veteran, nearly mod level users, it might take 3 reports, but if it were only freshly subscribed users, perhaps 6. It matters who reports something, but not so much who is reported until a mod reviews the post. If it gets taken down, then that person would lose trust in accordance with how much they have, so that way the penalties don't lose their teeth for highly trusted users.
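Putting some made-up numbers on that idea (the trust values and the threshold of 3.0 are just assumptions for the example, and a per-group number set by the mods could replace the default):

```python
# Illustrative sketch of trust-weighted reporting: a comment goes to the mod
# review queue once the combined trust of the reporters crosses a threshold,
# rather than after a raw count of reports. All numbers are made up.
def should_queue_for_review(reporter_trusts: list[float], threshold: float = 3.0) -> bool:
    return sum(reporter_trusts) >= threshold

# If a near-mod-level user carries trust ~1.0 and a fresh account ~0.5:
print(should_queue_for_review([1.0, 1.0, 1.0]))        # True - three veteran reports
print(should_queue_for_review([0.5, 0.5, 0.5, 0.5]))   # False - newer accounts need six
```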
I think the end goal is to design a trust system that will discourage the "triggered" behavior without being too heavy-handed. You want users to know that that behavior will get reported or tagged as a flame and dinged for it, but they won't be hit over the head with the ban hammer. Enough to encourage them to reconsider, walk away, and cool off, but not enough to make them fearful of contributing.
I see, thanks. Well, I guess we'll have to wait and see how it is implemented and iterated, that's why I felt I was just speculating. Hope it's not too obscure.
Edit: Sorry for the double comment. I'm not sure what happened there. I'm gonna delete the other one.
I'm glad that Tildes is getting a bit more attention. Even if it's currently a closed community, it's good to establish a presence. I'm excited for the site to grow to see how the branching communities will be implemented.
I wonder if the closedness will result in some sort of exclusivity feeling, like they have on private torrent trackers. By simply being part of an exclusive community, people start caring more about their privilege of being in it, and hold back on negative behaviour more.
That's definitely a factor. However, I think the proposed expansion plan lends itself more to a larger community which might be difficult to achieve if it's invite-only. Because of what Deimorz said here, I'm sure people will be hesitant to invite others as it could lead to them getting punished in some way if their invitees are naughty.
The hierarchical organization has me excited. That could be game changing. One of Reddit's flaws was its organization or lack thereof. Many communities sprouted up in spite of it though, but imagine if it was conducive to organization from the outset.
I wonder though, do 'groups' have different moderators? And if so, do parent group moderators automatically inherit moderator privileges in sub groups?
I don't think too much is currently set in stone, but I know @Deimos made a post on it a while back. You can probably find something in the daily discussions or the docs.
I am curious if you are familiar with Tuckman's stages of group development. Since this is a new community I feel like it is applicable, and I believe it is still well within the forming stage.
I've definitely heard of them before, but I'll need to take a look again sometime. Thanks, that's probably an interesting parallel.
Hi all - I just joined the site. I'm old enough to remember Reddit before it blew up. It was a treasure trove of thoughtful people with all sorts of weird hobbies and sophisticated views about the world. Hopefully this will fit that bill!
I just joined after seeing the post on TrueReddit and I gotta say, Deimos, you handled some of our more trolly, cynical members masterfully. Having high hopes for this site
Hey, I worked at reddit for close to 4 years and communicated with the users a lot. Any sensitivity to cynicism that I had before has long been burned away.
Out of curiosity, can you even copyright a name like 'Tildes'? From a quick Google search, I don't see the common definition of Tildes expanding to mean this site in addition to the character unless this place hits like Reddit levels of traffic. Not sure if I'm understanding it correctly.
You can't copyright a name. You can apply for a trademark once it's in use, as long as there isn't another similar type of service that has a current trademark on the same name or something really close to it. For example, a burger place with the name "Tildes Burgers" wouldn't be a problem because it's a completely different business category, but another online service with "Tildes" in its name could potentially be an issue if it has a currently valid trademark. Of course that's in the US; Canada may be different.
You don’t even need to register it, to have some sort of trade (or service) mark protection. But you do have to act as this is your (in Tildes’ case) service mark – state somewhere that it is, have some rules in place on how you may use the Tildes name (and logo), and enforce those rules.
That being said, registering a mark is relatively cheap and easy. @Deimos, I would suggest we start using it as a service mark (or even register it), since currently there is no (word) mark registered for “tildes” anywhere in the world, so it's a perfect environment for claiming it as our own (unless a reviewer thinks it's too generic, which might lead to us having to explain why it's not).
Interesting read. It's definitely dated - 4chan uses no handles (tripcodes don't really count) for reputation management; instead a user's reputation is measured by whether they act within the norms of the group in individual posts. I'd say the website has been one of the most influential on internet culture, despite being mostly anonymous. But 4chan is a dynamic thing, and anonymity is but one variable that made it work - some of those variables have been tweaked in recent history and it's hard to say what its future will hold as a consequence.
I think lots of groups and organizations are like that - something might actually be a good or useful idea, and it might appear that it can't work because everyone has failed at it, until someone finds a combination of variables that makes it work. This is sort of like the "history repeats itself" myth, in a way. It's important to try to learn from the mistakes of people in the past, and I'd say it's probably important not to assume the failure was due to one specific variable, when it might well have been several other things that are invisible to us without a time machine. I think this is a good optimism/creativity filter to adopt.
Voat is a really great example of that kind of distorted hindsight thinking that people sometimes adopt - e.g. "voat failed because it valued free speech so highly" rather than it being due to the inability to scale, and the dumping of behemoth reddit's rejects onto voat's hobby servers (and/or some other variables that aren't immediately obvious).
I am not sure that you can call it dated by saying 4chan doesn't conform to it. 4chan does conform to it, as you said, the exact same behaviour and psychology occurs there but it has just shifted. Instead of the group policing itself through reputation of individual users it shifted to the group policing itself on a post to post basis. It exhibits even greater levels of obvious group defending behaviour as a result of it, where people will be aggressively policed if they are going against the expected group behaviours in their post.
I don't believe the talk is necessarily talking about certainties or right and wrong. It is instead talking about behaviours that are ubiquitous to groups and worth learning about, because you'll see them, in some sort of analysable form, regardless of the way your service is set up. You will see a group behave like a group.
What you do with what is common to a group is what counts. And there are many, many different things you can do with it, all with very different approaches, all debatable in their positives and negatives.
Of great importance, I think, is to remove judgement of behaviour as a negative thing. Voat is not a failure. Voat is a service with a userbase that is very ugly, but it is a service with a userbase. It exhibits group behaviour and all the things discussed in the talk. The thing with Voat is that it became what it is because the group, the core, established itself and now polices the service. If you do not conform to the culture of the group, they will use the things they are empowered with to police you - voting, mainly. It has its audience, and it is an ugly audience. It would certainly be a "failure" by the stated mission goal of Tildes, but as a community that has grown and consumes content and is... a community? It is one. I hate it, I don't like the people, but analysing it as a failure is incorrect. It may very well still be ticking 10 years from now with its very nasty userbase, larger and more active than ever, and Tildes may be gone completely. Which would be the failure then? So yeah, I can't call it a failure. I think it's the wrong way to go about interpreting/analysing the existence or non-existence of a community. I believe the talk is predominantly talking of success and failure as a group forming or a group splitting up and going away.
I was just saying that saying that "handles are necessary because x y z attempts that didn't have them failed" is a dated observation, since 4chan has never had them and has been wildly successful, and that the thought process which led to the above rule might be too constrictive. There is a universe of possibilities with complex systems like these.
Very interesting reading, thanks. I find many of the things referenced in the article also apply to classroom dynamics.
This, for example, is something that certainly works well, especially in high level subjects. Some guidelines are always needed though, moreso for beginners.
Regarding MF closing registration at times, do you plan to implement something like this as well?
Yeah, I think that's a pretty good idea overall, and I feel like we're already doing it in a way, with the really restricted invite threads on Reddit. We've been trying to figure out how we might be able to open them up for later rounds in a way that won't instantly alert all the subscribers and see if we can just have some of the more interested members (that are checking back in the subreddit on their own) trickle their way in.
You're fortunate to have reddit as a source of users. You can reach a lot of people and also vet them a bit.
May the supernatural grace be upon us!
This was a great read and I agree with pretty much everything in that article.
I won’t repeat what others have discussed in this thread either, but instead point out the two things that, from my own experience with running (online) communities, I think are (also) paramount to keep in mind:
Be prepared for both :)
For anyone else coming to this thread later:
tl;dr is basically this:
Replace Communitree with Reddit or Hacker News or any other forum.
I'm a developer myself, and I've often thought about this.
Like in an MMORPG: if someone is cheating, the player gets locked into a server dedicated to cheaters.
In the same way, we could set up a dedicated "echo chamber" for certain types of people. A troll would mostly see trolling comments. This would require some extensive NLP labeling work in order to work.
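Just to make the shape of that idea concrete - hugely simplified, and the `troll_score` label is assumed to come from some separate NLP/moderation pipeline, which is exactly the hard part this sketch skips:

```python
# Rough sketch of the "echo chamber" routing idea: users flagged as trolls get
# a feed weighted towards comments that were themselves labeled as trolling,
# while everyone else gets the normal feed. The troll_score label is assumed
# to come from an external NLP/moderation pipeline that isn't shown here.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    troll_score: float  # hypothetical label: 0.0 = fine, 1.0 = clearly trolling

def build_feed(comments: list[Comment], viewer_is_troll: bool, limit: int = 50) -> list[Comment]:
    # Trolls see the most troll-like comments first; everyone else sees the least.
    ordered = sorted(comments, key=lambda c: c.troll_score, reverse=viewer_is_troll)
    return ordered[:limit]
```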
That was a crazy good read, even as someone who went into it not knowing much at all about designing online communities or software. Those "four things to prepare for" seem... spot on.
FYI - the Clay Shirky link is broken and returns a 404.
This looks like another link to it: https://gwern.net/doc/technology/2005-shirky-agroupisitsownworstenemy.pdf
Seeing this bumped thread gave me an idea for a new feature--something on the front page that jumps out at you to indicate that a thread is five years old. I had thought that this was a brand new thread, but if, say, the title were modified to "Thread Bumped from 2018: Daily Tildes discussion - A Group is Its Own Worst Enemy," I wouldn't have had even a moment of that confusion. Not that it isn't fine the way it is. I figured it out pretty quickly.
Also, I just want to add that I enjoy that we can bump old threads like this. I'm always disappointed when I see an old thread that I'm unable to interact with. Though it might be smart to let the user writing the comment choose whether or not their comment bumps the whole thread. Allowing us to label our own comments as noise could be one way to solve that problem.
I saw the warning for commenting on old posts, but I figured a 404 undermining the entire point of the post might be helpful. First day on Tildes, and I'm reading through some of the history.
There are lots of new users here now, so in my mind, bumping interesting old stuff that you come across seems like a good way to help them get acquainted with the site. There were some great discussions at the beginning of Tildes about what goes into making a successful forum, so if you're into that stuff, you have a lot of content to go through.
I was wondering why I couldn't vote on any of the comments. Didn't figure it out until I scrolled down to the comment box to ask about it. I agree, it's confusing.
Added to Gitlab:
https://gitlab.com/tildes/tildes/-/issues/748
Check this feature request out for "aside" comments. It was "in progress" at one point but never got finished though, AFAIK. :(
Thanks for adding that suggestion! I guess I need to figure out all that more tech-y stuff. I'm not even close to a developer, but I should be able to figure stuff out like feature requests, since I'm super interested in at least following the theories and overall management of the site. It's also probably time to start donating, since I'm starting to get excited about this site again. Hopefully throwing some money that way will get some features in place.
No worries. I got a bit swamped by the last invite wave... but I am normally pretty quick, and more than happy, to add feature requests to Gitlab for people who don't have an account there (or are intimidated by the idea). I've just been slacking a bit these last few days on keeping up with that, since I'm currently playing Diablo IV. :P
I will eventually get around to adding all the new feature requests and ideas from all the new ~tildes topics. I just need a bit of a break first before I dive in to doing that. ;)
Thanks, I updated the link to point to the copy that @clem found. Seems bizarre that Clay Shirky removed one of his most famous pieces of writing from his own website though.
It has something to do with a book being published, I remember that much from when I went hunting for it recently.
So I'm still trying to understand this platform a little. Did this get re-pinned to the top because it's relevant again, or was it bubbled to the top by a user necroing the thread?
Bubbled to the top. Necroing threads is encouraged!