Metaphysics of web forums and avoiding death by entertainment
Hi folks,
I've seen a few posts and comments discussing "what is tildes.net all about?" or even "what does Tildes want to be about?" and I thought I'd throw in a related topic I've been thinking about recently. I am interested in the medium of communication itself, in addition to the goals and general philosophy of Tildes.
To start, the question of "what makes Tildes different from Reddit?" is interesting. One concern about Reddit is the huge proportion of low-quality posts and attention-chasing memes. A lot of Tildes users seem to be asking why that is the case, and whether a site like Tildes can be different.
Some say that Reddit is a victim of the profit cycle. As a commercial entity, Reddit must aim to bring in as many users as possible, thereby increasing advertisement revenue. And lowering the bar to new user entry means that you get more and more people who aren't really concerned with making thoughtful, high-value contributions to the discussions.
And there's certainly some truth to that. So by this model, Tildes should be different. It is non-commercial, not profit-driven, and it has mechanisms in place (and in development) that are specifically designed to weed out low-value contributions/contributors.
But still, even at this early stage, when the userbase is small and has been more selectively accumulated, some users are expressing concern that Tildes is showing signs of becoming just another Reddit. True or not -- I don't know.
Beyond the profit goal, another dimension for analysis is the medium itself. "Medium", as in the tools of communication; as in radio vs. print vs. television vs. web forum, etc. In 1985, Neil Postman wrote an interesting book called "Amusing Ourselves to Death" that reiterated Marshall McLuhan's idea that messages are partly shaped (and constrained) by the medium over which they are transmitted. And by extension, some media are better at communicating some types of ideas than others.
Postman was writing in 1985 when television was the dominant medium. He argued that the image-oriented medium of television was best suited for entertainment rather than rational argument or intellectual discourse. And thus the use of television (particularly commercial television) as a medium drifts away from thoughtful, intellectual engagement of the consumer, and toward gripping, decontextualized video clips that imprint ideas on the viewer and keep them coming back for more.
Television is just not as good as print media for communicating deep, complicated ideas that the consumer can engage with. (This isn't to say tv can't do it, but it's just not as good at it.)
So what about web forums like Reddit and Tildes? This is what I've been thinking a lot about recently, and I wonder what other Tildes users think about it.
Web forums are different from television for sure, in that they are mostly text-based, and users can interact with them by both posting text and responding to what others have posted.
But web forums are different from ye olde fashioned books too, in the sense that web forums seem to eschew longer, more highly-structured arguments. (Speaking of that, I hope this post isn't too long!) There seems to be a "king of the mountain" syndrome in web forums, in which posters vie for attention, while watching as posts rise to the top and are quickly replaced by newer, catchier posts.
Is this the fundamental dynamic, the metaphysics, of web forums -- the rapid turnover of short, attention-seeking posts?
If so, will Tildes get pulled down into that same whirlpool?
I don't think it has to be that way, but I believe it is a strong warning that we have to think hard about how the structure of the medium itself channels the type of content we will see here.
--
Stepping back further in Postman's argument is his deep concern about the effect of the dominant medium on popular discourse in a society.
When mainstream media is reduced to commercial jingles and quick, entertaining memes, the very foundation of liberal democratic society is at risk. People become uninformed about the important issues of the day, and become disengaged from the democratic process. As that disengagement increases, special interest groups (read: corporate lobbyists) fill the void of providing direction to governing bodies. Citizens then become more disillusioned and even more disengaged. This is a well-documented phenomenon called "the death spiral of democracy", and it scares the shit out of me.
When I first read Deimos' "Announcing Tildes" blog post, I saw a motivating philosophy that I feel is one of the most important issues of our time. We don't live in a perfect world right now, but we're in a world that appears to be on the edge of tragic yet avoidable decline; a world in which the values I assume many Tildes users would like to promote are being paved over by entities that only value profit.
I think that Tildes can be really, really important, and it needs the user base to deeply engage in the analysis of what will make it work. What is it about the web forum as "medium" that shapes the content we are exposed to here? And how can we devise the mechanisms that prevent it from degrading into another Reddit? Is a shared motivating philosophy enough, or do we need to re-engineer the medium itself?
So into the discussion of "what should Tildes be about?", this post is a long-winded way of saying that I think part of it should be about discussing how we can construct a sustainable new form of media that improves society and supports our highest values. What does this next generation medium look like?
--
Note: just to be clear, Deimos has already put a lot of great thought into this (cf. https://docs.tildes.net/). I'm just arguing that the topic of the medium and the mechanics of the medium should be a topic that all Tildistas engage with.
100% agreement here. The mission of ~ that particularly interests me is to create a space where substantive, in-depth discussion about difficult issues can exist on the internet. I think it's particularly telling that off-hand, I can only name a handful of places (e.g., /r/neutralpolitics) that even come close to that ideal, and even then are very easy to derail / require large volunteer efforts to maintain. If successful, imo, this is the aspect that will differentiate ~ from every other reddit-knockoff / discussion forum created in the last decade or so---and if I may proselytize for a second---completely change the character of the internet.
In that vein, I come down on the complete opposite side of /u/Bear -- majority rule is a fairly awful way to decide the structure of a discussion platform. Most users simply don't know (or don't care) about the constraints of the platform. For instance, in /r/listentothis, our whole schtick is that we specifically don't want users to up-vote artists they already know -- the purpose of the community is the promotion and discovery of new and under-appreciated artists. Yet, due to the constraints of reddit (I've discussed this at length elsewhere) we get semi-regular threads complaining about the difficulty of posting to /r/listentothis, and the most commonly proposed "solution" is to simply remove the one thing that makes l2t unique in the first place. The users simply see the (legitimate) problems on their end, and either don't know (or don't care) about the reasons why the rules were made that way in the first place. If we opened it up to a vote, I'm sure people would say to just turn off the bots -- fine, but then we're effectively just /r/music and have lost everything that once made that subreddit unique.
Instead, I think we need what you allude to -- a complete re-engineering of the entire medium (internet discussion platform) itself that is specifically geared towards promotion of high-quality content and substantive debate. There are several parts to this: establishing the appropriate culture, building proper moderation tools, but most importantly tuning the voting / bubbling mechanisms such that quality posts / discussion are given more weight automatically.
Clearly this is a difficult task -- what is a quality post? how do you give trusted users the power to promote (what they see) as good content without turning the system into a Digg-esque cabal of power-users? what about account selling? how do you tune the trust-system itself such that we don't end up with highly trusted memers? For that matter, how do you even measure the effect of the systems we build on the prevalence of the behaviours we want to see?
These are questions that have to be answered by trial and error, I suspect. Right now what we have is a lightning-fast website, a lot of high-minded ideals, and a whole bunch of good ideas and people giving suggestions. This is where establishing the right culture (especially in the beginning) is critical. You can't let "fluff" thrive for a while and then decide later to kill it dead without pissing a bunch of users off. You can't let people in political discussions throw around baseless, sourceless assertions unless you want the top comment of every thread to be a chest-beating supporter/hater of Trump. Further, the amount of work required to moderate such a system then increases exponentially, as this is now "the norm". If you want to fix it, you have to go and remove a good hundred comments from every thread.
Next you need to figure out how to get the system to automatically promote these good behaviours -- if it falls entirely onto moderator hands, you end up with all the issues of reddit (moderator work loads, divisions/distrust between mods and users, etc.).
Decisions about the structures, algorithms and accepted behavior that run ~ shouldn't be put up for a general vote, because it will end up as just another reddit. User feedback is an important aspect of any discussion platform but IMO for it to be most useful, it needs to be in the form of questions like "What issues do you have interacting with ___________ community". Then you take that feedback and discuss possible changes / fixes with the people who really understand how the community works (i.e., admins, moderators, high-trust users for that ~group) and have a good understanding of the goals of that particular community and how that change might further / be detrimental to them; this can be an iterative process as well.
Thank you for sharing your thoughts and experiences. I largely agree that these things run better when not put to public vote. I think it helps, so far, that we've got some people on board here (like yourself) that have a lot of experience, and I think that's a good start. I just wonder how we can ensure that (1) the feedback is not noisy (which imo it kind of is so far), (2) the feedback is properly processed down to root causes, and (3) the 'fixes' are able to deal with the issues—issues that often won't be only technological in nature.
All good questions.
Another interesting idea is building polls directly into the website, such that (using my listentothis example) ~music.listentothis could directly poll users asking "which of these ten artists have you heard of before", or "how often do you think songs from the same artist should be allowed", etc. Moderators can of course do this on reddit via a modpost, but they're fighting against the current (low-visibility of sticky-posts, no way to "process" user-input other than scanning very long threads, etc.)
This is where having the proposed 'fixes' being discussed by the higher-trust users is most beneficial. If anyone is going to see the root cause of these issues it's them (informed by the actual feedback of the users themselves). This doesn't guarantee that an issue will be properly addressed, but the important thing to remember is that the trust system isn't static -- if someone has a better idea on how to fix the problems of a community they can naturally move their way up and start addressing them (and likely, discover a few problems they hadn't thought of before!)
This is the real kicker -- how exactly does one engineer good discussion, for instance? How do you create a space where people who vehemently disagree can do so in a courteous manner and even, possibly, change each other's viewpoints? I don't know, but discussions like these are a good start.
I think I might have a solution to some of the 'good discussion' aspects.
Shirky's 'group within the group' is the people who care to an abnormal degree about the success of the group they are part of, and they naturally take on roles of governance and leadership within the group. These are the people we're aiming to turn into moderators using the trust system. One aspect of this I don't think gets enough emphasis is giving this inner group their own private communication space so they can discuss issues and develop consensus on how they act to support their groups. On reddit, most mod teams have slack, irc, discord, or private subs for this.
I think it's important to build that private communication space directly into tildes so that it's available, centralized here within the groups, and easy to use/access. I'd start with a private mod-backroom, probably built as a sub-group (so, something like ~music.listentothis.gov) that becomes available to the team members who earn their way in using the trust system. We're also going to need functionality equivalent to the moderator toolbox on reddit - particularly, the tagging system mods use to place tags and notes on user accounts that only those mod teams can see.
We also need to give them tools to both speak to all of the group members and gather feedback from all of the group members. Some form of announcements system that actively solicits feedback from the users is necessary, because there's no way to govern without the mandate it can provide. We need a system capable of reaching a quorum of any given group's users, and with voting and polling built right in - using offsite mechanisms has always been problematic on reddit for a variety of reasons (lack of engagement, hacking, etc). I don't think this is something that can be handled within the 'submission' framework - more likely, it needs a direct PM or some other unique system/element. In fact the more I think about it, the more I believe it should be a very unique system that everyone knows is associated with governance, not to be confused at all with any other system/mechanisms. If people don't want to be bothered by it, they can opt-out of 'tildes governance' in their profiles.
One other issue that concerns me is the tolerance for ignorance of a group's own rules and systems. Reddit lately has gone out of their way to make the wiki/sidebars/stickies and all other mechanisms whereby rules could be disseminated to users invisible and worthless. We need to find a way to get simple, plain-language rules and community norms in front of users in a way that cannot be ignored or overlooked - so that claiming ignorance of the rules is impossible. I think this can best be accomplished on the submission pages. Also the editing of the css styles on reddit to add an overlay in the reply box for comment norms is useful.
Dealing with the 'echo chamber' effect is, I think, going to come down to how we structure the groups... by being very careful that we're not creating groups that have a one-sided aspect to their content. This is only critical in heated debate spaces like ~politics (making .healthcare and .immigration instead of .libertarian and .liberal and .conservative). It's less problematic in the ~music or ~games spheres. Given how the echo chamber effect works, a subgroup might have their own ideas, but once the post bubbles up to the parent, those ideas are going to be challenged heavily by the larger parent group - so that should help keep the back-and-forth of ideas healthy.
This also works in comments with the lack of downvotes. Since you can't 'punish' poor or offensive content with a downvote, you are far, far more likely to reply with a counter or to upvote an already-existing counterpoint reply. That means that ideas should tend to attract their opposite viewpoints and clarifications in the replies to those ideas. We could probably come up with better ways to facilitate this. Someone here made a fantastic suggestion about using cuteness to defuse offensiveness that kinda blew my mind, and I can't find the damn link for it anymore. It represents a new kind of direction for the comment tagging that I've never seen before, and I think it's better than what we've been doing.
Yes, this is a critical aspect that's missing on reddit. Another aspect that might be interesting to try out is to let groups implement a temporary "ban" that forces the banned user to answer questions (set by mods / trusted users) about the rules they broke (or just the rules in general). It's the "draw me a dinosaur to get unbanned" from reddit, but actually focused on the issue at hand.
Damn, we need a search :p
I'll be able to find it again as soon as I can paginate through replies. :D
In economics, when you propose or test a theory, you often assume that everyone is rational in their decisions: they will buy the best product at the lowest cost. For this model to work, everyone must have perfect knowledge of prices and the quality of goods. Obviously, in the real world, a lot of consumers behave irrationally at some point -- not because they want to spend more, but perhaps because they don't know another shop sells the same product for less, or whether they are being ripped off.
The same applies to politics. Unless you're some kind of politically active psychopath, you vote to achieve the best result for yourself. Yet in politics, arguably more than anywhere else, people behave irrationally due to lack of information: they are lied to, or are unaware of certain issues, and can vote for a candidate who may actually make them worse off.
Nowadays, with issues such as biased media and false reporting both online and on TV, people don't have full information, so they accidentally act irrationally. I want Tildes to be a place that helps you gain this 'perfect knowledge' -- not necessarily just in politics but in all walks of life, with people who can understand the bigger issues in all sorts of areas, whether it's gun control or the season finale of Game of Thrones. We should make a place that shares and discusses this information.
I'm aware I'm essentially saying let's make a utopia which is next to impossible but we should at least try to strive towards that general direction, and hey it will be good fun trying to get there.
I think irrationality is the rule, rather than the exception, for most human activity. Confirmation bias and cognitive dissonance are present in just about every domain of human life, and we're just rational enough to get by and procreate. I'm probably not able to objectively judge if this is a good or bad thing, but it can certainly be frustrating to deal with.
I think what you've described is a good direction to aim at. If we can become aware of the tells for cognitive dissonance and confirmation bias and whatever other irrational habits we may engage in, and decide to go beyond them, that would be phenomenal. If this were a place for aggregating something akin to this 'perfect knowledge', that would be pretty amazing (though I'd hope it is mediated by humility).
Have any ideas on where we could start on this?
Don’t forget the complementary side of confirmation bias too: authority-figure contrarianism. Some people, especially those who are conspiratorially minded, instantly distrust and disbelieve anything from authority figures (of any sort) just because they are the source of that information. The anti-vax movement is a perfect example of this, where anything a doctor, health researcher, or government regulatory body says is dismissed because “they are in the pocket of big pharma” or whatever other justification they can come up with to avoid confronting their own bias. Rather than judging information on its own merits, they judge based on their perception of the “true” motivation behind the person who releases it.
How do we prevent this? Requiring sourcing for claims and assisting in objectively vetting those sources, would be my recommendation. But achieving objectivity in that regard is incredibly difficult, for obvious reasons, especially when it comes to opinion based news and subjective experiences/perceptions of events. But it’s better than simply allowing people to make wild, unsubstantiated claims whenever/wherever they please without being required to back them up in any way, at least.
Part of the problem is the lack of memory on issues like this. We can remedy that.
You've seen those wall-of-link rebuttals that pop up on occasion in /r/politics or similar places where someone gets so sick of explaining a topic that they collect all relevant links/posts/data and make a sort-of monster-post about it. Those get lost, and don't get enough long-term visibility.
I'd like posts like that to stick around and become a sort of bible-page for the group that creates them. Whenever someone brings up the same old BS again, they get linked to that by the users - and we could even do this with shorthand links like we do with @usernames and ~groups. They can either walk away and ignore it, or tear it down and improve it. Over time, those sorts of posts should evolve into a fairly well vetted and thoroughly sourced response to common discussions. Think of it like finding a way to set a discussion 'free' of the format of thread after thread after thread and enter a more permanent record... probably stored in the group's wiki pages using a unique system.
If this record exists, it's very hard to derail or enter a cyclical circlejerk debate when those subjects pop up, because that debate has to start over right where the last one left off, instead of starting from scratch again and again and again.
So, we need to give groups a collective 'memory' to build and draw upon.
This also goes beyond discussions - music forums can use it for music charts, etc.
Ever seen "Adam ruins everything?" I like to think of it like having an Adam at your beck and call to lay the smack of education down whenever it's needed. ;)
One way that really works is finding a way to make the laziest/easiest 'default' behavior into the best behavior using the design of the system itself. Unfortunately this is a very hard thing to do.
Interesting. Do you know of any instances where this has been done, even in any other domains? If there are specific examples maybe it will stimulate creativity.
I can give you some examples from tildes and the plans we have...
Link submissions having a (suggest title) feature that will auto-populate the title, tags, etc. Far easier and lazier to one-click that button and then edit the results than it is to type it all in, which most people, being lazy, would never bother to do in the first place. I think this is the key to making tildes the easiest to use on mobile of any website. Paste the link, touch suggest, touch submit.
No downvoting. That mechanic is a gateway to an entire class of bad behaviors. Replacing it with tags is at least tossing the downvote or upvote or tag into a bucket that provides some context for the action, rather than a binary up/down which provides no context at all. Plus, you're more likely to upvote a reply that expresses your own views (or make one) if you can't just downvote the parent and move on.
Stick to one-click or one-touch actions (the epitome of laziness) for behaviors you want to promote and make commonplace - like comment tagging, saving links, etc. Make the workflow more intensive (multiple clicks, dropdowns etc) for behaviors you want to provide some small barrier to using (such as reports) so people won't use them frivolously.
Reply box at the bottom - having to scroll down past the existing discussion should help cut down on duplication and low-effort replies, rather than having the reply at the top screaming 'fill me with your bs before reading the thread'.
Thank you for sharing these! They do sound like they may encourage the kind of behavior you would prefer.
I should probably clarify though: when I saw your comment saying "One way that really works is", my asking for examples was basically a sort of covert "citation needed". :P I am curious about whether this sort of thing has demonstrably "really worked" in other contexts, or if your phrasing was more just aspirational - nothing wrong with that if it was - it's conversational, and persuasive, it just sort of piqued my interest in what is behind that belief.
Stack Overflow's question submission is full of these - it guides you through adding tags to your question with autocomplete and related tags, and when you type in the question name it performs a search of related questions with a message like "Maybe one of these solves your problem and you don't need to make your own post".
I think it's great for guiding users into falling into the pit of success.
See nudge theory and similar methods that use incentives or subtle design to make the best behavior into the easiest choice.
I don't believe these are the fundamental dynamics or metaphysics of web forums; rather, they are the fundamental dynamics of strangers seeking to establish their respective places in unstructured social interactions. Kids on a playground, survivors stranded on an island, users in a web forum - everyone is just trying to establish their role. I'll admit I was disappointed to see usernames in the posting and commenting interface when I logged in to tildes for the first time (which happened to be today).
Perhaps take away a user's identifier - their username, account number, etc. - and this dynamic will lose its power. Replace it instead with a composite score that quantifies their contribution to the discourse; this should be doable given the thinking that Deimos has put into the architecture of comment tagging.
I think the *chan boards have shown just how much “quality” can come out of completely identityless communication between strangers. Yes, some wonderful gems can be found buried there but the vast, vast, vast majority of output in those style of communities is just noise, hateful opinions and utterly pointless garbage, IMO.
The chan boards have shown that anon users and throwaways are the wrong path, I completely agree.
I'm not advocating throwaways or complete lack of identifiers, but rather proposing the replacement of a user's name in the interface with a composite score showing their overall contribution to the community - something like 550.43.3 (ThoughtfulComments.HighEffortPosts.LowQualityComments) - which could be tracked through users' tagging of that users posts/comments. Or perhaps the composite score could accompany the username, if indeed usernames are really necessary.
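As a rough sketch of what generating such a composite score could look like, here's a minimal example that aggregates the comment tags a user has received into the dotted format described above. All the tag names and the score format are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical tag categories, in display order. The real tag vocabulary
# would come from the site's comment-tagging system.
SCORE_FIELDS = ["thoughtful", "high_effort", "low_quality"]

def composite_score(tag_events):
    """Aggregate received comment tags into a dotted score like '550.43.3'
    (ThoughtfulComments.HighEffortPosts.LowQualityComments)."""
    counts = Counter(tag_events)
    return ".".join(str(counts.get(field, 0)) for field in SCORE_FIELDS)

# Example: tags other users have applied to one user's contributions.
tags = ["thoughtful"] * 550 + ["high_effort"] * 43 + ["low_quality"] * 3
print(composite_score(tags))  # -> 550.43.3
```

The appeal of a positional format like this is that it keeps the categories separate instead of collapsing everything into one karma number, so a prolific low-quality poster can't hide behind volume.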
It’s certainly an interesting idea. However some issues I can spot with it right off the bat.
How would habitual bad actors be identified in such a system? It certainly couldn’t be crowdsourced since the identifiers would be constantly in flux.
Doesn’t that system lead to serious potential for creating even worse echo chambers and groupthink than the current system? With some sort of highly visible score being the primary identifier on the site, many people will constantly attempt to maximize that score rather than meaningfully contribute, and the easiest way to do that is to simply say high-appeal, uncontroversial things that other people agree with.
My thinking is that the composite score could be used to set filters on an individual and/or group basis, such that those who are constantly rated as making low-value comments would be filtered from view or flaired/tagged aggressively.
This really is a tough nut to crack. What you're describing is what has become of reddit, and my suggestion is that we test whether some kind of qualitative scoring of a user's cumulative contributions might be able to shine a spotlight on low-value contributors. If the community can train itself to use comment tags and voting as they are intended, then the score would reflect not high-appeal, uncontroversial things, but rather high-quality contributions.
I don't know if it would work, but I think it would be worth testing, as I agree with OP's core point that ~ is currently pursuing a similar interface to reddit / discourse, and will likely result in the same kinds of posting and commenting behaviors.
I agree with your general idea. To me, usernames have several disadvantages and reinforce identity dynamics that I don't usually like.
However, I disagree with the composite score idea since I think showing any kind of number would lead to some form of karma/trust grabbing strategies, and in my opinion that should be avoided.
I already posted an idea similar to yours, so I'm just linking to it instead of rewriting all of it. It's here. Take a look at the comments too, since other people and I discussed some modifications to what I proposed. I'm not sure if or how any of that would finally be implemented, but I would certainly prefer an opaque trust system, in order not to have "authorities" develop. Sure, some threads or groups might work best with some kind of recognition (I'm thinking of flairs in r/AskHistorians, for example). But for most groups I don't think any kind of identity is required, and its absence may even be beneficial. Basically, what I suggested was creating random per-thread usernames while keeping an underlying identifier for modding/trust purposes for a limited time. The same user would have the same name within a thread, for coherence, while avoiding cross-thread identification - and, although that wasn't the first intention, the dynamics you mention could maybe be avoided this way too.
This way we could avoid authority figures, identity/role dynamics, and karma-whoring, while at the same time retaining the benefits of earned trust and tools for punishing bad behavior.
Thanks for linking that post - I am indeed late to the party!
In particular, I agree with everything you replied in this comment, and furthermore would say that we're indeed talking about the same idea.
I'm not hung up on making the "composite score" as I've referred to it visible - moreso that it could run in the background, generated from the community's tagging of a user's activity, and that it could be used to drive trustworthiness flair or similar. In the event that any visible artifact is undesirable, and we really want the posts/comments to stand utterly on their own, the score could be used to drive the visibility of a user's comments/posts in default filters, users' individual filters and so on.
One issue I have with this is that it's sacrificing a core role that forums can't really do without - community leaders. You're essentially sacrificing that role (and all other roles) without a replacement for it. Can 'leaderless' communities actually function? Even tildes is not leaderless - Deimos is going to be there at the top of the tree, even if he's the only name publicly known as a leader. How are we going to identify editors, curators, and moderators to their own groups without handles? How are we going to keep them accountable in the logs? Ditching the handles makes it a lot easier for bad actors to hide in the noise - once you've trained people not to look at usernames.
We might be able to come up with a 'forum mode' for this, though. There's no reason that all of tildes all has to operate on the same basic mechanics beyond the group-subgroup-trust model - presentation is definitely up for grabs and experimentation.
So, I agree with you; there are cases where handles are indispensable. The ones you've mentioned - editors, curators, moderators - are excellent examples. However, I strenuously object (is that how it works?) to those same folks participating in discussions under those handles outside of the actions inherent to those roles. An authority is - by virtue of their standing and the tendency of a community to be vulnerable to common errors in thinking - unable to participate in discussions in the same way as the average user.
The concept of scoring -> filtering (or perhaps flairing users) that I'm discussing here is meant as a solution to preventing abusive patterns.
This is awesome to hear!
I think reddit accidentally struck a decent balance on that with their ability to distinguish moderator comments in green - giving the presentation of an 'official' capacity. That's already here on tildes in a way - the unique color of the official posts made by Deimos.
Several mod teams also made 'official' accounts for posting rules and other official interactions - though those were usually made with the intent to avoid being witch-hunted by users for doing their moderation jobs under their own handles. It was pretty common for mods to have their entire histories downvoted and be stalked by anti-authoritarian users for several years, until reddit implemented systems to block and punish people for downvoting from profile pages.
Far too many people let that little green tag go to their heads, however - barging into any/all discussions using their green mod tag as a badge of authority. That never sat well with me - I'd only ever use mine when posting announcements and 'official' responses to questions directed to the moderation team in other threads.
We need to find a way to handle that lack of maturity. Having hundreds or thousands of mods per group (instead of a dozen or less) should help with that somewhat - the flag has less uniqueness when it's in the hands of a group that large. It doesn't carry the same 'bragging rights' here that it does on reddit.
I'm glad to know that. I'm not sure I understand this, though:
Default filters: Set a threshold for standard visibility of posts. The threshold could be a given trustworthiness / high-value contribution / helpfulness / etc. score that "follows" the user, even if never made visible to other users, and even when a user has dissociated their account from participations in a given / all discussions. Start all users at a default score that is above the threshold, with the ability for their score to rise/fall based on the community's tagging of their participations. If the score falls below the threshold, their comments could be demoted further down page, or at severe levels, hidden altogether (effectively a shadowban, though I'd not recommend reddit's old (still operational?) model of shadowbanning without notification or recourse).
Individual filters: allow me as a user to specify that I don't want to see a users' posts/comments at all if their score, for quality of submissions or for effort of comments, falls below thresholds that I select/specify.
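The pair of filters described in the two points above could be sketched roughly like this. Everything here - the names, the thresholds, and the tiered demoted/hidden behavior - is my own invented illustration of the idea, not anything actually planned for Tildes:

```python
# Hypothetical sketch of the "default filter" + "individual filter" idea.
# All constants and names here are invented for illustration.

DEFAULT_SCORE = 1.0    # every new user starts above the site threshold
SITE_THRESHOLD = 0.5   # below this, comments get demoted/hidden by default

def visibility(score, site_threshold=SITE_THRESHOLD, user_threshold=None):
    """Return how a user's comment should be displayed.

    site_threshold implements the 'default filter'; user_threshold is an
    optional stricter per-viewer setting (the 'individual filter').
    """
    threshold = max(site_threshold, user_threshold or 0.0)
    if score >= threshold:
        return "normal"
    elif score >= threshold / 2:
        return "demoted"   # pushed further down the page
    else:
        return "hidden"    # effectively invisible (ideally with notification)
```

A score sitting between the threshold and half of it demotes rather than hides, so a borderline user degrades gracefully instead of being silently shadowbanned outright.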
Got it, thanks. I'm not sure if affecting visibility of comments is a good idea though. I think votes should probably take care of that. The difference in weight between different users should, imo, not be very big at any point.
I believe tagging should be enough. Once a user has been tagged as trolling/flaming enough times, a high-trust user (or several) could be warned and intervene.
I originally posted this in this comment in your conversation with @Tom_Richardson, but since user tagging doesn't work, I thought it'd be better to reply to you directly.
Doesn't tagging only solve the problem on a per topic basis though? With no way for users (trusted or not) to identify habitual bad actors on a site wide basis, since there is no permanent identifier, it means so long as users don't concentrate their bad behavior in one topic their overall habitual behavior will go largely unnoticed. Look at Hypnotoad as an example. Each of his posts in isolation could simply be viewed as harmless enough or even an honest mistake. However when looked at in context of all their other actions on the site, the pattern is far more clear that they had ill intent. With no permanent identifiers, how would habitual bad behavior even be recognized and dealt with?
IMO Identity and Accountability go hand in hand. Remove Identity and you render Accountability nearly impossible, especially since you can no longer rely on crowdsourced identification of bad-faith actors.
Not necessarily, I think. If several users see different flame comments and tag them as such, after X number of such tags have been appended to that user, high trusted users or mods or Deimos could just look into that person. It doesn't matter if the comments are made in different topics or if users identify the username from post to post, I think, since all goes to the same account.
With Hypnotoad, I think many users identified their bad faith. If every time HT posted one of their weird topics a different person noticed at least one of them and tagged it as flaming or trolling, they would have accumulated these bad tags and those could have been reported to mods. For example, I only noticed them in the homosexual marriage thread, and even replied trying to get them to post in a different fashion. But I hadn't noticed the rest of their posts, or their lack of comments. But if I had tagged that post and someone else had tagged another post, etc., the "mods" could have received some warning saying "Hypnotoad has 6 trolling reports". These reports are crowdsourced. Or are you referring to something else?
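The crowdsourced warning described above could work something like the sketch below. The threshold, the tag names, the account-id shape, and the alert format are all hypothetical, chosen only to illustrate that tags from different threads land on the same underlying account:

```python
# Hypothetical sketch: crowdsourced tag reports accumulating against one
# account, regardless of which thread or group each tag was applied in.
from collections import Counter

TROLL_REPORT_THRESHOLD = 6   # assumed value, matching the example above

tag_counts = Counter()       # account id -> number of troll/flame tags

def record_tag(account_id, tag):
    """Record a community tag; return an alert string the moment the
    threshold is crossed, so high-trust users/mods can investigate."""
    if tag in ("trolling", "flame"):
        tag_counts[account_id] += 1
        if tag_counts[account_id] == TROLL_REPORT_THRESHOLD:
            return f"{account_id} has {TROLL_REPORT_THRESHOLD} trolling reports"
    return None
```

Because the counter is keyed on the account rather than any visible username, the pattern surfaces even if each tagger only ever saw one bad post.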
Also, I'm thinking, having the usernames visible would maybe promote some kind of witch hunting. So if somebody didn't like the tone of someone's comment, they might be over-aware of that user's comments and maybe be harsher than needed.
Trust will be confined to the specific groups people are active in, and it decays over time; it's not a site-wide mechanic. A high-trust user in one group will likely have none in most others.
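As an illustration of per-group trust that decays over time, one simple model is an exponential half-life. The 90-day half-life and the function shape here are my own assumptions, not Tildes' actual mechanics:

```python
# Hypothetical sketch of per-group trust with time decay.
# The half-life is an invented assumption for illustration.

HALF_LIFE_DAYS = 90.0   # assumed: trust halves after ~3 months of inactivity

def decayed_trust(trust_at_last_activity, days_inactive):
    """Trust is tracked per group, so a user carries a separate value for
    each group they participate in, and each value decays independently."""
    return trust_at_last_activity * 0.5 ** (days_inactive / HALF_LIFE_DAYS)
```

Under this model a user who stops participating in a group slowly loses standing there without it affecting any other group's value for them.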
And so sure, mods could identify that one instance of bad behavior in the group they have high trust in, and deal with it accordingly there and there alone, but without static identity they can't look for longer-term patterns of behavior from that same user across the whole site. This would mean the responsibility to detect patterns of bad behavior would all be on @deimos, and he would have to look into every single instance of bad behavior since that's the only thing trusted users could report. When the site is small that can work, but at scale that falls apart.
E.g. In Hypnotoad's case, @deimos was vaguely aware of him but I am the one that spotted the pattern of bad behavior and made @deimos aware of it... and I was only able to do that because Hypnotoad was a name I began to recognize over the course of many such threads he made across the entire site. Had his identifier been some ever-changing value across all those instances, there is no way I could have noticed that all those idiotic topics and comments were from the same person. That is the value of static identity when it comes to accountability.
P.S. My other biggest issue with this system is: what problem is removing identity even trying to solve? Because it clearly creates a ton of new ones that are incredibly hard, if not impossible, to solve, and I think those new problems far outweigh any potential gains you would get.
Replying to your edit.
Several issues. I think most of them are pointed out in the anonymity thread, and also @Tom_Richardson added some more above.
To summarize before I fall asleep:
Privacy:
-users can't be easily identified by adding up info from their comments, which many people are not comfortable with
-users can participate in threads they wouldn't otherwise (talking about private stuff, identifying stuff)
-users would be disincentivized from deleting past comments, leaving the site's threads intact
Dynamics:
-avoiding figures of authority (voting someone just for who they are, or because of their past performance, their reputation, etc., instead of just for the content they share at a particular time)
-avoiding creation of cliques of people voting each other because they're friends, instead of because of what they say
-avoiding witch hunting
And I'm sure I'm forgetting some more.
I actually don't think it creates a ton of problems. Unless I'm forgetting something (which might very well be) you mentioned just identifying patterns. But troll patterns can emerge from individual reporting/tagging, I think, even across topics and groups. What other problems does it create?
E: spelling
Yeah sorry that is a really bad habit of mine... I always assume people will not respond immediately so wind up editing a bunch, either to attempt to clarify or add more to my comment, while they are already in the process of responding. It's something I really need to stop doing. Sorry about that.
That is not a positive IMO. Identity is tied to accountability. If a user doesn't want to be identified they can do so using static identifiers too, especially if the site is privacy conscious, which tildes is. User history has no pagination yet, but even once it does, the history is likely to only show a limited amount (3-6 months maybe) then be unavailable to view to anyone but admins. That amount of time is just enough that trusted users can easily identify patterns of behavior, but not enough that users can be easily doxxed unless those users are idiots about revealing personal information.
That is solved with throwaways and @deimos has already said he plans on adding support for throwaway topic and comments for accounts.
That is solved by thread archiving with username "disassociation" tied to it. @deimos has also talked about allowing people to manually "disassociate" from comments/topics as well... perhaps even having a setting that does it for you after X months.
What is wrong with figures of authority? Not everyone's opinions should be judged the same or on equal footing to start. Figures of authority, based on being experts in their particular field of study or their reputation/history on the site, is an important factor in judging whether or not someone should be believed and trusted. One can still judge someone's individual comments on their own merit but their past is and should be a major factor for consideration, IMO.
Is about the only thing that anonymity actually does help with. However, I think you have a misconception about just how many trusted users this site is potentially going to have. If 20% of the site's total active users are "high trust" users that can "peek" at the real identity as you suggested they should be able to, then that renders anonymity for the sake of preventing witch hunts rather impotent. Not to mention that the way you stop witch hunts IMO is by severely punishing those that witch hunt... not by hamstringing yourself and every user on the site by rendering crowdsourced detection of bad behavior impossible. This is also antithetical to one of the major overall goals of the site:
https://docs.tildes.net/overall-goals#trust-people-but-punish-abusers
G.I.F.T. - Greater Internet Fuckwad Theory.
Don't worry about the edit. Luckily I saw it.
(BTW, just to be clear, I'm not advocating true anonymity or pseudo-anonymity by default)
I also argued somewhere else I can't remember that, IMO, this shouldn't be the case. If it depended on me I wouldn't have user history available to anyone except the user, and only after a number of reports had been received about that person would I allow high-trust users to see the comment history. But in any case, if some history should be shown, 3-6 months is a lot of time. I admittedly don't have reddit mod experience, so maybe I'm missing something, but I believe there is no need for such a long period. If a user is disruptive, that is usually apparent. If you see a "bad" comment, would you inspect 3-6 months of comments to see if there is a pattern? I think that is disproportionate. And in any case, history shouldn't be kept based on time, but on amount of submissions, I think (say 50 submissions).
As you say, identity is tied to accountability, but identifiability isn't required for that. As long as reports of individual submissions go to the same underlying identity, there is no need for it to be superficially identifiable.
Now I think I'm lost. Isn't this precisely what @Tom_Richardson and I were arguing for and you against? I mean, not showing the username. I'm not sure how this is different, honestly. I actually think Deimos mentioned it in relation to that thread.
Yeah, cool. But notice that that only works if the throwaway/anonymity system is implemented. I'm still confused whether we are maybe arguing for the same thing with different words or something.
OK, please note that I'm not arguing that all comments should be pseudo-anonymous by default, but that this should be an option. While in several cases (expertise) having authority figures makes sense, for most others it's not that important. I don't know, maybe I'm too impressionable or something, but I have noticed (in forums in general) that I tend to vote for/support people I've come to like. While that is not something terrible, I think I lose some objectivity when I catch myself doing that. It goes the other way around as well. In my view, assessing a submission should not be related to past performance. And actually, otherwise, it becomes some sort of site-wide alternative trust system.
Well, first, as I said, I wouldn't allow those peeks automatically, but only after a number of reports have been received. And second, I think high-trust users wouldn't be the ones entering a witch hunt.
:/ I still don't get why you think crowdsourced detection is impossible. Why doesn't crowdsourced reporting work under pseudo-anonymity when it works with the "throwaway" system (which I'm trying to show is basically the same thing)?
Thanks for the link, but isn't it related to all types of anonymity? Wouldn't the current username system fall into that as well? And notice this fragment:
And with the trust and punishment system, there shouldn't be a especially strong GIFT effect, anyway.
PS: Look, my computer is unusable, so I'm writing all of this on mobile. Maybe that's why I'm doing such a lousy job at explaining myself, since it's difficult to re-read what I wrote and restructure it in this miniature box. So sorry in advance for the length and lack of succinctness.
Yeah, it's entirely possible we're similar in mind, just struggling to come to terms with each other's specific overall meaning. And I totally understand how much of a PITA using mobile is to formulate long comments. Especially ones arguing over minutiae in which you need to reference the previous comment of another user, so no worries. :P
One thing I will say however is that for throwaway topics and comments as I think @deimos envisions them: When all accounts are static identifiers and that is the default, it makes those using the throwaway topics and comments option stand out all that much more, and so naturally puts them under more intense scrutiny as far as monitoring their behavior goes. If they misbehave in a significant way while attempting to hide under the guise of throwaway, they will be much more likely to be investigated by admins, who will still be able to "peek under the hood" at their true identity. If it turns out they have abused the throwaway mechanic, they will be much more likely to be banned. However, if everyone is operating under non-static identifiers by default, then there is no distinguishing between users, so nobody stands out except in particularly egregious cases.
So with this system, static usernames + option for throwaway, it allows people to still talk freely about things they do not wish others that perhaps know their username to know about, but while also providing an even stronger force multiplier on scrutiny when they do so.
Thanks for understanding.
Yes, that is exactly my suggestion. So I guess we agree more than it seemed at first.
I'd still want to see a hard limit in both "public" (or low tier) access to user history and "high tier" (seeing through "throwaways") access to it. If high tier users have automatic access to all (or 3-6 month) user posts across usernames, then the pseudo-anonymity feature is defeated, and I for one would just prefer using a real throwaway or just not participate in any discussion I wouldn't want linked to my main account. I mean, I can give some trust to high trust users, but not a lot either. I'd prefer to have 0 trust in my account than have high trust and be unable to manage my history and access to it.
I might be on the overcautious side of this, I guess. Not sure how many people feel the same way.
LOL yeah I suspect we are on the same page then and were "arguing" over nothing. It was still a good discussion though so I don't mind.
As for gating viewable user history behind trust levels, there definitely has been talk of that. What is likely to be implemented is probably something like this:
Those with no account (viewing the site publicly) will not be able to look at user history at all.
Lurkers with no trust anywhere will also likely not be able to see history either.
Low trust will likely have a limited view of user history, perhaps as it is now, i.e. no pagination.
Moderate trust users may be able to look back a month or at X number of comments.
High trust users may be able to look back to the limit (3-6 months) or X*Y number of comments.
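The tiers listed above could be captured in a simple lookup. The tier names and day counts below are invented placeholders for the "X" and "Y" values, not actual Tildes settings:

```python
# Hypothetical sketch of trust-tiered history visibility.
# Tier names and window sizes are invented for illustration.

HISTORY_LIMIT_DAYS = 180   # assumed site-wide maximum (~6 months)

def visible_history_days(trust_tier):
    """Map a viewer's trust tier to how far back they may browse
    another user's history, per the tiers sketched above."""
    limits = {
        "anonymous": 0,               # no account: no history at all
        "lurker": 0,                  # no trust anywhere: none either
        "low": 7,                     # a limited, unpaginated view
        "moderate": 30,               # roughly a month back
        "high": HISTORY_LIMIT_DAYS,   # up to the site-wide limit
    }
    return limits.get(trust_tier, 0)  # unknown tiers default to nothing
```

Defaulting unknown tiers to zero keeps the failure mode private rather than exposing history by accident.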
Now the exception to this may be topic/link posting history. I think there is an argument for potentially allowing people to see a much more extended post history to assist discoverability of interesting submissions. Oftentimes when someone submits something very thought-provoking that I particularly enjoyed, I like being able to look at their other submissions. My only thought is perhaps limiting that extended viewable post history to links only, rather than self-text, since those tend to be more personal. Can you see any potential problems with that?
Another thing that is possible is having history length be a user setting as well. Such as having the default set to the minimum value necessary for effective moderation (e.g. 3-6 months or whatever the experienced mods suggest it should be) for high trust users, but allowing users to opt in to showing even more history should they desire... as many users, myself included, prefer to be transparent and don't mind our entire history being visible, especially to ourselves so we can go back and find old comments/submissions.
All of that seems reasonable to me, except the fact that high-trust users should have automatic access to the maximum. I think that should be available only when suspected bad behavior is detected through crowdsourced reporting. Or at least as an opt-in feature (similar to WhatsApp showing you read receipts only if you allow others to see yours).
I understand the feeling. But I think the "right to privacy" should come before the "right to curiosity".
Also, isn't "following" someone's posts similar to what Facebook or even Reddit's profiles are doing, and what is driving some people off? I very very rarely look into what anyone has posted, so I personally don't see a big advantage here, whether they're links or self posts.
Yes, I definitely agree your own history should be permanently accessible to you. But not to others.
I guess it all comes down to how each person uses the site. From my perspective, the more choice we have to configure these settings individually, the better.
I understand earning trust in a specific group, and losing it. However, don't you think trolling/flaming should have site-wide effects? Like, if Hypnotoad were getting flame tags, I think it would make sense to penalize them site-wide. Otherwise, if they behave the same everywhere, there will have to be many punishment processes for the same user, one for each group. That sounds like a bit too much work.
I don't know, maybe some people will be assholes just in some groups, but I'm not so sure and I don't believe it's worth the extra work.
So, in my head the idea is that even if identity is not visible or static, high-trust users (mods) should be able to see the underlying identity if necessary. And in any case, couldn't patterns be revealed by users' tags? I mean, you saw the pattern in HT because you were alert. But if each person that saw a problem with any of those posts tagged it, and you (or whoever) got a notification saying "X user has surpassed the threshold of trolling votes", wouldn't that save you the effort of looking for the pattern?
I'm not sure I'm making myself clear. It's pretty late already. :S
Hmm if it's truly a requirement that trust have no site-wide component, then I agree that dissociating posts from usernames (visibly, not in the background) would create more problems than good.
However, I think there are certain aspects of trust, highlighted excellently by the Hypnotoad case described above, where a site-wide dimension of trust would be a better solution if it means having the ability to test whether thread-specific pseudonyms can truly improve discourse and thereby help ~ avoid the same pitfalls as other discussion platforms.
This largely works, except it presents an issue in trying to identify who's talking to whom (in discussions)
Unless there'd be identifiers per thread
Identifiers per thread would work well I think.
Supposedly the dominance hierarchy is wired into our neurobiology through evolution from a time that is older than trees. What you say about the cause makes sense to me - though I am not as certain as you are that it is a bad thing fundamentally.
I'm not certain that it's bad per se, I'm more responding to the OP's concern that by adopting the same functionality and visible interface elements as most web forums, ~tildes may well get pulled down by the same dynamics. I don't know that eliminating usernames or other unique identifiers from the interface would help this, but it could be tested.
Same as how you pointed out that text-based mediums aren't all the same (a message board does not equal a book), I would say the same is true of message board to message board.
From my experience on message boards over the past... idk, decade and a half-ish, give or take, it also depends on what the goal is. And by goal, I don't necessarily mean community values so much as what the users are trying to make together. The format of Tildes follows that of many link aggregator sites, in which the goal kind of was "news," which is why it was about posts rising to the top then being replaced.
But I used to spend a lot of time on roleplaying boards and forums or boards based around other hobbies, and perhaps there were posters vying for attention, but you'd also get long, sustained engagement around one thread or topic because the goal was creating a story together.
Not entirely sure yet if these thoughts are relevant, but just throwing them out there: I think that people have been coming at Tildes as place where people hold debates, and while I think it's good to have constructive arguments, a difficulty with some debates is that the goal is convincing the other party. The conversation ends when that goal is achieved, or if both parties decide there is no middle ground.
Rather than debate, perhaps we need to start thinking more deeply about "discussion," which can draw on longer as people go deeper together, if that makes sense.
As for the "king of the mountain" sentiment: I'm not sure if there can be a way to create a more flat community, but I do think that this sentiment will definitely be the case on Tildes. If the site aspires to have a system by which users are rated as "high trust" versus "low trust," that will incentivize people to vie for that attention, though hopefully positively. I just worry that the measurement will occur by the number of posts and attention and noise someone can make rather than by quality of engagement. (We're not necessarily pursuing the moderator model here on Tildes, which I think will be interesting, but some of the best mods that we've had on my sub were people who weren't necessarily well-known posters within that community - I saw that the few contributions they had, despite mostly lurking, were very positive. Or perhaps they just got lost in the riff-raff.)
Some thoughts as I'm reading this:
I am not certain I agree with this - or at least the general sentiment.
Humans have evolved with the spoken word, a lot of people even vocalize internally as they read - and many many find it easier to comprehend a topic if they speak with someone about it rather than read about it. Reading is new... like, really new. And while it has flourished as a tool for preserving and transmitting ideas over space and time, this doesn't mean it is necessarily best for that purpose as technology and history continue to evolve.
Humans are primed to react to novelty with interest, because it might be something good or it might be something dangerous, and because we haven't evolved with television our senses are sort of confused and think it is real enough to pay attention to.
Entertainment and drama on television is packed full of stories borne out of our collective unconscious (including whatever propaganda messages the show creators want to put into it), which make sense for us to attend to because there might be something useful to us in those stories.
The fact that there is more entertainment on television than educational material is not evidence that a visual/auditory medium is not suited for informative content, merely that it is useful for entertainment. As a counter example, I would point to the MASSIVE amount of content on youtube that is deep, meaningful, intelligent, instructive, thought-provoking, etc. - and that exists alongside the entertainment content. I would say that around 80% of the channels I regularly view on youtube are not primarily entertainment-oriented.
What is most visible to the average user depends on many factors - not least of which is the same problem reddit has that the lowest common denominator generates the most interest, and for youtube that makes it profitable to feature that above other things. We saw with the "adpocalypse" that there are many hidden variables at play in what sort of content is incentivized on that platform.
I think you touch on some interesting and useful ideas, and I want to express the sentiment that history does not repeat - it evolves. Humans are really clever, and I mean really clever. We find fun and novel ways to use even the simplest of tools. I'd say at least in some parts of the internet, meta-humor and irony are so evolved as to be alien to those who are not in contact with those segments of the web. In my opinion, the medium is a strong part of how people use it, but I think we're so creative and imaginative that the role it plays is primarily one of persuasion, and that beyond that we can mutate just about any medium and how we use it to suit our needs. If the medium persuades us toward in-depth, high-quality content and discussions, that's great - but it all depends on what the people involved want.
In the link @deimos posted not long ago, it spoke about how you have to be mindful about assuming what your users want. In my opinion, it ought to be a sort of co-created thing via continual dialogue between the idealistic designer/admin and the users, without one side dominating too much.
As for what I want on tildes, I find the idea of a place for general discussion on whatever without the risk of flaming, and maybe with at least a semblance of attempts at civility and mutual understanding and respect in the case of disagreements, to be appealing. I think if such a thing were widely used, it would be tremendously useful for healing rifts between peoples, and possibly for sorting out ideas independent of egos. If that is easiest with a text-only medium, that's fine - but I think it is a concept that can operate in just about any media. I would enjoy entertainment content on the site - joy and humor are not evil... and a balance is in there somewhere.
I just wanted to point out that while what you say is true, and reading is newer than speaking, TV is not equivalent to a conversation as I think you imply. TV is just listening; there is no two-way interaction.
On the other hand, spoken word flies away, with written word you can go back, reread and analyze, which is not really feasible with TV.
There are some great ideas in this thread. I haven't replied until now because I didn't want to get in the way of these other comments (and because I couldn't keep up...), but I do want to make a few additional points.
For one, this type of brainstorming about Tildes' future is really important, and it is encouraging to see so many people put thought and energy into thinking about this.
There were several comments here about what deimos has built and what his intentions are. I'll go out on a limb and say that I'm pretty sure deimos wants to see the users take a sense of ownership over building and maintaining the environment that they want to see. Not necessarily coding, but shaping the topics and culture. There will be a lot of fun leisure conversations and links on Tildes, but I'm especially looking forward to connecting with those of you who see how Tildes can be something important.
The main reason I wanted to leave this last comment here was to tie in a couple articles that I think frame the value of Tildes.
The first is an article by Tobias Rose-Stockwell called "This Is How Your Fear and Outrage Are Being Sold for Profit", and I highly recommend you read it: https://medium.com/@tobiasrose/the-enemy-in-our-feeds-e86511488de The article is about how the modern news media has given up its societal role of informing the public, and now instead preys on our mental habits with algorithmically crafted appeals to emotion that have no value other than to produce advertising revenue. Our attention is being hijacked; our focus is distracted by superficialities, trivialities. We don't have to look far today to see where this disengagement is leading us.
Rose-Stockwell says:
And that's where Tildes is important. This is well said in the second article I wanted to connect; the same one deimos used to close his "Announcing Tildes" blog post. It is by Dan Hon and is called "No one’s coming. It’s up to us". https://medium.com/@hondanhon/no-ones-coming-it-s-up-to-us-de8d9442d0d
The title sums it up, but I'll add Hon's last paragraphs too:
What can Tildes do? What can we do? For one, I hope Tildes can be a place where we work together in an attempt to see through or around the wall of biased, manipulative, exploitative commercial mainstream media news in order to better understand and find a connection with the truly important issues in modern society. That's going to take input and work from people who think it's important. Who's in?
Thanks for linking the first article, very nice read.
I'd say that it will look like whatever the community ultimately wants it to look like, majority-rule style. The docs are a good starting point, and Deimos has the final say, of course, but notwithstanding that, if for example a majority of the user base eventually decides that there is sufficient support for a ~funnycats (or similar) sub-section, then that's what we'll have.
Some might say that Reddit, Facebook, Imgur (etc) already do it better, and if we want stuff like that, to go to those sites instead. Though to me, it seems that humanity has this weird obsession with trying to expand things to be everything to everyone.
Not necessarily, because as you said, “Deimos has the final say”. ;)
I'm well aware of the discussions on "fluff" content here without links, and Deimos has not demonstrated a desire to be heavy-handed.
The discussion on what constitutes unwanted "fluff" content has been a major subject here for some time, with as yet, no clear answer.
IMO the only reason he hasn't been heavy-handed is because fluff hasn't really become an issue yet. But every time it rears its ugly head he has made a daily discussion related to quality of topics or fluff, so I think that's pretty indicative.
Exactly. Discussions. Community input and feedback. But not heavy-handedness. Thanks for proving my point.
Of course, community input is clearly important to @deimos... but that doesn’t mean bending over backwards to every demand either. If demands go against the founding principles of the site, as low-effort, fluff-type content/communities like your ~funnycats example do, I very much doubt @deimos will cave to community pressure, no matter how intense.
At least as Tildes currently stands, if he does not "cave", as you put it, then I think Tildes will eventually die off. This is just my unfounded opinion, of course, but I base it on two things: discussions on fluff have yet to produce any serious conclusions, just rehashing of the same points; and a discussion on getting lurkers to post more (what forum hasn't had that discussion? lol) pointed out that people want more categories, many of which will likely be fluff, because many people enjoy things that make them smile and laugh.
It's not that there isn't a worthy selection of intelligent discussions to be had; it's just that there are more established places than Tildes to have them, and so far, Tildes has no big factors that mark it as significantly different. No ads is a good start, but I can get that on practically any site using uBlock Origin.
I think that general-topic link aggregation and discussion, with subgroup communities, tagging, and filtering to facilitate them, but with no fluff content, harassment, or hate speech allowed, is exactly the type of differentiation between Tildes and the other social sites that you say it lacks. ;)
Hacker News and lobste.rs are entirely tech-focused with no subgroups. Voat and notabug are free-speech focused, with anything goes content-wise. MetaFilter is a highly restricted, self-text-only walled garden. What else is there?
The big one - Reddit.
Using RES and uBlock Origin, both commonly used by power users, Reddit is a very comfortable place, especially since they don't force people to use the new layout (using old.reddit.com is one way, among others).
Make a subreddit there and you're the moderator; you can have any discussions you want, as long as the content does not break site rules, and you have access to tools to manage users, customize your subreddit, etc.
Having said that, I have moral issues with Reddit over their support of hate speech subreddits.
Reddit is a meme-fest and the redesign will only make it worse, IMO. Reddit also tolerates hate groups/hate speech and the user culture that has been allowed to fester as a result is making the overall experience there intolerable to many.
And in mentioning moderators, you bring up another interesting distinction between Tildes and Reddit. On Reddit, each subreddit is a kingdom/city-state unto itself where the creator is the petty tyrant and ultimate arbiter of what is acceptable. On Tildes, the trust system will lead to an entirely different dynamic, where active users of groups are the arbiters of their direction.
P.S. Reddit is also millions in the VC hole, has 200+ employees HQ'd in the most expensive cost-of-living city in the US, runs on expensive cloud computing and content delivery network services, and will eventually need to turn a profit, which means the push to monetize will just keep making it worse.
Whereas tildes is a non-profit with 1 employee who actually cares about building a healthy, sustainable community site.
All subreddits? You certainly left no ambiguity.
Yes, and I have moral issues with them because of it.
Again, not leaving any ambiguity. Why do you tend to speak in absolutes?
As far as I know, that's still an active discussion here, among many, with no clear path. Hell, people barely agree on making "invited by" information public or private! If we can't agree on such a small detail when we have so few users...
Tildes' goals are not up for grabs; those are pretty handily set in stone. The mechanisms for accomplishing those goals, however, are definitely up for grabs, and they're what we spend a rather disproportionate amount of time discussing around here. :)
I think you can learn a lot about the decline in quality on Reddit from looking at when and why it seemed to happen. The first big decline that I personally noticed in the quality of content, etc. was after the Digg v4 update and exodus to Reddit. Before then, Reddit seemed to be a relatively small online community with a dedicated user base that had formed a pretty strong culture of mutual respect and general disdain for low effort posts. Obviously you'd find exceptions but for the most part, and certainly in comparison to other comparable communities like Digg, I would argue that was the case. The problem was that when the larger Digg community imploded, Reddit was flooded all at once with new users who did not share that culture. Pretty quickly the kinds of abusive comments and low effort posts that were par for the course on Digg became more common on Reddit as well.
Since then, I would argue that every noticeable dip in quality on Reddit corresponded with a period of sharp new-user growth. I think if this community is to be a place for thoughtful, interesting discussion, without requiring overly strict moderation, there must be a system in place to somewhat limit the speed at which this community grows so that new users can acclimate to the culture and not overwhelm it completely.
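To make the growth-pacing idea concrete, here's a minimal sketch of one possible mechanism: a rolling-window limit on how many new accounts are admitted. This is purely my own illustration (the class name, numbers, and approach are hypothetical), not how Tildes' invite system actually works.

```python
from collections import deque


class InviteRateLimiter:
    """Admit at most `max_invites` new accounts per rolling window of seconds."""

    def __init__(self, max_invites, window_seconds):
        self.max_invites = max_invites
        self.window = window_seconds
        self.events = deque()  # timestamps of recent admissions

    def try_admit(self, now):
        # Drop admissions that have aged out of the rolling window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        # Admit only if we're still under the cap for this window.
        if len(self.events) < self.max_invites:
            self.events.append(now)
            return True
        return False
```

The rolling window (rather than a fixed daily quota) means a sudden influx, like a Digg-v4-style exodus, gets spread out over time, giving newcomers a chance to acclimate before the next batch arrives.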
NLP-based filtering and collaborative filtering are the key tools to solve these hassles.
I'd like to spend 30% of my time on memes, 60% on serious discussions of matters I care about, and 10% on other stuff. The recommender system should deliver topics based on my proportions. Manual tagging/group mechanisms are futile.
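As a rough illustration of the "deliver topics in my proportions" idea (my own sketch, with made-up category names and topic pools; a real recommender would rank within each category rather than pick at random):

```python
import random


def sample_feed(topics_by_category, proportions, n, rng):
    """Build a feed of n topics, drawing each slot's category
    with probability equal to the user's stated proportion."""
    categories = list(proportions)
    weights = [proportions[c] for c in categories]
    feed = []
    for _ in range(n):
        category = rng.choices(categories, weights=weights)[0]
        feed.append((category, rng.choice(topics_by_category[category])))
    return feed


# Hypothetical preferences matching the comment: 30/60/10.
prefs = {"memes": 0.3, "serious": 0.6, "other": 0.1}
pool = {
    "memes": ["funny cat picture", "relatable joke"],
    "serious": ["climate policy thread", "privacy law discussion"],
    "other": ["hobby showcase"],
}
feed = sample_feed(pool, prefs, 10, rng=random.Random(42))
```

Each slot in the feed is drawn independently, so over a long session the mix converges to the stated 30/60/10 split without any hard quotas per page.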