Please help keep the signal high and the noise low
I've found I need to use the noise label a lot more than I used to.
Given the influx of new users, this was to be expected to some degree, but I also found that, at least in the first couple of weeks, new people were behaving a lot better.
So this is a friendly reminder to try to keep the noise level low. Recurring themes I would like to see less of:
- Shallow dismissals / dismissals based on zero evidence.
- Pointless cynicism
- Taking cheap shots
I don't want to see these being normalized. If this isn't you, then great, but this post is also for you: It's an invitation to (very politely) call these things out when you see them.
Opinion -- why this matters.
Dismissing something such as an article or comment (in the classic HN/Reddit style) has a very low barrier to entry. I'm talking about the "pick a trope, any trope!" dismissal that gets turned into a one-liner zinger: corporations bad, rich people evil, Republicans dumb, and so on.
It's easy to do, and easy to get people to agree, because it's a trope, thus obvious.
All it serves, though, is to add to people's anger and despair. It does not help the situation if the dismissal is right; and if it's wrong, it is actively harmful.
Fictional example: Let's say there's a post about a new drug that cures cancer, and the top comments are "It will never work" and "This only exists because rich people get cancer too". These dismissals don't bring data, just an unprovable hypothesis that only serves to make people sad.
OTOH something like "I don't trust a drug that comes from ThisCompany because of X, Y, Z" would be an informed dismissal and actually useful.
What I'm saying is, if you're going to make people sad, don't do it for no reason.
The term I use for this type of behavior is "performative cynicism", and unfortunately it's become so prevalent and accepted on the internet that it's a lot of people's default way of engaging with everything. Like you say, it's usually not based on much except knowing that being dismissive of something (or framing their dismissal a certain way) is the "right" opinion, and that doing it will attract attention/votes/whatever. And it works, so that just encourages more of it.
Like you said in the topic text, using the Noise label on these types of comments is helpful, but I'll also add that it's even more helpful if, after labeling them, you do not reply to them, even if you disagree. The purpose of the Noise label is to give the comments less prominence by auto-collapsing them and moving them to the bottom of the thread, but if you reply to them you're doing the opposite of that. When people see something like [5] next to a collapsed comment showing that there are replies, it's tempting to expand it to see, then they're more tempted to reply too, and so on. It can end up with the majority of discussion in a thread being below something that is (or should have been) just labeled as Noise and ignored.
There are probably some site mechanics that could be adjusted to try to help with this more, but—like many other community issues—in the end it's more of a social problem than a technical one, and it can't really be fixed with technical measures.
"Performative cynicism" is a good articulation of the phenomenon.
I actively avoid engaging in it, but even I sometimes fall into it. I try to excise it from my life: I see it infect many corners of the internet across the entire ideological spectrum. It prevents us from deeply discussing and examining our complex, multifaceted reality and learning new insights. Instead, as @Adys said, it makes us sad. I'd go further in saying that it encourages simplistic, defeatist thinking. It makes us feel overwhelmed and powerless.
I agree that it is more a social problem than a technical one. I think that tech companies like Meta, Google, and others rely too much on technical solutions to solve social problems: behavioral nudges, AI, keyword detection, auto-banning, and various other trickery and manipulation done in the name of good. But I feel that these approaches only leave people feeling disempowered and dehumanized.
When I was studying Human-Computer Interaction, I was always a little unsettled by the treatment of users as, well, a mass of subjects, molecules, or particles rather than as an organic society. Think tanks like Google Jigsaw do try to do good, but they have been less than effective because, again, you (probably) can't solve social issues with technology.
We're all friends on here, or we can be.
We definitely hope to be friends here or at least civil companions.
The rules on r/capitolconsequences, which monitors the legal outcomes, charges, and trials related to the January 6 insurrection, include rule 11: "No unfounded negativity and general Doom and Gloom. To head off a toxic environment, posters are expected to act in good faith. Repeated comments of nothing but negativity, doom and gloom, and/or goalpost shifting are grounds for post removal and/or a user ban."
Do you think that technology is inherently incapable of fixing social issues (particularly those created by technology in the first place), or that perhaps technology for profit cannot fix our societal issues?
I'd be interested to know if you have any suggested reading on it. But my warm take is that technology could fix our societal issues, just not while continuing the current model. A holistic approach would require actions the giants of industry will not allow.
Insofar as technology can 'fix' social issues, it must do so by facilitating social solutions. (Of course, this is only what I think.) We are to become hypercyberhuman, but still human in the end. There must be human relationships and society; trust and mechanisms to build and maintain trust.
I don't know about profit, but I think that a lot has to do with the nature and scale of the medium. In a normal healthy society, we police ourselves and each other through our relationships. Online, there are far too many people and ephemeral out-of-network connections happening in 'one place' (like a 100,000-person thread) for real relationships and connections to form, so that precludes normal social policing behaviors online.
At some point, people need to take responsibility as digital citizens and not wait for some faceless corporate or governmental agency to make the internet wonderful. I think that's a good thing about Tildes: it's small enough for digital relationships to form, people care about and respect those digital relationships, and altogether people want a certain kind of culture here. So we as users take responsibility for setting boundaries with each other and democratically shaping the culture.
Suggested reading.
Hello Avatar by Beth Coleman. Cool short book going over how people can inhabit online social spaces.
The Social Machines: Designs for Living Online by Judith Donath. A must-read for anyone who is intellectually curious about social technology yet insufficiently stimulated by New York Times cyber-pop-psych books.
Oooh my, let me dust off my old... EVERNOTE notes from 2015, haha. Back then, I had noted passages and sentences that leapt out at me. She had a massive influence and impact on me. Many of my ideas and perspectives have been colored by hers.
(There are a bunch of spelling errors and missing words because I probably took these notes very quickly at the time and didn't mean for them to be read by anyone beside me.)
Thank you for the in-depth reply! I will pore over your notes, check into the suggested reading, and come up with a proper response.
“Any news of progress in cancer treatment or battery tech”
Evidently subjects like these are red meat to the ever-vigilant online cynics who have been burned (?) in the past by over-hopeful science journalists (who are, by the way, nothing but conmen and charlatans these days!).
So many interesting posts on the other site would get any useful conversation shouted down by the cynics. My answer was to try to get r/techtracker off the ground.
It was meant to be a hub for new content on different areas where the tech is slowly developing, such as fusion or precision fermentation or solar panels. Each topic would get tags and on the sub you’d be able to follow the ups and downs of each field’s advances and even retractions. But it required too much setup and none of the cynics responded well to a constructive solution.
Hopefully tildes can keep the right attitude regarding this. I’m heartened by your sophisticated response here about social media phenomena.
Hold on - drug companies do research that shows tiny effect sizes with lots of caveats, the press release misleadingly uses more positive language, the busy journalist without much training in reading scientific articles misses the problems and re-writes the press release to be even more positive -- we absolutely do not need to be repeating these claims without any critique.
There's an Alzheimer's drug that has a rabid share-holding fanbase that's recently done this, but worse because those share holders have vigorously attacked any attempt at honest discussion of it not working.
And Alzheimer's meds in general have been severely delayed because any attempt at criticising the prevailing model ("hey, maybe the amyloid hypothesis isn't very good, because none of those meds actually do anything") got crushed.
Sometimes our reasoned response should be "why is this sparkly new med / battery tech / etc different from all the non functioning examples that we've seen?"
I don’t think there’s anything wrong with skepticism or critique - the issue is with lazy, default skepticism or critique. If someone posts an article about a new drug, and you read the article, and write a post which elaborates and explains why you think said drug is bunk, that’s fine. A thoughtless, throwaway cynical remark is the issue.
Consider the following pattern as an example of a lazy, dismissive critique.
Someone posts an article suggesting X causes Y.
Another user asks: but did the researchers consider possible confounder A?
The user's objection might appear reasonable, but the problem is that the answer is almost definitely yes and explored in the study. If someone can think of an objection in the twenty seconds it takes to read a headline, then the researchers certainly would've thought of it in the dozens/hundreds of hours they spent conducting their study.
This is a great question to ask! But it's exactly that: a question. If an article suggests sparkly widget A can solve some particular problem, but you can recall another article in which sparkly widget B was proposed to solve the same problem, it's fair to ask how the two solutions differ. But if you don't know, it's much better to ask "How are these technologies different?" than to simply state "I don't see how these two pieces of technology differ." In the former case, you're inviting someone to explain the new development; in the latter, you're implying the new technology has no utility.
While I agree with the overall point being made, I think the latter part of the example isn't the best way to illustrate this.
At its face value, and I would say definitely if we're trying to be charitable, someone admitting that they do not understand something ("I don't see how ...") is not necessarily the same as them implying that said thing is impossible.
Admitting to ignorance is a step towards new understanding, even if it isn't as direct or inviting to explanations as straight up asking for one.
Of course, context matters, and such reasoning can be used as you describe, but I would not categorically consider all such statements negative in this respect.
For instance, one might say
The understanding may be wrong (maybe they don't share trait A or B) or incomplete (what about trait C?), or the reasoning may be wrong (maybe R is a simplification of the more accurate R', under which Z is a material difference), but I think it is a reasonable response, even if it doesn't explicitly ask for explanations.
If we are trying to be charitable to someone stating only that they do not see how it could work (and providing no other context), I think we should give them the benefit of the doubt and assume that it is a genuine statement.
That said, I will repeat that I agree with the overall point, and note that asking "how is it possible" instead of saying "I don't see how it's possible" reduces the need for this type of charitable interpretation because there is less interpretation to do in the first place.
I originally ended my comment with the following (before editing it out for brevity's sake):
So yes, I agree with you. But ironically, you might have misinterpreted my beliefs due to my lack of clear wording. :p
Absolutely fair. I don't know if I misinterpreted your response in any meaningful way (the underlying intent seemed clear and communicated - keep an open mind, be inquisitive rather than dismissive), more so that I think the nuance added from the section you did not include is important context for that specific example.
In either case, all good.
Certainly. And there’s no shortage of informed commenters pointing out each of these problems on nearly all of these posts. The breathless hopium that cynics describe the rest of us falling for is resoundingly crushed each time. Don’t worry. There are absolutely failings, as you say, in science publishing and science journalism and the intersection of capitalism and research.
But that isn’t what I’m saying. I’m talking about the first sixty comments on any new advance being a low-effort variation on “Someone tell me why this won’t work in 5… 4… 3…”
It doesn’t add to the conversation. It’s just social media clutter. My idea with r/techtracker was to solve this problem and the one you describe, instead of just complaining more.
Interesting that we're focussed on the replies, and not the original posts.
Because to me the problem is with people who post a link to a newspaper that has mis-reported a press statement that has inflated the claims of the original research paper (which isn't available yet), and when that paper is available the researchers have correctly said "...in mice", or "...these effects are statistically significant but very small", or "...here's an important confounder, and we haven't been able to eliminate it, so caution is needed". Because for the vast majority of medication stories that's the same pattern that's been happening for decades. And that's the good version. The bad version is that the researchers are also the investors and they're lying so they can sell the company. Elizabeth Holmes was not unique.
With that context, and knowing that ...
... this happens, maybe the onus should be on the submitter to say upfront why they think this new thing is different.
Your post is well intended, but that isn't really what happened. That's the way the media have explained it.
One drug was pushed through by the FDA, and the reasons did not make sense or demonstrate that it would actually be therapeutic. Since then, two similar drugs have passed clinical trials with actually therapeutic results (as in, quality-of-life-extending). There is a whole new set of problems to address, particularly that they are wildly expensive. They won't help anyone but the very wealthy at first.
Another issue is that one scientist almost certainly did fabricate/reuse results that underpinned most research on a particular amyloid type. It only invalidated that niche area of study, not broader amyloid research.
Finally, researchers have known for decades that amyloid is not the only culprit and may not even be the main culprit. People didn't shout them down. Major funding has gone to many, many hypothesis-driven studies that would make no sense if the amyloid-is-the-whole-problem hypothesis were assumed to be true.
This isn't an attack, just a chance to clarify some points that have made it into the popular, educated zeitgeist but are not true in the way that people think they are.
A problem I see with not engaging is that these people won't learn why they're getting "noise-voted", which is likely to end in frustration, at least for people for whom this has become a default behaviour. I see the point about Tildes treating any reply as a bump, but in my mind it goes against the whole assuming-good-intentions ethos that I like about Tildes.
I might say that it can be useful to engage with them, if only to point this out. I'm guilty of this myself, as an old Reddit habit, and I'd be thankful if someone pointed it out on the spot, because I might not have noticed it myself. Not that it's anyone's job or anything, but I suspect a lot of us aren't meaning to make the place worse, and just lapse into nasty habits sometimes.
I believe the Victorian Sufi Buddha Lite rules would apply well to this site as well.
"Lite" as in 2/3 of these should be true instead of 3/3. The relevant part for the topic here would be:
Because yes, I completely understand there are times where Lawful Evil "kindness" can be used to downplay misinformation, or try and excuse bigotry. So we shouldn't let "angry tone" shirk otherwise proper criticism. Fortunately, there isn't too much defending of those topics on this site to begin with, in my experience.
In this hypothetical example, shouldn't it be on the OP or (preferably) the article writer to show that the new potential cure is different? To me, most of the articles of this sort are just noise, but Tildes has no way of marking that.
The cynicism is justified. These articles are almost always nothing more than clickbait. At best, the author misunderstood the results of the study, but more often than not, they greatly extrapolate the results beyond any meaningful measure. Sure, this kills cancer. It also kills people. Or, this kills cancer in mice, but ongoing human trials have shown no results, but the article just talks about how promising the mice trials are.
I haven't written any of these short comments you complain about, but I understand them. It's much easier to respond to noise (and these articles are in fact noise) with noise, rather than type up what I just said every single time a new pop science article comes out.
I think the userbase of Tildes has been (much) better than other sites about not posting bad science articles, but it's not immune to it. There's no shortage of bad science articles or hopeful rodent trials to post, and I think posters should have a responsibility here as well.
Does it? Giving undue weight to something that deserves to be dismissed can actively make the situation worse.
In general it probably doesn't help much because as they said, it's a low barrier to entry, they're not stating anything that anyone else couldn't also. Furthermore, if you're arguing that these low effort dismissals might influence someone out of a bad thought/decision/outcome and thus it can help the situation, is it really helpful to encourage people to be receptive of something that didn't have any real reasoning to it? That didn't require any effort on the part of the person who conveyed it?
To me I view the dismissals/cynicism OP is talking about as one of those things where it kinda feels better to predict something bad rather than good, because you sort of get a win no matter what. If you are wrong, then the outcome was good (since you predicted a bad outcome) and you win and the good outcome helps you easily forget all about being wrong, and if you're right then you get the self-satisfaction of being right or seeing something other people didn't see which also might help cope with dealing with the bad outcome.
While to some extent it's not fair to say people are doing it for literally no reason or no justification, if they express it without effort or bother to explain their reasoning in a meaningful way, then that's how it can be perceived by others. Basically if you really believe something to be true and worth expressing, especially if it can have deleterious effects on others, the least you can do is prove it is something you care about by putting the effort into explaining it.
Tangential to the discussion at hand, but an obstacle to high-quality comments is a lack of engagement (which is inherent to how small Tildes is). For example, on the most recent recurring thread in ~books I've posted a review which has no engagement at all, and the most recent Exemplary label I've received is on a comment with fewer than 10 votes and no responses, despite it taking me about an hour to type up. You have already pointed out the low barrier of entry for posting low-effort comments - I would argue that the barriers to higher-quality conversation on more niche topics are twofold.
I can completely understand this. It's weird, because compared to before, it's easier for posts to fall through the cracks due to the increased amount of users.
Actually, I think a filter for posts with zero comments (@Deimos, @cfabbro) may help with this particular problem. Or something like the opposite of the activity sort. Reddit faced the same issue, hence the whole "knights of New" subculture that developed there.
I actively try to participate in posts that don't have much engagement but are otherwise interesting. But I also need to be interested in it in some fashion.
Yeah, I think zero comment activity topics get punished way too harshly here now, often disappearing way too quickly for the majority of people (who likely don't ever check the New sort). And the Activity sorts also often exacerbate conflict by constantly drawing attention to the topics where discussions get heated, to the detriment of other more thoughtful comments in less-contentious topics. And I have also definitely noticed a lot more new comments getting completely ignored these days, when otherwise they would have resulted in at least a few votes and renewed activity in a less active topic.
I have recommended in the past reserving a few of the top slots on the front/group pages to highlight new topics, or topics that have higher than normal votes but less comment activity. But I honestly don't know if that's the best way to help balance things a bit better between Activity and boosting the visibility of new or comment-less topics, or valuable comments in less-active topics. But a filter or Sort that highlights new and comment-less posts could potentially help with that. No idea how exactly something like that could be implemented, but IMO it's definitely worth investigating and considering. Added to Gitlab: https://gitlab.com/tildes/tildes/-/issues/812
cc: @slug
And cc: @Deimos, since I remember you mentioning something about wanting to implement a custom front-page of some sort eventually. And I think these engagement issues with valuable posts falling through the cracks more often now, is something that a custom front page could also potentially help with.
I've posted a couple of reviews as their own posts and received four or five upvotes over several days which is not much compared to other topics. Not everyone on the site reads the megathreads. Also many of us have widely different preferences for book genres. But you have inspired me to go look for your review.
@slug This is all relative. Before the influx, I was happy getting five votes or so. That's enough to know some people read your comment and liked it. What more can we expect from a forum comment?
I don't remember it, but I believe at one time Tildes tried getting rid of votes and people didn't like the lack of feedback. But I think it might be a good idea to cap votes at 10. If you get 10 votes, then you win the "people liked it" award. Anything more is just discouraging to people who disagree.
Speaking as an old-timer here, prior to our recent big influx, I almost never used the Noise label. It's not an exaggeration to say that weeks or months went by without me labeling anything negatively. Pretty much the only one I used was Exemplary.
This is not a criticism of anyone who joined. I'm happy y'all are here. It's been great to see things more active on Tildes, and it actually acted as a good stress test of our labeling system. Before this, things were kept in check mostly via social norms (i.e. people choosing not to post noisy comments in the first place), and the influx forced us to switch over to our structural measures. I think the past three months have shown that those work decently well.
One thing I have noticed is that I now am labeling a lot more than I ever used to in the past, even as our activity has slowed back down a bit. It would be nice if we could get back to where we were, when noise was hardly posted in the first place. Is our current noise just people acclimating to Tildes' norms, or have some of our norms fundamentally shifted and the noise is here to stay?
I'm not sure, but one of the things I've always loved about Tildes is that I've always felt the people here are earnestly, honestly, trying to be better over time. We've never been content to just sit and accept things -- both in ourselves and in our wider community -- and have always pointed ourselves in the direction of improvement. Speaking to the new people here, it's hard to convey just how much this was a latent, unspoken part of the site. At its best moments, it inspired positive change across the board. At its worst moments, it was almost an itchiness under our skin that we couldn't shake as we tried to theorycraft an unattainable social perfection and enforce (to our own detriment) it in others.
This paradigm of improvement carried a lot of good weight though -- far more good than bad, I believe. It's part of what I think has allowed the site to exist for as long as it has without fizzling out. There was a core group of users here who all knew each other. We undoubtedly had our frictions, but one of the things that saw us through that was the idea that we and they could all be better. This isn't the kind of "be better" that's meant in a dismissive, backhanded way -- it's the kind that's hopeful, warm, and inspiring. This pursuit of a genuine "better" allowed for forgiveness and growth in our community. Some of my favorite users on the site are people who I initially disliked. Some I even got into arguments with! In our time on the site, I've changed and they've changed, and even if we still don't see eye-to-eye on everything, one of the things I know is that we're all trying for a similar ideal of being better over time, both individually and together.
I hope I'm not putting @Adys too much on the spot here, but he's a great example of this. The two of us have had our moments of friction in the past, and I'm sure we have some continuing misalignments between us, but mostly I just have a lot of fondness and esteem for him. In our time on the site, I've come to learn that he's someone who unquestionably points himself in the direction of "better". He wants better for himself, sure, but far moreso for others. He bends over backwards to help people.
This very topic is a good example of him keeping his sights set on that "better". If you don't know him, it might be easy to read his words as simple nagging or bellyaching. As someone who feels like he "knows" Adys though (as well as two internet strangers can), I can assure you his words are coming from a good, positive, supportive place. He wants us and this place to be better, and he knows we can be.
This direction was always an implicit cultural norm of the site, and it's one that I hope doesn't get lost, because it's not necessarily easy to see or pick up on. It can easily get drowned out -- especially on a wider internet that loves to push cynicism and hopelessness. We also don't really have any structural things in place to support it because, well, I don't know if you can even begin to systematize it in the slightest. It's an attitude, not an outcome.
Nevertheless, I think it's there in the pulse of the site. It looks like a lot of different things, but for this particular topic, it looks like us striving to be a place where we avoid posting noise in the first place, rather than just adequately labeling it after it's happened. If you're wanting to comment on something, push yourself just a little bit: what can I say to make this comment of mine better? It doesn't mean you can't complain or express negativity -- it just means that, if you do, give it some legs and something to stand on.
Noise is noise because anyone can write it, easily. What, instead, can you say if you give a little extra effort?
Aw man, you make me blush ;) ❤️
So this reflects my experience, but I don't think this is people "acclimating" -- rather (and the reason I posted this) I think it's the opposite: It's people getting acclimated and a bit too comfortable, and falling back into bad habits.
The only evidence I can offer is that, during the first couple weeks of the influx, even as we had more users than now, other than a few users who were straight up on the wrong site, I still wasn't finding much noise. It seemed that people were trying hard. It makes sense: you want to make a good first impression and either try more in your first posts, or take more time before your first posts.
I'm hoping we can gently nudge everyone in the right direction here.
I've thought a lot about this over the past few months. I've seen a lot of analogies thrown out when it comes to places getting bigger and the growing pains that come alongside them. People have likened it to small communes, to train stations, or even to expanding gardens, and all these analogies have merit in their own ways. What I have noticed, however, is that there's something inescapable that comes with size, or, perhaps more aptly put, something that becomes an issue of culture rather than of rules or suggestions or individual behavior.
While this thread is about a particularly technical endeavor, I am nonetheless reminded of my own requests for changes of culture around this website (and, well, frankly, everywhere I go in the world). A long time ago I started some discussions around how this community could be better. I started these discussions for very much the same reasons: in an attempt to preserve the parts of the culture I so greatly enjoyed. What I saw was also an issue of growth, of new users not understanding the culture and bringing ways of interacting with the world that we had decided to cast off in this environment, because we had come to recognize that they were harmful. I realized, at least to some extent, that it was a fight that could not be won every time someone new showed up. Not only could it not be won every time, but people would eventually leave the society we had built for other places, and even if you won over the hearts of some, there would always be new people at the door.
What I've been thinking a lot more about in recent days is how much is truly under our control. If we imagine that it takes 6 months of regular interaction to unlearn some of the harmful behaviors we've learned on the internet from other places, plus the dedicated efforts of many to help provide a space to unlearn them, is there an upper limit on how large a space can be before culture is drowned out by new voices? If people are joining so quickly that the chances of interacting with the 'old guard', so to speak, become less and less likely, at what point do you lose the culture and instead receive whatever culture the new folks bring with them? It never really occurred to me that communities could lose so much culture and vision merely by a stampede of people, but coming to this realization, and how it's particularly relevant in places that can grow at blistering rates (such as online spaces), has given me a new perspective on what is and is not within our control.
The fleeing of users from reddit made me really come to terms with what's possible and what should be expected when dealing with issues on the scale of culture. I saw culture even more rapidly decline on many places where these individuals were fleeing to. I saw many of the same discussions crop up - how to maintain civility, how to discourage reddit style behavior, and how to encourage each other to be better. It's been an interesting ride, watching this all play out, both as an observer and now as someone running their own social media website (when will I learn? 😂). The more intimate I become with issues of culture the more I am recognizing just how difficult of a task it is - getting a large group of people to change their behavior is not easy! Doing so when the group of people is constantly changing certainly isn't any easier. I don't think the difficulty of a task is a reason to give up hope or halt all attempts to make things better, I mostly just wanted to reflect for a little on how tough I've come to realize this all can be.
As a member of the most recent diaspora (although I have been here for two full months now), it's noteworthy that the influx, as of the last numbers I saw, is around 10,000 users, roughly a third of total users, although of course not everyone is active. There was a graph, but I don't know how to find it. Lemmy and Kbin absorbed far more because joining Tildes was rate limited.
I also kinda feel like I've recently seen Exemplary used a lot as a reddit-gold equivalent to just shout louder than others in an argument, whether or not the comment is very informative and helpful.
I don't think that's actually true. AFAIK, only the Noise label works in that way.
Edit: Nevermind. You're right.

Edit 2: Nevermind... I'm an idiot. Sigh. Exemplary does actually activate at 0.5 weight (and every user's labels have 0.5 weight). Apparently every other label only activates at 1.0 weight though, so it's not just Noise: https://gitlab.com/tildes/tildes/-/blob/master/tildes/tildes/models/comment/comment.py#L336-341
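For anyone curious how those weights shake out in practice, here's a minimal Python sketch of the activation logic as described above (the function and dictionary names are my own invention; the actual implementation lives in the linked comment.py):

```python
# Hypothetical sketch: each user's label carries a weight (0.5 by default),
# and a label "activates" once the summed weight reaches that label's
# threshold. Per the linked source, Exemplary's threshold is 0.5 (one user
# is enough), while the other labels need 1.0 (two users at default weight).

LABEL_ACTIVATION_WEIGHT = {
    "exemplary": 0.5,
    "joke": 1.0,
    "noise": 1.0,
    "offtopic": 1.0,
    "malice": 1.0,
}

def is_label_active(label_name, user_weights):
    """Return True once the summed weight of applied labels reaches the threshold."""
    return sum(user_weights) >= LABEL_ACTIVATION_WEIGHT[label_name]

# One user at default weight is enough for Exemplary, but not for Noise:
print(is_label_active("exemplary", [0.5]))   # True
print(is_label_active("noise", [0.5]))       # False
print(is_label_active("noise", [0.5, 0.5]))  # True
```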
@cfabbro already answered correctly, but just to confirm it, yes, Exemplary is the only label that usually "activates" when only one user uses it. This is because it's quite rare for multiple people to use their Exemplary label on the same comment (especially since they're limited), and it felt like it would be pretty unfair to make most of them effectively do nothing.
Looking at the last 30 days, there have been 250 different comments that received at least one Exemplary (out of 22,288 total comments), but only 35 of them had 2 or more Exemplary labels.
I don't think misuse of them has been frequent enough to really worry about, but if it's ever necessary there could probably be some kind of in-between solution. Like maybe the first Exemplary label is only visible to the labeller and the comment author, and it takes at least 2 before it starts showing the comment as Exemplary to everyone else.
See my second edit. Exemplary actually does activate at 0.5 weight, which equates to 1x Exemplary label being applied.
That's a feature of the label system in general - the number of people it takes to 'activate' a label can be set on a per-group basis. This is so in the future if/when we have thousands of people active in a single group we can safeguard against a small number of people using labels in an abusive fashion. As a given community grows, the weight required to trigger moderation features can scale up with it.
It probably wouldn't even need to scale up very much, either. It's truly a small number of people who get hyperactive with the mod tools like that, even on reddit. The feature is just a way to keep an angry or excited 1% from making a mess of things.
I feel like this is an uphill battle that needs constant attention, so thank you for making this post.
I also have a question: What happens when I label a comment as noise, joke, offtopic or malice? I've seen plenty of comments labeled as exemplary. Do the other labels work in the same way and I've just not seen comments labeled as noise? Does the user whose comment is labeled get a notification?
Tildes has some docs that go over this!
The short of it is that Exemplary is the only label that visibly decorates a comment. Off-topic, Joke, and Noise can all contribute to pushing a comment down the page if enough people use the label. Noise also goes so far as to collapse the comment.
Didn't even know this existed; I knew of Exemplary but not the others. Any other good features of Tildes I may not know about?
Here is the front page of the documentation with hyperlinks to different aspects. Most linked pages also have sidebars with links you can follow. The site is thoughtfully designed, well crafted and carefully explained by people who value it. https://docs.tildes.net/
This thread might also answer some questions and even give you the opportunity to ask more: https://tildes.net/~tildes/182p/new_users_ask_your_questions_about_tildes_here_v2
https://tildes.net/~tildes/15my/new_users_ask_your_questions_about_tildes_here
~tildes and ~tildes.official are the places to find these kinds of discussions.
You can find the documentation on the labels here.
I don't necessarily want the Noise labels to be visible (either publicly or to the owner of the comment), but I do kind of wish there was some way to know that your comments were being labeled as noise. Having the label as a way to hide unhelpful comments is great, but since you're unlikely to ever find out it's happening, I can see that leading to more noisy comments being posted.
Worth noting is that all the comment labels (formerly called "comment tags") were publicly visible in the early days (though who used the label was never publicly revealed), and it turned into an absolute clusterfuck. People were constantly assuming the person they replied to had been the one to apply the label, resulting in them responding angrily to that user, and often labeling that user's comments in retaliation, so then accusations of label abuse started flying left and right. And that's why labels are no longer visible to the user whose comment they got applied to (other than Exemplary). cc: @pi-rat
AFAIK Deimos has private messaged users who recently received a lot of noise labels on their comments, warning them about that. I don't think there is any automated system in place for it though. But perhaps there should be? Maybe not necessarily letting people know which comments specifically were noise labeled, so we don't get the accusation-spiral going again... but just warning them in general about having made too many noisy comments after they reach a certain number of noise labeled comments.
Not that I particularly care about visibility of the labels, but Hacker News has a simple and elegant solution to this: you can't downvote a direct reply to you.
Of course, Tildes doesn't have a traditional downvote, but it may be something to consider. Maybe the repliee can still mark for Malice, but it may be best to let a third party decide on Joke/Offtopic/Noise.
In addition, you need a certain karma level to downvote on Hacker News. Not relevant at the current scale of Tildes, but in the future it may be worth considering some sort of karma or account-age threshold. This would encourage users to read through comments and absorb the culture for a while, instead of haphazardly labeling everything they disagree with.
Relying on a third party to apply labels would be incredibly labor intensive even at present volume, and wouldn't scale very well past that. And the whole reason it's crowdsourced is to avoid those issues.
There is no karma system on Tildes, and there already is an account-age threshold on using the comment labels. At present it's only 7 days, but it might be worth extending. Although TBH, I think 7 days is fine when combined with the "Trust people, but punish abusers" philosophy, since Deimos has already removed plenty of users' ability to use labels due to them abusing the system.
But I haven't actually seen too much label abuse going on, even after this massive new influx of users, so I don't think it's worth totally revamping the system to address a nonexistent (at present) problem. At most we might just need to increase the label weight of all the old, still active users just to help combat the Noise a bit more effectively.
Oh, my apologies. My use of "third party" simply meant "neither the replier nor the repliee" in this case. I wasn't suggesting an entire layer of 'neutral moderation' on top of what we already have.
In the context of the comment you made to me: a "third party" is simply an acknowledgement that I will be biased toward someone replying to my comment, and that I'm not the best judge of whether such a comment is Noise/Offtopic, especially if it disagrees with me. So let users outside the two of us in this chain determine if it is noise. Maybe I can judge "Joke", but there's not much added benefit from me personally marking it over others doing so.
Ahh, great. I wasn't aware. I think 7 days sounds fine for now, especially if you haven't noticed many problems so far.
Ah, gotcha. Not counting labels from users directly involved in the comment thread makes sense. That's definitely worth keeping in mind to implement if label abuse starts becoming more common.
That's a really useful idea in general. One can participate, or one can moderate, but not both in the same thread at the same time. It knocks down most of the potential abuse cases.
I wish I could label my own replies as noise. I've had comments where I genuinely feel like there is something to be discussed and that particular thread has moved away from the post topic, though not so far away that I think it's worth an entirely new post either.
That feature is coming soon™. It's been on the official list for like three years but I'm hoping to dip my feet into the open-source waters by implementing this in the next few weeks. So hope for the best and expect more of the same.
I remembered that, and that was why I didn't want to endorse publicizing it. I remember we were talking about having "fuzzy" ways to let people know, but nobody ever came up with any concrete solutions in the comments.
I do like the idea of continuing what's happening now and just automating it so that there's less human labor involved.
@cfabbro I appreciate getting some of the historical context for why we can't see labels; that doesn't sound fun, nor like a direction that supports actual constructive conversation. I'm relatively new here and am doing my best to understand how best to show up.
I see that this is a fairly tricky issue to navigate both from an admin perspective as well as possible automated solutions.
I know I love feedback (as I mentioned before, I think it is a gift) but not everyone is great at delivering it (relevant, timely, and focused on actually actionable things). I think that's why I suggested the idea of being able to add a comment to the label that only the user gets to see (like we have for Exemplary) to provide such feedback.
I think if we were going to go an automated route, maybe a combination of what @plik and @Eji1700 suggest: you have to take a slight hit if you choose to check, but the only 'noise' labels you'll see are ones that have reached a certain number of reports.
There could be a lot of interesting solutions here, but does the worth of it outweigh the time*effort equation?
I think overall I would like for people to be aware of what the intent of each label is and to be able to use them sparingly and wisely.
::Edits:: some typos
I read those comments and liked their suggestions too. I think they could work... however as you said:
And, yeah, that's the ultimate hurdle. Especially since occasional social nudges, like this topic, seem to be working well enough to keep the Noise down to manageable levels, at least so far. And there are so many other features that IMO the site needs way more right now, e.g. Filtering/searching/browsing by domain. So anything Noise label related is likely to remain on the back-burner for quite some time... unless someone decides to implement it as an open-source contribution to the site.
I have mixed feelings on not being able to see if a comment is noise.
in favor of seeing noise labels / other labels (as commenter):
I personally really enjoy feedback so just like the "exemplary" labels have an option to let the poster know why you believe the comment is such, I think getting feedback on why a post is noise would be great as well.
I try and remember that feedback is given to try and make me a better person - this in itself is a gift. Not everyone is the best at giving gifts, so I have to do my best to receive them with grace (EG "thank you for your feedback" vs "well I said/did that because...").
against seeing noise / labels (as commenter):
Sometimes seeing the label with no feedback can send me into the "what did I do wrong" spiral, and if I hadn't seen it I would have remained happy and blissful (but ignorant of how I am degrading the experience of others on this platform).
Conclusion:
I think seeing the label and having that feedback would help people recognize when they are not adding anything, and hopefully that feedback will keep the discussions well thought out, polite, etc.
I definitely have the fear that people could potentially abuse this, but at this stage I'd rather be optimistic and trust people to do the right thing to keep the community strong.
I think this can quickly turn Noise labels into a downvote with extra steps. It's a problem inherent to all these systems, but "use this when I disagree with someone" is a problem.
@eji1700 I hear that concern but I do hope that people see noise as "this comment doesn't add anything to the conversation" (eg: "first!", "me, literally", other trope-y one liners / not really thought out comments).
I think also, by being able to see if something was labeled Noise, you could distinguish between "was I labeled noisy because I didn't add to the conversation" and "this person just disagrees with me."
If your comment wasn't actually noisy and someone is just 'noising' your comments because they disagree with you, shouldn't the repeat offender have decreasing weight attributed to their Noise labels? My thought on this isn't fully formed, but if you apply the Noise label to one user's posts in quick succession, the weight of each successive Noise label could decrease.
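A minimal sketch of what that decreasing-weight idea could look like (the decay factor, the time-window bookkeeping, and the function itself are all invented for illustration; this is not anything Tildes actually does):

```python
# Hypothetical "decreasing weight" scheme: if one user applies the Noise
# label to the same target user's comments in quick succession, each
# successive label counts for less. Here the caller is assumed to track
# how many labels the labeller has recently applied to that target.

def noise_weight(base_weight, recent_label_count, decay=0.5):
    """Weight of a new Noise label, halved for each recent label the
    labeller has already applied to the same user's comments."""
    return base_weight * (decay ** recent_label_count)

# The first label counts fully; a burst of labels rapidly loses influence:
print(noise_weight(0.5, 0))  # 0.5
print(noise_weight(0.5, 1))  # 0.25
print(noise_weight(0.5, 3))  # 0.0625
```

The nice property is that a single retaliatory user can never push a comment over the activation threshold on their own, while independent labels from different users are unaffected.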
I think this is an inherently difficult issue to address and have function well in a scalable manner, but difficult ≠ impossible.
Again, this all assumes fair players; and worse, who's going to make these determinations?
From my understanding, it's our one admin, and that's it. At the end of the day, processes that put undue burden on admins just cannot work here. If people start spamming Noise labels because they know it's a way to let someone know "hey, you suck", it could be a major issue.
Now granted, it could just be a "hey, we ban you for this" thing, but I think it's going to lead to a lot more work for very little gain, as there are that many more people to assess and ban.
Maybe have it be that some x% of Noise labels show up on the user's side, with that number hidden, so if you see one you know a lot of people have marked your comment as noise, but you won't know about the odd one-off. Even then, though, I think that's a risky proposition given the resources available.
Maybe an option in your profile to see how "noisy" you are as a user over a weekly bar graph, but the consequence is you lose label (and vote?) privileges for a week every time you choose to view it?
That way you aren't telling users specific comments marked as noisy, but maybe they realize that one Tuesday when they got into an argument on the "Pineapple on pizza yes/no" thread may not have been an example of their best behaviour.
It also gives them a week to cool off before being able to go back and mark all the evil anti-pineapple users' comments as malice.
@plik I think that's an interesting way to go about it.
My concern is that it kinda feels like you're being punished for seeking feedback in that regard, but I don't disagree there should be a safe guard in place to prevent someone from acting in retribution as you described.
Maybe this, but combined with the other reply I just mentioned: having label weight decrease after the viewing period?
Could do it that way. It seems a bit complex, though, and like it might have the same end result as just disabling labels, since you might have situations where other people don't label the same comments, resulting in the lower multiplier basically becoming 0x.
I like my way because it is simple, and acts as a sort of maturity test. If a user really cares that much about their noisiness, they should be able to prove they can handle it by acknowledging they are willing to give up labeling for a week by clicking "yes I understand labels will be deactivated for a week if I click this button to view my stats".
The time frame doesn't have to be a week. It could be less, along with a lower labeling timeout. It should also be on a cooldown so you get maybe one check a month.
Just my thoughts. I am not an expert.
Maybe it would make sense to have your comment's labels be collapsed until explicitly expanded? That way, you can see the feedback, but only if you want it.
I'll go a little further. If someone posts an article suggesting "What if X?" and your immediate inclination is to draft a comment suggesting "What if not X?", you should ask yourself to what extent you're conveying an earnest belief versus being contrarian (do note that this is a sliding scale). Naturally there are some ideas that are abhorrent and deserve to be ridiculed. But if I were, for example, to take Hacker News at face value, I'd conclude that everybody's wrong about everything.
To this end, when you critique an article, it's worth bearing in mind:
all models are wrong, so there will always be flaws with some proposal; and
the author is generally not available to respond to your criticism, so you should frame your argument as charitably as possible.
To the recurring themes that maybe we can avoid, I will add: dismissing an article because it's by an author you don't like or from a publication you don't like.
We all make gut decisions like this. There are popular authors and publications commonly shared here that I dislike too. It's an easy way to decide to ignore an article.
However, whoever posted the article probably didn't do it because they wanted to start a "contempt trial" discussion about the author, other things they wrote, and other things they've done in their lives. A criticism that goes beyond "I liked their other articles better" and is more like "the person who wrote this is a terrible person" is in danger of turning into a discussion that isn't about the article itself and isn't fun for anyone.
This isn't to say that adding context for people who are unaware never makes sense, but when done quickly it's in danger of being a shallow dismissal, and when done with serious intent it's often not any better. Character assassination isn't fun.
Is that somewhat directed at me for criticizing the Daily Mail the other day? If so, sorry, but I absolutely disagree with you on that particular point. IMO, informing people when certain publications entirely lack credibility is valuable and often necessary, especially when they have a history of manipulating the facts, or even outright fabricating stories like the Daily Mail does. And if the Daily Mail started getting posted and heavily voted for here (like it is on reddit), I would genuinely start to worry about this place, since it is amongst the worst of the worst of British tabloids. TBH, I think it should actually be outright banned here as a source, especially since way too many people outside the UK on the English-speaking internet don't realize just how horrible it is, and so link to it thinking it's a credible source for news when it is absolutely not.
I think your (or possibly someone else's I'd have to go look) criticism of the Daily Mail was a positive example of pointing out the problems with a source.
I'd say a negative version would be something like "fuck that, it's the Mail" without the context. I'd also generally agree that I wouldn't want to see it here. But not everyone knows they're unreliable, and I think it's worth educating people!
Yeah, fair enough. I didn't just say "fuck the Daily Mail", and actually did genuinely try to make my comments about them way more informative than that. And rereading @skybrian's comment I think that snarky, offhanded, low-effort type response is what he was specifically referring to, not informative criticism about an author or publication.
Nope, actually I was thinking more of what happened when I posted a mediocre blog post from Nate Silver, and my own likely reaction to anything from Jacobin or Cory Doctorow. (I think I'd wear myself out explaining what's wrong with them, so it's tempting to just write something dismissive.) And there are definitely some substacks and blogs I'd think twice about posting anything from; it won't get a fair shake just because of the source.
Also, people will go out of their way to investigate the authors of controversial opinion pieces and tell you why they are bad, even if they never heard of them before.
It seems like a bad trend? I'm starting to think that just posting an interesting link without putting it into context is a bad idea for a discussion site.
Do you have a link to a previous post you've made about Mr. Doctorow? Nate Silver and Jacobin have plenty of critics, but this is the first time I've encountered an “it's him again” reaction to Cory Doctorow.
Sorry, I'm going to decline. I think it would be pretty hypocritical to start a "contempt trial" here, after explaining how I don't like them.
I can say, personally, that while I agree with a lot of what Doctorow says, he does kind of come across as a bit smug or masturbatory at times. Actually, "a lot" would be an understatement: his writing has informed a lot of my ideals, and I still point to Printcrime as one of the most important short stories of our time, one everyone should read.
But he does have a bit of an acerbic "I told you so" style on boingboing and elsewhere that can be abrasive if you're not on board with his idea of a better future. I am, and I imagine you are, so it doesn't come up.
I thought it might have been about me, for the way I was jumped on for posting an article written by Seymour Hersh recently. Admittedly I'm out of the loop on Hersh, but I didn't see many people engaging with the merits of the article, just the perceived downsides of linking writing from that author.
Bumping this topic because I think it's incredibly important to reiterate this message for all the new users. And even though I don't want to single anyone out, there have been a lot of top-level comments lately that I felt deserved to be labeled as Noise.
Ideally, labeling wouldn't be necessary though. So if your comment matches the criteria Adys mentioned above, or is only a few words long, doesn't add anything substantial to the discussion, and/or is just a hot take, knee-jerk reaction, or joke... please consider the fact that making such comments is generally frowned upon here, especially as top-level comments. And the more people that make comments like that, the lower the overall quality of discussion here gets.
p.s. Jokes are generally considered acceptable as replies though. You can still have fun and joke around with other users, but try not to clutter up the top-level comments with them please.
I'm potentially guilty of this in the sense that I am/was an active redditor and use my redditisms when posting here, even though Tildes is probably the most anti-Reddit of the alternative platforms; most of the Reddit crowd flocked to Lemmy instances.
As previous posters have written, it's fine to be more Redditesque further down a thread. Keep the top-level comments informative, or strong opinions backed by facts, etc. If you have a little to add, or some humour, sure, do that, but save it for the posts further down the chain where the noise doesn't matter as much.
I rarely have something unique to say, but I like to participate, like the kid in the corner at the party, so every once in a while I'll throw out something and go for that lower-hanging fruit.
One thing I like about the moderation on Hacker News is that certain topics get flagged in the belief that they always end up producing bad discussions.
If, for instance, many participants think there is just one way to think about a topic and that anybody who thinks differently is by definition a bad person, the best thing to do may be to let people talk about that somewhere else.
I think Tildes might need that feature someday, but it's both good and bad. It's good to know your limitations; some conversations just aren't right for the forum. On the other hand, it also narrows what you can discuss when some subjects become no-go zones.
It's also the case that sometimes the topic is fine (it's important news), but the link isn't a good one. It's easy to post the first link you see, rather than doing a news search and finding a better one.
Replacing the link is an easy fix for the moderators, but maybe a cluster of links would be better, resulting in a more collaborative approach to finding the best coverage for some news?
The fact that news articles come in clusters is an existential problem for news aggregators: that is, if there is some event, be it a sports game or a political debate or a mass shooting, there will literally be hundreds of articles in different newspapers on the topic. A good automated news feed will show one headline or one group of headlines for each event and will ideally be smart enough to know which new articles report something significant and which ones are just tardy.
My RSS reader and agent YOShInOn has a great clustering system for coarse topics (deep learning, soccer, climate change), but I haven't been as successful at finding duplicates, which are a little annoying for my 3,000-item-per-day input feed and would be worse for a higher-volume feed. Clustering with https://sbert.net/ is easy and orders of magnitude better than systems I've built before; I use k-means clustering for coarse topics and DBSCAN for duplicates. If I tune up the parameters for DBSCAN, I can probably get deduplication working better.
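For anyone curious, that pipeline can be roughly sketched like this (this is a generic illustration, not YOShInOn's actual code: in practice the embeddings would come from an SBERT model via the sentence-transformers library, e.g. SentenceTransformer("all-MiniLM-L6-v2"), and eps needs tuning per feed; the tiny hand-made vectors here just keep the example self-contained):

```python
# Sketch: k-means on sentence embeddings for coarse topics, DBSCAN with
# cosine distance for near-duplicate detection.
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

# Toy stand-ins for SBERT embeddings: two articles about the same event
# point in nearly the same direction; a third is about something else.
embeddings = np.array([
    [0.99, 0.10, 0.00],   # "Team wins final"
    [0.98, 0.12, 0.01],   # "Final won by team" (near-duplicate)
    [0.05, 0.02, 0.99],   # "New climate report"
])

# Coarse topics: k-means with k chosen for the breadth of the feed.
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# Duplicates: DBSCAN with a tight cosine-distance epsilon groups the two
# rewrites of the same story; the unrelated article gets label -1 (noise).
dupes = DBSCAN(eps=0.05, min_samples=2, metric="cosine").fit_predict(embeddings)

print(topics)  # the first two articles share a topic cluster
print(dupes)   # [0, 0, -1]: one duplicate pair plus one singleton
```

The appeal of DBSCAN for dedup is that you don't have to guess the number of duplicate groups up front; eps is the only knob that really matters.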
As for human curation, I see great clusters of news articles on Metafilter, such as
https://www.metafilter.com/200377/The-Cost-of-Living-Housing-Crisis
where there are a bunch of links, a few quotes, and commentary as a jumping off point for discussion. I see Metafilter, Tildes and Hacker News forming a triangle of similar services.
As for HN, I have predictive models that, given a headline, will predict: (a) probability of the headline getting >10 votes, and (b) probability of the headline getting a ratio of more than 1 comment per 2 votes. (a) is not such a great model because the input data is noisy (the same article might get submitted 3 times and get 2, 35 and 215 votes) and it doesn't take into account the actual contents of the article, (b) is almost as good as YOShInOn's first recommender model. What I noticed about (b) is that the highest scoring articles tend to be "clickbait" and actual realized comment/vote scores on Hacker News are an even better "red flag". The moderation system tends to suppress articles that get, say, 50 comments and 15 votes because pretty frequently a discussion goes that way because a few people are yelling at each other.
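As a rough illustration of what a model like (b) can look like, here is a generic TF-IDF + logistic regression sketch (the toy headlines and labels are invented; YOShInOn's actual models are not public and may work quite differently):

```python
# Sketch: predict a binary headline outcome (e.g. "got more than 1 comment
# per 2 votes") from headline text alone, using TF-IDF features and
# logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; real training data would be scraped
# historical headlines with their vote/comment counts.
headlines = [
    "You won't believe what this CEO said",     # argument-bait: many comments
    "Why everyone is wrong about remote work",
    "Show HN: a tiny Lisp interpreter in Rust",
    "Benchmarking SQLite write performance",
]
high_comment_ratio = [1, 1, 0, 0]  # 1 = more than 1 comment per 2 votes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, high_comment_ratio)

# Scores are probabilities; on HN-style data the highest scorers tend to
# be clickbait, which is what makes such a model useful as a red flag.
probs = model.predict_proba(headlines)[:, 1]
print(probs.round(2))
```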
I've gone looking around for a good open source sentiment analysis model for social media (as opposed to product reviews) and haven't found one. The trouble is the data set. The machine learning part of a model that makes angry toots go away is pretty easy in 2023 but since I don't want to read angry toots, the process of collecting 5000 angry toots would probably destroy me.
Content-based suppression of angry content would be highly effective if it were built into the platform, because it would reduce the transmission coefficient (https://en.wikipedia.org/wiki/Transmission_coefficient_(epidemiology)) and could spread out the debilitating work of rating content. But it would run into the awful political problem of people for whom being angry is part of their identity believing they are being singled out. If I made YOShInOn more of a Mastodon client than it already is, I could "enjoy the silence" in peace, but I'd have to do 100% of the rating work.
It's possible for special classes of threads / types of topics to have their own behaviors in theory, the framework here is meant to be extended like that. Here are some examples so you can see what the thinking is...
I think that having more powerful tools like this would help immensely with many problems we have managing large communities at scale. It's at least partly due to trying to use the exact same thread models for everything. These things don't exist yet, but the code expects this sort of thing to happen in the future, so it won't be a refactoring mess trying to add them someday.
You bet! I particularly like the megathread idea, and also the ideas that try to produce something permanent (a wiki) based on ephemeral discussions.
An example of what does not work, in my mind, is the threading model in TwitterX and Mastodon, which has numerous problems. Some of them are self-inflicted: for instance, when I follow a link to TwitterX sent by someone who wants me to see a whole thread, it seems like I usually see just the first message because I am not logged in. One time I followed a link to what was supposed to be a sequence of 30 or so tweets by the same author, and I saw nothing but this person's followers saying "SQUEE!!!"
A big problem there is that people try to use threads to tell a complex story, and that works... sometimes. But then they get mad when other people butt in and mess up what they are doing. Those kinds of sites definitely need a better threading model than they have.
I'm not a moderator (Tildes doesn't really have those; Deimos is admin and he's busy as it is).
But I also think it's important other users learn to recognize this pattern.
Just telling individuals doesn't scale, and the community is thus bound to eventually be overwhelmed. I believe a community that actively uses such comments as opportunities to teach users to recognize them is on a more sustainable path, one where their frequency trends down.
May I ask what gives you that impression? I always thought Tildes had a pretty healthy set of guidelines outlined in the documentation: https://docs.tildes.net/
Do you have any examples of other places that you feel have better "explicit rules"?
Just to add on to this, Tildes definitely has rules (which is why users are occasionally banned). What this site lacks, however, is an explicit itemization of every infraction that will get you banned. And that's by design. An explicit list of rules is really just a list of loopholes; see "On a technicality". Rather than let assholes bend the rules repeatedly until they drive away everyone else, assholes are just banned when they prove themselves assholes.
Edit: I can't seem to find the blog post I'm referencing; it might have been called something else. If you know what I'm talking about, leave a link!

Edit 2: Thanks to @boxer_dogs_dance and @Minori for the link! Here it is: https://eev.ee/blog/2016/07/22/on-a-technicality/
Also, if anyone wants to read past discussions of ideals for how people behave on this site, ~tildes is a good place to find those discussions. There is no obligation; the rules are few and loose, but the ban hammer will absolutely come down if they are flouted. On the other hand, aspirations are high. I think most of us want to build a civil, pleasant, fun, informative community.
I think this is the blog post you're looking for: https://eev.ee/blog/2016/07/22/on-a-technicality/
Found it from this previous discussion on Tildes' code of conduct: https://tildes.net/~tildes/189l/the_code_of_conduct_doesnt_say_enough
Tildes has had that discussion over the past five years, and the consensus is that explicit rules lead to a lot of really tricky problematic behaviours.
One person's signal is another person's noise. I find that policing others' behavior is rarely effective and often more annoying than the offense itself.
Just vote for what you want to see.
There are a few ways to respond to your post.
I could ignore it. I could post a snarky dismissal and point out that OP has a bunch of upvotes in this thread and an Exemplary label, while your comment? Not so much. Or I could gently talk about all of the discussion Tildes has had about this in the past.
What's the best thing for me? For you? For Tildes?
The thing that's best for me is probably to ignore it. The thing that's best for Tildes is for me to ignore it, or to engage politely. The worst thing for Tildes, and possibly for you, is the dismissive snark.
Strongly disagree, at least in the context of a site like Tildes. The idea here has always been to promote a specific type of discussion and community, and some types of posts simply do not do that. I'm talking about low effort jokes, throw away lines, and generally exactly the type of noise @Adys is talking about here. While some people may enjoy that type of content, I would say that ideally Tildes is not the place for it. There are a plethora of sites online that one can go to to get that fix, and meanwhile allowing it here only serves to lower the quality of interactions around these parts.
If the history of online content aggregators has taught us anything, it is that curation by voting alone rarely works, and you quickly get low-quality, low-effort content dominating the site. The issue is that voting is easy, and people tend to vote for things they agree with or that make them feel good, regardless of whether that content is actually making a meaningful contribution. Look at any large subreddit for an example. Meanwhile, active community moderation has been effective in maintaining site standards; see places like Stack Exchange for proof of that.
We have decades of real-world examples to show that "vote for what you want to see" on the internet devolves into what many would call "jokes" and "noise". That's part of the problem this site was created to address. I'd argue a major part of Eternal September isn't when a bunch of comments going against the established culture come in, because every space will have its noise and fluff. It's when those comments are "voted" to the top by the lurkers, who far outnumber commenters, and it starts to shape how even regulars behave.
You are correct, but this site does have its own definition of "noise": https://docs.tildes.net/instructions/commenting-on-tildes#noise-label
I imagine this is an agreeable definition to abide by with regard to noise.
Nah