27 votes

Topic deleted by author

30 comments

  1. [3]
    kfwyre
    Link
    • Exemplary

    I think, for me, "trust" can most succinctly be distilled down into a heuristic through the idea of uncertainty. A new user, making their first post, carries with them maximal uncertainty. Are they here to be part of this community? Are they a spammer? Are they a troll? Are they here to start fights? Are they here to shitpost? None of this is known, because we don't have past behaviors that yield meaningful insight. High uncertainty; low trust.

    On the flipside, a user who has been here for a long time and consistently posts things that are in line with the community norms is someone for whom there's a lot of certainty. Their intentions and methods are known because they've demonstrated over and over again, through their actions, enough for us to infer their purpose here (which is to be a contributing member of the community). Low uncertainty; high trust.

    A middle case could be a user who has been around a long time but posts infrequently, a user who has been around a short time but posts quite frequently, or a user who has demonstrated many positive AND negative contributions to the community. Maybe they regularly make comments that are tagged as Exemplary but they also regularly make comments tagged as Malice too. A middling trust score, for whatever reason, says "we're fairly certain they can meet community standards, but we're not giving a total green light just yet." Middling uncertainty; middling trust.

    So, with this idea in mind, I think we can most easily codify "trust" as the accumulation of certainty through repeated actions. I think this is important because I don't want "trust" to become a proxy for "power user" or create a hierarchy on the site. Furthermore, I don't think trust should be linked to things like ideological position or even knowledge level. To me trust is about meeting the expectations of the community. This, I think, can be incredibly valuable, because if I want to engage someone I disagree with, I'm much more likely to do so if I see that they have a high trust score. That means I know that, even if we're on opposite sides of a very divisive issue, they're likely to uphold the norms of the site because they've demonstrated a consistent ability to do so in the past. As such, we're far more likely to have a productive exchange than an inflammatory one, all because both of us can be trusted to abide by a shared set of values in the form of the social contract here.
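
    If it helps to make that concrete, here's a toy sketch (mine, not anything Tildes actually does) that treats each action as evidence and models trust as a Beta posterior - the score and the confidence in the score stay separate:

    ```python
    # Toy model: trust as certainty accumulated through repeated actions.
    # All numbers are invented for illustration.

    def trust_and_uncertainty(good_actions: int, bad_actions: int) -> tuple[float, float]:
        # Beta(1, 1) prior: a brand-new user starts maximally uncertain.
        a = 1 + good_actions
        b = 1 + bad_actions
        trust = a / (a + b)                                   # estimated trust
        uncertainty = (a * b) / ((a + b) ** 2 * (a + b + 1))  # posterior variance
        return trust, uncertainty

    print(trust_and_uncertainty(0, 0))     # new user: trust 0.5, high uncertainty
    print(trust_and_uncertainty(200, 2))   # long consistent history: high trust, low uncertainty
    print(trust_and_uncertainty(30, 25))   # mixed record: middling trust, middling uncertainty
    ```

    The same mechanism covers all three cases above, because the score and the certainty about the score are separate quantities.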

    26 votes
    1. [2]
      Gaywallet
      (edited)
      Link Parent

      I'm wary about the idea of making some of this visible. One of the reasons I enjoy Tildes is that I don't necessarily have to pay attention to who is saying something so much as the message they are conveying.

      Any visible trust score will necessarily result in some individuals who are invested in the site attempting to game the system to become more popular or noticeable. You mention exemplary and malice contributing to a user's trust - if the number of exemplaries, for example, were displayed, there would necessarily be a bias towards individuals who have been on the website longer and have accumulated more exemplary tags. This would also influence the behavior of other users, who upon seeing that this individual has contributed a lot of exemplary content would then be more likely to tag any reasonably good content by them as exemplary. I think they'll likely be able to get away with more malicious posts as well, without being tagged as malicious.

      The same can be said of the reverse - people with a lot of malice would likely be more critically examined, and when these individuals are making good points they may be less likely to receive an exemplary tag, further pushing their score in the undesirable direction.

      When considering what kind of trust scores might be beneficial without necessarily causing harm, the only thing I can think of is a 'truth' score - where people (or an algorithm) could determine whether a user ever puts forth false arguments. But even then, I could see this being abused in some fashion if users had any input into this (as many things aren't quite so black and white) or seeing it hurt someone's score because they previously believed and posted about something which they have since changed their minds on - perhaps even after being replied to by someone more knowledgeable on tildes.

      Unfortunately, I'm not sure that any sort of trust scoring should exist for tildes, because the very visibility of such a score will significantly modify how we interact with other users. If someone's history of arguments is important to you, an extension which allows you to tag them might be useful, but I'd argue that such a tagging system should see only very minimal use for indicating 'trust'. I personally only tag repeat offenders, and tag them with the specific offense they repeatedly commit, such as "believes black people have lower IQ". Even though I try to take that context into consideration only when it's important (such as during any discussions involving ethnicity or psychology, given the previous example tag), I still find myself treating many of these individuals with slightly more hostility than I normally would. It can be a risky tool.

      **minor spelling/grammar edits

      12 votes
      1. Amarok
        Link Parent

        I think you're right, but at the same time, I don't think it matters all that much. There's no need to share all of the metrics with all of the users all of the time. It's probably wiser to show only those metrics that are necessary for that user, in that context, to make a decision based on trust. It's likely some of the metrics would only be shown to higher trust users.

        Racking up counts of exemplary and malice tags, for example - there's no reason every user needs to see that displayed as a badge next to everyone's username. The only time it'll come up is when bailiff-type moderators need to make a decision on whether someone stays on the site or gets a mute/ban of some kind. The presence of that data will indeed bias the bailiff on how to handle the situation - but that's precisely the point of having that information in the first place.

        Whenever we show something to a user, you can bet your ass it'll have a feedback effect on behavior. The comment labels here, all of them, were once displayed colorfully and with counts. It prompted tag-war behaviors instantly, which is why it was turned off except for exemplary labels (and we still argue about keeping those visible as well).

        The trick is finding a way to make that feedback have a positive effect, rather than a neutral or a negative one. If that can't be done, it's probably a bad idea to display that particular bit of information and let it distort the community norms. There will come a time when this place hosts hundreds of communities, and each of those communities will begin creating their own systems and tools for their own purposes. It'll be like having a reactor that's constantly brewing up new tools/toys. If we watch those experiments closely, we'll be able to figure out what works and what doesn't. The users can and will do this with or without the help of the site itself (with browser addons like RES/TEX) so there's no point trying to stop that trend. Better to embrace it and put it to work for everyone.

        If this sort of data is being hidden from the users to prevent distortion of community norms, you'd think that would damage the trust in Tildes itself as a whole. That's unlikely to happen, though. The code is open source, so the developers and users can see exactly how it all works, even though they can't necessarily see what actual data is being stored. They'll still know the mechanisms and even the exact structure of the database. Tildes can keep the process open even if it's not sharing every metric through the interface.

        5 votes
  2. [3]
    anahata
    Link
    • Exemplary

    I'm going to turn this on its head.

    Do we need this? What benefits does codifying trust grant? This sounds like a technological solution to a social problem, and those never, ever work. The computer security industry exists as proof of this.

    Furthermore, I don't think we can codify trust. It's a social, subjective, ephemeral, ineffable, non-discrete thing. How do you say how much you trust someone? Why limit yourself to what a computer can model? This loses a lot of the nuance of the human experience. Let users decide who to trust; don't try to codify it. Computers are miserable at modelling complex human relationships. Trust is a complex human relationship.

    18 votes
    1. Amarok
      Link Parent
      • Exemplary

      The history of online discourse and the current state of any large website are all the evidence I need that yes, we do need this. Literally everything anyone has ever tried has failed catastrophically over time, with the sole exception being communities that require the user's real world identity to be known (and thus link trust/reputation with the real world). Those communities remain small, however, since most people aren't comfortable with that level of accountability.

      People think of this as teaching the 'Tildes' system how to judge trust - as if there's a technical solution to the social problem, as if we could somehow program the website to make all of these judgements. We all know by now that's bullshit. That perspective is the wrong way to think about this problem, and as you point out, it'll never lead to useful solutions.

      A better way to think about it is this: What form of digital prosthetic can we build to enable Tildes users to judge trust? The website is just a flight recorder, really. It's the humans using it that have to weigh in on trust and make decisions using whatever trust information is collected by normal everyday interaction with the communities and other users on the site. Comment labels are at the beginning of this process right now.

      You have to design it for humans and not for algorithms. Every human social behavior (arguments, friendships, collaborations, conversations, crime, forgiveness, etc) that appears in the real world needs a digital prosthetic of some kind to enable it to work both online and at a larger scale than in real world interactions. When crafting those tools (which could be very simple, or fiendishly sophisticated) we also have the opportunity to put a thumb on the scale a bit and tip the balance of the interaction towards the better angels of our nature. Positive feedback loops, rather than negative ones.

      What we're really talking about here is how to identify and mitigate the damage from the 1% or less of participants that enjoy wrecking the place for everyone else. It doesn't need to get more sophisticated than that to beat every other forum that's come before it. It could certainly go a lot further over time, but that's all we need to make progress on these issues. We don't need this to become as complex as a credit score and create detailed profiles on every user - that's kinda the opposite of the Tildes philosophy on data collection.

      The trust system just needs to do a couple of basic things - see the sketch after this list.

      • gate access to feedback mechanisms (labels, tags, reports)
      • gate access to different toolsets (editors, curators, bailiffs)
      • maintain some sense of a user's level of participation in any given community
      • maintain a sparse behavioral record (does this user make everyone else's experience worse or better)
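
      Something in that spirit, as a deliberately crude sketch - every field name and threshold below is invented, not a schema proposal:

      ```python
      # Crude sketch of trust-gated access; thresholds and fields are invented.
      from dataclasses import dataclass, field

      @dataclass
      class UserRecord:
          participation: dict[str, int] = field(default_factory=dict)  # posts per group
          helped: int = 0   # sparse behavioral record: made things better
          harmed: int = 0   # sparse behavioral record: made things worse

          def trust(self, group: str) -> float:
              activity = min(self.participation.get(group, 0) / 100, 1.0)
              behavior = (self.helped + 1) / (self.helped + self.harmed + 2)
              return activity * behavior

      def can_label(user: UserRecord, group: str) -> bool:
          return user.trust(group) > 0.25   # feedback mechanisms unlock early

      def can_use_bailiff_tools(user: UserRecord, group: str) -> bool:
          return user.trust(group) > 0.75   # heavier toolsets unlock much later

      u = UserRecord(participation={"~tech": 412}, helped=40, harmed=2)
      print(can_label(u, "~tech"), can_use_bailiff_tools(u, "~tech"))  # True True
      ```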

      If we can figure out how to do that, we have a shot at moving the Eternal September problem into uncharted territory and making some progress.

      9 votes
    2. 9000
      Link Parent

      I agree, however, computers are already in the mix messing things up. Brains are good at remembering faces. Brains are good at remembering up to a couple hundred people they interact with regularly before defaulting to heuristics like tribalism. And look, we're already breaking those rules. We can't see faces, and there are hundreds of people here already, and someday there may be thousands of active members or more.

      My point is not that you're wrong, my point is that it will already be difficult to maintain trust relationships on a large forum site, so if that's something we want to do, we might have to augment the UI or structure of the site to enable it. Will that change perfectly capture the essence of trust? No, but it might help grease the wheels of trust, allowing it to operate smoothly.

      For the record, I don't know what that change should be, but I like some of the ideas in this thread.

      2 votes
  3. [9]
    skybrian
    Link

    I'm not sure what trust is, but I can say what it isn't.

    Trust isn't transitive. Just because I trust you and you trust someone else, it doesn't automatically mean I trust them.

    It also isn't a single number. I may value your expertise for one subject but also think you're kind of batty about a different subject. (That's okay, we're all a little crazy sometimes.)

    So I think each user has to use their own judgement? For now, you can click on the name of a user to see if you've interacted with them before and what they've said.

    I agree that it might be nice for Tildes to help a little with remembering other people. I like the idea of letting you "star" usernames, purely as a private way of remembering them, and you can use stars for whatever purpose you like.

    It might be interesting to let people add an arbitrary emoji to their name as "flair", though I'm wary that they could take on hidden meanings that make people new to the site feel like they don't understand the UI.

    17 votes
    1. [8]
      umbrae
      Link Parent

      In your view, trust may not be transitive, but is trust aggregable? For example, is it worthwhile to know that a large portion of a particular community has some level of trust in a particular user? And if so, is that a form of transitivity?

      (Edit: just for clarity I am genuinely asking, I don’t have a very strong opinion on this except in how it may make the site operate.)

      6 votes
      1. [5]
        spacecowboy
        Link Parent

        A term for aggregated trust is reputation.

        The problem with any global reputation system (or, one number per user) is that it is often easy to game.

        11 votes
        1. [4]
          umbrae
          Link Parent

          I think that’s true in the base case (like karma) but I wonder if there are other more novel ways to aggregate trust. Do the folks I trust trust this user in aggregate? Does a random sampling of users trust this user? Does the specific community I am a part of (like, a tilde section) trust this user?

          There’s probably more here but I agree with you on the general reputation case and it being gameable.
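
          For what it's worth, the "do the folks I trust trust them" version could stay viewer-relative instead of global. A toy sketch, assuming a purely hypothetical data model:

          ```python
          # Hypothetical: trust_graph[a] is the set of users a personally trusts.
          trust_graph: dict[str, set[str]] = {
              "me":    {"alice", "bob", "carol"},
              "alice": {"dave"},
              "bob":   {"dave", "eve"},
              "carol": {"dave"},
          }

          def aggregate_trust(viewer: str, target: str) -> float:
              """Fraction of the viewer's trusted circle that trusts the target."""
              circle = trust_graph.get(viewer, set())
              if not circle:
                  return 0.0
              return sum(target in trust_graph.get(u, set()) for u in circle) / len(circle)

          print(aggregate_trust("me", "dave"))  # 1.0: everyone I trust vouches for dave
          print(aggregate_trust("me", "eve"))   # ~0.33: only bob vouches for eve
          ```

          Since the answer depends on who's asking, there's no single number for everyone to farm.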

          5 votes
          1. [3]
            NaraVara
            Link Parent

            I think that’s true in the base case (like karma) but I wonder if there are other more novel ways to aggregate trust.

            So the issue with aggregated trust is we all have different frameworks for who is trustworthy. The David Duke fan club, for example, likely has completely different standards for what makes a person trustworthy than any club I would be in.

            So the first step to "trust" is a clear and unambiguous definition of what the "model" community member is, what sorts of behavior are expected, and what kinds of behavior are verboten. People tend to be very leery of explicitly defining these out of fear of censorship or having "chilling effects" on certain perspectives or types of speech.

            In a large enough community, groups will break up into factions, and they all might be worthwhile contributors, but various groups might also hate each other. So suddenly you've got a Jets vs. Sharks situation where they have plenty of trust within the faction, but very little trust of another faction.

            Out here in the real world, we form bonds of trust and affinity based on outward signifiers. Most of these are subtle, like expressing familiarity with certain cultural references (music, movies, etc.), the types of clothing you wear, how you decorate your house, what brands of items you use, etc. If you wanted to import this sort of community affinity to an online space, you'd probably want ways to do that. Like, have options for "pieces of flair" attached to people's posts or profiles so folks can get a sense of what they're about.

            The forums of old had things like this with avatars and forum signatures and things. But those could very quickly become annoying for their own reasons. There might be other options.

            4 votes
            1. [2]
              Amarok
              Link Parent

              I'll plug hats and coats again. ;) I still can't think of a not-shit way to make that work in the Tildes interface, though.

              Part of the reason other sites are so easily gamed is that 1) there are no consequences for bad behavior and 2) other sites don't make an effort to forget or to forgive - anything they collect is written in stone, more or less. On reddit, you can rack up a massive karma score in hours, it doesn't decrease or decay over time, and if you get banned you can just do it again by making a new account in seconds. Most online spaces are locked into making money, and their capitalistic incentives won't allow them to ban or punish in any meaningful way - that would stunt their growth metrics. They can never have enough users, so they can never ban anyone.
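
              Decay, at least, is cheap to build. Here's a toy version of "forgetting", with a half-life pulled out of thin air:

              ```python
              # Toy decay: each event's weight halves every HALF_LIFE_DAYS.
              HALF_LIFE_DAYS = 90.0

              def decayed_score(events: list[tuple[float, float]], now: float) -> float:
                  """events are (timestamp_in_days, weight) pairs; old ones fade out."""
                  return sum(w * 0.5 ** ((now - t) / HALF_LIFE_DAYS) for t, w in events)

              # Karma farmed in a single day is nearly worthless a year later:
              print(decayed_score([(0.0, 1000.0)], now=365.0))  # ~60, not 1000
              ```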

              If we're going for a human-centric approach, it does make some sense to have a mechanism that works like an avatar, because humans like to express themselves. Part of me is still disgusted by the idea because I don't go in for that sort of thing. I'm atypical, though, and smart enough to see that just because I don't like something doesn't mean it hasn't got merit. I'll content myself with trying to imagine a way to make it work well.

              I think what intrigues me most about the concept is the idea that some aspects of your avatar might be out of your own control, set by other users rather than just yourself (see that thread I linked above). I can't think of any examples where that's been tried before. There's an untapped well of network effects lurking there.

              3 votes
              1. NaraVara
                Link Parent

                Part of the reason other sites are so easily gamed is that 1) there are no consequences for bad behavior and 2) other sites don't make an effort to forget or to forgive - anything they collect is written in stone, more or less. On reddit, you can rack up a massive karma score in hours, it doesn't decrease or decay over time, and if you get banned you can just do it again by making a new account in seconds. Most online spaces are locked into making money, and their capitalistic incentives won't allow them to ban or punish in any meaningful way - that would stunt their growth metrics. They can never have enough users, so they can never ban anyone.

                This is a good point. Not being wedded to raw subscriber/user counts and focusing on meaningful contributions/interactions instead definitely gives us more scope to punish bad actors. When I was a forum mod, the most useful tool at my disposal was the 24 to 48 hour ban. It was a nice way to let people know, "Hey, we value you as a person and don't want to exile you, but you're being kind of a jerk and it's for the best if you take a break and cool off." Sites like Reddit and Twitter don't provide much scope for these kinds of light-touch, non-bureaucratic interventions that keep the tone collegial. Consequently, all their interventions end up being heavy and loaded down with wide-ranging implications.

                Part of me is still disgusted by the idea because I don't go in for that sort of thing.

                I hear you. I go through so much effort in my company Slack channel to make people select avatars for themselves so we can have faces to names, and half of our security devs have picked pictures that are generic silhouettes. I don't get it...

                As a security/privacy nut, though, we maybe would want to give people a little warning as to whether they're really sure they want a picture of their face, their kid, or any other personally identifiable information associated with their posts.

                I think what intrigues me most about the concept is the idea that some aspects of your avatar might be out of your own control, set by other users rather than just yourself (see that thread I linked above). I can't think of any examples where that's been tried before.

                Hmm, this is an interesting idea. I'm thinking of the character portraits in Crusader Kings 2, where there are mods that cause the portraits to change based on your character's stats. At a base level it does stuff like giving you a double chin if the character is fat. But it can also do subtle stuff, like putting a cat in the background if you have a pet cat, or including military props and various types of helmets and garb if you're really high on martial skill.

                I'd be averse to user-based tagging, though. I feel like the most fun implementation would involve some sort of NLP on your posts to evaluate general tone and language choice and give you badges for things. Elaborate sentence structure or vocabulary might give you a quill pen. General combativeness might have you with a spear in your hand. If a lot of posts responding to you have key words like "stubborn" or "asshole", maybe it replaces your face with a butt...
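
                Even a dumb keyword version gets you most of the joke - a throwaway sketch, with the word lists and badges obviously invented:

                ```python
                # Silly proof of concept; real NLP would be fancier, the idea is the same.
                BADGE_RULES = {
                    "quill pen": {"notwithstanding", "heretofore", "ineffable"},  # vocabulary
                    "spear":     {"wrong", "nonsense", "ridiculous"},             # combativeness
                }

                def assign_badges(posts: list[str]) -> set[str]:
                    words = {w.strip(".,!?").lower() for post in posts for w in post.split()}
                    return {badge for badge, triggers in BADGE_RULES.items() if words & triggers}

                print(assign_badges(["That is ridiculous nonsense!"]))  # {'spear'}
                ```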

                The possibilities are endless! That's not really a reputation system though. That's just a fun little forum quirk.

                Maybe the real move would just be to let people tag each other's contributions in a thread with a member of the flame warrior roster.

                4 votes
      2. retiredrugger
        Link Parent

        That could be an interesting way of measuring it. Perhaps we could quantify the number of interactions between two users to establish a basis of trust. That way, whether or not they have opposing viewpoints, the fact that they engage with one another often demonstrates a level of respect, as they're willing to take the time to effectively communicate their points to one another.

        Trust between individuals could then be measured by the number of times they interact and how many characters they use in those interactions. These interactions within certain communities could then be used to establish a relationship between the individual and the community.
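
        As a strawman, that measure might look something like this (the log weighting is my own addition, just so one wall of text doesn't outweigh many real exchanges):

        ```python
        # Strawman pairwise trust from interaction count and length.
        import math
        from collections import defaultdict

        interactions = defaultdict(list)  # (user_a, user_b) -> reply lengths in chars

        def record_reply(a: str, b: str, text: str) -> None:
            interactions[tuple(sorted((a, b)))].append(len(text))

        def pair_trust(a: str, b: str) -> float:
            # Every exchange adds trust; longer replies add a bit more.
            return sum(math.log1p(n) for n in interactions[tuple(sorted((a, b)))])

        record_reply("alice", "bob", "I disagree, and here is a detailed reason why...")
        record_reply("bob", "alice", "Fair point, though consider this counterexample...")
        print(pair_trust("alice", "bob"))  # grows with both frequency and effort
        ```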

        3 votes
      3. skybrian
        Link Parent

        It seems like that would end up turning into a slightly different way to determine popularity. Let's say a community is divided, and people on each side vouch for different people. Then if you count the number of people who vouch for them, it would be sort of like a follower count on Twitter. You know someone with a high number is popular with their followers, but you don't really know if they're trusted by your side (assuming you have a side).

        So, I'm not sure that keeping score in that way is a good idea.

        3 votes
  4. [12]
    Odysseus
    Link

    My greatest gripe with this idea is that it puts too much weight on the credibility of the individual rather than the words they're saying. Tagging users only helps us more easily avoid those whose ideas we might disagree with, and acts as a way to separate those we consider to be inside from those we consider to be outside. In my opinion, usernames should be hidden except from moderators.

    7 votes
    1. [5]
      skybrian
      Link Parent

      I see both sides to this one.

      As a writer in an Internet forum, I think it's good to remember when talking with strangers that they don't know who you are. If you make a bare assertion of fact: "X is true" then it basically means "Some rando on the Internet believes X is true." Reading bare assertions of fact by strangers is often not that useful to the reader, which is why it's often good to remain humble and act like an anonymous worker bee. A good approach is to do things Wikipedia-style and back up what you say with links.

      But this is a fairly unnatural way of doing things and many people are not used to writing this way. They expect their beliefs to be taken seriously and get offended if you don't. (And on Twitter, people get made fun of for telling basic things to experts without realizing who they are talking to.)

      Also, it is nice to interact with genuine experts.

      And I think users avoiding each other if they don't get along is a fine way to keep the peace. We don't need to solve every personality conflict.

      And, well, don't you want to meet people and make friends?

      So there are benefits to users getting to know each other and forming a genuine community.

      7 votes
      1. [4]
        Gaywallet
        Link Parent

        Also, it is nice to interact with genuine experts.

        The thing is, a genuine expert is going to be pretty easily noticeable based on how they interact with you and the kind of information they present. Knowing that I have a degree in neurobiology means nothing, as I could have forgotten all the information by now and be hiding behind the guise of a degree to espouse beliefs that are not necessarily represented in the scientific literature.

        However, if I make a post or reply about neurobiology where I go in depth into how something works, provide sources, and in general contribute something of high value, it doesn't matter whether I have a degree or not - it's clear that there is expertise purely by the content of the message alone.

        The risk of flagging 'experts' is that expertise does not guarantee knowledge and it allows people to turn a blind eye to bias, especially if they are not versed enough to understand that there is bias in the message being conveyed.

        5 votes
        1. [3]
          skybrian
          Link Parent

          It seems like disclosing some things about yourself can be useful to help the reader account for bias? (For example if you're writing about your employer, it's often expected that you say so.)

          You're right that this is often done in the post itself when it's relevant and we remember to do it. But putting it in your profile doesn't seem so bad either?

          Also, getting to know other people changes what questions we ask and what we choose to talk about. You're more likely to ask questions about topic X if you're hanging out in a forum where there are other people who have experience with topic X. There are probably interesting things we could talk about that we don't because we don't know each other very well.

          Something like "ask historians" on Reddit is an extreme case. People ask all sorts of questions that they wouldn't ask if they didn't know there were other people reading who might actually know the answer. (Creating a history forum doesn't do anything by itself.)

          3 votes
          1. [2]
            Gaywallet
            Link Parent

            You're right that this is often done in the post itself when it's relevant and we remember to do it. But putting it in your profile doesn't seem so bad either?

            Agreed

            People ask all sorts of questions that they wouldn't ask if they didn't know there were other people reading who might actually know the answer.

            Agreed, but I think that can grow organically as a section in Tildes. There's no need for any sort of visible trust system for this to grow - the only problem is getting enough people to frequent that section for there to be good answers.

            3 votes
            1. skybrian
              Link Parent

              Maybe it would be better to change the goal from "trust system" to "helping people get to know each other?" That will probably happen naturally too, but software does make a difference sometimes in how a community grows.

              2 votes
    2. [6]
      anahata
      Link Parent

      usernames should be hidden except from moderators

      You can mostly accomplish this with client-side browser tools. I hide a bunch of stuff on reddit, for example, like points and karma. Working out who's a moderator or not may require some JS or other complicated machinery, though.

      1 vote
      1. [5]
        Odysseus
        Link Parent

        Yes, I'm aware I can do it locally, but ideally they would be hidden from everyone, to avoid the creation of celebrities out of prolific users like we see on reddit. One thing I've seen on reddit that I haven't seen here yet, but can definitely see happening, is people dismissing oftentimes (though not always) valid points and ideas they disagree with by pointing to a user's post history or other associations. Hiding people's identities forces people to address what is being said rather than attack or dismiss the individual based on their history. So long as moderators are able to see usernames and post histories, it wouldn't impede active and effective moderation either.

        4 votes
        1. [2]
          Comment deleted by author
          Link Parent
          1. Odysseus
            Link Parent

            I think that would be great! Comments could still be ranked based on votes, and we should be able to see our own votes, but otherwise it doesn't need to be public, as that might encourage the bandwagon effect.

            2 votes
        2. moonbathers
          Link Parent

          On the flip side, knowing who you're talking to, especially in a small community like this, means you don't have to rehash every single part of an argument every time you talk to someone, because you already know who you're talking to and at least partly what they think about a particular issue. In larger communities like Reddit, everything has to be endlessly rehashed because it's new people talking every single time. In addition, being able to see post histories lets you decide if people are arguing in good faith or not. That's not an issue on Tildes right now, but if Tildes were bigger it might become one.

          4 votes
        3. [2]
          anahata
          Link Parent

          A concern I have with anonymity is that it can result in radicalization like we see on 4chan and 8chan.

          I appreciate the “consider what is said, not who speaks” philosophy, and how a user’s history can undermine their point, but sometimes that’s necessary, e.g. to suss out trolling or other malicious action. This isn’t just the province of moderators. Everyone should be able to determine if someone is trolling.

          3 votes
          1. Odysseus
            Link Parent

            I think that's a great point, but I disagree that it can result in radicalization, as unlike 4chan or 8chan, users would not be truly anonymous, nor are they given their own space for dangerous rhetoric. 4chan and 8chan had /pol/, while reddit has the ability to make your own spaces. Tildes forces people to interact under set, fairly benign categories, where myopic, ignorant, or even dangerous ideas can be exposed for what they are - trolling or not. My unease with allowing people to further categorize each other is that it allows for people to create their own echo chambers where they can more easily disregard dissenting opinions - be they hateful ones or ones that denounce their bigotry.

            2 votes
  5. [2]
    retiredrugger
    Link

    Before we start discussing how to measure "Trust", should we not elaborate on the parameters of what defines "Trust"? Trust is inherently a subjective quality depending on the biases of a community. Furthermore, should real world marks of character (e.g. being an Eagle Scout) be automatically labeled on Tildes?

    5 votes
    1. mike10010100
      Link Parent

      In addition, I personally comment under the exact same username across every social media network I join. I am easily Googleable, and don't hide behind my username. Does that give me a level of "trust" if I am willing to verify my IRL identity? Or does that not matter when faced with potentially more anonymous users who might not care as much about their online identity and could simply "gang up" on others?

      3 votes
  6. gergir
    Link

    Isn't it better to assume everyone's all right until they show that they're not? Trouble- or noisemakers usually give themselves away quickly, no?

    3 votes