14 votes

How to Protect Tildes From Subjugation by Propaganda

Right now it seems like a pretty happy and active alpha community.

But even if you set aside all the Russian-bought advertising during the last US election, all social media sites currently have a problem with propaganda. You can call it by other names: hail corporate, shills, astroturfing, bot armies, sockpuppets, etc. But it eats quality discourse, kills genuine community, and at its most extreme can serve to radicalize the young and marginalized.

What pieces of Tildes are currently in place or planned to fight this scourge of digital communication? What are the pitfalls, and what are the successes you've seen? Because I believe that if we ignore this eventuality, something really special will die.

12 comments

  1. [4]
    SaucedButLeaking
    Link
    Part of it is the reputation and trust system, making it difficult to push a thread with an army of sockpuppets.

    This brings about the issue of selling accounts or taking payment to post / vote with a high-trust account. I have no good ideas for how to combat this other than making public and ugly examples of anyone found to be doing so.

    7 votes
    1. Archimedes
      Link Parent
      I think the trust system is key. The best way to discourage selling accounts is to make it take significant effort to accumulate good trust as well as allowing that trust to erode rapidly if the user starts displaying bad behavior. That way getting a trusted account would be relatively expensive and it wouldn't last very long once it is being abused.

      This, of course, is easy enough to say, but the devil is in the details to implement. How do you actually detect bad behavior and prevent legitimate users from losing trust due to false positives?
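
      One way to sketch the asymmetry described above (slow to earn, fast to lose) is with different update rates for good and bad behavior. The rates and cap below are invented for illustration, not anything Tildes has specified.

```python
# Hypothetical asymmetric trust update: trust accumulates slowly on
# good contributions and erodes rapidly on bad behavior.
# GAIN_RATE, DECAY_RATE, and MAX_TRUST are made-up illustration values.
GAIN_RATE = 0.01   # small step toward full trust per good contribution
DECAY_RATE = 0.25  # large proportional loss per flagged action
MAX_TRUST = 1.0

def update_trust(trust: float, good: bool) -> float:
    if good:
        # Move a small fraction of the remaining distance toward the cap.
        return min(MAX_TRUST, trust + GAIN_RATE * (MAX_TRUST - trust))
    # Lose a quarter of accumulated trust on each bad action.
    return trust * (1 - DECAY_RATE)

t = 0.0
for _ in range(500):   # a long history of good behavior
    t = update_trust(t, True)
for _ in range(10):    # a brief spam campaign after the account is sold
    t = update_trust(t, False)
# t is now a small fraction of what took 500 good actions to build
```

      With these made-up numbers, years of good contributions are wiped out by a handful of flagged actions, which is roughly the property that would make a sold account a poor investment.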

      4 votes
    2. [2]
      gksu
      (edited)
      Link Parent
      Right now there are advertising companies with proprietary bot farms that raise up little Reddit bots from brand new to a year-plus old. They copy popular posts and thread responses, upvote each other, and only after getting a bit of success through karma do they start advertising, managing reputation, or spreading propaganda. How is Tildes' reputation/trust system any different?

      3 votes
      1. SaucedButLeaking
        Link Parent
        Most karma farming happens in /r/aww and /r/funny, since low effort content goes straight to the top on those subs. It's a lot harder to automate a trust farm when you need to have actual discussion that doesn't get shot down by existing users.

        Plus, Reddit is among the top five most-trafficked sites. It has a huge target on its back. We might never get big enough to be worth astroturfing, though that is probably a naive sentiment.

        The real answer is that the community here needs to be able to recognize such activities and nip them in the bud whenever possible. How successful we will be at that remains to be seen.

        7 votes
  2. [7]
    ContemplativePanda
    Link
    Tildes will have to make sure to implement good back-end methods and teams to help moderate this. Perhaps a user-based system too, since that is one of the best ways to fight it: with information and the power of the community. We have to be sure we don't confuse true propaganda with opposing opinions, though, as that kills discourse as well. Classification is important.

    4 votes
    1. [6]
      gksu
      Link Parent
      Yeah, classification is a big deal these days. YouTube uses sophisticated AI to police their uploads for discovery and monetization. But they're having trouble with their AI flagging anything tagged 'trans', for instance, as bad. This is probably because the AIs are fed hateful videos as "we don't like this stuff" input during development and go for the easy classification.

      It's a serious problem because you've got marginalized groups making sincere content and running afoul of the algorithm, but the fix isn't as simple as telling a human moderator not to do that. I get the outrage, but I also see the hard place YouTube is in, because the AI isn't human-written, so it's a bit of a black box. The only solution is to tweak the inputs on the next generation and see if that helps.
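
      A minimal, invented sketch of that failure mode: if nearly every training example containing a given word is labeled bad, a naive keyword classifier learns the word itself as the signal and then flags sincere content from the very group being targeted. All data here is hypothetical, and real systems are far more complex than this.

```python
# Toy illustration (invented data) of the "easy classification" failure:
# a naive bag-of-words approach learns any word seen only in bad
# examples as a marker of bad content.
from collections import Counter

training = [
    ("hateful rant about trans people", "bad"),
    ("more trans hate speech here", "bad"),
    ("cute cat compilation", "ok"),
    ("cooking tutorial pasta", "ok"),
]

# Count how often each word appears overall and in "bad" examples.
bad_counts = Counter()
all_counts = Counter()
for text, label in training:
    for word in text.split():
        all_counts[word] += 1
        if label == "bad":
            bad_counts[word] += 1

def looks_bad(text: str) -> bool:
    # Flag if any known word appeared exclusively in bad examples.
    return any(
        all_counts[w] > 0 and bad_counts[w] == all_counts[w]
        for w in text.split()
    )

# A sincere video runs afoul of the learned shortcut:
print(looks_bad("my trans coming out story"))  # flagged, wrongly
```

      The classifier never saw a benign use of the word, so the word itself became the "easy" signal, which is one plausible mechanism behind the over-flagging described above.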

      5 votes
      1. [2]
        Archimedes
        Link Parent
        > But this is because the AIs are (probably) fed hateful videos as input during development for "we don't like this stuff" and go for the easy classification.

        I'm guessing it's at least partly because users (and/or bots) maliciously flag trans related videos.

        3 votes
        1. gksu
          Link Parent
          Partly. But it could easily be that they feed the algorithm a bunch of trans hate videos and it takes the wrong signal from them.

          Here's a great video on how machine learning works and how errors like that could happen.

      2. [3]
        ContemplativePanda
        Link Parent
        Yeah, I think AI is the solution, but we are still developing the practices to tweak it. Though YouTube has done so many things so poorly in the past, I understand the contempt at yet another mistake on their part. I believe AI systems aiding humans in making decisions (as well as making easy ones on their own) will be sufficient for Tildes at the moment, especially if combined with some sort of community-wide, user-backed system to help moderate.

        1 vote
        1. [2]
          gksu
          Link Parent
          Yeah, AI is really the future of all moderation and a lot of other jobs. I imagine there'll be independent bot-farmer jobs in the future. We already saw a touch of that with that DeepFakes trash.

          1. ContemplativePanda
            Link Parent
            Yeah, it's kind of scary, but when trained right it's so much more effective than humans.

            1 vote
  3. [2]
    Comment deleted by author
    Link
    1. gksu
      (edited)
      Link Parent
      I agree in theory, but our everyday lives are so corporatized that I'm not sure how viable or popular that would be, not to mention how it would even function. I mean, I know I need to de-corporatize myself more, but do I have the right to force that on anyone else? Does Tildes?

      Edit: autocorrect

      1 vote