29 votes

So thinking ahead... how do we stop bots?

They've ruined reddit and Facebook for me. I can't help but second-guess every divisive or snarky comment on those platforms. I've heard that as much as 1/3 of Twitter is bots.

Does Tildes foresee an API (or does one exist already)? I honestly don't know much about how bots operate but figured APIs were the obvious route.

44 comments

  1. [17]
    Amarok
    (edited )
    Link

    We talked about this - I think we're asking the wrong question. Bots can do a hell of a lot of good work, too.

    The kind of bots that attack your site won't do well with the invite-only model and they'll be perma-gimped by the trust system if they ever do sneak in. I'm more interested in the other kind of bots - the ones users and mods write to automate or help with something on the website. Sure, a ton of those are shit, but some of them are gold, and plenty of places couldn't even function without bots. Automoderator was a bot before it was part of reddit.

    So, maybe we ask - how do we tame bots and make them a force for good?

    I'd argue for a special type of account - a bot account - and it should be very clearly labeled, and have no access whatsoever to any of the voting mechanics. Most 'good' bot activity is not about voting, it's about commenting or moderating - though, here, I think if there's a good idea for a tool, we're likely to clean it up and make it a part of the site code, rather than a bot extension. Maybe bots can be used to proof-of-concept some features, especially the ones we're not sure will work out.
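The labeled bot account with no access to voting could be sketched as a simple account-type check. A minimal sketch, with hypothetical names purely to illustrate the idea:

```python
from enum import Enum

class AccountType(Enum):
    USER = "user"
    BOT = "bot"  # clearly labeled, publicly visible account type

# Actions a bot account is never allowed to perform (hypothetical list)
BOT_FORBIDDEN_ACTIONS = {"vote", "tag"}

def is_action_allowed(account_type: AccountType, action: str) -> bool:
    """Deny voting mechanics to bot accounts; allow everything else."""
    if account_type is AccountType.BOT and action in BOT_FORBIDDEN_ACTIONS:
        return False
    return True

# A bot may comment or moderate, but never vote.
assert is_action_allowed(AccountType.BOT, "comment")
assert not is_action_allowed(AccountType.BOT, "vote")
assert is_action_allowed(AccountType.USER, "vote")
```

The point of the explicit type is that the restriction lives in one server-side check, rather than hoping every bot author behaves.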

    We also get to know exactly who is behind the bot account and what they are doing, and that's public information. We might even make the logs of the bot's activity very, very public - moreso than regular user behavior.

    If we want to get crazy about this, we could also require all the bots to be open-source, and even run the bots themselves on tildes' servers, downloading and compiling the code directly from a public open source repo like gitlab. That would also cut down on duplication of effort and give everyone a chance to see exactly how the bot works and make it better at its job.

    As for the kind of bots that are dragging reddit down right now - no thank you.

    35 votes
    1. [8]
      burntcookie90
      Link Parent

      A "bot account" is a necessity imo, and should be clearly distinguished in API usage documentation. Additionally, that account should be tied, at some point, to an actual human user via invite codes or something.

      20 votes
      1. [4]
        Scion
        Link Parent

        I agree. Every bot account should have an "owner", a normal user account to which it is linked. This is a half-thought-out idea, but perhaps the owner account should have to meet some kind of prerequisite to be eligible to open a bot account (time on the site, a certain number of votes, etc.). This prevents brand-new users from creating bots without first acclimating themselves to the website and the communities in which their bot will act.

        10 votes
        1. [3]
          safari
          Link Parent

          I wonder if bot accounts should maybe also be limited in scope, i.e. what communities they're allowed to post or act in. That could be listed on the bot's profile, and enforced by the site. In order to be allowed to act in a certain community, it has to get approval from that community somehow, or the admins. In order to act sitewide, it has to get admin approval.

          7 votes
          1. [2]
            Amarok
            Link Parent

            I think those are very reasonable restrictions. We've seen bots 'go rogue' on reddit before, and this sort of thing would also help limit the damage they can do, if the bot account is somehow compromised.

            5 votes
            1. Natanael
              Link Parent

              Yup, plenty of reddit mods, me included, have asked for the ability to greylist all bots by default, not allowing them to make visible posts without moderator permission.

              As it is today, they can appear out of nowhere and spam serious discussions with stupid crap. Novelty bots are extra annoying. Valuable bots can definitely tolerate being audited before getting permission.

      2. [3]
        Amarok
        Link Parent

        Agreed - I think the 'bot accounts' would probably require admin-activation.

        Someday tildes is likely to have a sandbox for developers to play in - if you've seen SubredditSimulator, well, this site is being coded by the guy who wrote that. People who want to play with making bots could do it in the test sandbox, so that they don't bother anyone while they work out the kinks in their toys. Once it's ready for primetime, we can audit it and set it loose on the site. This way, people won't really need to hassle tildes until the toys are ready.

        7 votes
        1. [2]
          Scion
          Link Parent

          Perhaps only allow bots to function in ~test without admin approval.

          1 vote
          1. Amarok
            Link Parent

            I was thinking only on a dev.tildes.net rather than anywhere on the real tildes.net. Completely isolated.

            9 votes
    2. [3]
      creesch
      Link Parent

      If we want to get crazy about this, we could also require all the bots to be open-source, and even run the bots themselves on tildes' servers, downloading and compiling the code directly from a public open source repo like gitlab. That would also cut down on duplication of effort and give everyone a chance to see exactly how the bot works and make it better at its job.

      'Crazy' being the key word here: if you want to stop people from playing with the API, this would be the best way to do it. I can't count the times I made a one-off script on reddit to do something, or simply wanted to test an idea before making it a full bot. Often I ended up not using it or not completing it, but as far as the API is concerned there is no difference: I'm using it, so I'd have to publish the code, which is of course a bit silly.

      Also, one of the things that allows a lot of websites like this to grow is having an easy-to-use API with which people can experiment and see what is possible. Putting a lot of heavy restrictions on it will discourage a ton of people from even trying.

      The most sensible way to approach this is simply to do the same thing Discord does: flag bot users as such and limit some of the actionable functionality like voting and tagging.

      Also, to go back to something you said a bit above, regarding the restriction proposal:

      Most 'good' bot activity is not about voting, it's about commenting or moderating - though, here, I think if there's a good idea for a tool, we're likely to clean it up and make it a part of the site code, rather than a bot extension.

      I think you are overly optimistic about how useful built-in tools can be for everyone, and you underestimate the creativity of individuals in finding ways to make their own ideas happen. Something I have learned from developing mod toolbox over the years is that a lot of different people have a lot of unique individual use cases that barely overlap with those of others.

      What an API allows is for people to implement their specific use case without it needing to be part of the regular website. This lets the website keep a clean interface and functionality while still being able to satisfy people who fancy some of the more obscure use cases.

      6 votes
      1. [2]
        crius
        Link Parent

        Putting a lot of heavy restrictions on it will discourage a ton of people from even trying.

        Well, there are always the middle-ground options.

        An example:

        • You get access to the APIs by requesting a token from your account profile. Tokens give you access to n requests per day.
        • You want more? Join the "production" program, which has requirements. One of them is having your code compiled by the tildes.net server from a public repository.

        It's really not a big deal, and it ensures transparency only for products that the developers are sure they want to make "public".
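The two-tier token scheme above could be sketched roughly like this. The quota numbers and names are invented for illustration, assuming a simple daily window:

```python
# Hypothetical daily quotas for the two tiers described above.
TIER_QUOTAS = {"default": 1_000, "production": 100_000}

class ApiToken:
    def __init__(self, tier="default", window_start=0.0):
        self.tier = tier
        self.window_start = window_start  # start of the current 24h window
        self.used = 0

    def allow_request(self, now):
        """Reset the counter each 24h window, then enforce the tier's quota."""
        if now - self.window_start >= 86_400:  # new day: reset usage
            self.window_start, self.used = now, 0
        if self.used >= TIER_QUOTAS[self.tier]:
            return False
        self.used += 1
        return True

token = ApiToken(tier="default")
assert all(token.allow_request(now=0) for _ in range(1_000))
assert not token.allow_request(now=0)    # daily quota exhausted
assert token.allow_request(now=86_400)   # next day: counter resets
```

Joining the "production" program would then just mean flipping the token's tier, with the audit requirements enforced out of band.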

        5 votes
        1. creesch
          Link Parent

          Well, I specifically did respond to the idea of being required to open up your code.

          I really think that as long as bot users are clearly labeled as such, and restricted from things like voting, you already have more than enough control over them while still keeping the platform very inviting for people to tinker with.

          In many cases the prospect of being limited in API usage (like Twitter does) is already enough to keep a lot of 'good' users from even considering making something for it.

          Having said that, if the plain rate limit is reasonable enough, I wouldn't be opposed to giving open-source consumers a higher limit. Basically the same thing reddit does for cookie-authenticated vs OAuth API users.

          Also, if you require consumers to be open source, at some point you are in the position of needing the means to audit their code and verify it is really the code that is actually running. If you don't, it is just a token requirement; if you do, it puts a lot of strain on your early organization.

          2 votes
    3. [5]
      Pilgrim
      Link Parent

      I like everything you said. Question, though: won't people just make bots that use real user accounts, or are bots so fundamentally different (API-based, I suppose) that it's just not a concern?

      Also, something mentioned in another thread I think, the current invite system is cool. There was some mention of users being responsible for who they invite. I love that. Maybe that could be public info in the same way tildes are, like we might have ~music.rock.alt. But for users it'd be @deimos.Amarok. It'd create clear accountability and if a group of folks are abusing the system/brigading it might be easier to detect. Sort of a rough thought there.

      Edit: Removed "I" from the above. It wasn't my idea lol. That's how you know it's a good one, you try to steal it!

      1 vote
      1. [4]
        Amarok
        Link Parent

        A real user doesn't actually use the API for anything, they use a web browser. That means a user account doesn't need the same kind of access to API functionality (or any at all, really). People can still write bots that scrape the page and interact that way (through the browser rather than the API) but that's much harder to do, and there are plenty of ways to detect and sabotage that behavior as well.

        We can certainly make this a lot harder than anyone else ever has before.

        1 vote
        1. kaushalmodi
          (edited )
          Link Parent

          I was actually going to request API support so that I can fetch comments related to my blog post links here and render them below my post.

          So that's one use of API by a real user.


          Update: I elaborate more on this, here.

          3 votes
        2. mocahante
          Link Parent

          A real user would still use the API through a third-party client, but that type of usage is much different than a bot's

          1 vote
        3. Pilgrim
          Link Parent

          Thank you for the explanation. That's what I wasn't sure of. Sounds like stopping bad bots on Twitter and Reddit isn't quite as hard as I thought - just shut down the APIs - but I understand why they don't want to do that, since doing so would impact a lot of legitimate uses as well, and I'm sure it's nice to report to shareholders all of the "users" they have.

          I really like the idea of bot accounts.

  2. [2]
    shiruken
    Link

    How about some kind of automated moderation feature? An 'AutoModerator' if you will.

    9 votes
    1. Scion
      Link Parent

      If only we could find someone who knew how to build such a thing!

      11 votes
  3. [3]
    fringly
    Link

    Good bots are needed, but can we please avoid the kind of spammy joke bots that thrive so much on reddit, yet add nothing.

    Recent ones that we have removed from our sub there include one that posts "ヽ༼ ຈل͜ຈ༽ ノ Raise ur dongers!" on hundreds of posts, several 'cat facts' bots that activate when they see the word "cat" and a 'tupac cares' bot that triggers on a comment that says "no one cares" and says "Tupac cares, if don't nobody else care."

    These are all from the last week or so, there must be hundreds out there and none of them offer anything quality - if we can avoid them here then it'd be so welcome.

    9 votes
    1. [2]
      Amarok
      Link Parent

      Hell. Yes. Those bots provide nothing of real value.

      Value-adding bots are like the gameofthrones flair bot, or listentothis' music popularity checkers, or the various collect-all-the-videos-in-one-comment bots (for some communities - others ban those).

      6 votes
      1. fringly
        Link Parent

        So long as a bot has a real purpose then I am happy with it - otherwise no thank you!

        3 votes
  4. [12]
    burntcookie90
    Link

    The invite system, for the foreseeable future, will help. Every account is tied to a user that generated the invite code.

    6 votes
    1. [6]
      Emerald_Knight
      Link Parent

      I recall seeing a discussion about the invite codes and the ability to mass-ban accounts tied to them. I don't recall the details or where, though!

      2 votes
      1. [5]
        jgb
        Link Parent

        It's been mentioned in a few places. It's a simple concept really - everyone is invited by someone, so the entire set of users is in fact a tree structure. If user A invites {B, C, D}, then B invites {M, N, O}, C invites {S, T, U}, and D invites {X, Y, Z}, then that's 12 users all invited, directly or indirectly, by A. If the admins decide that this group is a group of spammers, or trolls, or bots, or whatever, then they can simply ban the clade with A as the common ancestor, solving the problem in one fell swoop.
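That clade ban is just a subtree traversal over the invite tree. A minimal sketch using the A/B/C/D example above:

```python
# Sketch of the invite tree: each user maps to the users they invited.
invites = {
    "A": ["B", "C", "D"],
    "B": ["M", "N", "O"],
    "C": ["S", "T", "U"],
    "D": ["X", "Y", "Z"],
}

def clade(root, tree):
    """Return the root plus everyone invited by it, directly or indirectly."""
    banned = [root]
    stack = [root]
    while stack:
        for child in tree.get(stack.pop(), []):
            banned.append(child)
            stack.append(child)
    return banned

# Banning A's clade sweeps up all 13 accounts: A plus the 12 invitees.
assert len(clade("A", invites)) == 13
assert "X" in clade("A", invites)
# Banning at B instead only removes B's own subtree.
assert clade("B", invites) == ["B", "M", "N", "O"]
```

Picking the ban point lower in the tree (B, C, or D instead of A) is how the admins would spare the good-faith branches.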

        8 votes
        1. [3]
          cfabbro
          (edited )
          Link Parent

          Yeah, we just have to be very careful with our pruning, since we don't want to inadvertently ban good-faith users. E.g. if someone has sent out 5 invites and 1 turns out to be a bad-faith user, we should not assume they are one as well, so a simple warning may suffice when the 1 they invited gets banned. A bunch of warnings in a short period, or lots over a long period? That's when things escalate: we evaluate the situation and consider pruning at the source if necessary.

          6 votes
          1. [2]
            Amarok
            Link Parent

            Even if we're planning to whack a tree, I'm pretty sure there's going to be user review within that tree before it happens. That said, it's possible we'd end up knocking over a real account someday, so that should probably come with an appeal process of some kind.

            3 votes
            1. crius
              Link Parent

              Absolutely, especially because there are ways to automate 'bot' detection today, as that bot on Reddit proves.

              1 vote
        2. Amarok
          Link Parent

          It's like cutting a dying branch off of a tree, so that the tree can get better. ;)

          2 votes
    2. [5]
      Pilgrim
      Link Parent

      I really, really like that. That should be permanent. It could be like the subs even: ~Pilgrim.Pilgrim's friend "billy".billy's friend "Tom".

      users all the way down

      1. [4]
        Amarok
        Link Parent

        If you check user pages, you can see who invited whom. It's fully public since day one.

        We don't make the entire tree public (so you can't see all user relationships at once) but if you get suspicious about something, you can at least check suspicious accounts for common invitees - and if you find them, that's probably worth messaging an admin (or ideally, some kind of 'message all the admins' proper ticketing system, not that hacky reddit shit).

        I suppose it might be possible that, for well-trusted users, that information could go further and let them see larger parts of the tree, or even all of it. It's tricky business, because we're trying to protect privacy, but we're also trying to leave just enough info out there to help our users keep a weather eye on their own userbase. Like most of the ideas we're kicking around, we'll have to see how it goes when real abuse cases start happening.

        4 votes
        1. [3]
          Pilgrim
          Link Parent

          Personally, I don't see why the entire tree couldn't be public. It's like the bitcoin ledger, but for accounts.

          1 vote
          1. [2]
            tildesatwindmills
            (edited )
            Link Parent

            I see the concern about privacy... but also think that one of the important factors about community (irl) is that people know who you are and who you bring to a party. Public knowledge of the invite tree and trust/reputation effects from poor invites could cause people to think about their actions before acting... something sorely needed in this world.
            (Edit: spelling, sheesh)

            4 votes
            1. Pilgrim
              Link Parent

              Thank you. You eloquently expressed something I was struggling to put into words.

              Side note, I love your user name. Any thoughts about the upcoming Quixote movie?

              1 vote
  5. [7]
    Fantastitech
    Link

    Honeypots are quite effective against bots, but they have the negative side effect of breaking certain accessibility features like screen readers. Personally, if bots and spam are a big issue, I think the tradeoffs are worth it.

    I've been involved in one capacity or another with the logistics of managing gaming communities for over a decade. The last one is the only one I ever managed to completely eliminate spam on. It used a combination of blacklists, honeypots, and fingerprinting to block spammers and it was incredibly effective. In several years of the anti-bot measures being in place not a single confirmed spammer made it through registration even after disabling CAPTCHA. After seeing how effective it was I implemented it into some other web apps I'd been working with.

    It was this Xenforo plugin that started it all.

    4 votes
    1. [6]
      eladnarra
      Link Parent

      A bit offtopic, but how do they mess with accessibility features? (My guess is stuff like visual captchas that obviously can't have alt text, but I don't know anything about honeypots.)

      Also: did you find ways around those issues?

      1. [5]
        Fantastitech
        Link Parent

        A screen reader is basically a bot that parses the HTML and reads it out. Honeypots are most commonly hidden fields in a form that a human can't see because of CSS and/or javascript, but that will still show up in the HTML if you inspect it. So a blind person trying to navigate a page with a screen reader may have trouble, because the screen reader will find the hidden fields that a human is not intended to interact with.
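The server-side half of a honeypot is trivial: if the hidden field comes back non-empty, the submitter almost certainly wasn't a human filling out the visible form. A rough sketch, where the field name `website_url` is a hypothetical honeypot:

```python
# Hidden-from-humans form fields; any non-empty value here is a bot signal,
# since a sighted user never sees these inputs at all.
HONEYPOT_FIELDS = {"website_url"}

def touched_honeypot(form: dict) -> bool:
    """Return True if any honeypot field in the submitted form was filled in."""
    return any(form.get(name, "").strip() for name in HONEYPOT_FIELDS)

# A human leaves the hidden field empty; a naive bot fills every field it finds.
assert not touched_honeypot({"username": "alice", "website_url": ""})
assert touched_honeypot({"username": "spambot", "website_url": "http://spam.example"})
```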

        I didn't spend much time researching how to avoid those issues, as the number of people requiring accessibility features in my communities was a self-selecting group of zero. Not a lot of blind people out there playing competitive FPS games.

        Here's a detailed but concise SO thread on the topic. Ideally, a honeypot is part of a well-rounded approach to combating bots. A lot of larger anti-spam solutions for web apps will "rank" a user, applying weighted points for each category.

        This is just made up off the top of my head as I don't remember the grading system I used before but here's a rough example:

        • +3 points if user touches a honeypot field
        • +3 points if the form was submitted within X seconds of page load
        • +2 points if a user's IP is a known proxy
        • +1 point if user has javascript disabled
        • +1 point for unknown user agent strings

        You would set some sort of threshold at which a user is treated as a bot based on points. Let's say six points, and the registration is blocked. That means a form submitted instantly after page load that also touched a honeypot field would be blocked, while outlier cases - people who keep their javascript disabled, or who connect through a popular VPN service - wouldn't rack up enough points on their own.

        You can create a lot of these rules based on the behavior you see from bots hitting your site and adjust the weights so regular users see absolutely no inconvenience but bots are effectively crippled.
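The weighted-point ranking described above can be sketched directly, using the same made-up weights from the bullet list:

```python
# Made-up signal weights, mirroring the example rules above.
RULES = [
    ("touched_honeypot",   3),
    ("instant_submit",     3),  # form submitted within X seconds of page load
    ("known_proxy_ip",     2),
    ("javascript_off",     1),
    ("unknown_user_agent", 1),
]
THRESHOLD = 6  # at or above this score, registration is blocked

def bot_score(signals: dict) -> int:
    """Sum the weights of every signal the registration attempt triggered."""
    return sum(weight for name, weight in RULES if signals.get(name))

def is_blocked(signals: dict) -> bool:
    return bot_score(signals) >= THRESHOLD

# Instant submit + honeypot: 3 + 3 = 6 points, blocked.
assert is_blocked({"instant_submit": True, "touched_honeypot": True})
# Human on a VPN with javascript off: 2 + 1 = 3 points, allowed.
assert not is_blocked({"known_proxy_ip": True, "javascript_off": True})
```

Tuning then amounts to adjusting `RULES` and `THRESHOLD` as you observe real bot behavior, exactly as the comment describes.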

        10 votes
        1. eladnarra
          Link Parent

          That's really interesting, thanks for taking the time to write such a thorough reply. I'd come across the concept of screen readers accessing things that aren't visible to sighted users, but it was in the context of things developers actually want screen readers to access.

          So if I'm understanding correctly, a system incorporating honeypots might not automatically block someone who is using a screen reader, since you can set things up so the honeypot points aren't enough on their own, but it could make things confusing at the very least.

          [And yeah, I understand how certain audiences negate the need for certain accessibility features. At my old job I helped make online training, and since a large subset of the employees worked in highly physical jobs with hearing and vision requirements, much of our training simply didn't have the need for it. Since ~ is hopefully attracting a broader audience and accessibility is one of my interests, I'm curious to learn more and encourage its inclusion. So thanks again! :) ]

          4 votes
        2. [3]
          Amarok
          Link Parent

          Sneaky. This is a pretty solid strategy.

          Here we might also analyze their comments. I've been arguing for a stupidfilter for a while, though not everyone on the team believes me when I tell them we should be setting vote multipliers of zero on comments of ' l o l ' and multipliers of 1.1 on five paragraph replies. :P

          Bots have a weakness here, because they can pretend to be stupid, but they sure as hell can't pretend to be smart. It ought to be pretty damn easy for a stupidfilter to give us some kind of red flag on a user that's got a lot of dumb or repetitive statements, or copypasta from old threads and youtube video comments.
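A crude version of that stupidfilter multiplier might look like the following. The thresholds and the zero-effort phrase list are invented purely for illustration:

```python
def vote_multiplier(comment: str) -> float:
    """Weight a comment's votes by a rough effort heuristic (illustrative only)."""
    text = comment.strip()
    letters = text.replace(" ", "")  # catches spaced-out junk like ' l o l '
    if letters.lower() in {"lol", "lmao", "this"}:  # zero-effort replies
        return 0.0
    if len(text.split("\n\n")) >= 5:  # five-paragraph replies get a small boost
        return 1.1
    return 1.0

assert vote_multiplier(" l o l ") == 0.0
assert vote_multiplier("Short but fine comment.") == 1.0
assert vote_multiplier("\n\n".join(["A real paragraph."] * 5)) == 1.1
```

A real filter would also need the repetition/copypasta checks mentioned above, e.g. comparing a comment against a corpus of known low-effort strings.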

          2 votes
          1. [2]
            Fantastitech
            Link Parent

            I've been wanting to play with using TensorFlow for bot detection but it's difficult to get a huge sample of confirmed bot posts for seed data. It would be interesting to see how accurate it could get at spotting bot traffic.

            1 vote
            1. Amarok
              Link Parent

              We've discussed using narrow AI here to help - it seems to be all the rage in silicon valley lately, and apparently it'll 'save facebook'. In truth, these systems are very, very difficult to set up properly. You need fantastic datasets, and you need to keep the systems focused on the narrowest kinds of tasks possible... if you can get two of them into an adversarial feedback loop, you might even get something useful out of it. Even teams of AI experts take several years to get that right - and they only manage it a tiny percentage of the time.

              I do think, however, we could have some narrow systems doing precise targeted analysis on very simple tasks like identifying profanity. Maybe another identifies racism. Maybe yet another can identify copypasta. Keep them all narrow as possible and you might get somewhere. Then tie all the narrow ones together into a larger, more robust interface and present it all as a mod tool.

              That's the real kicker - we don't trust these systems enough to allow them to take direct action themselves. Frankly, at this point, nobody should. What they can do, however, is bring things to the attention of mods and admins, raising flags or warnings - then humans can come along and investigate. If we tie the human response into the narrow AI's feedback (on a pass/fail basis only), it should, painfully slowly, start to get a little smarter and better. Maybe someday that turns into a useful tool.

              That's 5-years-from-now, low-priority, science-project, if-we-have-cash-to-burn territory. It's a ton of work for what is (as of now) not a great bet on return on investment. One day though, if this place lasts and grows, we will have a giant amount of seed data... and all of it tagged by human beings.

              2 votes
  6. SourceContribute
    Link

    Wikipedia has bots... a lot of them. They're surprisingly useful for tagging pages that need some work or have broken links, and for reverting vandalism.

    Unfortunately, on reddit most bots are simply there to submit links, post comments, and try to blend in. There are some good bots, such as TLDR (or whatever it's called) and a bot that will point out if a github repo has no LICENSE file in it; very useful to know! But they're outweighed by the other bots out there.

    I'm not sure how Wikipedia does it but somehow they don't seem to have a huge malicious bot problem.

    2 votes
  7. [2]
    BBBence1111
    Link

    Paying attention, I suppose. Bots are actually easy to recognise on Reddit if you are looking for them, as you can check post history. We just need to make sure to pay attention to repeat threads, and that should get rid of most of them.

    1 vote
    1. crius
      Link Parent

      Also, bots thrive and disguise themselves in low-effort content (one-liner comments or low-effort threads), so they should be easily spotted here... or they'll evolve so much that they actually provide interesting discussions anyway?

      Also, tildes should be driven by content, not users, so bots kind of lose their power when there is no "karma" to drive their content forward.

      5 votes