On the heels of Reddit getting fined for not being aggressive enough with its age verification, it will now implement human verification, allegedly to combat the rampant bots on the platform.
I was initially intrigued by the idea that bots can effectively be labeled and filtered out on a site I’ve long suspected has been succumbing to Dead Internet Theory. One does not have to look far to see that Digg’s reboot couldn’t even last a year due to supposed bots overwhelming its platform. In an age of AI agents, it’s no surprise this is likely a serious problem that large social media sites have to deal with.
Whereas Meta may soon be embracing the bots, it’s interesting to see Reddit is primarily labeling them. It appears they will allow “good” bots on the platform while banning the malicious ones (something they claim to already do to 100K accounts a day).
While Reddit’s CEO claims this will be done in a privacy-safe manner, and that they do not want their users’ identity, this doesn’t seem to track with what they are actually doing. First, they already collect users’ identities through email verification, which in turn makes Reddit’s ads business more enticing to advertisers. Second, their proposed methods of human verification don’t pass the sniff test:
Passkeys - This does nothing other than verify you have a separate device or service to store the passkey. Bitwarden does this for free in a browser extension. While this may increase the effort to run a bot, it does little to prevent a motivated bot farmer from operating.
Third-party Biometric Verification - This is an area already fraught with privacy violations due to poorly implemented age verification systems over the past year. To make matters worse, the one example given is Sam Altman’s controversial iris scanning crypto company (although Reddit’s CEO vaguely describes a hypothetical method that wouldn’t use iris scanning…hand-waving away any privacy concerns).
Age Verification - The very same technology that is currently being shoved down the Internet’s throat via draconian laws, questionable vendors, and suspect lobbying sources. It’s an easy scapegoat to deflect blame by claiming it is a method of last resort, yet there are no laws requiring Reddit to perform human verification in the first place.
It makes me wonder if it’s incompetence that has led to the CEO sharing half-baked ideas rather than launching with a fully executed strategy, or if there is something more nefarious going on. It sure is convenient that they claim not all users will be asked to complete human verification, just the ones engaging in “fishy behavior”. This could easily be used as a tool to censor users and/or gather more information on political dissidents. The fact that there will be no actions taken against human accounts that post AI slop makes me question what their desired end result is here.
Perhaps it’s the privacy advocate in me just getting paranoid at the continuing erosion of Web 1.0 freedoms. Perhaps it’s my jaded cynicism at the motives behind Reddit (and social media at large) over the past 10+ years. Perhaps I’m just not ready to accept the brave new world we’re entering.
Perhaps I’m not alone.
This does nothing other than verify you have a separate device or service to store the passkey.
Not even that. Security enthusiasts might get mad at you for not using encrypted storage, but there's nothing about the way passkeys work that would stop you from just generating a private key right there at the top of spambot.js and saving it in a text file. The whole idea of using a passkey as proof of humanity is such a non sequitur that it doesn't even qualify as "laughable".
(Of course it's all doable in software. What else can they do, demand you compute an HMAC with paper and pencil?)
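To make that concrete, here is a minimal sketch (assuming a Node.js environment; the file name, challenge, and everything else below are made up) of the kind of "software passkey" a bot could hold. Nothing in it requires a secure enclave, a separate device, or a human:

```ts
import { generateKeyPairSync, createSign } from "node:crypto";
import { writeFileSync, readFileSync } from "node:fs";

// One-time "enrollment": generate a P-256 key pair (the curve WebAuthn
// credentials commonly use) and dump the private key to a plain text file.
const { publicKey, privateKey } = generateKeyPairSync("ec", {
  namedCurve: "prime256v1", // P-256
});
writeFileSync(
  "spambot-passkey.pem", // hypothetical path, stored unencrypted
  privateKey.export({ type: "pkcs8", format: "pem" })
);

// Later, "authentication": sign whatever challenge the site sends.
function signChallenge(challenge: Buffer): Buffer {
  const pem = readFileSync("spambot-passkey.pem", "utf8");
  return createSign("SHA256").update(challenge).sign(pem);
}

// The signature verifies against the enrolled public key, but it proves
// nothing about whether a human was ever involved.
console.log(signChallenge(Buffer.from("example-challenge")).toString("base64"));
console.log(publicKey.export({ type: "spki", format: "pem" }));
```

A real WebAuthn assertion wraps the signature in more structure (authenticator data, client data JSON), but all of that can be produced in software just as easily.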
I'm not an expert, but I think they mean "passkeys signed by one of the leading providers". Essentially they're delegating human verification to Google, Microsoft, and the like.
The passkey standard specifies an optional "device attestation" feature which would have made this possible, but Apple and Google both decided not to support that feature years ago.
Isn't device attestation a proof that your device hasn't been tampered with (aka rooted)? That's yet another thing.
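For reference, attestation is something the site asks for at registration time in the WebAuthn API. A sketch with placeholder values (rpId, user info, challenge) follows; when the platform authenticator declines to attest, the response effectively carries no attestation and the server can't tell what actually produced the key:

```ts
// Sketch of a WebAuthn registration request. All identifiers here are placeholders.
const publicKey: PublicKeyCredentialCreationOptions = {
  challenge: crypto.getRandomValues(new Uint8Array(32)),
  rp: { id: "example.com", name: "Example" },
  user: {
    id: new TextEncoder().encode("user-123"),
    name: "user-123",
    displayName: "User 123",
  },
  pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
  attestation: "direct", // the optional attestation request discussed above
};

// In a browser this would prompt the authenticator:
// const credential = await navigator.credentials.create({ publicKey });
```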
Even if you go full privacy invasion, I just don't see how you truly verify that the actions being taken aren't being performed by a bot. A human can instruct a bot to act on their behalf, and the bot can pass any verification checks back to the human when needed.
I think long term, large, broadly anonymous online communities are going to go extinct, and real people will move to much smaller communities primarily populated with people they actually know in some capacity (like Discord servers).
I think it's mostly already separated into two groups: one place for mindless scroll content and a separate place where people actually communicate. Places like Tildes have mostly gone extinct.
This is already kind of my case. The only "social media" I actively interact with is here on Tildes and Discord servers I've made with either friends I've known for years or new people I meet in online videogames, which seem to be a better human verification than most others lol.
I was initially intrigued by the idea that bots can effectively be labeled and filtered out on a site I’ve long suspected has been succumbing to Dead Internet Theory.
I think dead internet theory is mostly massively overblown.
We're a species that has been commenting "first" or repeating the same damn jokes in comment chains since we got the internet moving. It just turns out most people aren't very creative, and people are perfectly willing to signal boost the low effort noise rather than the higher effort content.
Looking at this report from 2023, it looks like ~24% of the ad traffic on Twitter/X was invalid/bot traffic.
While I agree that Dead Internet Theory may have started as an online joke, I think there’s some truth to it when a quarter of one of the largest social media sites was bots. Now, there are a lot of reasons why there could be bots, and surely not all of them are posting content, but the ones that do can generate more in an hour than a single human can in a day. Now with the rise of AI agents and Clawdbots, the barriers for creating bots are getting lowered every day. Paired with such large financial incentives to create engagement and steer public discourse, I can only imagine the number of bots has gone up in recent years.
To your point about low effort content getting signal boosted, the bots don’t need to be producing high effort content either to attract upvotes. Bots can dominate conversation simply by generating superfluous content to drown out the genuine humans.
It’s a commonly recognized phenomenon for new bots to make reposts on Reddit (particularly in r/aww and r/todayilearned) to farm karma, thereby establishing a false legitimacy before moving on to more nefarious astroturfing. When you look at the comments on these reposts they are carbon copies of the top comments on the original posts. These aren’t just low effort jokes either; they can be word-for-word anecdotes. The bots commenting on bot reposts could be fueled by bot farms looking to quickly karma farm multiple new accounts.
It’s very telling when you look at Reddit’s auto-generated username format (e.g. Adjective-Noun-XXXX, where X is a number). The bots don’t even need to generate their own usernames and avoid collisions anymore; Reddit does it for them! Once you see it you’ll notice it everywhere, particularly in comment threads, and that’s why I feel Dead Internet Theory may not be such a joke after all.
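As a rough illustration (the separators and digit counts below are guesses, not Reddit's documented format), flagging that username shape programmatically is trivial:

```ts
// Flag usernames that match the "Adjective-Noun-XXXX" shape described above.
// The allowed separators and digit lengths here are assumptions.
const autoGenerated = /^[A-Z][a-z]+[-_][A-Z][a-z]+[-_]\d{2,6}$/;

const samples = ["Busy_Squirrel_4821", "Brave-Otter-107", "tildes_fan"];
for (const name of samples) {
  console.log(name, autoGenerated.test(name) ? "looks auto-generated" : "looks custom");
}
```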
Looking at this report from 2023, it looks like ~24% of the ad traffic on Twitter/X was invalid/bot traffic.
I'm aware of such reports, and my problem is that they're from the advertisement point of view. EVERY interaction on these platforms gets served ads. Two bots on full private mode blasting messages back and forth get served ads. The % of bots in the content average users interact with is the relevant number, and it is almost certainly much smaller than that figure.
While I agree that Dead Internet Theory may have started as an online joke, I think there’s some truth to it when a quarter of one of the largest social media sites was bots.
To be clear, it says a quarter of the ads served were to bots. That does NOT mean a quarter of the users are bots.
More importantly, it's mixed data of ad impressions and clickthroughs, and it segments by region. To quote:
North America had by far the lowest IVT rate, at 7.5%, followed by EMEA (17.4%, including 18.8% in the U.K.) and LATAM (18.2%).
Worth noting that this is a weird categorization because NA is Canada, US, Mexico, while EMEA is basically the entire stripe from England to South Africa including the middle east. This makes sense when you're looking at reporting by time zone, but not when you're trying to identify regions with more bot activity, as I suspect if you separate Europe it'll have a MUCH lower IVT rate for a variety of reasons (basically cheaper to run bot farms in countries with cheaper costs).
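To illustrate the weighting point, here is a toy calculation. The traffic shares and the "rest of world" rate are invented; only the 7.5%, 17.4%, and 18.2% figures come from the quoted report:

```ts
// A blended IVT number is a traffic-weighted average, so it can sit well above
// the North America rate if higher-IVT regions carry enough volume.
const regions = [
  { name: "North America", share: 0.30, ivt: 0.075 },
  { name: "EMEA",          share: 0.25, ivt: 0.174 },
  { name: "LATAM",         share: 0.15, ivt: 0.182 },
  { name: "Rest of world", share: 0.30, ivt: 0.45 }, // hypothetical
];

const blended = regions.reduce((sum, r) => sum + r.share * r.ivt, 0);
console.log(`blended IVT ~ ${(blended * 100).toFixed(1)}%`); // ~22.8% with these made-up weights
```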
Now, there are a lot of reasons why there could be bots, and surely not all of them are posting content, but the ones that do can generate more in an hour than a single human can in a day. Now with the rise of AI agents and Clawdbots, the barriers for creating bots are getting lowered every day. Paired with such large financial incentives to create engagement and steer public discourse, I can only imagine the number of bots has gone up in recent years.
You're underrating the barriers and overrating the numbers needed to push an agenda. If you want to inflate views, bots are great en masse. If you want to steer an agenda, well, one of the things psyops figured out centuries ago is that people are very willing to believe and signal boost as is. A couple of coordinated pieces in the right place with a few "I assent" voices goes a LOT farther than a shotgun approach. "Couple" and "few" being %'s of the millions of real people on these platforms.
To your point about low effort content getting signal boosted, the bots don’t need to be producing high effort content either to attract upvotes. Bots can dominate conversation simply by generating superfluous content to drown out the genuine humans.
I'm not really arguing that? I'm saying that humans gravitate towards low effort content and will gladly create low effort content. You're ascribing a lot of effort to what's more than likely just the average person suddenly getting on reddit, and the average redditor realizing the average person is willing to post "lol nice" every time 420 69 comes up. Plenty of companies (hell, Musk) MAKE MONEY selling to this very, very large slice of the population. On the contrary, I think most people don't realize how sheltered and isolated their communities were until they became popular and trended towards lower effort.
It’s a commonly recognized phenomenon for new bots to make reposts on Reddit (particularly in r/aww and r/todayilearned) to farm karma, thereby establishing a false legitimacy before moving on to more nefarious astroturfing. When you look at the comments on these reposts they are carbon copies of the top comments on the original posts. These aren’t just low effort jokes either; they can be word-for-word anecdotes. The bots commenting on bot reposts could be fueled by bot farms looking to quickly karma farm multiple new accounts.
I'm aware of it, I've seen it, and I still doubt it's nearly as much content as people think it is. Much like how people are very bad at guessing the speed of a moving vehicle from a standstill, I find that most people will jump all over things they don't like as bot activity when it can just be pointless nonsense. Hell, actual people enjoy "copypasta", which is the SOLE ACT of copy-pasting some other (likely false) moment.
I'll even go one step further to say that in places like aww and todayilearned:
I'm not sure how different bots are from the "old fashioned artisanal" karma farmers who posted the same 5 facts every day nonstop to farm karma.
I'm not sure how different the actual CONTENT is between bots and users in those cases. More often than not they're popular bot spawning grounds because the comments, bots or not, are going to be the same 100 jokes we've seen in every other topic. Again, I don't think it takes bots for people to be uncreative while trying to be creative (you can't just say it's a nice dog, it's a "goodo pupper" or whatever... and I'm not judging, just a weird observation about people).
It’s very telling when you look at Reddit’s auto-generated username format (e.g. Adjective-Noun-XXXX, where X is a number). The bots don’t even need to generate their own usernames and avoid collisions anymore; Reddit does it for them! Once you see it you’ll notice it everywhere, particularly in comment threads, and that’s why I feel Dead Internet Theory may not be such a joke after all.
I find this kind of logic especially concerning.
I've heard basically every naming convention cited as possible bot fodder (because, as a bare-minimum coder, let me tell you it is NOT hard to get a list of stupid usernames based around possible sex acts or PM ME YOUR XXXXXX templates).
I know people who, especially on social media sites, are more than fine with the auto-generated tags. They're not using them for serious discourse and don't want them easily tied back to them or their other accounts anyway.
So again I'm not saying it doesn't happen, but I'd bet decent money that if you had to go through random comments and guess which were bots and which weren't, you'd vastly overshoot.
It's been a few years since I've used Reddit, but I think this was underway quite some time ago. Even eight years ago, it was very common to see the same AskReddit posts reposted over and over again, and if you Googled the text of many of the top comments, you'd find some other user posted it verbatim before. Karma farming operations were using discussions that already existed to programmatically create plausible account activity, on accounts that could later be used for astroturfing. No LLMs required, just API access or scraping.
When GPT3 dropped, years before ChatGPT, I noticed some comments appearing occasionally that made no sense but seemed like plausible human speech. I suspect people were plugging GPT3 into Reddit bots as soon as the model became available on the API.
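That Google trick is easy to automate, too. Here is a minimal sketch (the sample text is made up) of catching verbatim or near-verbatim reposted comments by fingerprinting normalized text:

```ts
import { createHash } from "node:crypto";

// Normalize whitespace and case, then hash, so trivially tweaked copies still match.
const fingerprint = (text: string): string =>
  createHash("sha256")
    .update(text.toLowerCase().replace(/\s+/g, " ").trim())
    .digest("hex");

const seen = new Set<string>([
  fingerprint("My grandfather told me this exact story when I was a kid."),
]);

const incoming = "My grandfather told me this exact   story when I was a kid.";
console.log(seen.has(fingerprint(incoming)) ? "verbatim repost" : "new comment");
```

Real karma-farm detection would need fuzzier matching than this, but exact-copy reposts fall out of even a hash comparison.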
I think what you’re saying is exactly why bots’ posts become so visible and their astroturfing methods work so well.
Most people mindlessly upvote/view the same bland overdone joke thousands of times. A bot can comment “this” on a post all day long just to farm karma. It’s so easy to do that bots carbon copy the OP repost and also the entire comment chain, and people still interact with that shit.
Reddit could probably go months without any new content at all and most people would never notice.
The temptation to say "this" in reply.
I really genuinely think the only way to actually prevent massive floods of bot accounts is to sell accounts for like, $5. Obviously this would not work for reddit, and it introduces an entirely different monetization scheme that is at odds with advertising and personal information gathering goals, but the only sites I've seen fend it off are niche ones like this with invite systems (which could probably be exploited by an enthusiastic actor if someone had the will to; it's just that there's no will to) or ones that charge people money upfront. I'm becoming more amenable to the second, honestly.
It's difficult to imagine a price point that is low enough to be widely accessible for one account, but high enough to discourage sophisticated multi-account SEO strategies. The amount of money that can be raked in by successfully making your brand widely known (bonus if it comes with a positive connotation) is just so large, the motivation is strong to find an exploit. To a point, the exploits being difficult just drives up the value, because then there is less competition once you are inside.
But I hope someone figures it out and successfully implements a community without hidden corporate shills.
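Back-of-the-envelope, with entirely made-up numbers, the incentive problem looks something like this:

```ts
// Every figure here is hypothetical; the point is only the ratio.
const accountPrice = 5;        // USD per account
const botAccounts = 2_000;     // a modest astroturfing fleet
const upfrontCost = accountPrice * botAccounts;

const campaignValue = 250_000; // what a successful influence/SEO campaign might be worth
console.log(`upfront cost: $${upfrontCost.toLocaleString()}`);
console.log(`share of payoff: ${((100 * upfrontCost) / campaignValue).toFixed(1)}%`); // 4.0%
```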
There's no "may" in it, the proof is in the pudding. Step 1: find one clearly AI generated account on Instagram Step 2: Try to figure out how to report said account Step 3: There's no step 3, you...
Whereas Meta may soon be embracing the bots
There's no "may" in it, the proof is in the pudding.
Step 1: find one clearly AI generated account on Instagram
Step 2: Try to figure out how to report said account
Step 3: There's no step 3, you can't report an account for being completely computer generated.
All Zuckerberg cares about is engagement. AI influencers bring engagement.
Welp, time to get the invite keys ready.
I don't even buy the idea that reddit can't identify bots. They definitely have enough data and spam/shadowban filters already use that to do a bunch of automatic moderation. And if those fail... they have one of the largest userbases in the world, more than ready to report suspected bots. Just like most of the site, they can offload much of the bot watch duty to the community.
My only question at this point is who's ring-leading all this behind the scenes. I don't think it's a coincidence all these initiatives for age verification are coming out in quick succession from several large companies, states, and countries.
Most of /r/all these days is incredibly obvious bot accounts. If they actually cared they would have done something years ago.
Just because you can identify a couple of obvious bots doesn't mean there aren't tons of hard-to-identify ones you don't see.
My only question at this point is who's ring-leading all this behind the scenes.
Lizard people? Or maybe it's just LLMs that ramped up bot numbers and made people react.
doesn't mean there aren't tons of hard-to-identify ones you don't see.
Engineering is a game of "good enough". And optics is a game of perception.
People aren't complaining about the subtle bot using the newest model that writes better than the average redditor. It's the blatantly obvious bot throwing in promotional links that people keep reporting but isn't banned. Get rid of enough of those and people will be satisfied, and sympathetic when a subtle bot takes longer to ban.
But I bet they still rely on bots for their traffic numbers, so the incentive isn't there to really "weed out the bots".
I'd take it a step further and accuse Reddit of specifically catering to bots because some people who control bot farms have paid them lots of money to allow them to “advertise” on Reddit in this way.
Saw this recently: https://bsky.app/profile/aaron.bsky.team/post/3mhsahaa5sk2u
A fundamental problem is that a human and a bot can share an identity. The simplest way would be to cut and paste AI-generated text. That can easily be automated.
People copy-pasting text is fine, that's artisanal botting, the issue is more with the industrial scale people that can spawn 10k bots in a few minutes.
Yeah, at least there's some kind of human filtering there, but I've seen cases where multiple bots are being monitored by someone (or someones), and that seems like a bigger issue. For example, in Rocket League there seem to be more and more bots every day, to the point where it's more usual to run into a match with bots than real people (like an 80/20 split), and these bots sometimes get controlled by real people. I assume one person is in charge of monitoring a bunch of them to maybe correct something, bypass some sort of anticheat, or maybe just because they're bored? I don't know the real reason behind it, but it's really clear when a bot is active and when a human is taking control, and there's still a problematic amount of bots.
This is how Russian disinfo currently works: they have automated systems arguing with people using stock answers. If someone engages more deeply, you can tell when it gets handed off to a human, because the arguments get more nuanced and they start referencing VERY specific things in history from their playbook.
Source: I argue with people online way too much and have good pattern recognition =)
They say this while doing the work for the bots, like letting people hide their post history.
I never understood this change. If you're posting things you're ashamed of or that you don't want people to see on your profile, why post them to begin with?
I got kind of weirded out when someone dug up an old comment of mine in an attempt to discredit something I said. What really struck me as creepy was that the topic wasn’t even political or at all controversial.
It’s been a long time, so the details are fuzzy, but honestly it was something along the lines of me saying that I was somewhat frugal, and the other person digging up a comment where I said I had bought something pricey.
Anyway, since then I have routinely deleted my old comments. I’m not ashamed of them, but I figure that after a few days they are stale anyway.
I do get where you're coming from, and I think your example is a bit different from a general "hide post history" feature.
In your case, you're deleting old comments because they feel stale or no longer relevant, which makes sense. People change, and really old comments may not reflect who you are anymore. Hiding your entire history is a little different, though, because it hides everything, both recent and old.
The other difference is that deleted comments are much harder for the average person to dig back up, while hidden comments (on Reddit) are usually only hidden from your profile. People can still come across them through the original post, replies, or their feed.
So I can understand wanting to clean up old stuff, but I still think "hiding all activity" is a different thing from just removing outdated comments.
I thought this way, too, until recently. I'm a moderator on reddit still. Barely a moderator, but I still occasionally actually look at reports and answer modmail. And it is annoying when people hide their posts. Makes it a lot harder to investigate and moderate.
But...I have heard rumors that my employer -- the US Govt -- has been looking at social media, including reddit, trying to find people who are speaking out against this administration. I've privated practically all of my real-named social media, or am very careful about not interacting with political content on the one or two remaining IRL-named accounts.
So on reddit, I've hidden most of my comments/posts. At least in subs that are more political in nature or subs that revolve around speaking personally. The only subs where I allow posts/submissions to remain public are entertainment-related. Gaming, sports, movies/shows, etc. I did leave a note on my userpage saying that that's the case.
Though after spending like over 15yrs on the site (yikes...), I'm sure 99% of my comments have been picked up by reddit crawlers and can be found by googling my username. So it's probably pointless.
I've long assumed - to the point where you'd have a hard time convincing me it's not true - that reddit is the biggest enabler (if not provider) of bot activity on their website. The pre-AI bot activity on reddit was both overwhelming and easily detectable, yet they never put systems in place to detect it.
There's a lot of incentive for a business, whose entire value is user-submitted content, to use bots to fake user-generated content.
I don't have any faith that reddit is doing this to fight bot traffic, as bot traffic is good for reddit. Maybe to fight the wrong bot traffic according to reddit's needs, which will not align with the users.
Yeah, their subjective definition of what is a good bot gives them flexibility to pick and choose what helps foster engagement from real users (or maybe even just other bots?). We already know the negative effects that rage-baiting and engagement-driven metrics have on public discourse at large.
This also reminds me of when Elon Musk bought Twitter to fight the bot problem, and then suddenly became silent about it once he was behind the helm. One can imagine once he saw the actual numbers he realized that bots are a significant portion of traffic, to the point that advertising revenue/profitability may have been dependent on it.
This also reminds me of when Elon Musk bought Twitter to fight the bot problem, and then suddenly became silent about it once he was behind the helm.
If you believe he bought Twitter to fight the bot problem, I have a bridge to sell you.
Back when he initially made his stupid offer, Musk sued Twitter over having more bots than they reported in regulatory filings in an attempt to back out of the purchase. He also has been found in court to have been deliberately trying to mislead investors and lower Twitter's stock price with his public statements during that period. He had multiple financial incentives to exaggerate bot activity on Twitter during this time and has been found to have done so for his own financial gain, so I don't think the sensible conclusion to come to is that he was right then and just stopped talking about it after officially acquiring Twitter. The more logical explanation is that he was full of hot air the whole time and stopped publicly freaking out about bots when it was no longer useful to mislead Twitter investors.
Maybe this is the case for these two, but Meta is now investing a hell of a lot of money in real ID verification systems, and that's because bots don't sell as well when it comes to ads. Knowing who is behind an account and knowing IT IS someone makes your platform way more valuable for advertising than just the sheer amount of traffic you get. We'll see which approach stays afloat... I'm not a fan of either, but the internet's enshittification seems unavoidable for these big websites/social media platforms.
I’m convinced they sell bot whitelisting to people with lots of money so that those people can use their bots to advertise on Reddit.
I think I’m actually agreeing with u/spez here. He does have a tendency to talk about half-baked ideas and reddit has a history of failing to implement those ideas well, so I’m skeptical that in the end we all won’t have to verify that we’re human.
I just have to ask myself what information I am willing to give reddit to use their service, and I need to know if they store it somewhere. I’m not an anonymous user though; for me the concern is purely based on whether a third party could use the stored data to steal my identity. It’s the same reason why I’m fine with using Apple facial recognition.
Time for deepfake technology to be properly employed.
Just waiting for the moment everyone who has criticized Israel has their ID looked up by the government.
Man, I'm so used to seeing comments like this at the bottom of a reddit post that I first interpreted this as "I can't wait till all you heretics get found out by the government!" :(
Internet of People, Internet of Things...