I've read the Discourse and I am once again reminded why I left Reddit. Thankfully, I am no longer used to that tone of voice, the cynicism, and the bad-faith arguments. It makes me even more thankful to be on Tildes.
Some of the arguments given there are just outright hilarious if you know anything about moderation. "Volunteers can't be trusted", ignoring the thousands of organizations, clubs, etc over the...
Some of the arguments given there are just outright hilarious if you know anything about moderation.
"Volunteers can't be trusted", ignoring the thousands of organizations, clubs, etc over the world that literally run on volunteers. Even more baffling of an argument when you consider it is made in an OSS context where a majority of decisions are made by volunteers.
"Shifting responsibility to code", automoderator has been a thing for over a decade now. Even decades earlier automation was already used to moderate forums based on simple word matching and other factors.
"Volunteers can't be trusted", ignoring the thousands of organizations, clubs, etc over the world that literally run on volunteers. Even more baffling of an argument when you consider it is made in an OSS context where a majority of decisions are made by volunteers.
I agree with you in theory, but in practice I think it heavily depends on the type of community. Volunteers are great when they run good and interesting communities where they personally value the community and enjoy being part of it. But this is often not the case on reddit, where discussions tend to be less deep and interesting because they're short-lived and you get a constant influx of newbies, and a lot of the time spent moderating is just shoveling shit. That seems to attract, or create, stereotypical jaded reddit mods.
Then again, I'm of the slightly radical opinion that if a community needs to have a CoC (that the bot is supposedly enforcing), it's already fucked, so maybe I don't exactly represent the common opinion.
> Volunteers are great when they run good and interesting communities where they personally value the community and enjoy being part of it.
It is clear from your reaction here that you personally have not experienced that on reddit. But I did; it is exactly why I became involved in moderating on reddit at the time.
> and you get a constant influx of newbies, and a lot of the time spent moderating is just shoveling shit.
That isn't unique to reddit. That is the sad reality of anywhere on the internet where people can interact with a relatively low barrier to entry. The reason you don't see it here on Tildes is mostly scale and the fact that you still need an invitation.
For OSS projects where you want user interaction, you do need a relatively low barrier to entry. Which by definition means you need people who are willing to shovel some of that shit. So those volunteers are still needed, and they will more often than not be people who do care about the project/community and are willing to trade some of their sanity to keep things sane for others.
Does that mean that there are no instances of people in those roles becoming assholes or going on a power trip? Of course not; that does happen. But it really irks me that for a lot of people that seems to be what mods are by definition.
> It is clear from your reaction here that you personally have not experienced that on reddit. But I did; it is exactly why I became involved in moderating on reddit at the time.
I have and I occasionally still do, but I believe communities like that are quite rare on reddit nowadays.
> That isn't unique to reddit. That is the sad reality of anywhere on the internet where people can interact with a relatively low barrier to entry. The reason you don't see it here on Tildes is mostly scale and the fact that you still need an invitation.
No arguments from me here; I'm a big proponent of having some forms of soft gatekeeping (it can even just be using an outdated-looking webforum instead of something modern and inviting) for precisely this reason.
> For OSS projects where you want user interaction, you do need a relatively low barrier to entry.
I'm not entirely convinced. I think the question here is: do people who require a super low barrier to entry contribute to a relatively nerdy tech project in a meaningful way? My experience says "probably not", but I have no real data to back that up. Purposefully making contribution mildly more difficult may raise the average quality, but it may also hurt the project's public image even if it does. In other words, I don't hold a firm opinion that this should be the way, but I'm not convinced that what you say is definitely the way either.
> Does that mean that there are no instances of people in those roles becoming assholes or going on a power trip? Of course not; that does happen. But it really irks me that for a lot of people that seems to be what mods are by definition.
I don't think it's by definition. After all, I'm using a platform made by an ex-reddit admin. But it seems to me that as soon as a subreddit grows past a certain size, the mods become jaded, heavy-handed, and hard to deal with. And after some subreddits became de facto unmoderated for a while as a protest against the API changes, I watched various discussions on all sorts of topics (since basically everything was allowed) in subs frequented by mods, like /r/theoryofreddit, and thanks to that I realized that there really are moderators of 10+ huge subreddits who have a savior complex and are every bit as delusional and in too deep as stereotypical reddit haters claim. So the mod sanity spectrum is really wide, and the ones on the wrong end just stand out much more.
Then again, I don't think this has to be the case for something as small as /r/nixos. It's just that, specifically with community mods on a platform like reddit, the quality really seems to be quite hit or miss.
NixOS recently had some governance issues with respect to their volunteers. I'm not entirely sure which side ended up winning (there was some back and forth), but I think they're just wary in general about another coup happening.
I've read somewhere on reddit that NixOS, being the new shiny enthusiast OS, attracted the worst of the Arch community.
Funny that I was a Gentoo user for years, and to this day Gentoo has a pretty good and civil community. I wonder why Linux elitists with "strong opinions" never ended up there.
> Funny that I was a Gentoo user for years, and to this day Gentoo has a pretty good and civil community. I wonder why Linux elitists with "strong opinions" never ended up there.
Gentoo was cool before Linux as a whole was (mainstream) cool? I remember it being the trendy thing before social media became super mainstream and before reddit-like annoying people and bad-faith arguments became common. There were always overly aggressive people in Linux and dev communities, but imo it wasn't that bad anywhere at that time.
Probably. I started with Linux in 2000, I think: Mandrake first, then Slackware, and I ended up on Gentoo for years, if not a decade. Gentoo already had a reputation of being a wizard's distro.
I plan to go back to it as soon as I build a desktop again.
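/r/NixOS in particular has become a very nasty sub of late due to some drama in the Nix community.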
Interesting to see how this works out. From my own experiments on Discord, it certainly is possible to use AI for some moderation tasks. Watchdog, on the other hand, makes a big bold claim on their front page, stating:
> Fire Your Chat Moderator
Which is just outright bullshit if you ask me. Like almost all automation, it might reduce the need for human action, but humans can't be outright replaced. For starters, it takes the "janitor" approach to moderation, where the bot's only role is to clean up bad stuff. That is part of the job, for sure, but it isn't all of it. Good moderation also involves interacting with the community, intervening where possible before things get out of hand, and so on.
But more importantly, LLM actions still need to be reviewed, monitored, and tuned, as they will not be perfect. Looking at the linked Discourse discussion, the person experimenting with it on the subreddit does seem to be fully aware of that, which is a good thing.
When I experimented with this, one of the big factors was cost as well. You need a model that is good with context, and you need to provide a lot of it. This automatically means that running an LLM locally on the cheap, or using one of the cheaper hosted models, is out of the question.
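To give a rough idea of what I mean: every screened message drags the rules and the surrounding conversation into the prompt with it. A sketch of the general shape (the rule text is made up, and the actual API call is left out):

```python
# Sketch of why per-message screening is token-hungry: every check
# re-sends the community rules plus recent channel history alongside
# the new message. The rule text below is made up.
RULES = "No slurs, no spam, no personal attacks."
CONTEXT_WINDOW = 25  # number of prior messages to include per check

def build_moderation_prompt(history: list[str], new_message: str) -> str:
    """Assemble the prompt sent to the hosted LLM for one message."""
    recent = "\n".join(history[-CONTEXT_WINDOW:])
    return (
        f"Community rules:\n{RULES}\n\n"
        f"Recent conversation:\n{recent}\n\n"
        f"New message:\n{new_message}\n\n"
        "Does the new message violate the rules? Answer YES or NO, with a reason."
    )

print(build_moderation_prompt(["hi", "hello"], "buy my crypto course!"))
# Every single message pays the token cost of the rules and the
# history again, which pushes you toward pricier long-context models.
```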
This is reflected in Watchdog's pricing:
- Their cheapest offering is $9 per month for 2,500 messages. For a somewhat slow-moving subreddit that is probably fine, but on a Discord server you will easily get more messages than that.
- The cheapest tier also only offers 3 custom rules. This is likely because they want to keep token usage down for the custom prompt they send with each API call to whatever LLM provider they use.

Of course, Watchdog claims that this is cheaper than employing human moderators. That might be the case where moderators are indeed paid a decent wage, and assuming Watchdog is as good as they claim. But for Discord communities, subreddits, etc. where everyone is a volunteer, it is simply a rather big additional cost.
As one final note, it might be possible to do this more cheaply today than 6 months ago, when I built my Discord solution. Earlier this week GPT-4o mini was made available, which is cheaper than GPT-3.5 Turbo. But when screening every message, I still feel the cost will be substantial in any big enough community. It's for that reason I went for the more complex solution a little while ago.
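Some back-of-envelope math on that, with heavy caveats: the token count and message volume below are guesses, and the prices are roughly the published per-million-token input rates at the time of writing, so check current ones before trusting any of it:

```python
# Rough monthly cost of screening every message with a hosted LLM.
# TOKENS_PER_CHECK and MESSAGES_PER_DAY are guesses; prices are
# approximate published input rates (USD per 1M tokens) - verify them.
TOKENS_PER_CHECK = 1500   # rules + context + message, rough estimate
MESSAGES_PER_DAY = 5000   # a moderately busy Discord server

PRICE_PER_M_INPUT = {
    "gpt-3.5-turbo": 0.50,
    "gpt-4o-mini": 0.15,
}

monthly_tokens = TOKENS_PER_CHECK * MESSAGES_PER_DAY * 30
for model, price in PRICE_PER_M_INPUT.items():
    cost = monthly_tokens / 1_000_000 * price
    print(f"{model}: ~${cost:.2f}/month")
# gpt-3.5-turbo: ~$112.50/month
# gpt-4o-mini:   ~$33.75/month
```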
A 30k sub is super small... but I'm interested to see how well it works. I solo run a sub of 1.3m and it's far from overwhelming (most credit to automod) -- but it would be neat to have some AI going through comments and flagging assholes.
As a side note, their old.reddit css is neat. I haven't seen that theme before.
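Did you check the top comment on the post? Lol
yeah! it's perfect. I run two ~30k subs and I could mod them blindfolded. Still a neat thing to try :)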
I work for a company that provides machine-learning functionality for something not dissimilar to this, to companies, for their internal workplace communications (there are key differences, as what we do is compliance and not moderation, but a lot of the big problems are similar between the two). When I was first being interviewed, one of the things that was most important to me was that the classifiers we were training weren't taking human review out of the picture -- there's always a human doing the actual actions at the end, and we just make it easier for them to do their job in a targeted way. I think that things like reddit's Automod demonstrate that this approach can greatly improve moderation workloads.
Of course, the NixOS community is probably more keen on new technical innovations than the HR and IT teams we're selling our product to. A great number of our customers (including some of our biggest ones) don't even use the functionality my team develops, because they're not sold on it as an alternative to whatever their current strategy is. But I personally would be very hesitant about using an LLM-based approach without human review afterwards. We don't even use generative LLMs at my work for this part of the task -- we do deploy them for other features that we're more confident they can be good at, like summarization, and we've found ways for them to be useful in development, but we're pretty conservative about throwing LLMs at every problem and assuming they'll be better than anything else.
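That "flag, don't act" shape is simple enough to sketch. This is a generic outline of the pattern, not our actual stack -- the threshold and the classifier below are stand-ins:

```python
# Sketch of human-in-the-loop screening: a classifier only routes
# suspicious messages into a review queue; a person makes every
# final call. `toxicity_score` is a stand-in for a real model.
from collections import deque

REVIEW_THRESHOLD = 0.8
review_queue: deque[tuple[str, float]] = deque()

def toxicity_score(message: str) -> float:
    """Placeholder classifier; imagine a trained model here."""
    return 0.9 if "idiot" in message.lower() else 0.1

def screen(message: str) -> None:
    """Queue a message for human review; never delete automatically."""
    score = toxicity_score(message)
    if score >= REVIEW_THRESHOLD:
        review_queue.append((message, score))

screen("you absolute idiot")
print(review_queue)  # a human moderator works through this queue
```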
Also see here for more context:
https://discourse.nixos.org/t/automating-moderation-on-reddit/48425