Reddit’s automatic moderation tool is flagging the word “Luigi” as potentially violent — even when the content isn’t.
Users who repeatedly upvote content that violates Reddit's policies, starting with violent content, will now receive warnings. This marks a shift from solely penalizing the original poster.
Reddit emphasizes that occasional upvotes won't trigger warnings; the system aims to address users who consistently engage with violative content, indicating a pattern of support.
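The announced behavior ("repeated upvotes trigger warnings, occasional ones don't") could be implemented as a simple sliding-window counter per user. This is a minimal sketch of that shape only; the window length and threshold below are hypothetical, since Reddit has not published the real ones.

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds; Reddit has not published the real numbers.
WINDOW_SECONDS = 30 * 24 * 3600   # only look at the last 30 days
WARN_THRESHOLD = 5                # "repeated", not "occasional", upvotes

class UpvoteWarningTracker:
    """Warn users who repeatedly upvote policy-violating content."""

    def __init__(self):
        self.flagged_upvotes = defaultdict(deque)  # user_id -> timestamps

    def record_flagged_upvote(self, user_id, now=None):
        now = now if now is not None else time.time()
        votes = self.flagged_upvotes[user_id]
        votes.append(now)
        # Drop votes that have fallen out of the window.
        while votes and now - votes[0] > WINDOW_SECONDS:
            votes.popleft()
        return len(votes) >= WARN_THRESHOLD  # True -> issue a warning

tracker = UpvoteWarningTracker()
# Four upvotes on flagged content: still "occasional", no warning.
for i in range(4):
    assert not tracker.record_flagged_upvote("user_a", now=1000.0 + i)
# The fifth within the window crosses the line.
assert tracker.record_flagged_upvote("user_a", now=1004.0)
```

The window is what makes "occasional" upvotes harmless: old flagged votes age out instead of accumulating forever.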
Reddit have always been heavy-handed with posts that glorify or promote violence, at least ever since Spez took charge. They're now butting heads with the greater community because people are actually celebrating the death of Brian Thompson, whose tenure as United Healthcare CEO led to a trebled rate of rejected insurance claims and a culture where millions of Americans are denied healthcare.
Another reason why they're butting heads is that people actually want to hold DOGE accountable and know exactly who Elon Musk has employed to harvest data and money from federal systems. But that's "doxxing", apparently. Maybe Spez is doing it because he too wants to kiss the ring just like Peter Thiel, Elon Musk, Mark Zuckerberg and Jeff Bezos have. Or maybe he's scared of Trump and Elon siccing their DOGEstapo upon him.
Shit like this is making people want to rally behind Luigi Mangione more.
I thought Luigi was a MAGA stooge though?
I don't think I've ever seen anything like that suggested before? Is that like some sort of conspiracy theory or like something established that I missed or are you just joking?
Pinging @Moah as well.
I wouldn’t call him a MAGA stooge, but his views are syncretic, with a good bit on the right wing of the political spectrum.
Ah, looking over that page reminded me that I think I read Robert Evans' piece on the supposed "grey tribe" in the Mangione context.
I'd definitely consider that much more syncretism than just MAGAism, yeah.
I don’t get what the practical goal is. If they have the capability to detect violent content, why don’t they block the posts entirely?
In practice this gives Reddit admins the power to shape and shift the narrative without leaving as much of a trace. They can flake off unwanted users and control what and by which crowd ends up in Reddit's frontpage.
So basically, something like Luigi fan club members can be driven off the site without directly saying as much. Or, for example, pro-Palestine activists. Verge article about Reddit flagging "Luigi" as potentially violent content (even though the flagged post was about Nintendo's Luigi's Mansion 3).
The bigger problem is the chilling effect it can have. If people are worried something might get them banned, then they may avoid interacting with it. This nebulous reasoning allows Reddit to passively suppress loads of content.
Bet that'll increase stock prices though
Won’t somebody please think of the poor shareholders? /s
I think it's pretty obvious that this is a reaction to the incessant and downright unhinged Luigi posting that's been going on all over popular subs.
I mean the users of that website are the first ones to explain in detail what "dog whistles" are and why they should be banned/punished. Turns out these concepts don't only apply to nazis though.
Reddit has a myriad of issues from both a content and a moderation perspective, which is easy to surface from just a casual perusal of the site. And it's not just what is posted for a wide audience; say the wrong thing and you can be on the receiving end of violent threats, racism, misogyny, and any kind of phobia you can imagine.
It's very understandable why this rule is being made; the least we can do is not pretend that it's for any altruistic reason, or that it signals a paradigm shift for the better in how they generally moderate the site.
Perhaps the violent content is reported and then reviewed after it's been up for a while.
I thought that reddit already had a rule to this effect about upvoting violent content, but apparently that was only true in quarantined communities previously. They're now expanding it to the rest of the site.
My guess would be that they don't have the capability to instantly detect content encouraging violence.
Given that, the idea would be to "take names", record who agreed with that content so they could weed them out.
Most people who engage with Reddit don't post at all. They ONLY vote, and their votes are what encourage different sorts of content to be posted. If the posters are the problem, it's easier to skew the incentives for content creators by not rewarding violent content.
Automated detection of violent content is probably not super precise, so if they assign users a score instead and wait until they’re confident your score is high enough that it’s not a false-positive they can scoop you out without too much bycatch.
Then they probably also want a deterrent effect from the threat, because otherwise they'd have just downweighted your upvotes. I believe (though I could be wrong) that they already do this for detected bots.
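The "score plus confidence threshold" idea above can be sketched concretely: accumulate the noisy classifier's probability for each flagged upvote, and only act on a user when both the count and the expected number of genuinely violating upvotes are high. All names and thresholds here are hypothetical illustrations of the scheme, not anything Reddit has described.

```python
from collections import defaultdict

# Hypothetical numbers; the point is only the shape of the scheme.
MIN_FLAGGED = 10          # don't act on a handful of votes
MIN_EXPECTED_TRUE = 8.0   # expected genuinely-violating upvotes

class UserRiskScore:
    """Accumulate noisy classifier scores per user and act only when
    a false positive is unlikely (the 'bycatch' concern)."""

    def __init__(self):
        self.count = defaultdict(int)       # flagged upvotes per user
        self.expected = defaultdict(float)  # sum of classifier probabilities

    def record(self, user_id, p_violating):
        # p_violating: classifier's probability the upvoted item is violent.
        self.count[user_id] += 1
        self.expected[user_id] += p_violating

    def should_act(self, user_id):
        # Require many flagged votes AND high expected true positives,
        # so one or two misclassified posts can't trigger action.
        return (self.count[user_id] >= MIN_FLAGGED
                and self.expected[user_id] >= MIN_EXPECTED_TRUE)

scores = UserRiskScore()
for _ in range(12):             # 12 upvotes, each ~90% likely violating
    scores.record("user_b", 0.9)
assert scores.should_act("user_b")

scores.record("user_c", 0.95)   # a single misfire on an innocent user
assert not scores.should_act("user_c")
```

Waiting for the expected-true-positive sum to clear a threshold is what keeps the "bycatch" low: an imprecise classifier can misfire on any one post, but it is unlikely to misfire ten times on the same user.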
It is rather ironic that Reddit is talking tough on moderation given how little attachment most users have to their Reddit accounts. You don't even have to choose a username when you sign up anymore.
I'm thinking they got spooked when Musk called them out last month and they're scrambling to do something to avoid his attention.
Oh is Elon Musk getting scared?
So this is weird to me. Musk portrays himself as a free speech advocate, but in practice it’s hate speech, specifically, that he wants to defend. He has a lot less tolerance for so-called “woke” speech (social and racial justice advocacy, gender inclusivity, intersectionality, etc.) and is clearly no free speech absolutist, despite claims to the contrary.
He wants freedom of one particular kind of speech. This was a choice. A choice between bigotry and empathy. Why anyone, when faced with such a choice, would choose bigotry is inexplicable except for the simple reason that such a person must lack empathy: either a literal sociopath or someone with a deeply perverted sense of right and wrong.
What’s weird to me is that the “hate speech” Reddit’s talking about here is anti-Musk speech. Not racist or sexist or any other -ist speech, but specifically speech that opposes… the bigot. Now I get that this is apparently death threats, and I’m not so naive as to think leftists never threaten their enemies. Violent talk like that should be taken down.
All I’m saying is, that’s the example they want to highlight??
I think it's mostly that he has very thin skin.
I'd imagine it's just to make users think twice before they upvote. If you're going to "hehe imma upvote this for the lulz", well, you have to weigh that against the risk of being banned, or being closer to being banned, if you don't actually care all that much about the post you're upvoting.
On the surface maybe but I’m certain it’s just another reason for them to ban people without having to explain much. I have zero faith in their motives.
They can already ban anyone for any reason without having to explain. That's always been the case. A website can moderate as it wishes and has no obligation to explain itself; Tildes is famous for not explaining bans, for example.
Sure but that doesn’t change my lack of faith in them compared to here.
I haven't felt like reddit had the good of the community in mind for years now, but I fail to see an issue with this new rule. If I don't want a certain type of content on my site, I don't just want to remove the users who post that content, but also the users who endorse and support it.
I'm not at all surprised about the Luigi thing someone else mentioned, and I'm sure reddit will find more ways to fuck this up and screw good users. But the rule in general sounds like a good idea to me.
Yes, it of course makes sense to do it from the point of view of people who run reddit.
I don't think there's anything good about it though. Based on previous experiences I have no faith in reddit enforcing this in a way that makes sense and makes the community better, I expect the contrary.
I am also incredibly tired of the trend of infantilizing users and not letting them talk about what they want to talk about. Though I do find it funny that people tend to work around this infantilization with infantile speech, like writing "unalived" instead of "killed", probably without even noticing the irony. Reddit is already doing that: people have been calling others "regarded" for a couple of years because "retarded" is a flagged word, and the first reaction to this new change has been people finding ways to talk about Luigi without tripping the autodetector.
This just leads to more of the dystopian control of human communication that big social media is increasingly known for, where the censorship is heavy-handed, often completely nonsensical (Luigi's Mansion is flagged when genuinely discussed as a video game, but talking about the actual guy by just writing "Luígí" is fine), and there are zero options to appeal to a human who gives a shit, or even to understand how the system works.
I think you are off topic: I was talking about flagging supporters of unwanted content (and not just the creators) as a way to find unwanted users. You are talking about how unwanted users are detected, e.g. by looking for certain words. But you can implement stupid word filters without flagging supporters and vice versa.
But to answer your comment: every time large social sites let their users talk about whatever they want, they turn into a cesspool of hate and garbage. Of course, it's easy to overdo. Moderation is hard, and I suspect it gets exponentially harder with more users. And of course there are bad moderation tools. But no moderation is not the answer.
I think those two things are two parts of the same thing. If you want your Reddit comment to be seen, you have to write it in a way that people will actually see, and for that you need at least some people to upvote you. This new method is just another type of comment filter; it depends on what's written in the comment people are upvoting.
Regarding moderation, imo all the big social media sites go too far and do it in a stupid, heavy-handed way. I don't know how difficult it is to do better; I do know this isn't good. I will always prefer slight undermoderation to overmoderation: I can decide what I'm going to read or write about and when to stop, but whenever somebody else decides for me and millions of other users, they inevitably decide wrong.
There are plenty of social media sites that moderate very little. 4chan, X and Truth Social are probably the most prominent, and with projects like Mastodon and Lemmy, you can relatively easily create your own. But I'm sure there are plenty of existing instances.
I don't see any of those as comparable. Mastodon and Lemmy because their userbase is a tiny fraction of Reddit's. Truth I've never visited, but afaik it's a safe space for American conservatives, so not exactly general social media (though if it's actually mostly unmoderated, it shows that you don't need strong moderation to have a space suited to a group of people and relatively untouched by their outgroup; but I doubt that). And I don't remember advocating for a completely anonymous free-for-all like 4chan. Although even 4chan used to be less politically insane, and less popular with regards to politics specifically, before the polarization at least partially caused by mainstream social media happened and some of the "refugees" started going there, making it both more insane and more prominent in the conspiracy far-right area.
X is very obviously moderated in an ideological fashion, just in a different direction than it used to be. Though community notes are imo one of the better ideas to handle this issue and one big advantage of X is that it easily allows you to circumvent "the algorithm" and view only posts from profiles you follow, in chronological order, and even make different topical groups of profiles.
In any case, this is most definitely off-topic, it has nothing to do with reddit or how policies that cause further chilling effect are going to affect its quality for the users.
As reddit is now a public company, their community is their shareholders.
I mean my first thought was that it also helps weed out bots.
One posts, another hundred upvote, and if you ban them one at a time it's going to take a looong time to get 'em all.
Granted, I'm SURE this will also be used at the whims of many for less sincere purposes.
What I don't understand about this is why you would announce it. They already do things like shadowbanning and vote fuzzing: behind-the-scenes actions to mitigate potential abuse. Why not just keep a secret score of how many times a user has upvoted certain things and, if you want to minimize those things, have those users' votes count less?
It could have been implemented with no backlash, tweaked whenever they feel like it, and for most users the overall site quality would probably improve.
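The silent alternative proposed here, downweighting votes instead of warning users, is easy to sketch. This is a hypothetical illustration of the idea, not anything Reddit is known to run; the halving-per-strike decay is an arbitrary choice for the example.

```python
from collections import defaultdict

# Hypothetical decay: each strike halves the weight of future upvotes.
strikes = defaultdict(int)   # user_id -> times they upvoted flagged content

def vote_weight(user_id):
    """Weight a user's vote by their history of upvoting flagged content.
    A clean user counts fully; repeat offenders fade toward zero."""
    return 1.0 / (2 ** strikes[user_id])

def tally(votes):
    """Score a post from (user_id, is_upvote) pairs using weighted votes."""
    return sum(vote_weight(u) if up else -vote_weight(u) for u, up in votes)

strikes["repeat_offender"] = 3          # weight 1/8
votes = [("clean_user", True), ("repeat_offender", True)]
assert vote_weight("clean_user") == 1.0
assert vote_weight("repeat_offender") == 0.125
assert tally(votes) == 1.125
```

Because displayed scores are already fuzzed, a user whose votes were quietly discounted this way would have no reliable signal that anything had changed, which is exactly the comment's point.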
The change was unannounced originally, but it got some visibility when users started sharing their warnings and when the mods of r/popculture made a stink about it: https://www.reddit.com/r/popculture/comments/1j5jngg/rpopculture_is_closed/
The article is kind of unclear about whether there was an announcement or clarifying comments or whatever, but it was all retroactive.
Well, I think kej's intention was to treat it more or less as a shadow ban. And shadow bans aren't exactly announced to the user. And while shadow banning is easily detected if you go looking for it, having your own votes just routed to nirvana would be undetectable because of vote fuzzing. They could've gone completely undetected on this. As undetected as a certain green plumber, if it weren't for a certain McDonald's employee.
Gain favor from Trump, Musk, etc., as others in this thread have mentioned.
Sure, I guess my point is that you could announce something without giving away the details that will discourage users from participating in your site that relies almost entirely on user participation. You could make a vague announcement about "advanced filtering to detect TOS violations" or something and still keep the implementation private "to avoid coordinated abuse".
An announcement is an opportunity to craft the narrative. How often do you see someone take a communique at face value, without asking whether it's portraying the situation honestly or providing the full context?
The Reddit announcement is deliberately worded so that it's not clear what kind of violent rhetoric they're taking issue with. In previous initiatives they were clearer about which rhetoric and why, but since Spez came back, that has ceased.
The reason they can't stay silent about it is that it's very noticeable when a comment is removed by admins: it leaves that "[Removed by Reddit]" tag on the post, which only invites questions, and it's usually easy to tell from context what the subject was. So an announcement is actually the ideal way to serve a palatable version of the situation to people who haven't heard of or experienced it but want information.
Perhaps because certain people are unhappy with them, and they're trying to signal to those people that they're taking action?
They could do that, but it might have less of the intended effect than announcing it openly. Drawing parallels to penology, you can think of it as a kind of denunciation. The value of Reddit, as with other social networks, lies almost entirely in the work of its users. People who upvote stuff like this probably represent a not insignificant chunk of the most active user base, and more heavy-handed moderation techniques might hurt Reddit's bottom line.