A bit of meta discussion, but is it just me, or does anyone else find this article a bit useless, for lack of a better word? Sure, it describes the problem, but there's no real aim to it, just meandering about the issue.
To talk about the article itself: I don't see how this is a solvable problem, and maybe it should just be something swept under the rug so that other sympathisers can't find out about those groups and join them? Sure, the sticker function is letting them spread hate messages, but they could just as easily send pictures, and on any platform they wanted. The "Heil Hitler" emoji is also readily available anywhere. To be completely clear, I just want to say that I have never been and never will be pro-fascism; as someone of mixed race I find it abhorrent. I'm just a bit cynical as to how this problem can be solved on a large scale.
the biggest commonality with problems like this is that the platforms which have them never bother to do any sort of proactive policing (and usually, don't even bother to police things when they know there's a problem). you're obviously never going to get rid of 100% of the bad people on a platform as large as whatsapp, but even doing something as small as occasionally nuking groups like the ones mentioned in the article would go a long way, since it's often more trouble than it's worth to stick around on a platform which actively disrupts your ability to recruit and radicalize people, even semi-regularly.
It's just a hard problem to solve, since the messages are end-to-end encrypted. The only way I can see some sort of moderation working is by sharing the private key of one of the members and then reporting the group, but I'm pretty sure that would notify the other members of a changed device, and it's kind of a security nightmare anyway. Screenshots are also easily fakeable. The one place where I can see some fixing happening would be in the sticker creation apps; those might be possible to moderate.
yeah, i know with whatsapp in particular it's somewhat difficult to navigate since like, half the draw of the app is the encryption--but at the same time, i think you really have to have some sort of ability to step in, or else the end result is that you just create a great haven for people to do things like this (and potentially, with time, create an environment where more hostile things like violence are encouraged and then acted on). actually, i think whatsapp is probably one of the services which has the least excuse for not at least trying to proactively moderate, since that's really, presumably, one of the only ways to keep bad actors off the platform.
I disagree. I feel like the nature of WhatsApp as a platform means it shouldn't moderate speech nearly as much as other major tech platforms do.
First, it's not a mass medium. It's made for conversations, one-on-one or in small groups, and it's technically limited to two hundred and fiftyish people per group. It's not a platform on which people particularly publish, and I don't feel like WhatsApp should have strong moderation rights because of this. It's not an editor or a publisher trying to foster a certain kind of community, and I wouldn't want it to be.
Second, the fact that it's not a mass medium means it's not a particularly useful tool for spreading propaganda or hate. Sure, it can foster them in small communities, but unlike even a private subreddit, those communities are necessarily limited in scope.
Third, I see two main types of hate-speech-type abuse: the kind where they are actively victimizing someone, and the kind where they are consensually conversing with the other participants. In the first case, on almost all platforms including WhatsApp, the victim can report the perpetrators. But in the second case, while you can report these groups on more public sites like Facebook, Reddit, etc., we can't do that here (and this is what the article is about). In this case, I feel like it's still important for people to be able to explore controversial topics in small groups. We need to be able to say controversial things to our friends and let them tell us we're dumb, without some moral authority on high banning us before we can hear reason from our friends. Plus, since we can't rely on victims to report problematic behavior in this case, we would need some kind of filtering or surveillance. Even if they could do that while maintaining their encryption guarantees, I have no faith in WhatsApp's ability to distinguish "Can you believe that Mike said Nazis didn't do anything wrong?!" (excitedly) from "Can you believe that Mike said Nazis didn't do anything wrong?!" (freaked out). We've seen too many times, from YouTube, Facebook, etc., that communication has way too much context for modern AI to handle.
Ultimately, I think this is an attempt at a technical, top-down solution to a social, bottom-up problem.
> Second, the fact that it's not a mass medium means it's not a particularly useful tool for spreading propaganda or hate. Sure, it can foster them in small communities, but unlike even a private subreddit, those communities are necessarily limited in scope.
i really have no idea how you can possibly claim this, considering that the OP article is literally about people using it as a tool for spreading both of those things and radicalizing people. it's not like we're speaking in hypotheticals here--this is something that is literally happening right now.
> In this case, I feel like it's still important for people to be able to explore controversial topics in small groups. We need to be able to say controversial things to our friends and let them tell us we're dumb, without some moral authority on high banning us before we can hear reason from our friends.
i'm pretty sure you can do this while also not allowing actual networks of far-right propaganda to exist and entrench themselves on your platform. but honestly, if someone is exploring nazism and fascism in general in a non-academic way, that is just a little bit of a red flag, and you're probably not going to talk them out of buying into it if they do?
I think you would need actual numbers and statistics to back up the claim that it's "useful" for radicalizing people. Instances of people who wouldn't otherwise have gotten caught up in it stumbling across hate groups on encrypted texting apps could very well be rare, isolated occurrences. Obviously we have no good way of knowing, but I would suspect traditional social media is quite a bit more efficient at radicalizing random people.
numbers aren't exactly great metrics by which to judge how "useful" something is at radicalizing someone, because the form of the radicalism can vary intensely from person to person and group to group. you can radicalize a lot of people and have nothing happen, or you can radicalize a few people and end up with an extremely virulent and violent set of people. 4chan, for example, is a gigantic website which receives hundreds of thousands of posts a day and has a massive number of radical people on it, but it was 8chan--which is a fraction of the size of 4chan, has a fraction of the userbase, and probably receives fewer than 10,000 posts a day most of the time--that produced someone who blew away 50 people in a terrorist attack. the takeaway from whatsapp having this problem absolutely should not be "is this a thing that happens in large enough numbers for it to be addressed?": it should be "how can whatsapp step in and try to prevent people from being radicalized in the first place?". because it really only takes the one person.
After the Mueller report revealed just how terrible BuzzFeed's reporting was, I don't see them as a credible news source anymore.
Eh... getting misinformation from the occasional bad source doesn't necessarily destroy a news organization's credibility, IMO. And it's not like BuzzFeed News is alone in having that problem. Just look at Bloomberg and the Chinese "microchip hack" claim: nobody can find any evidence whatsoever of it being true, and most security experts think it's highly improbable on a technical level. Even Bloomberg is allegedly investigating their own investigation now... so they likely know full well they fucked up.
It's not just that they got misinformation. It's that when the Mueller team made an extremely rare statement saying BuzzFeed's information was wrong, and no other news organization corroborated the story, they still doubled down on it. That shows that the key decision makers in their organization have issues.
Again, they are not alone in this. Bloomberg doubled down too, despite nobody being able to corroborate their story, every CEO involved directly stating it was not true, and a majority of computer security experts and even government intelligence agency officials coming out saying it was highly improbable on a technical level. Tim Cook (Apple's CEO) even outright demanded that Bloomberg retract the story, which is the very first time Apple has ever done that. And even just recently, BuzzFeed and Bloomberg are not alone in this: McClatchy did something similar with a story they reported related to Trump and Cohen, and ditto for the Guardian with their Manafort-Assange Ecuadorian embassy story.
So if your sole criterion for determining a news organization's credibility is whether they got information from a few bad sources which couldn't be corroborated and initially doubled down on their reporting instead of retracting immediately, then you have basically written off half the world's news organizations... because they have all done that at some point. Admitting a major mistake is difficult, and it usually takes a significant amount of time before any news organization is willing to retract a major story, especially one that potentially opens them up to legal liability (e.g. defamation suits) for having reported it.
Now, that isn't to say that these mistakes don't affect credibility at all... they do. But IMO they result in a tarnishing, not a complete annihilation of it.
To be honest, I've never trusted BuzzFeed at all. I don't think of myself as educated enough to align anywhere politically, but even I can tell that their articles are all really biased and only focused on pushing their agenda.
On Tildes I've actually been reading the articles, unlike on Reddit, and I've started to realize how many of them are really mediocre.