When limiting online speech to curb violence, we should be careful
Link information
This data is scraped automatically and may be incorrect.
- Title: Think Twice When You Limit Online Speech to Curb Violence
- Authors: Cindy Cohn, Alex Davies, Noam Cohen, Arielle Pardes, Megan Molteni, WIRED Cartoons, Liz Specht
- Word count: 1084 words
I'm gonna try my best to articulate what I've said on the topic in other threads, in addition to what I think about this article in particular. It seems like the author is saying that since we can't all agree on what should be allowed, no one should be banned / deplatformed / etc. The argument about the power Google, Facebook, et al. have is a separate one from this. Yeah, we all agree that Facebook sucks, but if people are getting banned when they shouldn't be, the solution is better moderation, not throwing our hands up and saying "well, since we can't do this right, let's not do it at all."
I don't want Facebook and Google legitimized as utilities either. I'd rather they (along with all the companies that double-dip into internet service and media) be trust-busted, and while the individual components would still be huge, it'd be easier to compete against Instagram or Google Drive/Docs on their own.
This article is really condescending. They say "It can feel viscerally good to try to shut down these forums or chase them from host to host, or to hold someone accountable even if it is for what is said by others." but don't offer a better solution. Given the kind of garbage that already exists on Facebook, how much worse would it be if they didn't even try to moderate the site? We've seen over and over and over that unmoderated spaces on the internet become complete trash. Public chat channels in every large video game are full of racism and trolling. T_D directly has blood on its hands. If they have a better solution, I'm interested to hear it, but in the meantime I don't feel the slightest bit bad calling for 8chan and all the other disgusting right-wing platforms and personalities to be deplatformed.
I'm interested in what ideas Tildes users - presumably well-informed and creative - have on this. I was hoping this post would result in some constructive discussion about how to limit hateful or violence-inciting speech while also protecting vulnerable users and dissenting but positive discussions. But the trend seems to be piling on with the argument that free speech is almost always a front for the alt-right.
That feels like a strawman to me. The argument presented by most people here that I have seen in favor of deplatforming (myself included) is that absolutist free speech (which pretty much only exists in the US) is more often than not used in defense of the "alt-right". And that isn't an entirely inaccurate assessment IMO, as they are currently the most prevalent purveyors of hate, and most reasonable people understand that certain limits to freedom of speech, e.g. hate-speech laws (which exist in almost every Western country but the US), are very much necessary in order to prevent exactly what we are seeing frequently occur in the US right now in terms of hate crimes and hate-motivated mass shootings.
What "vulnerable users" and "dissenting but productive discussions" are negatively effected by deplatforming people who harass others and post hatespeech (as defined in most platforms' Terms of Service)?
I'm not an American, so it's news to me that the US doesn't have hate-speech laws; I thought those were universal among the West. That is the context in which I am arguing for free speech - rights have corresponding responsibilities. So I hope you see my concern: in a situation where speech that incites illegal acts is already taken care of by law, censorship comes on top of that.
See the article. There was also an article somewhere recently about drag queens' tweets getting taken down by auto-mod for hate speech. You could argue that this is the fault of the technology, but those are exactly the sorts of tools being applied right now.
Moderation is incredibly difficult (which I can attest to from years of doing it), especially the automated kind given our current technological limitations, and even with human moderation honest mistakes are bound to happen (which I also know from experience). But that doesn't mean we should let perfect be the enemy of good, or throw the baby out with the bathwater and never censor or deplatform anyone ever, for any reason... which is what the EFF has been consistently arguing for over the last few years... which is why I cancelled my recurring donation to them.
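To make the automated side concrete, here's a minimal sketch of why naive keyword matching misfires. The term list and function are hypothetical (not any real platform's system), but the failure mode is the one from the drag-queen example above: a reclaimed, self-referential use of a word gets flagged exactly like an attack would.

```python
# Hypothetical sketch of naive keyword-based auto-moderation.
# Real systems are far more sophisticated, but they inherit the
# same core problem: matching words without understanding context.

BLOCKED_TERMS = {"queer"}  # a term that is both a slur and a reclaimed self-description

def naive_automod(post: str) -> bool:
    """Return True if the post would be removed by the keyword filter."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKED_TERMS.isdisjoint(words)

print(naive_automod("Proud to be queer and visible!"))  # True -> false positive
print(naive_automod("Have a nice day, everyone."))      # False
```

That context-blindness is why honest mistakes are inevitable and why review/appeal processes matter so much.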
Yes, deplatforming is not a perfect solution and it won't instantly solve the problem; and yes, there is the potential for a slippery slope (although that's considered a logical fallacy for a reason); and yes, while everyone is figuring out how to effectively target hate speech, some innocent people will get caught in the crossfire (though that can be, and usually is, reversed on review). However, IMO deplatforming is still the best option we currently have for preventing (or at least slowing down) the spread of these hateful ideologies which threaten to tear apart the very fabric of Western society, since the underlying systemic issues that lead to the rise of hate are significantly more difficult to even identify, let alone actually make progress towards solving.
Deplatforming is not the solution to all our problems, but it's still a step in the right direction, and IMO anyone arguing against it (like the EFF has been) is, whether they know it or not, helping the hate and violence continue to spread.
I don't think it's a step in any direction - it's running in place. It's like plugging one hole in the dam only to have another leak pop up. In theory, with the ultimate wise-person calling the shots (to prevent abuse) and with nowhere else to go, maybe it will work. But shutting people up just adds fuel to the fire. It's like the American prison system - instead of rehabilitating and integrating people with a functional society, they group them all together and act surprised when the end result is just more effective criminals (especially for young people). Echo chambers distort reality, for all sides.
Even if the majority of the audience for hate speech moves platforms when they or their thought leaders get deplatformed, that at least prevents even more people on the previous platform from being exposed to them. Whereas letting hate speech spread unabated on the largest online platforms with the widest reach just allows its purveyors to keep recruiting en masse by bringing more new, naive idiots on those platforms into the fold. So if we're going to go down the analogy route, deplatforming is not "running in place"; it's more like bailing water to stay afloat... which is still better than the alternative: drowning.
False equivalence. Not allowing hate speech on a platform does not make it an "echo chamber"; allowing it is what often leads to that, since reasonable people tend to flee from places where it's allowed or prevalent (see: Voat, 8chan, Gab, etc.), whereas only the most hateful and extreme free-speech absolutists tend to flee from places where it's not allowed. And it's also ironic you should mention distorting reality, since IMO hate speech is the ultimate distortion of reality and adds absolutely nothing of value to society or any individuals exposed to it; in fact it does the complete opposite, actively damaging the very fabric of society, all the individuals exposed to it, and especially those being targeted by it.
As far as I can tell, pretty much everyone agrees that Facebook et al.'s moderation doesn't work. We can force them to throw more resources at the problem; lord knows they have enough money to pay more moderators. That doesn't fix the problem of those moderators developing PTSD and those sorts of issues because of the fucked-up stuff they see, which could be addressed by either rotating moderators out regularly or getting a whole lot stricter about how we interact and post things online. Alternatively, if we could encourage smaller "reach" for people's posts in some way (my vote is going back to the pre-2010s internet, where we were all mostly on smaller forums), I think that would go a long way towards solving these issues. Decentralizing would also have the added bonus of making it harder for online propaganda operations to spread. Maybe we do need to hold the Facebooks and Reddits of the world accountable in some fashion.
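For what it's worth, limiting "reach" doesn't strictly require going back to small forums; a large platform could throttle amplification directly. A minimal sketch of that idea, where the cap, numbers, and function name are purely hypothetical illustrations rather than anything a real platform does:

```python
# Hypothetical sketch: cap how many new people a single post can be
# shown to per hour, so virality is throttled regardless of who posted it.

MAX_NEW_VIEWERS_PER_HOUR = 500  # arbitrary illustrative cap

def eligible_audience(views_this_hour: int, candidate_followers: int) -> int:
    """How many more people the feed may show this post to in the current hour."""
    remaining = MAX_NEW_VIEWERS_PER_HOUR - views_this_hour
    return max(0, min(remaining, candidate_followers))

print(eligible_audience(views_this_hour=480, candidate_followers=10_000))  # 20
print(eligible_audience(views_this_hour=500, candidate_followers=10_000))  # 0
```

The point is the mechanism, not the numbers: a hard fan-out cap would approximate the small-forum dynamic (everyone can speak, nothing goes instantly viral) without decentralizing anything.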
There's a very simple solution to this problem that has zero to do with censorship on the internet. Gun control is what will curb the violence, not arguing about whether shooters were radicalized on 8chan or some other site and whether those sites should be allowed to exist.
Agreed. Now what about places (e.g. the UK) with gun control but an issue with stabbings? Knife control? I'm confident the author was not speaking strictly about gun violence, but about violence in general.
Unless you're from Ninja Assassin, you probably can't kill 9 people in a minute with a knife like the Dayton shooter did with a gun. Yes, we need to work on reducing violence in general, but gun control would go a long way towards helping with that.
I'm not arguing against gun control; as I clearly stated, I agree with JXM. Gun control is largely America-centric and not the topic of discussion in this article - that would be missing the forest for the trees.
I hope you actually read the article.
I did, I was just responding to your particular comment. I left a broader comment farther down.
I don’t know. Honestly. If I had the answer, I’d put an end to the violence!
But guns allow you to kill so many more people in quick succession than a knife does.
Off-topic, but - even state-sanctioned violence?
Meaning things like the death penalty?
No, like the police. They are allowed to be violent if you do not cooperate, even if you're not being violent.
Every time someone brings up the 'problem' with censoring hatemongers, the only argument they can come up with is this rickety slippery-slope argument supported by extremely tenuous examples. I was hoping that an EFF representative would be able to do better.
In any case, I did take a second to read the Santa Clara Principles, and I think that it's actually a very responsible way to run a community. I wonder if @Deimos knows about them and what he thinks.
Thing is, the problem is real: there are many cases where censorship went far beyond what it was meant to do. In Turkey it first came in the shape of anti-porn measures, then the same technical mechanism was used to block hateful and disparaging speech, and today almost everything is blocked: the entire voice of those who do not agree with the ruling bloc, coverage of its misdeeds, fraud, and corruption, and almost anything that has anything to do with sex, slang, or the use of stimulants.
I don't think calls to violence or malicious radicalising rhetoric should get a platform, or that such speech should be tolerated, and I support deplatforming them. But we should not forget the ramifications once the mechanism, legal and technical, is there and gets out of control. It is a compromise of freedom of speech, and we should always be aware of that so that we can prevent it from being overused or abused.
... or hate speech, which isn't classified as protected speech in most of the Western world.
It's not really a slippery slope argument - all it's saying is that legislation can be used 'against you.' Just like the right to be forgotten can be used to squash malpractice information, anti-hate-speech legislation is only as good as the person in power who defines hate speech. That is the principal argument of those who believe limiting government power is necessary in certain situations, since any increase in government power can be used 'against you.'
I should mention of course that there exists a balance that must be struck.
FYI the author is Cindy Cohn, a civil liberties attorney for the Electronic Frontier Foundation.
I'm glad there are reasonable progressive people (if you consider the EFF to be on that "side") who are critical of censorship.
Posted in ~Tech because of other similar posts.