Biden wants to get rid of law that shields companies like Facebook from liability for what their users post
Link information
- Authors
- Lauren Feiner
- Published
- Jan 17 2020
- Word count
- 481 words
You should read the Backpage documents before commenting on this, because Backpage's behaviour is largely what changed US legislators' minds on this.
Children were being kidnapped, drugged, and raped, and they were being offered for sale in Backpage ads. Backpage (i) knew this, (ii) did not try to stop it, (iii) did not help law enforcement, and (iv) made changes that encouraged the ads and hindered law enforcement.
Children who had been rescued would find images of themselves being raped posted to Backpage, and they would contact Backpage and ask for those images to be taken down. Backpage always refused, and continued to refuse all the way through the court cases.
https://www.hsgac.senate.gov/imo/media/doc/Backpage%20Report%202017.01.10%20FINAL.pdf
Yeeesh, I didn't realize Backpage's behavior was so bad. I'm generally pro-legalizing (and heavily regulating) prostitution, so I was generally sympathetic to the idea that they were providing a safe platform for it. I kind of just assumed they knew they were on thin ice and would have taken some basic care to try to keep the non-victimless (is there a word that's like "victimfull"?) variants, like child abuse and sex trafficking, under control.
That said, I think there is a trend in law enforcement now where expansive powers are being granted to curtail "sex trafficking," but this basically results in the term being defined overly broadly to catch lots of stuff that probably doesn't deserve that level of enforcement. It's kind of like how anti-terrorism powers are now largely just used to wage the drug war.
Sex trafficking is already exempted from Section 230's protections by a newish law called FOSTA (the Fight Online Sex Trafficking Act). So Backpage is no longer a good argument for abolishing Section 230.
You can find a lot of articles about FOSTA and Section 230 on the blog of Eric Goldman: https://blog.ericgoldman.org/archives/2019/06/latest-linkwrap-on-fostas-aftermath.htm (lots of related links at the bottom).
I think there is a basic difference between how platforms like Facebook, Twitter, Reddit, and even Tildes work and how socialization used to work on the pre-social-media internet through Usenet groups, bulletin boards, IRC chats, and blogs with commentariats. I suspect the former model is inherently flawed, but I'm not really sure what to do about it, since there is no putting the genie back in the bottle.
I talked about what I think of as the basic flaw in social media here. After rereading that post, I find I still agree with the argument a year later. The 1-to-many paradigm might just be fundamentally broken because it creates no incentive to treat people like people; instead, the primary thing I interact with is atomized bits of "content" bereft of all context and humanity.
If you'll allow me a bit of a digression, there is a neurological disorder called "Capgras Syndrome" where, after a traumatic brain injury, one becomes convinced that a person they see and know is actually an imposter. One theory about what's happening is that the brain recognizes people on two levels. One level is straight-up facial recognition, but on an emotional level that facial recognition is accompanied by a memory trigger that "reminds" you of your general emotional orientation towards that person. So I see my wife and I know she's my wife not just because this human has my wife's face, but because I associate that face with all the memories and feelings and things I know about her. The face is anchored to a whole suite of emotions and thoughts.
Some traumatic brain injuries spare the part of your brain that holds the memory of a person's face, but break your brain's ability to pull in all the associated resources it expects upon detecting that face. Your brain expects this bevy of anchored thoughts and feelings to come rushing in, and when they don't, it assumes this is not the person you know. The fact that they have the same face and claim to be this person means they must be an imposter!
All of that is a roundabout way of saying there is more to human interpersonal interaction than merely knowing you're talking to a person. You can interact with someone on an intellectual level, but you can't really trust or be open with them unless you can anchor this to an emotional connection built on shared memories, experiences, and background knowledge about them. The Twitter paradigm, which strips all that context out of the posts you interact with, might be hampering your brain's ability to interact with people as humans.
It is all a matter of scale. When human communities are small-scale, they can self-police. When communities grow too large, the economic incentives for moderation break down. If moderation becomes onerous, people give up and stop doing it, unless the platform steps in and incentivizes it. And the platform operators' motives are unlikely to align with the community's once the community grows large enough to attract the interest of the wider world.
I think the tech companies will try to automate moderation, but the moderation will be circumvented enough that human users will be drowned out by the bots. I don't think it's fundamentally feasible to run a large platform (millions of legitimate, human users) and not sell out in some way. Either you are selling out by allowing third parties to exploit your legitimate, human users, or you are selling out by paying your human users to moderate themselves. And since the latter option eats into profits, it's highly unlikely to either scale well or actually be successful compared to platforms that opt for the former.
In a way, this is what's already happening with Mastodon instances and platforms which use ActivityPub in general. Most are geared towards specific interests and have more stringent moderation (in practice as well as policy) than large sites like Twitter, and since users are able to migrate to different instances, they're able to collectively exert more pressure over admins' decisions. However, on the flip side, moderation isn't really democratized per se, so it's effectively replacing the corporate hierarchy of traditional social media with benevolent despots who control smaller fiefdoms. Due to the ability to defederate with other instances, most users are protected from places rife with bad actors and nonexistent moderation such as Gab, but this ability can also be overused due to spats with another instance's admin. Overall, the power structures are very different.
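As a rough illustration of the defederation mechanism described above, here's a minimal sketch of how an instance-level domain blocklist gates inbound federated activity. The names (`DOMAIN_BLOCKS`, `accept_inbound`, the example domains) are hypothetical illustrations, not Mastodon's actual API; real servers implement this as admin-configured domain blocks checked when activities arrive over ActivityPub:

```python
# Sketch of instance-level defederation: an admin-maintained domain
# blocklist is consulted before accepting a federated post.
# All names here are illustrative assumptions, not a real server API.
from urllib.parse import urlparse

# Domains this instance's admins have chosen to defederate from
# (hypothetical examples).
DOMAIN_BLOCKS = {"gab.example", "spam.example"}

def instance_of(actor_uri: str) -> str:
    """Extract the host part of an ActivityPub actor URI."""
    return urlparse(actor_uri).hostname or ""

def accept_inbound(actor_uri: str) -> bool:
    """Reject activities originating from defederated instances."""
    return instance_of(actor_uri) not in DOMAIN_BLOCKS

print(accept_inbound("https://friendly.example/users/alice"))  # True
print(accept_inbound("https://gab.example/users/troll"))       # False
```

The point of the sketch is that the decision is made per-instance by its admins, which is exactly why the power structure ends up as many small fiefdoms rather than one central authority.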
I think that this is a poor solution to the problem at hand. Section 230 was implemented to protect ISPs from having to moderate their users. Do you really want a manual process deciding whether or not your speech online is protected? Unmoderated free speech is a core feature of the web. Companies are allowed to moderate, but should not be penalized if they don't.
If I yell fire in a movie theater, should I be prosecuted for my action? Or should the theater be prosecuted because they allowed me on the premises without ensuring that I would not yell fire first?
If the theater is aware of the existence of groups dedicated to wreaking havoc in movie theaters by provoking stampedes, and the theater then takes active measures to monetize those people's activities despite the harm to its patrons overall, then I'd say yeah, it should probably bear some blame for it. At that point it's negligence on the part of the venue.
The distinction to draw, I think, is whether a platform is a "dumb pipe" or whether it curates the data somehow (via humans, algorithms, or some combination).