Good article. Could've benefited, I think, from pointing out that the profit motive is the core reason the internet evolved this way.
Why profits more than politics? It seems like political strife has motivated quite a lot of bad behavior from the beginning of time? And there is also quite a lot of terrible behavior "for the lulz."
Because the tech giants don't care about politics, but they do care about profit.
Facebook, Twitter, Reddit, et al, might not be driving the posts on their platforms which lead to the titular alienation, and maybe they don't even approve of those posts, but they are certainly the primary beneficiaries of them. Anything - good or bad - which increases traffic on their platforms is a positive for them. They don't care whether that's pictures of cute kittens or manifestos about racist hate. They are content-agnostic. All they care about is getting more people to engage with their platforms, more often, and for longer. Racist manifestos achieve that goal just as well as cute kittens.
So, they have no incentive to decrease hateful content on their platforms. In a perverse way, they do have an incentive to increase that type of content (along with cute kittens) - because it brings in more eyeballs for them to serve advertisements to.
I think this is reductive and doesn't work as a history of Reddit (at least). It ignores how both the management of Reddit and their users are ideologically against heavy moderation ("censorship"), and it ignores the outrage whenever Reddit tries to get rid of some of the worst offenders.
It also ignores the role of large advertisers as a force for centrism. Mainstream advertisers really do not like extremist or even controversial content, will avoid selling ads against it, and will push for moderation sometimes, like when Procter and Gamble temporarily pulled their ads off YouTube. Tumblr lost a lot of traffic trying to make their site more attractive to advertisers.
I wasn't trying to explain the whole of all online platforms' reasons for allowing questionable content. I was just trying to make the connection between the profit motive and that content for you. A full explanation would, as you rightly point out, take many many pages, and I wasn't willing to write that thesis. Sorry.
Except users of all systems have, for decades, wanted minimal moderation. Until recently, it wasn't something that was typically granted.
UNIX systems booted users who caused moderation issues from other systems. Forums, as far as I can tell, died a quick death when too lightly moderated.
The difference between those systems and today is that there wasn't a profit drive.
I'm not sure what you mean but I vaguely recall diversity in moderation policies, going back to BBS days. It seems like sites differed on whether they carried cracked games, text files about making bombs, and so on.
The scale was a lot smaller then and moderating gets harder at scale.
It's true that the very large social networks were built as for-profit businesses. This isn't necessarily incompatible with naive idealism, but it does mean they tried harder to grow, since that's what startups are supposed to do.
Oh hey, you've been away a while. Glad to see you pop back up again. <3
Thanks. I've been back for a few days, but I'm taking it slow.
I think it comes down to what the goals are in the end. The goal of companies running our social media spaces is to make money, not to 'take care of' their users or give the users what they want.
So, chasing profits, they promote content that angers people, tolerate bad reporting and clickbait, spam everyone with ads, and design their sites like traps to keep people locked into them. They nag you every second with meaningless updates to draw you back in. I'd say it's a small wonder that people are pissed off and becoming disillusioned with the web - and that it's about damn time.
Most of those behaviors would evaporate without the profit motive. That doesn't mean they'd magically get the social/tech problem right with their designs, though. It's just that while chasing profit, they'll never even look twice at goals like community, mental health, minimizing hateful content, or holding content to some standard of truth/facts.
I think this neglects the role of human behavior and particularly human weaknesses. Some analogies: Las Vegas certainly does promote gambling, but the casinos would be out of business if a lot of people didn't fundamentally like gambling. People would still gamble on their own. Likewise, companies that sell alcoholic drinks promote drinking, but people would still want to drink even if nobody were pushing alcohol. (Prohibition didn't work.)
In the early days of the Internet every newspaper had a comment section. They were not profit centers and they were almost all filled with angry partisans. YouTube comments used to be terrible as well. This isn't because Google wanted them to be terrible, it's because it took them many years before they implemented moderation that somewhat worked.
The profit motive is important but I don't think online conversation is problem-free without it. Whenever you find a forum that's not a hatefest, either it's too obscure to attract the haters or someone is moderating it to keep it that way.
Really? Most people say Internet discussion was fine until 2016, so bigotry predating that by 20 years would make them pretty absurd. (Although I haven't been on the Internet that long, so to me this is just a strange remark.)
As someone who's been on the internet since baud modems, I can assure you internet discussion has always been shitty and malicious. A combination of anonymity and just a larger pool of individuals (a more diverse set of opinions) leads to people putting up some pretty off-the-wall opinions and then other people getting angry at said comments and replying. Without heavy moderation it quickly devolves into apes throwing shit at each other, and it typically also needs self-policing and established social norms.
Yep, this is the root of the problem, really. When people had a first name, last name, and company/university attached to their name, the vitriol was quite a bit less.
A lot of comment sections closed around then or before. Here's an article from 2015 about 7 other sites that closed their comment sections in 2014.
Most of the audience wasn't reading them anyway. For example, NPR's comments closed in August 2016. Quoting from the announcement:
The NPR.org audience has grown dramatically in recent years, to between 25 and 35 million unique visitors each month. But far less than 1% of that audience is commenting, and the number of regular comment participants is even smaller. Only 2,600 people have posted at least one comment in each of the last three months –– 0.003% of the 79.8 million NPR.org users who visited the site during that period.
If the comments they got were high quality, they'd probably have kept them open anyway. But this isn't what news websites had to deal with. A comment on Hacker News in 2016:
I see comment moderation as one of the 'unsolved problems' left in this generation of the web. When I worked at Foreign Policy we worked hard to integrate new commenting tools and encourage power users, but we were just buried by the threats, spam, and low-value noise.
Web technology scales, journalism scales (poorly, but a relatively small publication can pull big traffic), but right now there's just no substitute for someone manually checking out reported comments and banning problem users. When you have a site with as much traffic as NPR, that would probably take dozens or hundreds, and these orgs are loath to outsource it to cheap countries like the big web players do, mostly due to the ethical challenges.
Maybe moving comments to people's own social groups on FB/Twitter will help to defray the costs; I don't think they're really seeing any discussion value for the most part.
Twitter is effectively what's left after many other organizations got out of the business of hosting comments. The websites that got pretty much all bad comments gave up. Twitter at least has variable reinforcement going for them: sometimes there are good comments! It keeps people coming back.
Sometime very early on in the existence of the web for mass usage, the expectation was set that content and services on the web should be free. If people had to pay to access a website, such as a newspaper's website to use an early example, they would simply visit a different site despite being completely okay paying for the underlying product in a different format (e.g. buying the newspaper itself). This has led companies to find other ways to monetize usage of their web services, mainly through selling ad space.
So ads became very central to the internet/web, in such a way that many sites would become inoperable if not for the revenue generated through ads. Improving the effectiveness of ads then became a central focus for these companies, and to do so required collecting data about what each user likes and doesn't like, and using this data to create an ad profile to increase ad effectiveness. This created a huge amount of data that became valuable in its own right, and people quickly recognized that this data was useful outside of the context of selling ads. By understanding user behavior, it could be subtly manipulated - showing just the right content, just the right amounts, at just the right times to keep you using the site, generating more valuable data and exposing you to more ads. This led to echo chambers in a way that simply hadn't been an issue before, because now most of what you saw online was tailored just to you.
By the time politics entered the picture, people were already self-segregating into potentially toxic communities and sites that exacerbated the problem. If we can change the profit motive that underpins the structure of the net, I think it honestly would do much to return it to its roots and original potential.
I agree. This is why, IMO, we are either going to need to pay a subscription for nearly everything that's software, or donate a lot of money, if we want to have an Internet where we are the main revenue source of the products we use and thus the product owners' main priority and their customers, a title currently held by advertisers, venture capitalists and shareholders. To quote myself on this:
They don't make money from us, so they won't listen to our concerns, which is way too big a price to pay / risk to take. In my opinion, the only way to change this is to make ourselves the center of their money supply, or else we will never be their main priority.
Check out some of Jaron Lanier's thoughts on possible solutions, like this article (unfortunately paywalled). This NYT series delves into the same ideas.
Great piece. One question: where do we go from here? The Internet is part of our DNA at this point; it's an extended limb on our bodies. There's no way to escape it, though we know we're all abusing it to the detriment of our own mental and physical health.