Don't become a platform? As a preamble, I find it disingenuous that Reddit tries to call itself a home of "communities". You can't have a community of 100,000 people, just like your metropolitan city isn't a "community" either. It's impossible to foster kindness when you're on a loudspeaker to thousands or millions of others. The human brain is very good at extending good faith to friends, family, and the people we know—because we know them to be good people, and their actions reflect that. That doesn't extend to strangers and people on the internet. As a website grows, the fraction of "strangers" grows with it, and the number of good faith exceptions you'd make in a smaller community consequently shrinks.
Tildes should probably remain small—and by small, I mean small, not become elitist. If you want to be a community, you can't have millions of people. I don't think it works. The problem is, that goes against the human psyche of continuing growth. We all want to see things become "bigger", but often bigger is not better, and often bigger problems are harder problems to solve.
If it's a small enough site, a small enough community, disinformation/misinformation becomes harder to spread because it's easier to call it out, easier to squash, and doesn't exist on a vast scale. You're not going to solve this problem with algorithms, 10,000 human content moderators, or technical limitations to de-emphasise misinformation. The probable secret to success here is to remain manageable.
I absolutely agree, and it's been a pet peeve of mine every time we discuss this site's growth. There seems to be some kind of automatic response among a lot of people that everything has to grow or become bigger or else... I never felt that way.
It doesn't have to grow, but I think it would be nice; some interesting subjects sometimes fail to find an audience here.
Also it could be cool to see more sites adopt (self host) the Tildes backend for their projects. Discourse (Source) is pretty popular but I think Tildes looks better and isn't as complicated.
I mean, I asked and Deimos himself said he wants the site to grow.
(To a point that is, perhaps around 40k active people.)
Huh. Well, he didn’t give numbers in his last response to me.
You never know what your magic numbers are, but I've sensed a pattern and seen it many times. Every doubling can get you in trouble when it's small. Larger-scale moderation/civility issues start popping up around 50k and get worse. By the time you're at 250k you may as well have 50 million, it's just the amount of work that goes up. By 250k you already have all the mod challenges in play.
Tildes probably needs to pause and refactor the trust/moderation system at each major milestone to handle the larger userbase. Unrestrained growth would be the death of the place just like every other forum.
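To make the scaling intuition concrete, here's a rough sketch (the numbers are illustrative only, not measurements from any real site): even if each user posts at a constant rate, the number of possible user-to-user interactions a mod team might have to untangle grows roughly quadratically.

```python
# Illustrative: potential pairwise interactions grow as n*(n-1)/2,
# while a mod team's capacity grows linearly at best.
def potential_pairs(n: int) -> int:
    """Distinct user pairs who could end up in a conflict."""
    return n * (n - 1) // 2

for users in (10_000, 50_000, 250_000):
    print(f"{users:>7,} users -> {potential_pairs(users):>15,} potential pairs")
```

Going from 50k to 250k users is a 5x jump in headcount but a 25x jump in potential pairings, which is one crude way to see why 250k already has all the mod challenges in play.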
I think I agree with this statement. There will always be growth of some sort; I think what may be difficult for Tildes is defining that growth.
I grew up before reddit was really a thing and spent a lot of time on various forums. The thing all those forums had "in common" is that they were all for a specific thing.
In other words, you may go searching out a community to discuss X book series, find a forum, and that's how it grows. When people join the community you know that you will have at least one thing in common that brings you to the same website.
Tildes (and definitely Reddit) don't really have that one unifying thing, and I think it makes it easier for "bad actors" to show up.
Related thread: Masnick's impossibility theorem (or why large and well-moderated social media platforms cannot exist?)
Related question of mine, concerning the implication that social media platforms literally can only be good as an extended social network and can't have more than a few thousand people (Edit: a few tens of thousands, whoops) each due to human nature.
I think the very notion of regulating “facts” is misguided, especially for a platform which aims to promote discussion https://www.ribbonfarm.com/2020/09/03/wittgensteins-revenge/
A woman was killed today because she and thousands of others like her believed batshit-insane misinformation they saw on social media, so much that it compelled them to fly to DC and break into the Capitol building during proceedings.
I don't know what the answer is, but something has got to give. Clearly large portions of our population are fundamentally incapable of distinguishing fact from fiction when they read something on Facebook. It has gone from being "haha these idiots think the Earth is flat" to "there was an attempted coup and people are dead in the nation's capital" in the span of about 5 years.
I think it's less people's inability to distinguish fact from fiction, as most people are far from experts on anything[1] (including political processes), and more people's willingness to throw themselves at some tribe with an undying belief. Faith, I guess, is the problem.
[1]: e.g., have a person hear from three others that the force of gravity on the surface of Earth is (generally): 9.81 m/s^2, 10.75 m/s^2, and 1.00 miles/hour^2. How many people, on average, do you imagine could distinguish fact from fiction?
Also, algorithmic radicalization has been pointed to as a source for quite some time. We don't have that problem here, which is nice :)
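To put some numbers on that footnote, here's a quick back-of-the-envelope check; only the 9.81 m/s^2 figure is the real value, the other two are the footnote's fabrications.

```python
# Sanity-check the footnote's candidate "gravity" values by converting
# the true one, 9.81 m/s^2, into miles/hour^2.
METERS_PER_MILE = 1609.344    # exact, by definition
SECONDS_PER_HOUR = 3600

g_m_per_s2 = 9.81
g_miles_per_h2 = g_m_per_s2 * SECONDS_PER_HOUR**2 / METERS_PER_MILE

print(round(g_miles_per_h2))  # ~79,000 miles/hour^2
```

So "1.00 miles/hour^2" is off by almost five orders of magnitude, yet without doing the conversion all three numbers sound equally plausible, which is the footnote's point.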
Current events range somewhere between the tragic and the farcical. But such things have been happening from long before we had electronic communication, and will continue to do so long after, because they pertain more to human nature than social media specifically, or fact-checking discourse. Curation by “facts” presumes a bubble of thought inside which everyone agrees on assumptions and context. That is a fruitless approach especially after consensus has clearly fractured. The root of the solution lies in building communication and trust, and not in regulating “truth”.
The best solution to discourage disinformation would be to reduce “engagement” and scale on social platforms (but that goes against their business interests). Any other algorithmic/moderation mechanism is misguided, and likely to be window-dressing at best and dangerous to free thought & speech in the worst case.
As crazy as some of the claims being thrown around currently might be, there are a variety of foreseeable dire circumstances in which the ability to make those claims and have hard discussions is probably the best way to preserve a functioning democracy/community. So it behooves us to be extremely careful before rushing to regulate/suppress those possibilities.
You can't tell social media sites how many users they can support, and you can't tell them how to run their sites. Either of these would be grotesque violations of their first amendment rights.
It is unreasonable to expect a group of people not to be able to have rules within their own space, and I think some level of enforcement is required in order to keep those spaces viable places that people want to spend their time.
Whoa, hold your horses! There’s a lot you seem to be implicitly projecting.
Do corporations even have first amendment rights? What does it even mean? (And this will definitely vary in different countries)
The moment social media sites started curating/prioritizing content, they lost Section 230 protection (in the US). They take on the role of a publisher rather than a platform.
There are far simpler solutions than micromanaging social media sites. Eg: Mandating interoperability with standardized interfaces (as was done for the telephone network). That will break monopolies and enable healthy market mechanisms by which people can freely shift between hosts in a federated network (with curation policies to their taste) without the worry of being locked out of the whole network. (See Mastodon as another great example of how this might work)
I have no qualms with private spaces being regulated; the problem is when public discourse has to pass through monopolistic platforms where people have very little influence over the policies (so Facebook & Twitter, not Tildes).
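On the interoperability point, here is a hedged sketch of what "standardized interfaces" look like in practice. Mastodon servers federate using the ActivityStreams 2.0 vocabulary, so any compliant host can parse a post the same way; the actor URL and content below are made up for illustration.

```python
import json

# A minimal ActivityStreams 2.0 "Note" object (the post format Mastodon
# servers exchange). All concrete values here are hypothetical.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.social/users/alice",  # hypothetical actor
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "content": "Any compliant server can render this post.",
}

# Round-trip through JSON, as a receiving server would.
parsed = json.loads(json.dumps(note))
print(parsed["type"], parsed["attributedTo"])
```

Because the format, rather than any single host, is the standard, a user can move between servers without being locked out of the wider network.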
You should probably do some more research on these topics. You don't have a correct understanding of how Section 230 works or how the First Amendment applies in this space (yes, it applies to corporations, and not in other countries at all because it's a US law).
I was saying exactly that it won't apply in different countries. Btw, to phrase my first comment better, why/how does a corporation's first amendment rights apply to the content being discussed on its platform? (I.e., why is it considered “their” speech?)
As for Section 230, I would appreciate any references justifying why its protections ought to apply even once social media platforms started influencing/blocking/curating content. IIUC, they might not have been litigated yet to set precedent, but I wonder whether they are on sound legal footing.
It just seems to be a general, fundamental misunderstanding: a core purpose of Section 230 was to enable platforms to moderate based on their discretion, but you think doing that invalidates it for them.
Hmm, thanks for that feedback, I’ll dig into this.
I don't think Twitter, Facebook, or Reddit are monopolistic platforms. The fact that the three of them exist independently, along with a plethora of other successful social media sites makes them not monopolies by definition.
We already have an interoperable system based on standardized interfaces for people to share information. It's called the World Wide Web, and it is far more analogous to the old telephone network than individual websites. Anyone who doesn't like what is happening on one site is more than free to leave it for another, or create their own. I strongly disagree with how Facebook is run as a company, and haven't used the platform in years with no real negative repercussions. Dissatisfied Bell customers in the '70s could not say the same.
Forcing social media websites to make their content and features interoperable would kill their ability to compete and innovate, as the body in charge of those standards would effectively dictate what is and isn't possible. The purpose of these platforms is far more focused and specialized than the telephone system or the world wide web, making "standardization" much more difficult.
How far down would this rabbit hole go? Would we prevent moderators from controlling the content on subreddits that exceed a certain population size? Would we ban moderators altogether and let users effectively decide what is allowed on subreddits using the upvotes and downvotes? /r/AskHistorians is a huge community that only works because of its incredibly strict moderation.
Yes, I think the problem for conversation might be better thought of as "assertions of things not generally believed." Particularly dangerous are bare assertions not backed up by links or stories based on personal experience, as a challenge to anyone who dares disagree.