Is an ethical social media platform even possible?
I've long been uncomfortable using platforms that have a bad reputation with respect to:
- Human rights / genocide
- Disinformation
- Privacy
All three of those can be connected with advertising revenue, among other things. When I use platforms that are shady in this regard, I know I'm colluding with them and contributing to the problems they create. So it's been a relief to see new platforms like Tildes emerge, as well as those based on ActivityPub.
But even platforms that don't have overt advertising (Telegram?) do have a problem with hate groups that go unchallenged. And I know that if I was running an instance of an ActivityPub compatible platform such as KBin, I mightn't be able to keep on top of moderating things like disinformation.
So I suppose my question is, where do you draw the line? I've deleted my Twitter and Meta accounts and I'm exploring alternatives, but I'm not sure if I'm going from the darkness to the light, or just into shades of grey.
Apart from privacy, honestly I don't think the monetization method has much to do with it. Hackernews is a good example; I think many people would find the comment sections of many hackernews threads problematic in one way or another (although not in a unique way or anything).
But if you think about it, hackernews is the platonic ideal of a social media website many techy people want: it's a site with zero monetization whatsoever, running on ancient lisp code, no javascript or any client side code, on a single server in a closet somewhere, run by a single person, with no growth plans, no algorithm. They do not make money, will not need to make money, and will never try to make money.
And yet if you extracted the text out and swapped the CSS I don't think you'd find all that much difference between Reddit and hackernews at this point. Which indicates, to me anyway, that it's just people.
It's just how people are en masse. To get back to the topic, I'm really not sure what you mean by "ethical". From the context, it seems to be about whether or not there are elements you personally find problematic on the platform.
In which case, I would say, no, it is not possible for any "social media" platform with a nontrivial userbase. You should stick to groupchats (obviously, with curation) and groupchat-like things (e.g. small Discord servers).
HN is both more and less moderated than your average reddit sub. It's usually better (but not perfect, nothing is) at stopping those deep threaded flame wars between two people, and there's different moderation to prevent the submission feed from being clogged up. e.g. not just removing clickbait, but moderators can merge duplicate submissions so the comments can all end up in one post.
But it can be worse in that very little is truly removed, which you'll see if you decide to turn on "showdead", i.e. all flagged comments become visible. Very, very few comments are ever rendered fully unviewable (basically just doxxing), so you can see how low the bar can fall.
There's also a very simple but obvious barrier that keeps quality better: there are no image submissions or comments. It can be unfortunate when you want to properly visualize something complex, but I'm sure we all know by now how quickly quality drops off a cliff when the feed becomes image macros and the comments a bunch of noisy gifs. There needs to be some friction to prevent passive users from upvoting the most easily digestible content to the top over the most informative or insightful.
Even without showdead Hacker News comments can get pretty rough about certain topics -- it's very similar to reddit in regards to any conversation that abuts feminism or trans issues.
The worst topic on HN is anything to do with homelessness. It seems a good 30% of the userbase would love to indiscriminately kill unhoused people.
I agree that the comments are terrible on some topics, but I don't think we can draw any conclusions about the userbase. It's not a poll. Discussions like that are off-topic for Hacker News, and most people are going to skip topics where they don't think the discussion will be good. (And they'll probably get flagged.)
I wish I were surprised by that.
By "ethical" I mean:
- Not colluding with human rights violations for any reason
- Taking responsibility for disinformation on the platform
- Being transparent about privacy practices and possibly even providing opt-outs
Interesting point about Hackernews. I'm not familiar with it but I take your word for it.
It's interesting that you consider Discord the lesser of the current evils, though. Any particular reasons for that? I rarely use it.
If by "colluding" you mean merely allowing content you believe to be human rights violations, then yeah, I don't think it's possible. For one, what counts and doesn't count as problematic in that way is subjective - even on tildes, there's been a lot of conflict recently about recent events, in which normal (by which I mean, not clearly red flags or anything) people, in genuine good faith, hold every possible perspective.
In those threads, you could easily make a good faith argument some of those posts are supporting "human rights violation" and someone else could make a good faith argument it's not, and there is no right answer, because the world is complicated and there is no universal arbiter.
It's not about discord in particular, like I said, it's groupchat-adjacent. A small discord server is functionally a groupchat.
Thanks. I suppose when I say colluding / supporting what I really mean is the more serious charge of facilitating. Meta made money from genocide because they have a user base sufficiently large to make it profitable. Since they behave that way, those of us who remove ourselves from their service no longer collude with their facilitation.
I don't know where the line is between "social media" and "a website where people can post," but I think this is largely a matter of not letting the scale of the website get too big to moderate.
One obvious sign of getting too big to moderate is hosting content in languages you don't know and countries that you don't understand very well. (As Facebook did.) Facebook grew far, far beyond being a system where moderation is effective. (They could have done more, though.)
A personal blog is likely to remain small enough to keep up with. A Mastodon instance could get too big to moderate, but only if you let it. (For example, there's no particular reason to accept open signups from strangers.)
On the other hand, I think we do need to use larger systems than that and there's no way to prevent them from being misused without also giving up on privacy. A good example of this is a phone system. You can be sure that people are doing unethical things with telephones, but someone providing cellular phone service has no way to prevent it, since they can't listen in on people's calls.
The only way to stay sane is to say "that doesn't count as collusion." Other people's phone calls have nothing to do with us. We aren't going to try to stop them and that's okay. We aren't going to give up on telephones because someone said they're unethical.
Social networks have lots of public content, though. They're moderated and users expect them to be moderated. That's the key difference.
I think an important difference between telephones and social media networks as they exist today is that phone networks are, for the most part, completely neutral. The network doesn't look at the calls and say, "these calls get priority and these calls get blocked." But social media companies have algorithms that prioritize content to increase engagement and create revenue. Sometimes these algorithms make non-neutral choices, and sometimes social media personnel interfere with content directly. Until we treat social media as a common carrier (and regulate it appropriately), I think it's going to be hard for them to be a neutral platform.
A perception of neutrality is good from a PR standpoint, but I think it’s less important than network architecture.
Voice calls are person-to-person and that architecture suppresses viral behavior. To reach a lot of people using phones, you need phone banks, which are labor-intensive, or robo-calls. This requires an organization (like a political campaign or a business) and doesn’t happen on a whim.
Contrast that with silly stuff going viral on a social network for no reason other than that people thought it was funny. Often the algorithm doesn't help, but the root cause is basic features we all take for granted. There's a reply button under every post that any stranger can use, along with retweeting or boosting. And even without that, it can be emulated with cut-and-paste and screenshots.
This is enough to amplify a lot of bad behaviors, no fancy algorithm needed. It can be a silly joke or terrible political memes or hate-tweeting, where people amplify things they hate, so you get more stuff you hate in your feed. We need moderation to suppress the bad effects of an amplifying underlying architecture.
The analogy and differentiation of the phone system is great.
I'm reminded of what the Meta/Facebook whistleblower Frances Haugen said about the issue of massive social media companies a few years ago.
Would that completely solve the problem? No. These social media sites would still be potential platforms for people to organize around promoting genocide and the like. So it would have to come with moderation. But without their algorithms, at the very least, it would stop these platforms from amplifying disinformation, misinformation, and hate speech.
Beyond that, content moderation is a tricky subject, particularly here in the United States. It's often an uphill battle, with free-speech absolutists preferring the simplicity and moral purity of black-and-white rules that equate content moderation with censorship. (You see this view a lot on Reddit.) And while I'm sympathetic to the idea that social media companies shouldn't be put in charge of what people can and can't say on the Internet, I think most people can agree that "anything that isn't clearly criminal is allowed" is inadequate.
If nothing else, I think there is broad approval for moderating calls to violence. If so, can large social media sites handle that? I believe they can. Particularly as AI tools evolve. But at the moment, they have quite clearly demonstrated that they are either unable or unwilling to even clear that rather low bar. (Elon Musk would have you believe that Twitter couldn't afford even the rather meager—and I would argue inadequate—moderation it was engaging in before he took over. But color me skeptical.)
I don't think you can truly claim that about Hacker News, as they are owned by Y Combinator, which is a VC accelerator/incubator. From their FAQ:
So while you are largely right, the platform does not just exist to exist entirely either.
Hacker News is filled with subtle advertisements and the entire site is a giant advertisement for YCombinator. They also occasionally have “YC company is hiring” posts which can’t be upvoted or commented, as well as “Launch HN” which can be upvoted and commented but are guaranteed to show up on the front page.
I will say the advertisements are a lot better than most sites. There’s at most one promoted page at a time (while each page has 30 posts), “subtle” advertisements have to be genuinely interesting or they’ll never make it to the front page, and their comments are practically guaranteed to have criticism (if your app sucks commenters will describe in detail how it sucks; if your app is really good, someone will find something they feel is wrong with it and describe how that makes it suck).
But it’s far from free of monetization. I’m certain it netted YCombinator millions if not billions in convincing at least one successful startup to join the YC program. It’s just advertising for a different audience. Even the bare-bones site design is just a way to signal “YCombinator is practical and BS-free, so you can trust us with your startup”.
The one thing the internet does that couldn’t happen in real life is connecting people with minority viewpoints.
This happens on any website regardless of algorithms or monetization strategies. The only way to effectively stop this from happening is to not index the internet or websites which would basically make the internet useless.
The same way the internet has helped people in marginalized group connect, discuss their issues, and share their perspectives with a wide audience, it has allowed people with hateful and harmful views to do the same.
The internet and social media sites are just tools. There’s no ethical internet or ethical social media, only ethical use of the internet or ethical use of social media.
“There’s no ethical internet or ethical social media, only ethical use of the internet or ethical use of social media.”
In that case the “users” could include those running the companies. They’re not all the same. Some are clearly more ethically operated than others.
The companies don't really have the ability to be ethical. Ultimately, public companies are legally bound to increase shareholder value. Radicalizing your user base just happens to be a very profitable way to operate a social media company.
Even if a single social media site decided to try to reduce radicalization on their site, they would be less competitive and the unethically operated sites would overtake them. Tildes is a good example where because it is not profit seeking and doesn't seek to grow for the sake of growth, it won't unseat the social media giants.
The government needs to be the one that regulates social media companies and the internet.
This is difficult in the US because of the first amendment. The first amendment made sense back when people had to gain a local audience in order to gain a larger audience. If you had a fringe belief, the majority of your community would suppress your voice before you could connect to others. The first amendment worked because minority views didn't spread very well. The people would self-regulate what they believed was ok as a society.
The internet breaks that feature of free speech which is why we see so many liberal democracies having trouble with misinformation and foreign influence campaigns. In an internet connected world, the restrictive, autocrat-led countries do not have this issue.
I don't want to live under an autocrat, but the alternative seems to be allowing hate speech to have a large platform on the internet.
I really don't know what the solution would be, but I think it has to be high level systemic change that changes fundamental aspects of our society.
This is a common myth. There's a duty not to take the money and run. Some things count as fraud. But they aren't required to justify spending decisions as being profitable according to some legal standard. There's a "business judgement rule" which means management gets to do what they want within very broad limits.
A recent example was Facebook losing tens of billions on virtual reality. Did that increase shareholder value? No court is going to touch that.
Another example: paying the CEO too much is usually perfectly legal. (Musk was an exception, according to some rather convoluted legal reasoning.)
Nobody's going to go after a social media company for spending too much money on moderation. They do want to make money, but that's a cultural norm, not a legal requirement.
It's also not necessarily true that more moderation drives away users. Too little moderation can drive people away too. Meanwhile, TikTok is apparently heavily moderated and seems pretty popular?
More generally, it may be hard to see how things will change, but the world does change in unpredictable ways. The biggest change in the last two years was the decline of Twitter. Who predicted that? I wouldn't bet on it happening in any given year, but I also wouldn't rule out the scenario where things get better on their own due to some new social network coming along that people like better.
Ok, yes, but if you strike 'legally bound' from that post, nothing else changes.
Companies that do not perpetually post more profits than the last quarter get punished, which means it emerges more as a natural law of a market than a legal one.
Companies in a competitive market will perpetually trend towards sacrificing quality in order to maintain or raise profits. If they don't, they will be "outcompeted" by some other company that does.
It's not precisely true that companies in a competitive market will perpetually trend towards sacrificing quality. There's a perennial market niche for luxury brands, and it prospers as long as it can maintain belief in "quality" among the wealthy.
And yes, those brands often fail when they realize they can make beaucoup bux by selling status signaling to the masses for a while. And the luxury brand owners just start the hype cycle all over again with a new brand.
I fear that Meta is now in a position to keep this cycle going indefinitely with shiny new platforms.
Meanwhile, there's a tiny market segment of old names in material luxury which few of us plebes know about, except perhaps the mechanical watch aficionados. They can't compromise significantly on quality without completely changing the nature of the products they sell. The barriers to entry in the industry are high, since the goods generally require skilled artisans and very specialized capital equipment. They depend on cultivation of direct relationships with patrons. And they're generally privately held small or medium business, not subject to the short-termism of public markets.
I'm not saying that near-feudal patronage based businesses are a good model for social media platforms, but we are here commenting via a platform that's a benevolent socialist dictatorship, so 🤷♀️.
What changes is that spending or cutting back on spending are not inevitable. They are often a choice. Choices do need to be justified, but CEO’s tend to be pretty good at justifying whatever they want to do as somehow best for shareholders, and many of them are pretty well insulated from having to worry about a corporate takeover. It can take many years to find out whether something like Uber’s unprofitable expansion will work out, and investors will go along if there’s a chance.
How much competition there is varies a lot. Sure, a small business like a restaurant is often going to be in a pretty precarious condition, but putting the largest, richest tech companies in the same bucket doesn’t make sense.
It might be a choice, but again: this is about the incentive structure, and trends over time.
If there's a button labelled "Decrease quality, increase profits," even if the current leadership refuses to push it, there's a decent chance that as leadership is replaced, future leadership will hit that button. And reversing course on policy decisions like that almost never happens, doubly so if the profits are realized.
It's the reason that cable became infested with ads, and the reason paid streaming is also becoming infested.
I agree in the sense that everything changes, often for the worse, and then it’s time to take your business somewhere else.
I don’t agree that incentives are destiny or that it’s easy to figure out which strategy will work best, even in the short term. Companies seem to succeed with different strategies, depending on industry and market segment. Apple comes to mind. They make plenty of money, but it doesn’t seem to be by decreasing quality?
There are some industries, like the airline industry, where incentives do seem pretty powerful and result in every business doing nearly the same thing. In others, there seems to be more variety.
There’s still plenty of room for innovation when making websites where people communicate.
I've heard plenty of people complaining about how Macbook build quality decreased over the last decade.
I'd also contend they're not exactly in a competitive market, given how hard they work to ensure that nobody else can sell a compatible computer in their ecosystem.
It did decrease (there were complaints about keyboards, etc) but seems to have gotten better again, which is why I brought it up as an example.
Do you think that there are any instances of this at the moment?
But I’m not so sure. I could set up a hate group tomorrow that doesn’t charge anybody and doesn’t have ads. No capitalism, but not ethical either.
I’m asking about which platforms are closer to your definition of ethical since you use the word yourself when you say it’s possible. (Reading my text, it looks potentially confrontational, but I’m only curious).
But you’ve answered it with the link to GNU Social. Thanks for introducing me to it.
I still decline to cleave to the position that capitalism is inherently unethical. Abuses exist in every system and capitalism is no different, nor particularly worse. I feel a constant need to remind people that we live in the most prosperous, peaceful, safe time in all of human history, primarily thanks to capitalism. As a proportion, fewer people than ever live in poverty. Fewer people die from preventable illnesses. Capitalism in its simplest, purest form states that it will reward someone for providing society with a thing society desires. It is up to us to corral the excesses of capitalism through social organization and democratic systems of governance, but I do not think it is reasonable to abolish it.
I think this applies to our social media as well - we need systems of interaction that inhibit the poisoning of the well of discourse. I'd say Tildes has done a good job with the user-based content moderation system, imperfect though it is. The federation system offers another approach, allowing independence of thought and discourse across a broad spectrum of interconnected spaces.
And frankly, I think the question is a bit misguided. Any engagement with society carries with it the burden of unethical practices. From the food you eat to the clothes you wear, people and the environment were all abused in some form for your benefit. Yet the goal should not be to shun these products in their entirety but to work collaboratively to make incremental improvements. I think the same holds for social media.
I will admit that I struggle with what to say next. For food and clothing the actions seem more obvious (if imperfect). Spread awareness, make conscious consumer choices, form or join organizations that work to improve the ethical quality of these goods. With social media it is harder, as their very egalitarian and monolithic nature makes it difficult to effect change. Consumer oriented decision making (what platform to use, where to comment etc.) seems the most obvious first step but larger, community driven movements are most probably a more critical step. So maybe, get involved in the social media platforms of your local communities - do you pay attention to subreddits or Facebook pages devoted to your city or neighborhood? What about to small hobbies that you enjoy? If you are engaged in these low level community oriented groups, maybe your ethical standards will be easier to uphold.
I'm afraid this ended up being a pretty stream-of-consciousness post, so I don't really know how to cleanly conclude it but that is basically my overall position - democratic, community driven action is the best way of limiting the excesses of social systems, both capitalism and social media. It's a terrible solution and simultaneously the best option we have.
Edit: as a quick addendum, I've always wondered about the possibility of a wikipedia-like social media, where the moderation and the rules themselves are collaboratively, democratically decided by users, with potentially even elections for important roles in the group. Maybe something to consider (although I am aware that wikipedia has its own problems)
I don't wanna get too into the weeds on this but I'm not sure this is a definition of capitalism that most people would agree with. Generally people think of capitalism more in terms of private ownership. I don't really think a socialist society would function without rewarding people for their work to benefit society. I think a way to think of this on social media would be to stop letting these tech billionaires lay claim to our data.
Does it really make sense for these platform owners to 'own' our posts? I don't personally think so. You can claim that everyone agreed to the terms of service but that's not really much of a choice when using these services is often near-essential. How do you expect to sell things without Facebook marketplace, for example? And if they don't have access to our data, how do they make money? And if they can't make money, how do they stay open?
I just don't really see how you can have a profit motive and also have good social media. Maybe there is an incentive structure out there that makes it work, but I'm skeptical.
This is it 100%. The primary differentiating factor between socialism and capitalism is ownership. Otherwise, there would be no space for market socialists.
A proper socialist society would have no concept of the phrase 'salaried employee'. You would be paid for hours worked, at a rate which accounts for your productivity relative to your peers. Ideally in a transparent and democratic way.
I’ve never used Facebook Marketplace. How does it compare to Craigslist (for local sales) or Ebay?
I find it quite a bit better than Craigslist. The fact that it’s attached to Facebook means there’s a picture associated, and you can try to do some metaphorical profile stalking to see if they’re a real person. Of course, that can be faked, but it’s another layer of effort.
Also, it's just a much broader pool. Craigslist's shitty 90s interface is delightful to some, but in the end I just want to buy furniture cheaply and Facebook is more accessible to more people.
I have tried multiple times to simultaneously post listings on both Craigslist and FB Marketplace to increase my chances at finding a buyer. Both times FB Marketplace actually led me to a sale.
One time I listed a piano keyboard on Craigslist at a certain price point, but in the description I said I had some accessories I was willing to sell at an additional cost. Some Craigslist warrior reported my listing because it didn’t include the full cost upfront I suppose. That post was taken down within 30 minutes of its posting. Meanwhile on FB Marketplace, I immediately got an interested buyer who negotiated the price to include the accessories anyways. I didn’t mind the negotiation because it was a parent gifting a present for her child, which was relatively straightforward to confirm from her FB profile.
The other situation involved selling a bike where FB Marketplace gave yet another immediate response and I never got any bite from Craigslist.
As much as I absolutely despise having to be a FB user to enable the above, it was undeniably more effective. I used to hate that FB was the only method for planning events, but Partiful has been conveniently filling that gap. I'm not sure if there's a better marketplace app that conveys the same level of user trust as FB Marketplace, but Craigslist has certainly gone downhill from its all-purpose listing service to a more sketchy corner of the web.
I had a dinner chat earlier tonight specifically about how most of Craigslist’s main features have been replicated by tech companies these days (i.e. AirBnB, Indeed, FB Marketplace, dating apps). Perhaps the masses are just easily swayed by a pretty UI and a marketing budget, but that strategy has consistently been eating Web 1.0’s lunch for the past 10+ years.
Craigslist was never not-sketchy. It is/was a replacement for newspaper classifieds, and those had a decent level of sketch as well. It was a major advertising platform for prostitutes, after all.
It also probably doesn't help that the majority of the base went to Facebook, leaving the oddballs and scammers.
I also refuse to use it, but it's objectively made craigslist worse. Most people use FB marketplace instead of craigslist.
In case you haven’t heard of it, Jimmy Wales the Wikipedia founder created https://www.trustcafe.io/en
I've been happy with Pillowfort so far. (I found it through a recommendation here on Tildes.) It's a blogging and social media platform, kind of similar to tumblr, but much smaller and not corporate-owned.
The vibe I get from the community is that there are a lot of artists, LGBTQ+ folk and allies, neurodivergent folk, geeks and fandom folk, and people who enjoy "old internet" things. NSFW content is allowed, but must be properly tagged (users must opt-in to view NSFW content on the platform).
I've noticed that I get more engagement on my posts on Pillowfort compared to, for example, Twitter, even though I'm relatively new and have fewer followers on Pillowfort. People on Pillowfort seem to take more time to leave meaningful, thoughtful, and kind comments. (I've noticed a similar commenting culture here on Tildes too!)
The main drawbacks compared to "big social media":
For people who want to post AI-generated content, Pillowfort's current no-gen-AI policy would also be a drawback. (For me this is a plus as I'm kind of tired of seeing AI-generated content everywhere.)
Edit: It's a bit slow at the moment because there's a large influx of tumblr users signing up on Pillowfort.
The only winning move is not to play.
That's how I've personally felt about social media for quite a while now. The whole Facebook-genocide thing really cemented that in my mind, after having already (at that time) drastically reduced my social media presence after seeing research in my graduate Cognitive Science courses that demonstrated it negatively impacts mental health.
I don't believe there will be a social media site that avoids the plethora of ethical problems that plague the current social media landscape until (if ever) one can get traction and not be ultimately driven by Capitalism.
Ethics and profit are directly at odds in the context of many aspects of social media. And we all know how that ends up.
I agree with everything you say, but I wonder if removing the profit aspect is even enough. As I mentioned in my last paragraph, if I ran a non-profit, I would struggle to moderate it.
Freecycle.org for example is a non-profit website to let people give away their old belongings. It's great, but the way they run it is that every post has to be approved by an admin first. Would that be scalable for a hobbyist / non-profit? I can't imagine it. And the alternative would likely be a lot of content that needs to be removed but isn't.
You're on Tildes.net - you're safe here, treat it as your cyberhome.
Somebody mentioned Hackernews - I personally prefer lobste.rs, regarding technical topics.
Also, I've recently made my phone notify me less, resigned from some apps and try to talk with people more, be it via phone or in person. I realised I don't really need to be informed about everything and don't need to be online every second. My mind is having a good relief and I've actually started reading books again after 10 years of not doing it.
So, to answer your question, actually... :P
I draw the line where the code is obscure - be it closed source software or just a "free" product of a corporation. Also, when I hear that something is lacking moderation, and it's too often and too serious, I wonder if that's a place I want to be in - if they aren't able to moderate it, it shouldn't have so many users, or even shouldn't exist. Where it's possible, of course - e.g. it took me some time to feel comfortable with the idea of not having Facebook. When I felt I was ready I did it... three times, going back and forth, and after the third time I finally stopped recreating the account. Also, after more time, I found out that my perfect alternative is just physical contact with people. So that's when we created a local hackerspace. Combining it with a chat, a mailing list, and finding tildes.net makes for a good and healthy, but still a little bit overwhelming, dose of offline & online social interaction, including family too. I also realised I simply don't have that much time to scroll my phone.
One thing I like very much is getting rid of FOMO, which I think leaving corporate social media helped a lot with.
Open Source is no guarantee of ethics, but generally I've noticed that companies that do not share source code for their core product are less ethical than ones that do.
Facebook and Google might release a lot of open source code....but notice they keep their actual products as closely held secrets.
I don't think it's perfect, but Mastodon remains the social network I'm most happy with. At a small scale, private groups or well-moderated ones like tildes are excellent, but for any kind of broadly public network I don't think anything that's entirely controlled by a single company will ever be safe or long-lasting, and in the shorter term it's very unlikely to be ethical, as it's very hard to profit off a social network without heavy advertising and/or selling users' data.
The common failure case of large social networks seems to be that they end up in control of a management team (or worse, a single guy) that doesn't have the network's best interests in mind. With Mastodon I can and have picked a server based on how much I trust the people running it, and it's far more than I trust Zuckerberg, Musk, or Dorsey. But even if I don't trust them as much, I'm not beholden to their choices forever — I can move to another server, or run my own, there's effectively no way to permanently ban someone from the Mastodon network (which is not to say I don't think moderation is good and important).
Community or privately funded services also seem like a far safer way to run the infrastructure of a social network, with lots of smaller costs spread across the ecosystem instead of one hyperfunded tech corporation that'll take the entire thing out with it when it fails to make a profit, or sell it off to someone that has entirely different priorities for it.
It's a hard problem to solve. One way is to hire an army of moderators, give them proper training and continuously updated guidelines. But this costs a fortune, so it requires a functioning business model (usually ads, with all the issues they bring) and investors willing to take a financial hit on moral grounds.
A second way is to do what Tildes does; use an invitation only strategy to control the user base and their culture. I assume it has features in place to trace invitation chains and ban related users en masse, if for example a Stormfront user gets a hold of an invitation and brings their whole community over.
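For what it's worth, the invitation-chain idea is mechanically simple. Here's a hypothetical sketch (I have no idea how Tildes actually stores invites; the names here are my own): each user records who invited them, and a ban can then cascade to everyone downstream of a compromised invitation.

```python
# Hypothetical sketch of invitation-chain tracing, not Tildes' actual code.
# Each user records who invited them, so moderation can find everyone whose
# membership traces back through a given account.
from collections import defaultdict

class InviteTree:
    def __init__(self):
        self.inviter = {}                  # user -> who invited them (or None)
        self.invitees = defaultdict(list)  # user -> users they invited

    def register(self, user, invited_by=None):
        self.inviter[user] = invited_by
        if invited_by is not None:
            self.invitees[invited_by].append(user)

    def descendants(self, user):
        """All users whose invitation chain passes through `user`."""
        stack, found = [user], []
        while stack:
            for child in self.invitees[stack.pop()]:
                found.append(child)
                stack.append(child)
        return found
```

So if a Stormfront user sneaks in and starts inviting their community, `descendants()` gives moderators the whole subtree to review or ban in one pass.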
There's a third way, which is deceptively simple (but has some limitations). If you track users' activity to make sure they only post/vote/comment on content they've actually consumed, it makes spam and disinformation campaigns highly inefficient. It also stops most clickbait content and rage bait comments, almost completely removing the need for active moderation. Weigh votes by consumption time, and insightful content will also compete on fair terms with memes and other short form content, significantly increasing the overall quality of content. Ah, I see you've spotted the shill hat I'm wearing; let me introduce you to Readup, which by the way is open source and community funded.
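To make the consumption-weighting idea concrete, here's a rough sketch. The function names and the linear weighting curve are my own assumptions, not Readup's actual algorithm: a vote counts in proportion to how much of the piece the voter plausibly read, so drive-by votes on unread content carry almost no weight.

```python
# Illustrative sketch of consumption-weighted voting (my own assumptions,
# not Readup's real implementation).

def vote_weight(seconds_read, estimated_read_seconds):
    """Weight a vote by the fraction of the content plausibly consumed."""
    if estimated_read_seconds <= 0:
        return 0.0
    # Cap at 1.0 so lingering on a page doesn't buy extra influence.
    return min(seconds_read / estimated_read_seconds, 1.0)

def weighted_score(votes):
    """votes: list of (seconds_read, estimated_read_seconds) per upvote."""
    return sum(vote_weight(s, est) for s, est in votes)
```

Under a scheme like this, two people who actually read a five-minute article outweigh five people who upvoted it two seconds after opening it, which is the fair-competition effect described above.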
Readup looks really interesting. I wonder how it would work in more casual settings where people aren't posting articles (I'm thinking of a hobby group, in my case), but it's unique and sounds much more substantive than your average social site.