18 votes -
Thoughts on running online communities from the creator of Improbable Island
15 votes -
How would you improve advertising on Reddit?
Let me preface that I'm well aware that if given the choice between frequent, untargeted ads or fewer targeted ads, the average Tilderino's response would be "Neither."
However, given that social media at scale has yet to establish a sustainable business model that doesn't rely on advertising (people like free content, after all), it seems advertising has become a necessary evil (and has pervaded nearly all forms of media for the past century regardless).
With that in mind, I think coming up with creative solutions to deliver relevant advertising while preserving user privacy and avoiding destructive feedback loops (i.e. where the search for ad revenue compromises the user base and content generation) is an interesting thought exercise. This is one of social media's largest problems, imho, but it might be easier to analyze just Reddit as a platform due to its similarities (and notable differences) to Tildes.
A couple thoughts of my own:
- Whitelist "safe" subreddits - A massive problem for Reddit is identifying content that brands want to avoid association with (e.g. porn, violence, drugs). While new subreddits crop up every day, the large ones do not change so fast and could be classified as safe content spaces (e.g. /r/aww)
- User subreddit subscriptions - Rather than target ads based on the subreddit currently being viewed, why not use the subs people have voluntarily indicated they are interested in?
- Allow users to tag content - While people can report content to the mods today, there is no ability to tag content (like Tildes has) from a user level. Content that's inappropriate for advertising may not necessarily be a reportable offense. By allowing users to classify content, better models for determining "good" content vs. "bad" could be developed using ML.
- Use Mods to determine content appropriateness - User supplied data may introduce too much noise into any given dataset, and perhaps mods are a better subjective filter to rely on. Certain subreddits can have biased mods for sure, but without trying to overhaul content moderation entirely, could mod bans/flair be used to indicate suitable content for ads?
- Use computer vision to classify content - While this wouldn't work at scale, an up-and-coming post could have a nebulous title and difficult-to-decipher sarcastic comments. The post itself could be an image macro or annotated video that could be used to determine the subject matter much more effectively.
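The first two bullets above (a whitelist of brand-safe subreddits combined with a user's voluntary subscriptions) can be sketched as a tiny targeting filter. This is purely illustrative, with hypothetical subreddit names and categories, not any real ad system's logic:

```python
# Hypothetical whitelist mapping brand-safe subreddits to interest
# categories an advertiser might target. Anything not listed here is
# treated as unsafe/unknown and never used for targeting.
SAFE_SUBREDDITS = {
    "aww": "pets",
    "cooking": "food",
    "gadgets": "tech",
}

def ad_categories_for(user_subscriptions):
    """Return interest categories derived only from subreddits the user
    voluntarily subscribed to AND that appear on the safe whitelist."""
    return sorted(
        {SAFE_SUBREDDITS[s] for s in user_subscriptions if s in SAFE_SUBREDDITS}
    )

# A user subscribed to an unlisted subreddit contributes nothing to
# targeting, so NSFW or fringe subscriptions are silently ignored.
print(ad_categories_for(["aww", "gonewild", "cooking"]))  # ['food', 'pets']
```

The design choice here is that targeting signals come only from the intersection of the two lists, so neither the currently viewed page nor unvetted subscriptions ever leak into ad selection.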
To be clear, the spirit of my initial prompt isn't "how can Reddit make more money?" per se, but how it can find a sustainable business model without destroying itself or harming society at large. Facebook and Twitter seem to have optimized for "engagement" metrics, which leads to the prioritization of outrage porn and political divisiveness. Snapchat and Instagram seem to have succumbed to being mostly ad delivery engines, with some overly-filtered content from "real life" influencers (read: marketers) strewn in between. None of these seem like a net good for society.
What are all your thoughts? Perhaps Big Tech social media is irredeemable at this point, but I'm trying not to take such a defeatist attitude and instead explore any positive solutions.
9 votes -
The internet’s most beloved fanfiction site is undergoing a reckoning
15 votes -
With Parler down, QAnon moves onto a ‘free speech’ TikTok clone
10 votes -
Facebook's Oversight Board announces its first decisions, overturning Facebook's decision in four out of five cases
8 votes -
Twitter announces Birdwatch, a community-based approach to misinformation
21 votes -
The great Wikipedia titty scandal
36 votes -
Thoughts on the difficulties of content moderation, and implications for decentralised communities
12 votes -
Many people here believe that social media can't be both large and have good discussion because the human brain isn't made to interact with large numbers of people. What do you think of this?
P.S. The difference between this post and this post is that in this one I want to ask questions and get people's opinions and answers.
Here are a few examples, the last one being an argument between a few people where most people, including Deimos, agreed with this idea.
Personally, I find this idea almost terrifying, because it implies that social media in its current form cannot be fixed by changing or expanding human moderation, automoderation, or fact-checking, since moderation can't reasonably occur at scale at all.
However, I have 2 questions:
1: If large social media platforms can't really be moderated, what should we do with them? The implied solution is balkanizing social media until the 'platforms' are extended social circles small enough to moderate and sustain good discussion (or, more practically, integrating them into a federated service like Mastodon, which is designed to be split this way, or something like Discord). An alternative I've heard is to redo the early 2000s and have fan forums for everything, avoiding context collapse and giving each site something that glues its users together (an approach I'm far more supportive of), or a rationale for invite systems and stricter control over who enters your site. But that doesn't address the idea that once your site hits a certain user count it will inevitably worsen, and that this stems from human nature (Dunbar's number, i.e. the maximum number of stable social relationships a person can maintain) and so is inevitable, almost natural.
2: Why is moderation impossible to do well at large scales? I think moderation is analogous to law enforcement or legal systems (though the many Reddit mods here can certainly give their opinions on that), and it likely isn't the kind of thing that can be done at a profit, but I'm not entirely sure why it would be wholly impossible. One reason I've heard is that moderators need to understand the communities they're moderating, but I'm not sure why that couldn't remain a requirement as the site grows, or why adding more mods would make it worse (mods disagreeing with each other while moderating seems quite likely, but unrelated to this).
20 votes -
Statistics on bans and transparency
Do we have any statistics on how many users have been banned and why they’ve been banned? What information should be or remain public? Some forum sites let you see the banned users post and comment history from prior to their ban; is there any value in that?
Unrelated: how many Tildes-ers are we up to now?
18 votes -
Twitter will force users to delete COVID-19 vaccine conspiracy theories
11 votes -
What is happening in r/CentOS and why /u/redundantly should not be a moderator
9 votes -
Parler’s got a porn problem: Adult businesses target pro-Trump social network
13 votes -
Open letter from Facebook content moderators re: pandemic
7 votes -
Reddit quarantined: Can changing platform affordances reduce hateful material online?
4 votes -
Reddit worries it’s going to be crushed in the fight against Big Tech
16 votes -
Reddit announces "Predictions" - Allowing users to bet on the outcomes of polls with Coins (purchased with real money), where moderators are responsible for choosing which option wins
38 votes -
Facebook's Supreme Court arrives
4 votes -
Twitter won’t let The New York Post tweet until it agrees to behave itself
13 votes -
Facebook and Twitter take unusual steps to limit spread of New York Post story
16 votes -
Why Facebook can't fix itself - The platform is overrun with hate speech and disinformation, but the company's strategy seems focused on managing perception of the problem instead of addressing it
14 votes -
Facebook is updating their hate speech policy to prohibit and remove Holocaust Denial content
16 votes -
Facebook moderators, working as contractors at Accenture, are being forced to return to the office despite the risk of COVID-19
8 votes -
Masnick's Impossibility Theorem: Content moderation at scale is impossible to do well
22 votes -
Should we be able to view comments/posts where mods/admins are doing their roles and not doing them separately?
What I mean by this is:
Sometimes @Deimos posts something related to his mod/admin work, like saying he will be locking a thread or adding something new. But that's not all he does: he also makes regular topics and comments about everyday things, and he doesn't need to use an alt account for that. I feel that his posts about mod/admin work and his posts about anything else that interests him should be viewable separately.
Thoughts?
9 votes -
Content moderation best practices for startups
3 votes -
Inside Roblox's war on porn - The game platform is extremely popular with children, and the company is waging an endless fight against "condo games": explicit, often sex-themed user creations
19 votes -
Content moderation case study: Nextdoor faces criticism from volunteer moderators over its support of Black Lives Matter (June 2020)
7 votes -
Reddit moderator accounts compromised in coordinated hack, hundreds of subreddits vandalized
29 votes -
Facebook fired an employee who collected evidence of right-wing pages getting preferential treatment
14 votes -
Facebook has an internal simulation of the site populated entirely by bots that they're using to test the effects of possible changes
8 votes -
Reddit releases their new content policy along with banning hundreds of subreddits, including /r/The_Donald and /r/ChapoTrapHouse
85 votes -
Facebook creates fact-checking exemption for climate deniers
17 votes -
Facebook vowed to investigate horrific abuse by anti-vaxxers. Nine months later, no one was penalized.
10 votes -
Twitter labels Donald Trump video tweet as "manipulated media" as it cracks down on misinformation
13 votes -
Twitter starts rolling out audio tweets on iOS
7 votes -
Reddit is finally facing its legacy of racism
45 votes -
CDA Section 230 explained: The important and often-misunderstood legal foundation of the social internet
6 votes -
Facebook groups are falling apart with drama, infighting, and deleted comments about Black Lives Matter posts
4 votes -
One Twitter account is reposting everything Trump tweets. It was suspended within three days
34 votes -
Twitter hides Donald Trump tweet for 'glorifying violence'
20 votes -
Zuckerberg dismisses fact-checking after bragging about fact-checking
6 votes -
Twitter allows new tweets to restrict replies to "everyone, people you follow, or only people you mention"
14 votes -
Will Facebook’s oversight board actually hold the company accountable?
5 votes -
Reddit removes new chat room feature after one day in the wake of moderator protests and bugs
33 votes -
Facebook approved ads with coronavirus misinformation, in an experiment which raises questions about how the social media giant screens ads on its platform
8 votes -
YouTube has banned all conspiracy theory videos falsely linking coronavirus symptoms to 5G networks
26 votes -
The difficulties of moderating COVID-19 misinformation when even statements from official sources are questionable
7 votes -
Internal TikTok policies instructed moderators to suppress videos featuring unattractive, disabled, or poor people so they wouldn't scare off new users, as well as to remove specific types of content
21 votes