Discord will start designating entire servers as NSFW, preventing all under-18 users, as well as all users on iOS, from accessing them
27 votes -
Twitch will ban users for 'severe misconduct' that occurs away from its site
18 votes -
Thoughts on running online communities from the creator of Improbable Island
15 votes -
How would you improve advertising on Reddit?
Let me preface that I'm well aware that if given the choice between frequent, untargeted ads or fewer targeted ads, the average Tilderino's response would be "Neither."
However, given that social media at scale has yet to establish a sustainable business model that doesn't rely on advertising (people like free content, after all), it seems advertising has become a necessary evil (and has pervaded nearly all forms of media for the past century regardless).
With that in mind, I think coming up with creative solutions to deliver relevant advertising while preserving user privacy and avoiding destructive feedback loops (i.e. where the search for ad revenue compromises the user base and content generation) is an interesting thought exercise. This is one of social media's largest problems, imho, but it might be easier to analyze just Reddit as a platform due to its similarities (and notable differences) to Tildes.
A couple thoughts of my own:
- Whitelist "safe" subreddits - A massive problem for Reddit is identifying content that brands want to avoid being associated with (e.g. porn, violence, drugs). While new subreddits crop up every day, the large ones change much more slowly and could be classified as safe content spaces (e.g. /r/aww).
- User subreddit subscriptions - Rather than target ads based on the subreddit currently being viewed, why not use the subs people have voluntarily indicated they are interested in?
- Allow users to tag content - While people can report content to the mods today, there is no way to tag content (as Tildes has) at the user level. Content that's inappropriate for advertising isn't necessarily a reportable offense. By letting users classify content, ML models could be trained to separate "good" content from "bad" for ad placement (a rough sketch follows this list).
- Use mods to determine content appropriateness - User-supplied data may introduce too much noise into any given dataset, and perhaps mods are a better subjective filter to rely on. Certain subreddits certainly have biased mods, but without overhauling content moderation entirely, could mod bans/flair be used to indicate content suitable for ads?
- Use computer vision to classify content - While this wouldn't work at scale, an up-and-coming post could have a nebulous title and difficult-to-decipher sarcastic comments, while the post itself is an image macro or annotated video whose visuals reveal the subject matter much more effectively.
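As a rough illustration of the user-tagging idea above, here's a minimal sketch (Python with scikit-learn) of how user-applied tags might seed a brand-safety classifier. Everything here is invented for illustration: the example posts, the labels, and the assumption that tags map cleanly to a binary "ad-safe" signal. It's a baseline sketch, not anything Reddit actually runs.

```python
# Minimal sketch: train a "brand-safety" text classifier from hypothetical
# user-tag data. Posts users tagged as unsuitable for ads get label 1,
# untagged posts get label 0. All data below is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Look at this adorable rescue puppy learning to swim",
    "Graphic footage from last night's riot, viewer discretion advised",
    "My grandmother's apple pie recipe, step by step",
    "Detailed guide to buying controlled substances online",
]
labels = [0, 1, 0, 1]  # 1 = users tagged it "unsuitable for ads"

# TF-IDF over unigrams and bigrams feeding a logistic regression:
# a deliberately simple baseline before reaching for deep models.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(posts, labels)

# Score a new post; anything above a conservative threshold would be
# excluded from ad inventory rather than shown beside a brand's campaign.
prob_unsafe = model.predict_proba(["Cute otters holding hands at the aquarium"])[0][1]
print(f"P(unsafe) = {prob_unsafe:.2f}")
```

The interesting design question is the labels rather than the model: mod removals and flair (per the bullet above) could supply a cleaner training signal than raw user tags, at the cost of inheriting each mod team's biases.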
To be clear, the spirit of my initial prompt isn't "how can Reddit make more money?" per se, but how it can find a sustainable business model without destroying itself or harming society at large. Facebook and Twitter seem to have optimized for "engagement" metrics, which leads to the prioritization of outrage porn and political divisiveness. Snapchat and Instagram seem to have succumbed to being mostly ad delivery engines, with some overly filtered content from "real life" influencers (read: marketers) strewn in between. None of these seem like a net good for society.
What are your thoughts? Perhaps Big Tech social media is irredeemable at this point, but I'm trying not to take such a defeatist attitude, and would rather explore positive solutions.
9 votes -
The internet’s most beloved fanfiction site is undergoing a reckoning
15 votes -
With Parler down, QAnon moves onto a ‘free speech’ TikTok clone
10 votes -
Facebook's Oversight Board announces its first decisions, overturning Facebook's decision in four out of five cases
8 votes -
Twitter announces Birdwatch, a community-based approach to misinformation
21 votes -
The great Wikipedia titty scandal
36 votes -
Thoughts on the difficulties of content moderation, and implications for decentralised communities
12 votes -
Twitter requests deletion of three inciting tweets from Donald Trump. If the tweets remain undeleted, the account will remain locked.
35 votes -
Twitter will force users to delete COVID-19 vaccine conspiracy theories
11 votes -
What is happening in r/CentOS and why /u/redundantly should not be a moderator
9 votes -
Parler’s got a porn problem: Adult businesses target pro-Donald Trump social network
13 votes -
Open letter from Facebook content moderators re: pandemic
7 votes -
Reddit quarantined: Can changing platform affordances reduce hateful material online?
4 votes -
Reddit worries it’s going to be crushed in the fight against Big Tech
16 votes -
Reddit announces "Predictions" - Allowing users to bet on the outcomes of polls with Coins (purchased with real money), where moderators are responsible for choosing which option wins
38 votes -
Facebook's Supreme Court arrives
4 votes -
Twitter won’t let The New York Post tweet until it agrees to behave itself
13 votes -
Facebook and Twitter take unusual steps to limit spread of New York Post story
16 votes -
Why Facebook can't fix itself - The platform is overrun with hate speech and disinformation, but the company's strategy seems focused on managing perception of the problem instead of addressing it
14 votes -
Facebook is updating their hate speech policy to prohibit and remove Holocaust Denial content
16 votes -
Masnick's Impossibility Theorem: Content moderation at scale is impossible to do well
22 votes -
Content moderation best practices for startups
3 votes -
Content moderation case study: Nextdoor faces criticism from volunteer moderators over its support of Black Lives Matter (June 2020)
7 votes -
Reddit moderator accounts compromised in coordinated hack, hundreds of subreddits vandalized
29 votes -
Facebook fired an employee who collected evidence of right-wing pages getting preferential treatment
14 votes -
Facebook has an internal simulation of the site populated entirely by bots that they're using to test the effects of possible changes
8 votes -
Reddit releases their new content policy along with banning hundreds of subreddits, including /r/The_Donald and /r/ChapoTrapHouse
85 votes -
Is anyone here involved with Stack Exchange/Stack Overflow? How do you feel about the new moderator agreement?
First post on Tildes (though I've commented some before). I'm a mod on one of the "beta" sites, and have been for almost four years now.
I don't follow any of the other sites really, and certainly not Meta, so the whole Monica scandal kinda happened without me noticing. After the fallout (or amidst, I guess), Shog9 and several other community managers that I liked were fired, with seemingly no notice or cause.
Then after that, there seemed to be a push to create a "mod council" to create standards for behavior and for removal and reinstatement of moderators.
But the whole thing has seemed so needless; everything could have been cleared up with a few heartfelt announcements (and/or apologies), yet the executive team at SE has been so damn opaque about everything.
Then, to top things off for me personally, the community leaders were explicit that the votes for mod council members would not be treated as binding, so what's the point? If they're not freely elected, the council is just a rubber stamp. The new moderator rules amount to "abide by the council-approved rules and whatever the community managers say," and "moderators will be removed and reinstated per the council-approved procedure, except when SE doesn't want to use that policy."
So the mod council vote was non-binding, and on top of that SE is making a point of stating that even the rules the council approves don't matter.
It's this last bit that feels like too much of an insult to me. I've told my fellow moderators that I'm not planning on signing the agreement. I don't understand why there has to be such an adversarial relationship here when I'm volunteering my time.
I don't want to leave, but everything is just rubbing me the wrong way. Please someone help me understand how I'm wrong. I just can't understand the way things have been publicly announced.
17 votes -
Facebook creates fact-checking exemption for climate deniers
17 votes -
Facebook vowed to investigate horrific abuse by anti-vaxxers. Nine months later, no one was penalized.
10 votes -
Twitter labels Donald Trump video tweet as "manipulated media" as it cracks down on misinformation
13 votes -
Twitter starts rolling out audio tweets on iOS
7 votes -
Reddit is finally facing its legacy of racism
45 votes -
CDA Section 230 explained: The important and often-misunderstood legal foundation of the social internet
6 votes -
Facebook groups are falling apart with drama, infighting, and deleted comments about Black Lives Matter posts
4 votes -
One Twitter account reposted everything US President Donald Trump tweets. It was suspended within three days.
34 votes -
Twitter hides Donald Trump tweet for 'glorifying violence'
20 votes -
Zuckerberg dismisses fact-checking after bragging about fact-checking
6 votes -
Twitter allows new tweets to restrict replies to "everyone, people you follow, or only people you mention"
14 votes -
Will Facebook’s oversight board actually hold the company accountable?
5 votes -
Reddit removes new chat room feature after one day in the wake of moderator protests and bugs
33 votes -
Facebook approved ads with coronavirus misinformation, in an experiment which raises questions about how the social media giant screens ads on its platform
8 votes -
YouTube has banned all conspiracy theory videos falsely linking coronavirus symptoms to 5G networks
26 votes -
The difficulties of moderating COVID-19 misinformation when even statements from official sources are questionable
7 votes -
Internal TikTok policies instructed moderators to suppress videos featuring unattractive, disabled, or poor people so they wouldn't scare off new users, as well as to remove specific types of content
21 votes -
Reddit's 2019 Transparency Report
15 votes