Twitter testing prompts on Android and iOS for 'intense' conversations
@Twitter Support: Ever want to know the vibe of a conversation before you join in? We're testing prompts on Android and iOS that give you a heads up if the convo you're about to enter could get heated or intense. This is a work in progress as we learn how to better support healthy conversation.
4 votes -
High Court of Australia rules that media outlets are publishers of third-party Facebook comments
12 votes -
OnlyFans will prohibit "content containing sexually-explicit conduct" (but still allow nudity) starting October 1, at the request of banking/payment providers
50 votes -
Reddit has banned the misogynistic "Men Going Their Own Way" subreddits r/MGTOW and r/MGTOW2
AHS: 🦀. 🦀. 🦀. MGTOW and MGTOW2 are banned 🦀. 🦀. 🦀.
SRD: r/MGTOW has been banned. r/MGTOW was quarantined back in January 2020 after being cited in an FBI prosecution brief during the sentencing of a U.S. Coast Guard officer planning a domestic terrorist attack.
37 votes -
Facebook cracks down on discussing ‘hoes’ in gardening group
12 votes -
Why a YouTube chat about chess got flagged for hate speech
9 votes -
Conservative social networks keep making the same mistake
13 votes -
Not trying to make waves but why are articles posted to news that relate to lgbt moved?
As a new member I am really hesitant to post this, but I recently posted an article to ~news that was related to lgbt issues and it was moved to ~lgbt. I fully support a subsection devoted to lgbt, but news should be news regardless.
Just because it has an lgbt angle does not mean it should be moved. I'm not even lgbt myself, but I find it sort of hurtful that a news article was pushed off ~news. So I ask this, and once again not trying to make waves: why?
Edit: I would love to be a member of this community, as I am personally seeking a less asshole-filled Reddit alternative. But pushing a news article to another ~ just because it relates a bit more to them shouldn't be a thing. If you are tolerant, it relates to us all. And yes, I know I posted it in ~news because I was trying to participate and I'm a news junkie.
Sorry.
Edit 2: This was a sad, sorry way to come into this community. I apologize.
19 votes -
Reddit is about to delete a lot of subreddits based on post activity metrics
31 votes -
Should Tildes have rules for healthcare advice?
Sometimes Tildes users give people healthcare advice. Sometimes that advice disagrees with the advice already given by a qualified, registered healthcare professional. That might be okay if the Tildes advice were compliant with national guidance, but sometimes it isn't. Sometimes it's bad, dangerous advice.
Should Tildes have rules about this?
16 votes -
Reddit faces lawsuit for failing to remove child sexual abuse material
15 votes -
Is content moderation a dead end?
19 votes -
Life’s a Bitche: Facebook says sorry for shutting down town’s page
6 votes -
Discord will start designating entire servers as NSFW, and prevent all under-18 users from accessing them, as well as all users on iOS
27 votes -
Twitch will ban users for 'severe misconduct' that occurs away from its site
18 votes -
Thoughts on running online communities from the creator of Improbable Island
15 votes -
How would you improve advertising on Reddit?
Let me preface that I'm well aware that if given the choice between frequent, untargeted ads or fewer targeted ads, the average Tilderino's response would be "Neither."
However, given that social media at scale has yet to establish a sustainable business model that doesn't rely on advertising (people like free content, after all), it seems advertising has become a necessary evil (and has pervaded nearly all forms of media for the past century regardless).
With that in mind, I think coming up with creative solutions to deliver relevant advertising while preserving user privacy and avoiding destructive feedback loops (i.e. where the search for ad revenue compromises the user base and content generation) is an interesting thought exercise. This is one of social media's largest problems, imho, but it might be easier to analyze just Reddit as a platform due to its similarities (and notable differences) to Tildes.
A couple thoughts of my own:
- Whitelist "safe" subreddits - A massive problem for Reddit is identifying content that brands want to avoid association with (e.g. porn, violence, drugs). While new subreddits crop up every day, the large ones do not change so fast and could be classified as safe content spaces (e.g. /r/aww)
- User subreddit subscriptions - Rather than target ads based on the subreddit currently being viewed, why not use the subs people have voluntarily indicated they are interested in?
- Allow users to tag content - While people can report content to the mods today, there is no ability to tag content (like Tildes has) at the user level. Content that's inappropriate for advertising may not necessarily be a reportable offense. By allowing users to classify content, better models for determining "good" content vs. "bad" could be developed using ML (a rough sketch combining this with the whitelist idea follows this list).
- Use Mods to determine content appropriateness - User supplied data may introduce too much noise into any given dataset, and perhaps mods are a better subjective filter to rely on. Certain subreddits can have biased mods for sure, but without trying to overhaul content moderation entirely, could mod bans/flair be used to indicate suitable content for ads?
- Use computer vision to classify content - While this wouldn't work at scale, an up-and-coming post could have a nebulous title and difficult-to-decipher sarcastic comments. The post itself could be an image macro or annotated video that could be used to determine the subject matter much more effectively.
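To make the whitelist and user-tagging ideas above concrete, here is a minimal sketch of how the two signals could be combined into an ad-eligibility check. Everything in it (the subreddit list, the tag names, the vote threshold, and the `is_ad_eligible` function) is a hypothetical illustration, not anything Reddit actually implements or exposes:

```python
# Hypothetical sketch: combine a curated subreddit whitelist with user-supplied
# content tags to decide whether a post is eligible for ad placement.
# All names and thresholds are illustrative assumptions, not Reddit APIs.

from dataclasses import dataclass, field

BRAND_SAFE_SUBREDDITS = {"aww", "earthporn", "cooking"}  # hypothetical curated whitelist
UNSAFE_TAGS = {"nsfw", "violence", "drugs"}              # categories advertisers want to avoid
TAG_THRESHOLD = 3                                        # users needed before a tag "counts"


@dataclass
class Post:
    subreddit: str
    user_tags: dict[str, int] = field(default_factory=dict)  # tag -> number of users who applied it


def is_ad_eligible(post: Post) -> bool:
    """Return True if the post may carry ads under this toy policy."""
    # Step 1: only whitelisted, brand-safe subreddits are considered at all.
    if post.subreddit.lower() not in BRAND_SAFE_SUBREDDITS:
        return False
    # Step 2: enough users applying an unsafe tag vetoes ad placement.
    for tag, count in post.user_tags.items():
        if tag in UNSAFE_TAGS and count >= TAG_THRESHOLD:
            return False
    return True


# Example: a post in r/aww that three users tagged "violence" is excluded,
# while a heavily "cute"-tagged post in the same subreddit passes.
print(is_ad_eligible(Post("aww", {"violence": 3})))  # False
print(is_ad_eligible(Post("aww", {"cute": 10})))     # True
```

The ML bullet above would essentially replace the fixed tag threshold with a learned classifier trained on those user tags and mod actions, but the gating logic would look much the same.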
To be clear, the spirit of my initial prompt isn't "how can Reddit make more money?" per se, but how can it find a sustainable business model without destroying itself/impacting society at large. Facebook and Twitter seem to have optimized for "engagement" metrics, which leads to prioritization of outrage porn and political divisiveness. Snapchat and Instagram seem to have succumbed to being mostly an ad-delivery engine with some overly-filtered content of "real life" influencers (read: marketers) strewn in between. None of these seem like a net good for society.
What are your thoughts? Perhaps Big Tech social media is irredeemable at this point, but I'm trying not to take such a defeatist attitude and instead explore any positive solutions.
9 votes -
The internet’s most beloved fanfiction site is undergoing a reckoning
15 votes -
With Parler down, QAnon moves onto a ‘free speech’ TikTok clone
10 votes -
Facebook's Oversight Board announces its first decisions, overturning Facebook's decision in four out of five cases
8 votes -
Twitter announces Birdwatch, a community-based approach to misinformation
21 votes -
The great Wikipedia titty scandal
36 votes -
Thoughts on the difficulties of content moderation, and implications for decentralised communities
12 votes -
Many people here believe that social media can't be both large and have good discussion because the human brain isn't made to interact with large numbers of people. What do you think of this?
P.S. The difference between this post and this post is that in this one I want to ask questions and get people's opinions and answers more.
Here are a few examples, the last one being an argument between a few people where most, including Deimos, agreed with this idea.
Personally, I find this idea almost terrifying, because it implies social media in its current form cannot be fixed by changing or expanding human moderation or auto-moderation, nor by fact-checking, because moderation can't reasonably occur at scale at all.
However, I have 2 questions:
1: If large social media platforms can't really be moderated, what should we do with them? The implied solution is balkanizing social media until the 'platforms' are extended social circles which can be moderated and have good discussion (or, more practically, migrating them to a federated service like Mastodon, which is built to be split like this, or to something like Discord). An alternative I've heard is to redo the early 2000s and have fan forums for everything, to avoid context collapse and to have something gluing a site's users together (something I am far more supportive of), or a reason for invite systems and stricter control of who enters your site. But that doesn't address the idea that once your site hits a certain user count it will inevitably worsen, that this stems from human nature (Dunbar's number, i.e. the maximum number of relationships you can theoretically maintain), and so is inevitable, almost natural.
2: Why is moderation impossible to do well at large scales? While I think moderation, which seems analogous to law enforcement or legal systems (though the many Reddit mods here can definitely give their opinions on that), likely isn't the kind of thing that can be done at a profit, I'm not entirely sure why it would be wholly impossible. A reason I've heard is that moderators need to understand the communities they're moderating, but I'm not sure why that couldn't be a requirement, or why adding more mods would make that worse (mods disagreeing with each other while moderating seems quite likely, but unrelated to this).
20 votes -
Statistics on bans and transparency
Do we have any statistics on how many users have been banned and why they’ve been banned? What information should be or remain public? Some forum sites let you see a banned user's post and comment history from prior to their ban; is there any value in that?
Unrelated: how many Tildes-ers are we up to now?
18 votes -
Twitter will force users to delete COVID-19 vaccine conspiracy theories
11 votes -
What is happening in r/CentOS and why /u/redundantly should not be a moderator
9 votes -
Parler’s got a porn problem: Adult businesses target pro-Trump social network
13 votes -
Open letter from Facebook content moderators re: pandemic
7 votes -
Reddit quarantined: Can changing platform affordances reduce hateful material online?
4 votes -
Reddit worries it’s going to be crushed in the fight against Big Tech
16 votes -
Reddit announces "Predictions" - Allowing users to bet on the outcomes of polls with Coins (purchased with real money), where moderators are responsible for choosing which option wins
38 votes -
Facebook's Supreme Court arrives
4 votes -
Twitter won’t let The New York Post tweet until it agrees to behave itself
13 votes -
Facebook and Twitter take unusual steps to limit spread of New York Post story
16 votes -
Why Facebook can't fix itself - The platform is overrun with hate speech and disinformation, but the company's strategy seems focused on managing perception of the problem instead of addressing it
14 votes -
Facebook is updating their hate speech policy to prohibit and remove Holocaust Denial content
16 votes -
Masnick's Impossibility Theorem: Content moderation at scale is impossible to do well
22 votes -
Should we be able to view comments/posts where mods/admins are doing their roles and not doing them separately?
What I mean by this is:
Sometimes @Deimos posts something related to his mod/admin work, like saying he will be locking a thread or adding something new, but that's not all he does: he makes regular topics and comments about regular things, and he doesn't need to use an alt account for that. I feel that his posts about his mod/admin work and his posts about anything else that interests him should be viewable separately.
Thoughts?
9 votes -
Content moderation best practices for startups
3 votes -
Inside Roblox's war on porn - The game platform is extremely popular with children, and the company is waging an endless fight against "condo games": explicit, often sex-themed user creations
19 votes -
Content moderation case study: Nextdoor faces criticism from volunteer moderators over its support of Black Lives Matter (June 2020)
7 votes -
Reddit moderator accounts compromised in coordinated hack, hundreds of subreddits vandalized
29 votes -
Facebook fired an employee who collected evidence of right-wing pages getting preferential treatment
14 votes -
Facebook has an internal simulation of the site populated entirely by bots that they're using to test the effects of possible changes
8 votes -
Reddit releases their new content policy along with banning hundreds of subreddits, including /r/The_Donald and /r/ChapoTrapHouse
85 votes -
Facebook creates fact-checking exemption for climate deniers
17 votes -
Facebook vowed to investigate horrific abuse by anti-vaxxers. Nine months later, no one was penalized.
10 votes -
Twitter labels Donald Trump video tweet as "manipulated media" as it cracks down on misinformation
13 votes