Let me preface this by saying I'm well aware that, given the choice between frequent, untargeted ads or fewer, targeted ads, the average Tilderino's response would be "Neither."
However, given that social media at scale has yet to establish a sustainable business model that doesn't rely on advertising (people like free content, after all), it seems advertising has become a necessary evil (and has pervaded nearly all forms of media for the past century regardless).
With that in mind, I think coming up with creative solutions to deliver relevant advertising while preserving user privacy and avoiding destructive feedback loops (i.e. where the search for ad revenue compromises the user base and content generation) is an interesting thought exercise. This is one of social media's largest problems, imho, but it might be easier to analyze just Reddit as a platform, due to its similarities to (and notable differences from) Tildes.
A couple thoughts of my own:
- Whitelist "safe" subreddits - A massive problem for Reddit is identifying content that brands want to avoid association with (e.g. porn, violence, drugs). While new subreddits crop up every day, the large ones do not change so quickly and could be classified as safe content spaces (e.g. /r/aww).
- User subreddit subscriptions - Rather than target ads based on the subreddit currently being viewed, why not use the subs people have voluntarily indicated they are interested in?
- Allow users to tag content - While people can report content to the mods today, there is no ability to tag content (like Tildes has) from a user level. Content that's inappropriate for advertising may not necessarily be a reportable offense. By allowing users to classify content, better models for determining "good" content vs. "bad" could be developed using ML.
- Use mods to determine content appropriateness - User-supplied data may introduce too much noise into any given dataset, and perhaps mods are a better subjective filter to rely on. Certain subreddits certainly have biased mods, but without trying to overhaul content moderation entirely, could mod bans/flair be used to indicate content suitable for ads?
- Use computer vision to classify content - While this likely wouldn't be feasible at scale, an up-and-coming post could have a nebulous title and difficult-to-decipher sarcastic comments. The post itself could be an image macro or annotated video, and its visual content could be used to determine the subject matter much more effectively.
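To make the first few ideas concrete, here's a toy sketch of how a whitelist and user-supplied tags might combine into a single brand-safety check. Everything here is hypothetical: the subreddit names, the tag vocabulary, and the threshold are all made up for illustration, and a real system would presumably feed signals like these into a trained model rather than hard-coded rules.

```python
# Toy brand-safety filter: combine a hand-curated subreddit whitelist
# with user-applied content tags. All names/thresholds are hypothetical.

SAFE_SUBREDDITS = {"aww", "pics", "cooking"}   # curated "safe" spaces
UNSAFE_TAGS = {"nsfw", "violence", "drugs"}    # user-applied tags to avoid

def is_brand_safe(subreddit, user_tags, unsafe_threshold=0.2):
    """Return True if a post looks acceptable for ad placement.

    A post qualifies only if its subreddit is whitelisted AND the
    fraction of unsafe user tags stays at or below unsafe_threshold.
    """
    if subreddit.lower() not in SAFE_SUBREDDITS:
        return False
    if not user_tags:
        return True  # no tag signal yet; fall back to the whitelist
    unsafe = sum(1 for t in user_tags if t.lower() in UNSAFE_TAGS)
    return unsafe / len(user_tags) <= unsafe_threshold

# A post in a whitelisted sub with benign tags passes; one that users
# have flagged heavily is excluded even inside the whitelist.
print(is_brand_safe("aww", ["cute", "dogs"]))            # True
print(is_brand_safe("aww", ["violence", "nsfw", "ok"]))  # False
```

The interesting design question is the threshold: user tags are noisy (as the mod-based idea above acknowledges), so a system like this would need to weigh tagger reputation or defer borderline cases to mods rather than trust raw counts.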
To be clear, the spirit of my initial prompt isn't "how can Reddit make more money?" per se, but how it can find a sustainable business model without destroying itself or impacting society at large. Facebook and Twitter seem to have optimized for "engagement" metrics, which leads to prioritization of outrage porn and political divisiveness. Snapchat and Instagram seem to have succumbed to being mostly ad delivery engines, with some overly-filtered content from "real life" influencers (read: marketers) strewn in between. None of these seem like a net good for society.
What are your thoughts? Perhaps Big Tech social media is irredeemable at this point, but I'm trying not to take such a defeatist attitude and instead explore any positive solutions.