27 votes

Facebook and Instagram's algorithmic favoritism towards extremist parties revealed in new study of political ads in Germany

3 comments

  1. [3]
    Wes

    The study revealed that 72.3% of all political ads used targeting strategies, accounting for 72.6% of the total ad spending.

    I'm surprised it was that low. Untargeted ads are usually a good way to waste your ad budget.

    The focus here seems to be on the social algorithm, but it seems to me the biggest consideration is actually the auction system that ads use. Highly contested keywords and demographics will cost much more, whereas your long-tail ads will be far cheaper to run. Presumably, then, the extremist groups are just targeting lower-value keywords/users. Or am I misunderstanding? The study itself only briefly mentions this auction system, but I'm sure it's the main factor in actually determining the cost of an ad spend.
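
    To make that concrete, here's a rough sketch of how a simplified second-price auction behaves. The bid numbers, segment names, and the mechanism itself are illustrative assumptions on my part, not Meta's actual pricing system:

```python
# Illustrative sketch of a simplified second-price ad auction (not Meta's
# actual pricing). Bids and segments are invented to show why contested
# demographics clear at higher prices than long-tail ones.

def clearing_price(cpm_bids):
    """In a second-price auction the winner pays roughly the runner-up's bid."""
    ordered = sorted(cpm_bids, reverse=True)
    if len(ordered) < 2:
        return ordered[0] if ordered else 0.0  # no competition -> floor price
    return ordered[1]

# Many advertisers compete for a high-value segment...
battleground_bids = [9.50, 9.20, 8.80, 8.75, 7.90]  # hypothetical CPM bids ($)
# ...while a long-tail segment attracts little competition.
longtail_bids = [1.40, 0.90]

print(clearing_price(battleground_bids))  # 9.20 -> expensive to reach
print(clearing_price(longtail_bids))      # 0.90 -> cheap to reach
```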

    4 votes
    1. [2]
      boxer_dogs_dance

      Max Fisher's book The Chaos Machine looks at social media in relation to extremism worldwide, and it also concludes that algorithms are the problem and should be adjusted for social benefit. He suggests that outrage predictably increases engagement, and that companies intentionally boost polarizing content to keep people on the platform longer.

      3 votes
      1. Wes

        Oh I don't doubt it. Recommendation feeds all over have the problem of spreading misinformation, incendiary emotional content, and extremist content. It likely isn't intentional, but they're designed to be engaging and this content is extremely engaging. Understanding the impacts of these algorithms is an important topic, and necessary to reduce their harmful effects.

        To clarify though, the article and study linked seem more focused on understanding the inequality of ad spending across the political spectrum. I think it's easy to read that and assume "Oh Facebook wants to make it cheaper for <X group>", but really it's a dynamic system based on market forces. That's what I was hoping to explain in my comment.

        To give a more concrete example - putting aside for a moment that the study is about Germany - a high-value demographic would be women aged 40+ living in US battleground states. These are impactful voters with a lot of power, so there's more competition to get ads in front of their eyes.

        Meanwhile, youth (low-turnout voters) from deep red or blue states are of less value. From an ad buyer's perspective, they have less money and less voting power. This makes their ad buys a lot cheaper, which could benefit outreach for extremist or other political groups; it just depends on what their goals are. The article mentions that ads often reached younger audiences than intended, and that could be one reason why.
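
        A quick back-of-the-envelope sketch of what that price gap means for reach: the CPM figures below are purely hypothetical, but the arithmetic (impressions = budget / CPM × 1,000) is how media buying is usually costed.

```python
# Back-of-the-envelope: how far the same budget goes at different CPMs.
# The CPM figures are hypothetical, purely to illustrate the gap in reach.

def impressions(budget_usd, cpm_usd):
    """CPM is the cost per 1,000 impressions, so impressions = budget / CPM * 1000."""
    return int(budget_usd / cpm_usd * 1000)

budget = 10_000  # same ad budget for both segments

# Contested segment (e.g. older voters in battleground states): high CPM.
print(impressions(budget, cpm_usd=25.0))  # 400,000 impressions

# Long-tail segment (e.g. young voters in safe states): low CPM.
print(impressions(budget, cpm_usd=3.0))   # 3,333,333 impressions
```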

        It wouldn't surprise me at all if the targeted criteria used by some political groups results in higher costs, and this creates the inequality mentioned in the title. I don't really know what I'd suggest to fix the problem though. The study offers a few suggestions:

        For example, previous research (46) recommends that platforms make adaptations to the auction mechanisms to subsidize political advertisers or provide quotas in combination with separate auctions for political and commercial advertisers to lower competition for political advertisers. Policymakers could also require randomization of ad delivery among the target population as discussed by the European Parliament (47), thereby preventing discrimination due to algorithmic bias. Most importantly, policymakers should incentivize additional transparency measures. For example, current measures are insufficient to evaluate how the platform’s pricing mechanism influences ad delivery, limiting independent assessment by researchers.
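
        The randomized-delivery idea in particular is easy to picture. Here's a toy contrast between relevance-ranked delivery and uniform random delivery within the targeted population; the user IDs, scores, and function names are my own invention, not how Meta's delivery system actually works:

```python
# Toy contrast: platform-optimized delivery vs. the randomized delivery the
# study suggests. User scores are invented; real delivery is far more complex.
import random

# (user_id, predicted relevance score)
targeted_users = [("u1", 0.91), ("u2", 0.35), ("u3", 0.80), ("u4", 0.12), ("u5", 0.55)]

def ranked_delivery(users, k):
    """Platform-optimized: show the ad to the k users with the highest score."""
    return [uid for uid, _ in sorted(users, key=lambda u: u[1], reverse=True)[:k]]

def randomized_delivery(users, k, seed=0):
    """Policy proposal: pick k users uniformly at random from the targeted population."""
    return [uid for uid, _ in random.Random(seed).sample(users, k)]

print(ranked_delivery(targeted_users, k=2))      # ['u1', 'u3'] - scores decide
print(randomized_delivery(targeted_users, k=2))  # a random pair - scores don't decide
```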

        Just to be clear, there is still an underlying algorithm that groups users and calculates ad relevance, but it's different from the algorithm that ranks the content shared to these social networks. Transparency would help in both cases, though I don't think it's enough to defang them completely.

        6 votes