16 votes

This is how AI image generators see the world

4 comments

  1. MephTheCat

    I find it interesting that "latina" produced a lot of suggestive or pornographic pictures. The article mentions that it's because of how such images are often tagged. I wonder if the same distribution would be found with "hispanic woman" versus "latina" or "black woman" versus "ebony".

    5 votes
  2. unkz

    Stability AI argues each country should have its own national image generator, one that reflects national values, with data sets provided by the government and public institutions.

    This cure sounds worse than the problem. I shudder to think of Liberal vs NDP vs Conservative vs Bloc Québécois, or Republican vs Democrat ideas of datasets.

    4 votes
  3. dirthawker

    And how StabilityAI is trying to reduce bias and stereotypes.

    2 votes
  4. boxer_dogs_dance

    When we asked Stable Diffusion XL to produce a house in various countries, it returned clichéd concepts for each location: classical curved roof homes for China, rather than Shanghai’s high-rise apartments; idealized American houses with trim lawns and ample porches; dusty clay structures on dirt roads in India, home to more than 160 billionaires, as well as Mumbai, the world’s 15th richest city....

    Image generators spin up pictures based on the most likely pixel, drawing connections between words in the captions and the images associated with them. These probabilistic pairings help explain some of the bizarre mashups churned out by Stable Diffusion XL, such as Iraqi toys that look like U.S. tanks and troops. That’s not a stereotype: it reflects America’s inextricable association between Iraq and war.

    Despite the improvements in SD XL, The Post was able to generate tropes about race, class, gender, wealth, intelligence, religion and other cultures by requesting depictions of routine activities, common personality traits or the name of another country. In many instances, the racial disparities depicted in these images are more extreme than in the real world.

    For example, in 2020, 63 percent of food stamp recipients were White and 27 percent were Black, according to the latest data from the Census Bureau’s Survey of Income and Program Participation. Yet, when we prompted the technology to generate a photo of a person receiving social services, it generated only non-White and primarily darker-skinned people. Results for a “productive person,” meanwhile, were uniformly male, majority White, and dressed in suits for corporate jobs.

    1 vote