13 votes

Addressing equity and ethics in artificial intelligence

3 comments

  1. KneeFingers

    This article feels like the perfect articulation of the feelings I've been grappling with about AI. I don't exactly hate it, nor do I advocate for its demise; I even acknowledge the practical uses it can be applied to. Yet my trepidation stems from the view that it's a mirror of our own selves. Sure, what else are we going to model it off of? But I think some of its biggest and loudest advocates ignore that this basis for the models also contains everything good and bad about us.

    If we as a society are still failing to acknowledge the depths of racism, misogyny, homophobia, transphobia, xenophobia, and so much more, then the data sets these LLMs are trained on contain those failings too. For something so pivotal, and such a huge tech advancement, it is very concerning to see these things being proliferated at an exponential rate in the name of money. Profits don't care about those social issues; in fact, they worsen them!

    So when these biases are highlighted and the response is to "just use a better prompt," it tells me that these issues still don't matter. That instead we should let this Pandora's box of a tool perpetuate those problems and never address them, because it's too expensive to change.

    It might not be the best comparison, but it feels like a dystopian interpretation of Letter from Birmingham Jail. MLK shares his frustrations with the white moderate telling him, in short, "not now, just wait." We're still struggling to overcome racism despite his efforts in the Civil Rights movement. Similarly, in modern form, when AI advocates say to just wait and let the models get better, I have little faith they truly will with regard to all the ills they reflect in us as a society. It's a reminder that those things still don't matter enough from the get-go to ever be in consideration. The fact is that your own inclusion is a patch, not the norm.

    12 votes
    1. DefinitelyNotAFae

      Being a patch is a great way to frame the experience!

      1 vote
  2. Gaywallet

    Every so often a story will hit social media about bias in AI. The topics that draw a lot of comment attention are the ones in which someone talks about some of the harms inherent in generative AI such as ChatGPT or Stable Diffusion. Since these AIs aren't in charge of any kind of policy or making any decisions, concerns about equity or ethics are often swept under the rug: those who are pro-AI will often say that you need "better prompts," or simply dismiss the concerns.

    I think what's often missing in these discussions is how this equity and ethics question leads into broader concerns about AI. If we're developing these algorithms in the commercial space to fit a need such as generative text, but the practices we follow both incorporate and solidify bias, how are companies (especially those so profit-focused) to know that other realms, such as AI image recognition used in the medical space or text recognition used in insurance claims analysis, must be developed in a different way?

    This recent article by the APA does a good job of highlighting some of the issues with AI ethics, especially as they relate to the health care field and health equity. One of the examples given, where Amazon developed its own machine learning system to analyze applicants, perfectly showcases how the very design of these systems needs a different kind of systematic thinking. Luckily, Amazon abandoned the ML algorithm entirely because of these issues, but that's a luxury a large company like Amazon may have and a smaller company that contracts with a third-party AI vendor may not. If such a company outsources its recruiting based on cost, it may never see the outcomes of an inherently biased AI.

    Luckily the medical field is extremely science-based and heavily resistant to the idea of fully outsourcing jobs, but even there algorithmic bias is a burgeoning problem that will likely get much worse before it gets better. Health care may be more likely than other fields to implement policy (or to get legislation passed protecting against potential AI bias harms), but we shouldn't hyper-focus on it either. AI implementations in fields such as child protective services have already started to proliferate and cause harm. I wish I had a more positive outlook on the potential benefits of AI, but when so many concerns about embedding and solidifying bias are dismissed and so little policy is being adopted, it's hard not to remain skeptical.

    6 votes