7 votes

Topic deleted by author

1 comment

  1. stu2b50

    To be fair, I feel that the word "AI" makes "AI bias" seem like a new problem, when really it's an old one. "AI" is a buzzword: machine learning algorithms are not new, and though we only recently unlocked the hardware needed to really take advantage of MLPs, the problems with regard to bias are the same old ones.

    For example, if you were to compile very basic statistics as indicators to profile people by their likelihood of committing a crime, your "algorithm" would be racist as hell. If you took the modal race of people who commit crimes, you'd overwhelmingly find people of color.

    That same issue plagues linear regression, SVMs, random forests, and neural networks: anything in which we infer rules from data (see the sketch below).

    So it's less Skynet hating humanity, and more the same old problems: correlation isn't causation, not enough training samples, and training samples that aren't representative of the population.
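
    A minimal sketch of that point (not from the comment), using synthetic data and a plain scikit-learn logistic regression; the group sizes, recording rates, and feature names are made up purely for illustration. If one group's offenses are recorded more often, the fitted model assigns that group higher predicted risk even though the true underlying rates are identical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two groups with the same true offending rate, plus an unrelated feature.
    group = rng.integers(0, 2, size=n)
    income = rng.normal(50, 10, size=n)
    offends = rng.random(n) < 0.05

    # Sampling bias: group 1's offenses are recorded far more often,
    # so the labels we train on are skewed even though behavior isn't.
    recorded = offends & (rng.random(n) < np.where(group == 1, 0.9, 0.3))

    X = np.column_stack([group, income])
    model = LogisticRegression().fit(X, recorded)

    # The model dutifully learns the skew that is baked into the labels.
    for g in (0, 1):
        risk = model.predict_proba(X[group == g])[:, 1].mean()
        print(f"mean predicted risk for group {g}: {risk:.3f}")
    ```

    Swapping the logistic regression for an SVM, random forest, or neural network doesn't change the outcome: the bias comes from the labels, not the model class.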

    4 votes