The ethics of deep learning AI and the epistemic opacity dilemma

  1. imperialismus

    An example of this dilemma in practice: the recent International Baccalaureate grading scandal. The program canceled its exams due to COVID and replaced the exam results with the output of a proprietary predictive algorithm that numerous teachers and students considered unfair. They haven't released the algorithm, and if it is a deep-learning-based one, it might be difficult to understand even with access. There's a strong tendency to hide behind the algorithm to escape responsibility: "Well, it's not us saying it, it's the algorithm, so we're not to blame."

    Now, moving on to the article's proposal to add ethics as a secondary goal when training neural networks. My immediate issue with this is that most neural networks aren't doing ethics. They're doing things like classifying data according to a statistical model, and ethics only comes into it when humans decide what actions to take based on that classification. I think that, ethically speaking, most neural networks benefit most from being as accurate as possible, since their job is generally to classify data or make predictions about future events. A lot of the time, the right thing to do is not to build your proposed ethics into the model, but to scrap or rework the model entirely. Take racial profiling in police data: simply don't use such models to assess threats when you know before you even start that all the data you can feed them is inherently biased.
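    For concreteness, here is a rough sketch, entirely my own rather than anything from the article, of what "ethics as a secondary training objective" would presumably look like: the ordinary accuracy loss plus a weighted fairness penalty. The names and the specific penalty (a demographic-parity gap) are hypothetical choices for illustration.

        # Hypothetical sketch of a combined training objective:
        # a primary accuracy term plus a weighted "ethics" (fairness) term.
        import torch
        import torch.nn.functional as F

        def combined_loss(logits, labels, group_ids, lam=0.1):
            # Primary objective: ordinary classification accuracy (cross-entropy).
            task_loss = F.cross_entropy(logits, labels)

            # Secondary objective: penalize the gap in positive-prediction rates
            # between two groups (a crude demographic-parity term).
            p = torch.softmax(logits, dim=1)[:, 1]
            gap = (p[group_ids == 0].mean() - p[group_ids == 1].mean()).abs()

            # lam trades accuracy against the fairness term; any nonzero lam can
            # pull the model away from the pure-accuracy optimum.
            return task_loss + lam * gap

    The task_loss term is where the model's actual job lives; the second term only changes what the model reports, not what the world looks like.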

    Consider a hypothetical object-recognition algorithm that misidentifies dark-skinned people as monkeys. Great, your neural network accidentally reproduced an offensive racial stereotype. But it seems to me the thing that went wrong here wasn't that you didn't program an understanding of moral offense into the machine, but that you fed it bad data in the first place.

    The article mentions medical care algorithms, but isn't patient welfare already what such algorithms are optimized for? The goal is to accurately diagnose diseases or to recommend the most suitable treatment according to current medical knowledge. The utility function that already exists lines up with ethical considerations.

    I can foresee instances where mixing ethics into the utility function simply leads to less accordance with reality. If we seek to accurately classify data, it's hard to see how ethics figures into it except as a way to introduce wishful thinking into the algorithm. Instead, the ethical dimension comes in when we, humans, make decisions about what to do with the output of the algorithm, and about whether it's right to build it in the first place. The thing that makes the IB grading unfair is that it's inaccurate, downgrading large numbers of students for no apparent reason; its unethical quality correlates perfectly with its inaccuracy. And the ethical thing to do here was not to use an opaque algorithm to grade students in the first place.
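    To make that trade-off concrete, here is a toy experiment (again my own hypothetical construction, not from the article): fit the same linear classifier on synthetic data with and without the added penalty and compare how well each fits the labels. At a global optimum the penalized objective can only match or worsen the task term, and in practice it usually worsens it whenever the penalty isn't already zero at the accuracy optimum.

        # Toy comparison (hypothetical): train a linear classifier on synthetic
        # data with and without the fairness penalty and compare task losses.
        import torch
        import torch.nn.functional as F

        torch.manual_seed(0)
        x = torch.randn(1000, 5)                            # synthetic features
        y = (x[:, 0] + 0.5 * torch.randn(1000) > 0).long()  # synthetic labels
        group = (x[:, 1] > 0).long()                        # hypothetical group attribute

        def fit(lam):
            w = torch.zeros(5, 2, requires_grad=True)
            opt = torch.optim.SGD([w], lr=0.5)
            for _ in range(500):
                logits = x @ w
                task = F.cross_entropy(logits, y)
                p = torch.softmax(logits, dim=1)[:, 1]
                gap = (p[group == 0].mean() - p[group == 1].mean()).abs()
                opt.zero_grad()
                (task + lam * gap).backward()
                opt.step()
            with torch.no_grad():
                return F.cross_entropy(x @ w, y).item()

        print("task loss, accuracy only:   ", fit(0.0))
        print("task loss, with ethics term:", fit(5.0))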

    In short, I think this article largely locates ethics in the wrong place. As long as we don't have strong AI, ethics will be the province of humans. We decide what models to build and what to do with their results. Those are the relevant ethical decisions.
