So I think we need to stop demonizing technology and mathematical terms like "algorithm". The story shared in this article is a classic example of a government body being convinced by opportunistic business-people to invest in a useless piece of technology that had no business being implemented in the first place. To me this article speaks to the growing issue of government bodies being technically illiterate and grossly outdated & underfunded. This is why governments experience frequent problems across every vector that involves technology, whether or not they relate to "algorithms".
I don't find "algorithm" to be a demonizing term. I find it to be a suitable description: "a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer." I believe it explains the pros and cons of following such a system well. It's an easy way to process large amounts of data in a systematic way BUT it's also just a system that follows that same systematic way every time. No budging. No exceptions. No human. Algorithms are never the issue in and of themselves; the issue is the people developing them AND how we're using them.
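To make that concrete, here's a minimal sketch (the rule and thresholds are entirely made up, not from the article) of what "no budging, no exceptions" looks like in code:

```python
# Hypothetical benefits-eligibility rule, hard-coded with no human override.
# The thresholds are invented for illustration.
def eligible_for_support(monthly_income: float, hours_worked: int) -> bool:
    # The rule fires the same way every time: no context, no appeal.
    return monthly_income < 1200.00 and hours_worked < 20

# Someone a cent under the line and someone a cent over it get
# opposite answers, with no human in the loop to notice.
print(eligible_for_support(1199.99, 19))  # True
print(eligible_for_support(1200.00, 19))  # False -- no budging, no exceptions
```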
In the article, it mentions a man following his GPS and almost falling off a cliff. This speaks more to people's over-reliance on technology and their failure to fully understand its boundaries. I feel like this applies to all of the examples of software used in hospitals, schools, shops, courtrooms, and police stations. People must understand that algorithms are not 100% foolproof "magic" handed down by some wise sage. They come from developers who are themselves flawed, and therefore they should not be taken at face value.
This is one of the reasons I get nervous about programs like the Social Credit System in China. The system is not objective. It's developed by the government and leans toward their idea of a "perfect" citizen. They have the option to weight things like obedience to the government over anything else.
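As a rough sketch of what I mean (the categories and weights below are invented for illustration, not taken from any real system), the subjectivity lives entirely in the numbers someone chose:

```python
# Hypothetical citizen-scoring sketch. Whoever picks the weights
# decides what a "perfect" citizen looks like.
WEIGHTS = {
    "pays_taxes_on_time": 1.0,
    "volunteers": 0.5,
    "criticizes_government": -5.0,  # obedience weighted over everything else
}

def citizen_score(behaviors: dict) -> float:
    # Each observed behavior count is multiplied by a designer-chosen weight.
    return sum(WEIGHTS.get(name, 0.0) * count for name, count in behaviors.items())

# A model taxpayer who spoke up once loses five points, by design.
print(citizen_score({"pays_taxes_on_time": 12, "criticizes_government": 1}))  # 7.0
```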
Some may think the answer would be to have someone review the algorithms and see whether they're leading to inequalities, but even then the reviewers will have their own biases. This is why I believe that for important decisions, such as whether to deny probation or to hand down jail time, algorithms can be used if the workload is overwhelming, but their output should be reviewed before anything is acted on. (Currently, the US is using the COMPAS system.)
What would your response be to the people who got screwed over by the algorithm deciding to cut their state support? Who holds the responsibility here?
I don't know nearly enough about the situation to make that call, but depending on the details, I would place blame on
The creators of the system, who either maliciously or negligently created a product that did not do what they claimed it would
And/or
The person/s who made the decision to purchase and implement the system without knowing what the hell they were buying
Regardless, I think you're arguing against a strawman here. I never defended the use of algorithms; I'm saying it's not "algorithms" that are the problem, so we shouldn't focus our efforts on punishing algorithms.
Yeah ok, I think we agree then. Though I think it's a little reductive to read the article as just talking about algorithms and how they function, when you consider that none of that is explored. Everything is focused on what surrounds them: their implementation, user error, programmer error, the ethics of using them at all.
“Because, ultimately, we can’t just think of algorithms in isolation. We have to think of the failings of the people who design them – and the danger to those they are supposedly designed to serve.”
It's probably an oversimplification on the article's part to use "algorithms" as the catch-all term for the issue at large.
Placing blind faith in software is only something you do if you've not been paying any attention at all to the software you use on a daily or near-daily basis. In fact, placing blind faith in anything is a mistake you shouldn't be making more than once or twice, and is something you should be learning from as early as childhood.
I don't even pour milk into a bowl before giving it the sniff test, even if before the printed expiration date, because I've made the mistake of blindly trusting the expiration date in the past and had a mouthful of milk that tasted like canned corn. How the hell do you even go through life like that?
With tech in particular, automation != automatic. The tasks may be automated for you, but it's up to you to verify that the end result is the expected one. If you can't be bothered to, then it's no one's fault but your own if you end up careening off a 100 ft. cliff.
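To put that in code terms, here's a toy sketch (the function and its route output are hypothetical, not from any real GPS) of the difference between running an automated step and actually checking its output before acting on it:

```python
# The routing is automated; sanity-checking the route is still your job.
def auto_route(destination: str) -> list:
    # Stand-in for a GPS routing step; the output here is invented.
    return ["Main St", "Harbour Rd", "unpaved cliffside path"]

route = auto_route("hotel")
# The check the driver in the article never made:
for segment in route:
    if "path" in segment:
        raise RuntimeError(f"Route includes {segment!r} -- not driving that.")
```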
This is called "automation bias": https://en.wikipedia.org/wiki/Automation_bias
People have been trained to assume, or otherwise assume, that computers know what they're doing and that they shouldn't question whether the declarations those computers make are correct.
I suppose I may be dealing with my own bias because I spend so much time with tech. It's easy to forget that people are often fed the idea that machines can do things better and faster than they can, without making any mistakes, and that there's not much emphasis on what happens when things go wrong or on appropriate error handling.
“…saved from the 100ft drop only by the flimsy wooden fence at the edge he had just crashed into. ‘It kept insisting the path was a road,’”
I'm having trouble imagining how a GPS insists that you drive through a fence. If my GPS said "turn right" and I looked right and there was a fence, I would not shrug, say "GPS knows best" and crash through the fence. I believe issuing a driver's license to anyone who would is a mistake.
That said, I think the questions the author lists at the end are worth considering. We shouldn't blindly trust things just because they are codified as an algorithm. We should know what we want them to do, test them, and replace them if they don't work out.
Garbage in, garbage out. Both of the examples in the article weren't even issues with the "algorithm" itself, but with the data it was fed and with how humans went along with the result without a second thought. As automated systems take in more and more information and make decisions based on that data, it becomes imperative that we not only control the algorithms but also make sure that they're fed the appropriate info.
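Here's a minimal garbage-in-garbage-out sketch (the numbers are invented): the logic is perfectly correct, and the answer is still wrong, because nobody checked the input:

```python
# The "algorithm" here (a plain average) is correct by any definition.
def average(values: list) -> float:
    return sum(values) / len(values)

# Garbage in: a -999 "missing data" sentinel slipped into real readings.
readings = [21.4, 22.1, -999.0, 20.8]

# Garbage out, delivered with total confidence.
print(average(readings))  # about -233.7, from an algorithm that did nothing wrong
```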
I mean, what is an algorithm if not something that does stuff with data?
Ok, but that’s like saying “what is gravity but a force that drops things” and then blaming gravity because you let go of a bowling ball above a baby. Gravity followed its rules properly; you just provided a situation with a bad input leading to a bad output.