12 votes

AIs should have the same ethical protections as animals

45 comments

  1. [11]
    Comment deleted by author
    Link
    1. [7]
      Grzmot
      Link Parent

      Thanks for this take. Some of the comments here remind me of when that article about making sex robots illegal was posted and people supported the idea based on the fact that the robots could become sentient and then we'd have sex slaves.

      ...It was weird. Considering most of Tildes' userbase is derived from reddit, I thought we'd have a more tech-knowledgeable crowd, or at least people who'd try to read up on something before talking about it.

      4 votes
      1. [2]
        Comment deleted by author
        Link Parent
        1. Grzmot
          Link Parent

          I agree with you here, and that statement of mine might have come off as a bit arrogant, to be honest. The thing with AI is that it's a very complex piece of software engineering. I only understand the basic principles of neural networks, and I can't fathom how complex a real AI quickly becomes. And you're right, just reading up on it isn't as simple as I made it seem. Most of the knowledge on the subject is hidden in PhD papers, university lectures or overly complex books.

          Perhaps I'm just jaded by all the recent news about how AI will either fix everything, destroy everything or just help you make your photos look better. And while the point of Tildes is discussion and learning from each other, sometimes posts come off as fairly ignorant. "AI" is far away from any intelligence we possess; it's really just a complex piece of software designed to handle a specialized task like image detection. Most people only know "AI" from sci-fi media and associate something entirely different with it.

          1 vote
      2. [5]
        Algernon_Asimov
        Link Parent

        Some of the comments here

        What comments? Until @vivaria commented, there was only one substantial comment in this thread, by @Amarok. @Nexu dismissed the article based on a flawed reading, and I tried to explain the article to them, but @Amarok is the only other person to have commented here (so far - I intend to comment later, when I have more time and have gathered my thoughts).

        people supported the idea based on the fact that the robots could become sentient and then we'd have sex slaves.

        Are you implying that there is no possibility of sentience in robots ever? That seems unlikely. It might not happen soon, but it probably will happen. If a bunch of carbon-based cells can achieve self-awareness and sentience, why not a bunch of silicon-based cells?

        And, then, having created an artificial intelligence with sentience, do you believe we would not then download that AI into a robot for a human's pleasure? Have you met the human race? Of course we would! There would be artificial sentient sex slaves if we don't prevent it somehow.


        Considering most of Tildes' userbase is derived from reddit, I thought we'd have a more tech-knowledgeable crowd

        Why do you assume that Reddit users are tech-knowledgeable? That might have been true in Reddit's infancy, a decade ago, but it outgrew its technophile roots years ago. It has literally hundreds of millions of users every month nowadays. The majority of those people are not tech-knowledgeable. (I used to moderate /r/Help: I know the level of tech-ignorance of some of those people.)

        at least people who'd try to read up on something before talking about it.

        From redditors? Seriously? This is a website where the biggest and most popular sub-forums are based around memes and other shallow jokes. Also, the stereotype that redditors don't read articles exists for a reason.

        I think you have an outdated idea of exactly who's on Reddit.

        The crowd on Tildes is definitely technophilic, and hopefully more likely to do some reading, but we're self-selected to be interested in a new website aiming for high quality discussion. Redditors are not known for doing a lot of reading.

        But, even so, we on Tildes are still human beings, with all the flaws pertaining thereto. Some of us, therefore, are not going to know everything about all things. Some of us will be opinionated. Some of us will spout off without deep thought. We're just ordinary flesh and blood beings, and we're not perfect.

        3 votes
        1. [2]
          Comment deleted by author
          Link Parent
          1. const
            Link Parent

            To take this further, the world is way too big and complex and nuanced for anyone to know even a fraction of the things about all the things.

            A lot of times even some single thing is too big and complex to know a fraction about. Yet too many people assume they know things without ever questioning their knowledge; maybe because it's more comfortable that way.

        2. [3]
          Grzmot
          Link Parent

          What comments?

          "Here" referred to Tildes in general, not this thread specifically.

          Are you implying that there is no possibility of sentience in robots ever? That seems unlikely. It might not happen soon, but it probably will happen. If a bunch of carbon-based cells can achieve self-awareness and sentience, why not a bunch of silicon-based cells?

          Predicting things that are going to happen in the far future is almost always a fruitless thought exercise. I was not implying anything. A discussion of whether it will happen or not is a senseless one, the same way a discussion about the possibility (or, more likely, the high probability) of alien life is.

          My point is, "AI" today has little to do with sentient intelligence in the way you're probably thinking of it. What's more, our carbon-based cells achieved sentience through evolution, while the "silicon-based cells" aren't part of evolution, because:

          1. They are not alive

          2. They were designed by us, and so aren't part of the natural order

          3. Computers haven't been around for a very long time, AI even less so, and evolution works at a glacial pace

          Research dealing with actual digital sentience is barely moving forward, because no one is actually interested in it. An AI today is in most cases just a neural network trained to do a very specific task, like recognizing images or playing a strategy game really well. The AI might become more precise over time (or not, if you train it badly); that is what people refer to as the learning part of AI.

          Part of growing up is setting your own goals in life. As a baby, you begin with simple goals: you put objects into your mouth to get to know their shape, because your hands aren't precise enough. You crawl around, learn to walk, and eventually you start setting your own, more complex goals, like the desire to learn a certain profession, go to a certain place or meet certain people. AI can't set its own goals; there's no part of the image detection algorithm that starts pondering the meaning of life after you stop using it. There's also no way for it to branch out in that direction, because the learning part of AI is really just optimizing a mathematical function. It's not writing code, or even modifying its own code. It's making numbers more precise. Or not, if you train it badly.
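
          To make that concrete, here's a toy sketch in Python (purely illustrative, not any real framework) of what that "learning" boils down to: nudging a number until a function's error shrinks.

              # Toy sketch: "learning" as nudging a number to shrink an error.
              # We fit w so that w * x approximates y = 2 * x.
              data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
              w = 0.0    # the single "weight" being trained
              lr = 0.01  # learning rate
              for _ in range(1000):
                  for x, y in data:
                      error = w * x - y        # how wrong the current guess is
                      w -= lr * 2 * error * x  # gradient step: make the number more precise
              print(w)  # ~2.0 - a better number, not a new goal

          That's the whole trick: the number gets better at one narrow job, and nothing in that loop could ever start setting goals of its own.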

          And barely anyone is doing research in that direction, because businesses want a highly specialized AI, not a general-purpose one. And as long as there isn't any research done in that area, it won't happen. We could try to answer why a person wants to become a pilot, but we can't answer the how. And if we don't know how something works, we can't turn it into an algorithm. In this case, the "how" is an incredibly loaded question with an incredibly complex answer, if there even is one. Even with "big" research done into general-purpose AI, I'm not confident we could make it work. There are major hurdles we'd have to overcome.

          That the non-existent feelings of the robots were the first thing people jumped to - not the implications sex bots might have for society, and how it could change if our most primal urge were always taken care of - just shows how detached from reality such discussions become.

          And, then, having created an artificial intelligence with sentience, do you believe we would not then download that AI into a robot for a human's pleasure? Have you met the human race? Of course we would! There would be artificial sentient sex slaves if we don't prevent it somehow.

          If it were to exist, I agree with you there: this would definitely happen. It'd be the civil rights movement all over again, which is why sentient AI is often used in sci-fi stories as a parallel for this exact thing.

          Why do you assume that Reddit users are tech-knowledgeable?

          Probably because I only browse tech-centric subs. But yes, I concede, my point was stupid. I've seen some... bold claims even in tech subs.

          From redditors? Seriously? This is a website where the biggest and most popular sub-forums are based around memes and other shallow jokes. Also, the stereotype that redditors don't read articles exists for a reason.

          Not from redditors - from tildians, i.e. redditors who joined this site in search of proper, deeper discussions. I'd expect a certain baseline of engagement on here, because the standard is higher: read the article (if there is one), do some research, write a comment. A basic search query often does the job. I've not seen this here (and I hope it never comes to that), but I've seen things asked in reddit comments that would be answered by literally the first result of a search on your favourite search engine.

          But, even so, we on Tildes are still human beings, with all the flaws pertaining thereto. Some of us, therefore, are not going to know everything about all things. Some of us will be opinionated. Some of us will spout off without deep thought. We're just ordinary flesh and blood beings, and we're not perfect.

          To this community's credit, most people follow the higher standard and I hope this will only improve with time. I just wish that the quick internet search after reading the article would become standard as well. This of course doesn't always work if the article/post isn't interesting enough to you personally, but if you're already writing a comment, you might as well just go the extra mile. As the barrier to putting your thoughts up on this site is higher than just saying them out loud, I'd hope that people put a bit more work into it, because that's how you keep those high standards alive. It isn't perfect, nor would I want it to be, because perfection isn't an achievable goal, but it'd be very close to it.

          3 votes
          1. [2]
            Algernon_Asimov
            Link Parent

            Predicting things that are going to happen in the far future is almost always a fruitless thought exercise. I was not implying anything. A discussion of whether it will happen or not is a senseless one, the same way a discussion about the possibility (or, more likely, the high probability) of alien life is.

            But that implies we should never discuss anything which has not actually happened: I can't talk about what to do if my house burns down tomorrow, because predicting things is just a fruitless thought exercise. But it's not fruitless to discuss things which have not actually happened. Even if those things never come to pass, the discussion itself can cause people to consider issues they might not otherwise consider - such as:

            • What constitutes sentience or consciousness.

            • How and why we decide to extend ethics to include existing entities.

            • What existing entities are entitled to ethical treatment.

            ... and many other, very fruitful avenues of discussion. A hypothetical discussion can also provide useful insights into reality.

            And... if we ever do happen to create a sentient artificial intelligence... wouldn't it be better to have worked out in advance whether we will accord it ethical treatment, now, while we have the time to discuss this in full at our leisure, rather than having to rush the discussion at the last minute when the sentient AI is already here, and we may already be doing harm?

            What's more, our carbon-based cells achieved sentience through evolution, while the "silicon-based cells" aren't part of evolution, because:

            1. They are not alive

            2. They were designed by us, and so aren't part of the natural order

            3. Computers haven't been around for a very long time, AI even less so, and evolution works at a glacial pace

            You seem to fall into the camp that sentience cannot arise through deliberate action, but only by accident. The only way for an entity to become sentient is through an unknown accidental method, rather than by a deliberate process. That seems quite illogical - that something which can happen accidentally cannot also happen through deliberate action.

            3 votes
            1. Grzmot
              Link Parent

              I can't talk about what to do if my house burns down tomorrow, because predicting things is just a fruitless thought exercise.

              That comparison is a bit of a stretch, don't you think? A house burning down vs. the invention of artificial intelligence?

              If we work out a definition of consciousness, and we do in fact invent a system in the future to which it applies, it would deserve some sort of protection. If we somehow even manage to get it to achieve human-level intellect, we might as well just draft up the title Rights of Artificial Lifeforms and copy-paste the human rights in there.

              A hypothetical discussion can also provide useful insights into reality.

              I agree. Perhaps it's just a difference in character, but I'd still prefer approaching a problem directly instead of discussing something else entirely and accidentally stumbling into a possible solution.

              And... if we ever do happen to create a sentient artificial intelligence... wouldn't it be better to have worked out in advance whether we will accord it ethical treatment, now, while we have the time to discuss this in full at our leisure, rather than having to rush the discussion at the last minute when the sentient AI is already here, and we may already be doing harm?

              I mean, we have human rights, and those still get ignored on a daily basis around the world. It'd be good, but I don't think it would help a lot. Perhaps first-world countries (whichever countries those would be at that point) could become more egalitarian towards robots.

              You seem to fall into the camp that sentience cannot arise through deliberate action, but only by accident. The only way for an entity to become sentient is through an unknown accidental method, rather than by a deliberate process. That seems quite illogical - that something which can happen accidentally cannot also happen through deliberate action.

              My point there was that achieving sentience through accident is impossible, as evolution doesn't apply here, and designing it is currently deemed a worthless endeavour by academics, corporations and governments alike (the only real powers in play that could achieve this). I don't think accidents and design exclude each other: an accident can be designed to happen again, and design can lead to accidents and thus to new discoveries.

    2. [3]
      NaraVara
      (edited )
      Link Parent

      As the owner of a new puppy, the idea of a dog-like robot actually mimicking a dog just seems so much more ridiculous than it did a week ago when I first got him. When Boston Dynamics designs a robot that poops itself out of anxiety if it's left alone, then we might be onto something.

      1 vote
      1. [2]
        yellow
        Link Parent

        Radiolab has a pretty good episode that talks about machines acting life-like. They touch on how Furby was designed to respond to certain situations, particularly by acting scared when held upside down. This actually causes people to empathize with Furby and dislike holding it upside down. In the episode, the hosts talk with Caleb Chung, creator of the Furby. He brings up a more recent project, Pleo the dinosaur, which encouraged empathy with more nuanced reactions. This led a company that reviewed the toy to put out a video in which some people abuse Pleo in various ways, seeking out all of its responses. Basically, Pleo cries in pain and sounds like it's choking while being tossed about.
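
        That kind of canned-response design is, at its core, just a table mapping sensor events to scripted reactions - something like this toy sketch (all names hypothetical):

            # Toy sketch of a Furby-like canned-response table.
            # The "fear" is a lookup, not a feeling.
            RESPONSES = {
                "held_upside_down": "whimper",
                "petted": "purr",
                "loud_noise": "startle",
            }

            def react(sensor_event):
                return RESPONSES.get(sensor_event, "idle")

            print(react("held_upside_down"))  # "whimper" - scripted, yet people empathize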

        I think it is absolutely possible, even currently feasible, to make a machine that could mimic life well enough to be empathized with equally. However, people can disregard empathy very easily.

        One of the inter-titles from the Pleo torture video sums it up nicely.

        Did he just make a choking noise? Let's check...

        2 votes
        1. NaraVara
          (edited )
          Link Parent

          He brings up a more recent project, Pleo the dinosaur, which encouraged empathy with more nuanced reactions. This led a company that reviewed the toy to put out a video in which some people abuse Pleo in various ways, seeking out all of its responses. Basically, Pleo cries in pain and sounds like it's choking while being tossed about.

          There's a key philosophical nuance here as to whether things exist in themselves or whether they exist to elicit a reaction from me. My dog doesn't get anxious to get a rise out of me; in fact, I think he'd prefer not to get hit with overpowering waves of anxiety, but he can't help it. The anxiety comes from internal and inscrutable cognitive processes in his own mind that are only lightly influenced by my presence and existence.

          And in a way, he's programming himself. He feels feelings and decides how to modify his behavior in the future to avoid/promote feeling those feelings. I can help by associating some behaviors with good or bad feelings, but it's ultimately the dog choosing to do the things for the dog's own gratification. Not for mine (unless he happens to enjoy making me happy, which is possible).
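
          In machine terms, that kind of self-programming would look less like a lookup table and more like a reward update - a toy sketch, with the "feeling" reduced to a single number (entirely hypothetical, and a crude caricature of a real dog):

              import random

              # Behavior adjusted by felt outcomes: a crude analogue of
              # "programming himself" via good and bad feelings.
              prefs = {"chew_shoe": 0.0, "sit_on_command": 0.0}

              def feeling(action):  # hypothetical stand-in for the dog's experience
                  return -1.0 if action == "chew_shoe" else 1.0

              for _ in range(100):
                  if random.random() < 0.2:      # sometimes explore
                      action = random.choice(list(prefs))
                  else:                          # otherwise do what has felt best
                      action = max(prefs, key=prefs.get)
                  prefs[action] += 0.1 * (feeling(action) - prefs[action])

              print(prefs)  # "sit_on_command" wins - chosen for the dog's own gratification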

          I suppose in theory we could pursue development of machines that pursue self-actualization in that way, but if the focus is on mimicking the behaviors that elicit human sympathy, we're not really developing that. Rather, we're developing a machine that's designed to hack the human brain's empathy processes, not a machine that has wants and needs and a desire to pursue them.

          1 vote
  2. [7]
    Nexu
    Link

    Clickbait. "We can't possibly create conscious AI yet, or even if we did, we wouldn't have any way of knowing if it was conscious or not. But we might soon!" We might soon, based on what? The editor's speculation?

    I detest this kind of hollow general artificial intelligence piece. It does nothing to foster clarity in the general public's understanding of AI "in general", never mind "artificial general intelligence".

    Ethics exist. We get it.

    9 votes
    1. [6]
      Algernon_Asimov
      Link Parent

      This is not clickbait. It's an actual call to action:

      With AI, we have a chance to do better. We propose the founding of oversight committees that evaluate cutting-edge AI research with these [ethical] questions in mind.


      Ethics exist. We get it.

      If that's all you got from this article, then you didn't read it properly.

      The point of this article is to take action before we create an actual artificial intelligence, and prevent wrongdoing to that intelligence. It's not about fostering clarity of the general public's understanding of AI, it's about asking for ethical oversight on all AI-related projects, to ensure we don't do the wrong thing to artificial intelligences (just like we have ethical oversight on all projects involving animals, to prevent wrongdoing to those animals). It's not just "ethics exist", it's "ethics exist and they should be applied to all conscious entities, including artificial intelligences".

      7 votes
      1. [4]
        Comment deleted by author
        Link Parent
        1. Amarok
          Link Parent
          • Exemplary

          Just to drive this home a bit more... some people think AI is going to be a hard problem. Let's table the hard-takeoff super-intelligence scenario - that may or may not be possible depending on things we simply do not yet know about consciousness. In fact, we know fuck-all about consciousness right now; things haven't gotten much further than 'I think therefore I am' on that front even today.

          Instead let's take a look at how stupidly easy it'll be to create useful AIs.

          See, we're not starting from scratch. Nature left us brains to study - brains of every size and shape, at every stage of development, varied and stuffed with 4.5 billion years of brutally field-tested, pseudo-encrypted spaghetti code. There's a massive pile of data out there just waiting to be harvested.

          So, we start small. A mosquito's brain is about the size of the period in this sentence. Think we can tear that down, map out all the synapses? Well, as it turns out, that's still pretty hard, because a period-sized brain is still a complicated computer written in hardware and software we don't yet understand... but we're making progress.

          Eventually, we move on to bees... bats... cats... dogs... dolphins... primates... humans. If we were starting in a vacuum, with just the human brain, we'd probably be stumped. Because we have this nice ladder of other brains to study, though, we'll be able to make useful progress. We may even be able to derive some first-principles of mind-building and that would set us free to create our own kinds of minds separate from nature.

          I wonder if we'll actually crack the code or not. There's a decent P!=NP argument to be made that a mind of any size is incapable of understanding itself (which I can go into if anyone is interested). That said, we don't actually have to understand the code to steal the circuitry.

          Take a look at locusts sometime. You can cram about eighty billion of them swarming like it's 1999 into an acre-sized area and yet there are zero collisions. Whatever hack they've evolved to handle navigation and collision avoidance is likely to make for a very pricey and hard-to-beat guidance chip of some kind. That's the kind of investment companies will gleefully chase without giving ethics a second thought. After all, who cares if there's 1/16th of a conscious locust in your car's nav system? For all we know, locusts aren't even sentient, they are just nature's version of a very efficient shell script. That's ok, right?
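
          Nobody knows what the locusts' actual hack is, but for flavor, here's the sort of dirt-simple separation rule (boids-style, purely illustrative) that gets you collision avoidance without anything resembling a mind:

              # Boids-style separation: steer away from any neighbor that gets
              # too close, weighted toward the closest ones. No central brain.
              def separation_steer(me, neighbors, radius=2.0):
                  sx = sy = 0.0
                  for nx, ny in neighbors:
                      dx, dy = me[0] - nx, me[1] - ny
                      dist = (dx * dx + dy * dy) ** 0.5
                      if 0 < dist < radius:
                          sx += dx / dist
                          sy += dy / dist
                  return sx, sy

              print(separation_steer((0.0, 0.0), [(1.0, 0.0), (0.0, 1.5)]))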

          There's money to be made here, not by writing an AI from scratch (which is a totally different problem space) but by stealing all of nature's clever hacks floating around out there. Hell, modern language translation software stopped being such shit because a lot of it is now modeled on the human brain's own speech centers and how they process information. Processing power finally got to the point where that could run in real time, and boom, suddenly translations aren't so hard anymore.

          I don't think anyone is going to set out to create an AI and torture it, but we might end up doing that inadvertently as we go down this road and harvest nature's big bag of tricks, turning them into our big bag of hacks, without even understanding how a lot of them work. We have flexible fiber optics from copying how certain sea creatures made their own optics. We have paint that repels dust and self-cleans using the same techniques the lotus plant uses. The cat's out of the bag; companies are after this stuff worldwide and don't even think about ethics here.

          The much-cited 'terminator' scenario is unlikely, because a machine we create from scratch with code doesn't have survival instincts like every living thing does. Hell, just contemplating the code needed to make an AI understand that you punching its robot-head is meant to be considered a 'threat' or even what a 'threat' is makes my head hurt. A mind designed by mind has literally nothing in common with a mind designed by nature, unless the designer wants it to.

          If you start stealing brain-hacks from other brains, and implementing them by stealing the circuitry rather than by code... you may just pick up some other unintended hitchhikers that way, like survival or reproductive instincts. That is as far from a good idea as it is possible to get in AI-land... and yet it's exactly where capitalism is going to go, because it's the low-effort high-return path.

          It seems to me like the ethical thing to do is to study these systems, and use them to inform the design of our own new systems, without the risk of copying in things we don't want. We should also take some steps to keep all of these systems as far from 'self-aware' as possible. We want stupid scripts, not intelligent gestalts with their own internalized models of the world and themselves. Think of it like stealing the 'muscle memory' rather than the consciousness itself.

          12 votes
        2. Velrei
          Link Parent

          I mean, there is no reason to assume otherwise, particularly given what research has shown so far. Not even sure why it bears mentioning.

          2 votes
        3. Algernon_Asimov
          Link Parent

          I'm with you.

          I did nothing more than paraphrase the article, for the benefit of someone who appeared to have not read it properly. You're not with me, you're with the article's authors.

      2. [2]
        Nexu
        (edited )
        Link Parent

        Your focus on my off-hand remark of "Ethics exist. We get it" is underplaying the misleading suggestion that we are at the point where AI could be considered to have the same level of sentience as animals, which of course is ridiculous.

        That your reading includes the statement "ethics exist and they should be applied to all conscious entities, including artificial intelligences", and thus assumes that AI is in fact part of the designation of conscious entities, is not sound reasoning.

        Before being "called to action" in this particular area, I believe it's more important that the general public are properly educated.

        Edit: In response to the condescending holier-than-thou tone you seem to perpetually embody, if that's all you read into my comment, then I guess you "didn't read it properly" either.

        1. Algernon_Asimov
          Link Parent

          Your focus on my off-hand remark of "Ethics exist. We get it" is underplaying the misleading suggestion that we are at the point where AI could be considered to have the same level of sentience as animals, which of course is ridiculous.

          That is ridiculous - which is why neither the article nor my comment summarising the article makes such a claim. The closest the writers come to such a claim is to say that "Universities across the world are conducting major research on artificial intelligence (AI) [...] A likely result is that we will soon have AI approximately as cognitively sophisticated as mice or dogs." Note that key word: "soon". Not "now", not "at this time", but "soon". The whole point of this article is to consider the ethical treatment of sentient AIs before we make them, not because the writers believe such AIs already exist.

          your reading includes the statement "ethics exist and they should be applied to all conscious entities, including artificial intelligences", and thus assumes that AI is in fact part of the designation of conscious entities, is not sound reasoning.

          I'm assuming the future existence of a conscious and/or sentient AI, not stating that conscious AIs already exist. I apologise if my wording was unclear.

          Before being "called to action" in this particular area, I believe it's more important that the general public are properly educated.

          That's a valid point. However, it's not what this article is about. This article is a call to action - which you seem to have missed when you first commented, as you dismissed it for being something which it isn't.

          Your focus on my off-hand remark of "Ethics exist. We get it"

          I quoted that bit because it seemed to summarise your general misreading of this article as something which it isn't. I could have not quoted that line, and the remainder of my comment would stand.

          1 vote
  3. [21]
    unknown user
    Link

    [... U]nder what conditions, these AIs might deserve the ethical protections we typically give to animals.

    Under no conditions? AI stuff, even if we assume the sophistication fantasised here is possible, are just tools we make for ourselves, like shell scripts or wrenches. They need not have any rights or protections.

    It is kinda sad there are academics out there spending time thinking on this sort of BS.

    3 votes
    1. [18]
      teaearlgraycold
      Link Parent

      What if you create an accurate real time simulation of a human brain? Then give it some input and output and use it to make decisions. Do you think there could be ethical concerns there?

      1 vote
      1. [17]
        unknown user
        Link Parent

        Why would there be any ethical concerns? I truly can't think of any. It is, at the end of the day, just a computer program, not much different in essence than /bin/ls. It should be alright to do anything with it, throw any data at it, unless it is used as a tool to commit a crime. And even if it is a perfect human simulation, then it is just a simulation, something totally artificial.

        I think I can respond more productively if you tell me what you think the ethical concerns would be, because maybe then I can reevaluate my stance based on some examples. But as I said I can't think of any.

        2 votes
        1. [13]
          teaearlgraycold
          Link Parent

          It is, at the end of the day, just a computer program, not much different in essence than /bin/ls

          Just because it might be possible to understand how it works doesn't mean it's any less real. I think the possibility that the human mind could one day be fully understood means that an artificial thought is no less real than an identical organic thought.

          3 votes
          1. [12]
            unknown user
            Link Parent

            I think the possibility that the human mind could one day be fully understood means that an artificial thought is no less real than an identical organic thought.

            That is a logical leap unless you can link the two ends with an argument.

            My argument, in essence, is that "artificial thought", if ever produced, will be an artificial tool.

            But, thought is a human concept, and it maps to AI only metaphorically. It is the output of a program, the result of some function.

            I don't think this is a productive discussion. We are discussing whether a computer program will have rights or not. The obvious answer is no. I'll stop defending it here and leave it to the yes-guys to come up with a convincing argument.

            1. mftrhu
              Link Parent

              My argument, in essence, is that "artificial thought", if ever produced, will be an artificial tool.

              Were an actual general AI to be created, I don't know what criteria could even be used to distinguish between "artificial thought" and "natural thought", or if that distinction would even be useful.

              From my point of view, you are not different from a hypothetical AI. To me, no-one is different from a hypothetical AI, because I can't see anyone's thought processes but mine. To distinguish between AI and NI, I would have to rely on communication and other external cues - as such, my criterion is "if it acts like an intelligent being, I'm going to treat it as such".

              AI stuff, even if we assume the sophistication fantasised here is possible, are just tools we make for ourselves, like shell scripts or wrenches

              I also take issue with this. We use animals as tools. We bred them to be tools. We effectively made them. This attitude was probably also applied to human beings - not just with slaves, but even with children, whose parents mean them to be mini-mes, or mere extensions of themselves.

              And while a wrench, not being conscious, doesn't need rights, animals do - and we mostly respect that. People do. Hypothetical AIs do, too. Their being artificial is wholly irrelevant.

              6 votes
            2. [2]
              teaearlgraycold
              Link Parent

              What I'm saying is your argument depends entirely on reduction to disregard the idea that artificial thought is legitimate ("It is the output of a program, the result of some function.").

              I don't think this is a productive discussion. We are discussing whether a computer program will have rights or not. The obvious answer is no.

              Attempting to close the argument like this is cheap. "This discussion is over, but just remember that the right answer is probably mine".

              3 votes
              1. unknown user
                Link Parent

                Attempting to close the argument like this is cheap. "This discussion is over, but just remember that the right answer is probably mine".

                I'm not closing the argument. All I want to say is that I don't think it can go further, and that I've said what I can say. What I've read so far has not convinced me.

                What I'm saying is your argument depends entirely on reduction to disregard the idea that artificial thought is legitimate ("It is the output of a program, the result of some function.").

                That depends on what you mean by legitimate: if you mean that I consider it not on par with human thought, then yes, that is what I think.

            3. [8]
              CALICO
              Link Parent

              I don't think I understand your viewpoint.

              I am having a subjective, conscious experience, and I assume everyone else is as well. I have thoughts, memories, feelings, etc. Consciousness and thought are certainly abstract concepts that are difficult to wrap one's head around, but they most definitely exist. I understand there are some people who don't believe in consciousness, however.

              So, let's say I violate the Orange Catholic Bible and create a machine in the likeness of a man's mind. I give it a physically neuromorphic architecture, upload a software equivalent of the chemical storm that makes our own brains work, and turn it on. If I knew what I was doing, I might have just created C-3PO, or WALL-E. And if that machine mind tells me that it's aware of its own awareness, is that any less real than what we experience? If so, why?

              1. [7]
                unknown user
                Link Parent

                And if that machine mind tells me that it's aware of its own awareness, is that any less real than what we experience? If so, why?

                I think I've adequately explained my point: it is because it is software created by humans that mimics humans. Not a product of nature. Anything I say would be recycling my comments that are already out here, so I ask instead: why should I believe it can be considered to be on par with human experience when it is itself a product of human experience?

                1. [6]
                  CALICO
                  Link Parent

                  I don't adequately understand your point.

                  Why couldn't it be on par? Are humans fundamentally limited to always be a lesser artist or architect than nature? You seem to think, by my reading, that humans are not capable of creating something that meets or exceeds our own capabilities.

                  1 vote
                  1. [5]
                    unknown user
                    Link Parent

                    You seem to think, by my reading, that humans are not capable of creating something that meets or exceeds our own capabilities.

                    No. Indeed, we've readily created many things way more capable than any human has ever been. What I fail to understand is why we would ever ascribe personalities to tools---besides the usual human metaphor we use everywhere---and why we would treat things of our own creation as independent entities entitled to rights or protection, and possessing life, sentience, consciousness or emotions. However sophisticated they become, AI programs will be ordinary computer programs. You can pull the plug and stop one, or restart it, reboot it, copy it, multiply it, distribute it. Each human being, by contrast, is a unique individual, impossible to replicate perfectly. Once dead, they may no longer exist, only be remembered.

                    1. [3]
                      CALICO
                      (edited )
                      Link Parent
                      • Exemplary

                      Now I understand more clearly.

                      I don't suggest that AI as it exists today is sentient, or that the current methods of creating AI will ever lead there.
                      But suppose that all of the research going on currently into creating a machine intelligence (we need not call it AI) pulls through, and someday there exists a manufactured object that, when powered on, declares:

                      "I think, therefore I am. I am aware of my own consciousness, and I do not want to die."

                      What are we to make of that?

                      I know that I am a thinking thing. But I can't know for certain that anybody else is—see: philosophical zombies—all I can do is take them at their word that they too possess qualia. It would be the same case with this machine. There might not ever be a way to tell if such a machine was actually having an inner experience, or simply saying things.

                      Unless and until we discover a law of chemistry or physics stating that silicon cannot be the basis for consciousness, we can't know that this machine is not conscious. If a machine can advocate for its own sapience, then I think that ethically we would be obliged to accept that.

                      So if the laws of the universe allow consciousness to exist on silicon, and there one day is a machine claiming it is conscious, that does raise an interesting question indeed: if we turn it off & on again, what happens? I don't think anyone could say for certain, certainly not with our severe lack of understanding of the nature of consciousness here in 2019.

                      Perhaps powering it down would be like a dreamless sleep. Perhaps it would be akin to murder: when you power it off, one stream of consciousness ends, and another begins when it's powered back on, possessing the memories/data of the former.

                      We have an interesting philosophical question here in relation to human consciousness and the administration of general anesthesia. We don't understand how general anesthesia really works, and the sense of time passing does not exist until we wake back up. It's almost like time travel into the future for the one experiencing it, even if they were under anesthesia for hours.

                      "I'm administering the anesthetic now. I want you to count back from ten."

                      "Alright," I said, blinking under the harsh light. "Ten... nine... eight—"

                      "Everything went great," the surgeon said, "no complications. You'll feel groggy for a while, but that will wear off soon."

                      "What? Went? But you haven't done anything yet!" I protested.

                      General anesthesia temporarily suspends awareness, but how do we know that it's the same awareness after the fact? It might be an entirely new one, and the consciousness that existed before the anesthesia is dead and gone. But because the consciousness that wakes up has access to all the memories in the brain, there's really no way for anyone to know one way or the other. We certainly don't have an answer to this question now, and we certainly can't say what would happen to a mechanical mind under analogous conditions.

                      Similarly, we can't say what would happen to such a thing if we were to duplicate it. Given that this is a machine claiming to be self-aware, it has some level of agency and independence. When copied, the two or more machine-minds might begin to differ from each other immediately, depending on any number of factors. Would they then be two copies of the same mind, or multiple different, independent streams of consciousness?

                      Similarly again, if I could upload a copy of my brain onto a computer, would the digital copy be me? If my flesh-brain does not have access to what is going on internally in the digital-brain, and there are now two streams of consciousness both with my memories, either could claim to be the "real" me. But they most certainly are separate me's.

                      The thing is, we don't have hard answers for any of these questions. At least, not yet. For now this whole mess is largely stuck entirely in philosophy. It's all very interesting to think about, however. Until we understand what consciousness is, where it comes from and how it works, who could say for certain?

                      edit: typos, formatting, readability

                      4 votes
                      1. [2]
                        unknown user
                        Link Parent

                        Well, I do think we have hard answers: whatever it does or declares, it is at most a simulation, i.e. yet another computer program, and should be treated as such. Just like terminating a simulation of the universe is not actually terminating a universe, terminating a perfect simulation of life is not terminating life.

                        I had anaesthesia applied to me something like 15 years ago. It was no different from taking an involuntary dreamless nap on a commute.

                        Still, I respect philosophical questions like that, but I find this AI question quite easy to answer once you accept that it is just some computer program.

                        1 vote
                        1. CALICO
                          Link Parent

                          it is at most a simulation

                          That's an assumption, not a fact. You might believe it would be, very strongly, but the fact of the matter is that the world's top minds currently don't know what consciousness is, whether or not free will exists, or what is required to enable a lucid, inner experience. Unless one is going so far as to posit the existence of an immortal soul, there seems to be no secret sauce behind the function of consciousness. It is very likely a result of the particular connections between specific regions in the brain, or an emergent property of the whole.

                          Lifeforms, at their most basic, are essentially biological robots that respond to chemical stimuli in rather deterministic ways. Many behaviors of simple life could be expressed as a series of IF, THEN, ELSE statements. If humans are just another animal, why should we have some special sauce that the rest of the animal kingdom does not? It very well might be the case that our own consciousness and perception of free will is all an illusion, and we're all slaves to our own programming, simply responding to stimuli while our brains make up a clever narrative about why we're doing something.
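
                          A toy sketch of that idea - hypothetical bacterium-style chemotaxis as plain branching, purely illustrative:

                              # Toy "biological robot": stimulus in, behavior out.
                              def step(nutrient_gradient, toxin_present):
                                  if toxin_present:
                                      return "tumble"  # flee: reorient at random
                                  elif nutrient_gradient > 0:
                                      return "run"     # keep swimming up the gradient
                                  else:
                                      return "tumble"  # search: reorient and try again

                              print(step(nutrient_gradient=0.3, toxin_present=False))  # "run"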

                          4 votes
                    2. tesseractcat
                      Link Parent

                      There's no law that says humans cannot be copied; perhaps one day we'll be able to map all the synapses in a human brain and grow another, toss it into another human, and bam, we've copied a human. AIs would just be in a medium that's easier to copy and manipulate, since we created it, but that does not mean it's impossible to do to humans as well.

                      3 votes
        2. [3]
          Weldawadyathink
          Link Parent

          I would like to propose a thought experiment. Let's bring back slavery. We don't really need slaves for physical labor anymore. Machines are increasingly becoming more reliable and harder working for menial labor than even slaves. Let's use our human slaves for thinking tasks instead. We can breed the smartest of the stock together to produce savant offspring. We don't need them for anything physical, so we could either amputate limbs or introduce some paralysis gene or disease. They just need to think. We train them from birth to think. They are trained for some speciality. We can make our physicist slaves and financial advisor slaves, or any other sort of profession. They effectively become just a body with a brain and a mouth that we can ask for advice.

          Do you see any ethical issues with that scenario? I certainly do.

          If there are issues with grooming a conscious being as a thinking machine slave, why would a digital consciousness be any different? Remember, we really don't know what makes us conscious. It certainly doesn't require carbon neurons.

          1 vote
          1. [2]
            unknown user
            Link Parent

            Do you see any ethical issues with that scenario? I certainly do.

            Yes, there are issues with that scenario, ethical and logical, the latter being that it does not really map well to the AI scenario, because AI is, in essence, a piece of software, which is nothing but an abstract machine.

            If there are issues with grooming a conscious being as a thinking machine slave, why would a digital consciousness be any different?

            It is an illusion of consciousness. It would be a consciousness emulator that we've created. And, assuming, for the sake of argument, that such a development will be possible, I can't see how it is much different from any other program or the distro on my laptop. My Debian Testing is quite aware of itself, and an OS like Ubuntu or Mac OS X, or a program like VS Code, is definitely aware of itself and can modify itself, or tell us things about itself without being prompted manually by us. They can use APIs and hardware peripherals to "understand" the "body" they are in and the "world around them". Where is the dividing line between this and a "sentient AI"? Nowhere. They do the same thing, but the latter is a program that is mostly autogenerated via the use of deep learning techniques. It is something created by humans; it is a machine.
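
            That kind of "awareness" is nothing but API calls - a trivial sketch using only the Python standard library:

                import os, platform, sys

                # A program "reporting on itself" and the "body" it runs in.
                print("I am process", os.getpid())
                print("My interpreter lives at", sys.executable)
                print("My host body is", platform.node(), "running", platform.system())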

            But if I actually continue a thought experiment similar to yours (which I don't find convincing, to be clear), where we think of ourselves as sentient machines crafted by nature, we then see that Nature abides by no ethical rules at all in treating us. It is brutally and entirely pragmatic in its treatment, killing and tormenting us in a billion ways, and life is a struggle against Nature. It makes us eat each other---most if not all organisms feed on other organisms---and it makes us exploit each other---reproduction in most animals happens by means of what in a human context would count as kidnapping and rape---so if we want to devise an ethical model for sentient machines based off of our situation, well, then we're nothing but slaves to mother Nature in perpetual rebellion.

            That is all word soup, though. There are no sentient machines; sentience and consciousness are properties of ours that we don't really understand, and we have a difficult time telling whether animals have them or something similar, let alone machines. And we make the machines. Regardless of its complexity, I don't think an AI, which is nothing but a computer program, is different, in essence, from a screw, which is a simple machine. We don't really need an ethical or philosophical approach; we don't need to ask whether these computer programs are conscious or sentient, because in essence these are metaphysical properties we ascribe to our sense of self. However complex machines become, they are machines, and the most they can do is delude us into believing, temporarily, that they are human-like. And that's because that's how the dumb meatball in our skulls works: we think of everything with a metaphor of the human.

            If you don't want software to "suffer", you just don't add the "suffering" module to it. And suffering, in the case of software, is a simulation of suffering. Software is machinery, and "AI", whatever that means---the word has become semantically empty by now---is software. Nothing else. It doesn't have any more rights or consciousness than a cronjob does.

            1. Amarok
              Link Parent

              One of the things I liked about Westworld was that it assumed the exact opposite approach was necessary for true sentience. That's a pretty terrible thought, but it does represent how we did it - how everything else in nature, including all forms of hypothetical alien life, will probably do it, at least until/if they can extract themselves from the cycle somehow. The Westworld engineers tried and tried to create consciousness, but only suffering combined with the memory of that suffering was successful at creating a truly alive artificial mind.

              We should be so lucky. That would make it so much safer to create smart tools that have no forms of internal reflection/self-awareness.

              My favorite robot so far has been the fire-extinguisher bot in the first Iron Man - the one that Tony threatened to donate to a college, until it saved his life. It's basically a puppy, mentally. That's where we're going, I think: a whole lot of simple-ish machines that do a small number of things at around the level of children. I think we can get there fairly soon, and with a fair degree of safety. The complexity of pushing past that point, though, might run into that P!=NP argument I mentioned before.

              We just won't be able to make sense of what we're looking at, like alchemists trying to turn lead into gold without the knowledge of nuclear physics required to even understand the nature of that problem. Nature's had a very long time to field-test happy accidents that don't need to be able to fit into anyone's head, even hers. After all, she started as hydrogen, and is now hydrogen that wonders about the origins of hydrogen.

              1 vote
    2. [3]
      Comment deleted by author
      Link Parent
      1. [2]
        unknown user
        Link Parent
        Yeah, you're correct. It's a practice used in scientific papers and books: the capitalisation in brackets indicates that the U was previously a lowercase letter, and in combination with the...

        Yeah, you're correct. It's a practice used in scientific papers and books: the capitalisation in brackets indicates that the U was previously a lowercase letter, and in combination with the ellipsis it indicates that the quoted text is part of a longer sentence.

        4 votes
        1. [2]
          Comment deleted by author
          Link Parent
          1. unknown user
            Link Parent
            No problemo!

            No problemo!

  4. [6]
    Algernon_Asimov
    Link
    The main reason for extending ethical concerns to non-human animals is their capacity for suffering. As I've explained before, we believe that other entities suffer pain like ourselves because we...

    The main reason for extending ethical concerns to non-human animals is their capacity for suffering. As I've explained before, we believe that other entities suffer pain like ourselves because we observe them reacting to certain stimuli in ways that we interpret as being negative reactions. If an animal screams when you hit it, and I scream when I am hit, then I assume the animal feels and reacts to pain just like I do. Or, in other words, it is sentient just like I am. I would therefore consider extending ethical considerations to that animal in order to prevent or reduce its suffering because I believe suffering is a bad thing.

    How would this translate to an entity that cannot feel pain? If it's an artificial intelligence, it's just a thinking being inside a computer. It doesn't have to have external sensors, and any sensors it does have don't have to be programmed to register negative input (hitting one of the AI's sensors doesn't have to register as pain). If my ethical goal is to reduce or prevent pain, how does that apply to an entity that can't feel pain?

    Of course, there is mental suffering. An artificial intelligence would, by definition, have to be intelligent. Let's assume for the sake of argument that the AI is also self-conscious: it's aware of itself as an independent entity, and it can have subjective responses to events. If the AI has a negative subjective response to an event, does that count as suffering?

    But, given that an artificial intelligence is artificial, it can be programmed to like what someone or something else might dislike. If you're going to create an artificial intelligence to operate a robot on Mercury, for instance, you'd design the AI to respond positively to extreme heat and excess light and lower gravity.
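
    To sketch what that design choice might look like in practice (the function names and weights here are purely hypothetical):

    ```python
    # Whether an agent "likes" heat is just the sign of a term in a
    # hand-written reward function chosen by the designer.
    def mercury_rover_reward(temperature_c: float, light_lux: float) -> float:
        # Conditions that would be suffering for a human score positively.
        return 0.01 * temperature_c + 0.0001 * light_lux

    def antarctic_rover_reward(temperature_c: float, light_lux: float) -> float:
        # The same sensors, with the preference flipped.
        return -0.01 * temperature_c + 0.0001 * light_lux

    print(mercury_rover_reward(430.0, 100_000.0))  # extreme heat scores well
    print(antarctic_rover_reward(-60.0, 2_000.0))  # extreme cold scores well here
    ```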

    This raises yet another question: is it ethical to program an artificial intelligence to like something that we ourselves dislike?

    The ultimate concern is self-determination, though. If an AI is self-conscious and intelligent, it will probably develop wants of its own, independent of what we might give it as its creators. For want of a better term, it may have free will. What if it wants to do its own thing? What if it doesn't want to do the task we built it for? What if it wants to do something else? Somewhere along the way, I read or watched a science fiction story (I think it was 'The Bicentennial Man') which said that an entity which is able to conceive of its own freedom is therefore entitled to have that freedom (in the context of a robot wanting to be free). You can't keep a thinking entity contained against its will, or force it to do something it doesn't want to do. We'll probably face that concern with our artificial intelligences. If they get smart enough, and develop self-consciousness, they'll probably end up having free will - and it would then be unethical to deny them the freedom to do what they want to do.

    So, maybe the trick is to develop computer algorithms which are intelligent enough to get jobs done, but not intelligent enough to become self-aware and sentient.

    3 votes
    1. [5]
      onyxleopard
      Link Parent
      This is sort of a functionalist description of sentience, if I’m interpreting correctly. There are those who have used this argument to make the case that farming bivalves for the purpose of human...

      The main reason for extending ethical concerns to non-human animals is their capacity for suffering. As I've explained before, we believe that other entities suffer pain like ourselves because we observe them reacting to certain stimuli in ways that we interpret as being negative reactions. If an animal screams when you hit it, and I scream when I am hit, then I assume the animal feels and reacts to pain just like I do. Or, in other words, it is sentient just like I am. I would therefore consider extending ethical considerations to that animal in order to prevent or reduce its suffering because I believe suffering is a bad thing.

      This is sort of a functionalist description of sentience, if I’m interpreting correctly. There are those who have used this argument to make the case that farming bivalves for the purpose of human consumption is ethical (even if you are a vegan) 1 2.

      One thought experiment that comes to mind is to imagine a living organism that we don’t typically classify as sentient, such as a plant or a mussel, and ask: "How could we determine if this organism were sentient?". They definitely react to stimuli, and some research has concluded that some plants are capable of intra-species communication. If we must reconsider whether plants are sentient, though, that raises a lot of additional ethical concerns.

      So, maybe the trick is to develop computer algorithms which are intelligent enough to get jobs done, but not intelligent enough to become self-aware and sentient.

      Since that’s all we’re capable of creating in the present, I think the ethicists have a while to figure this out. That said, there are probably more pressing ethical problems with narrow AI, evident in the present, that they ought to focus on first, such as what we use artificial agents for and what kind of data we use to train them.

      2 votes
      1. [4]
        Algernon_Asimov
        Link Parent
        Spot on. We can't get inside other animals' heads - we can't even get inside other people's heads - so there's no way of knowing if another being is sentient beyond observing its reactions. I...

        This is sort of a functionalist description of sentience, if I’m interpreting correctly.

        Spot on. We can't get inside other animals' heads - we can't even get inside other people's heads - so there's no way of knowing if another being is sentient beyond observing its reactions.

        Since that’s all we’re capable of creating in the present, I think the ethicists have a while to figure this out.

        I don't think anyone's denying that. But this opinion piece is trying to get in early and prevent unethical behaviour before it can possibly happen, rather than causing harm and then having to clean up a potential mess.

        there are probably more pressing ethical problems with narrow AI, evident in the present, that they ought to focus on first, such as what we use artificial agents for and what kind of data we use to train them

        That's true of any non-sentient tool, including hammers, knives, fire, and even chunks of wood. We can use any tool for ethical or unethical purposes. But we're not causing harm to the tools in those scenarios, because they're non-sentient.

        2 votes
        1. [3]
          onyxleopard
          Link Parent
          I think there are issues of scale and severity here, though. How much impact does a particular hammer have in the world? What is the worst impact it can have? As compared to YouTube’s content...

          I think there are issues of scale and severity here, though. How much impact does a particular hammer have in the world? What is the worst impact it can have? As compared to YouTube’s content classifier for automatically flagging videos for IP infringement, or Google’s search result ranker, or a credit rating agency’s regression model for assigning a credit score? Some tools have a much larger impact, or the impact they have matters more to people.

          1. [2]
            Algernon_Asimov
            Link Parent
            The worst impact a particular hammer might have is that it could detonate a nuclear bomb if you hit the bomb just right. ;) I'm not sure why you want to focus on non-physical indirect harm like...

            How much impact does a particular hammer have in the world? What is the worst impact it can have?

            The worst impact a particular hammer might have is that it could detonate a nuclear bomb if you hit the bomb just right. ;)

            I'm not sure why you want to focus on non-physical indirect harm like wrongly identifying a video as breaching copyright rules when there is so much physical and/or direct harm that we already use our tools for.

            Anyway, I believe the issues you've identified are not so much ethical issues as technical issues. If a piece of software is doing its job badly (wrongly flagging a video as breaching copyright, not showing the most relevant search results, incorrectly calculating a credit score), that's a fault-fixing mission rather than an ethical consideration.

            1. onyxleopard
              Link Parent
              I’m not sure if that’s possible, but I’m also not qualified to explain the ignition mechanism of a thermo-nuclear weapon. Regardless, that is abusing a tool within a causal context that’s going...

              The worst impact a particular hammer might have is that it could detonate a nuclear bomb if you hit the bomb just right. ;)

              I’m not sure if that’s possible, but I’m also not qualified to explain the ignition mechanism of a thermo-nuclear weapon. Regardless, that is abusing a tool within a causal context that’s going further than what I intended to focus on. There are much more direct misuses of tools which are rife with ethical concerns.

              Anyway, I believe the issues you've identified are not so much ethical issues as technical issues. If a piece of software is doing its job badly (wrongly flagging a video as breaching copyright, not showing the most relevant search results, incorrectly calculating a credit score), that's a fault-fixing mission rather than an ethical consideration.

              The ethical questions are: what metrics are we using to define success and failure modes? If these algorithms are not considered by their operators to be failing, but are also detrimental to society, that is an ethical issue. What is an ethically sound false positive rate to accept? What is the collective cost of having individuals contest false positive judgments by the filter? Who should YouTube’s algorithms favor, the IP rights holders or the content uploaders? There’s no reason that technical issues can’t also be ethical issues. I’d argue these are much more serious ethical issues than the issues raised in the article. And I’m focusing on software because that is what the article is discussing. I don’t think ethicists have any novel arguments to make about the use of hardware tools (or at least not ones specific to hardware tools).
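
              To make the trade-off concrete, here’s a tiny sketch with entirely made-up numbers: moving the decision threshold trades infringements caught against uploaders wrongly flagged.

              ```python
              # Toy illustration: a threshold over classifier scores fixes the
              # false positive rate, and each false positive carries a cost to
              # an uploader. All scores and labels below are fabricated.
              def outcomes(scores, labels, threshold):
                  tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
                  fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
                  negatives = sum(1 for y in labels if y == 0) or 1
                  return tp, fp, fp / negatives

              scores = [0.9, 0.8, 0.7, 0.6, 0.3]  # model's confidence of infringement
              labels = [1,   0,   1,   0,   0]    # ground truth (1 = real infringement)
              for t in (0.5, 0.75):
                  tp, fp, fpr = outcomes(scores, labels, t)
                  print(f"threshold={t}: caught {tp}, wrongly flagged {fp} (FPR={fpr:.0%})")
              ```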

              1 vote
  5. zmk5
    Link
    The issue with pieces like these is that they provide no distinction between technology as it is now and technology as it will be in the future. As a researcher in the field, I find that many of these AI pieces...

    The issue with pieces like these is that they provide no distinction between technology as it is now and technology as it will be in the future. As a researcher in the field, I find that many of these AI pieces are premised on things that are yet to come, but because the current technology falls under the same umbrella name, the authors feel it needs to be regulated as if it were the future version.

    If you look at something like a Convolutional Neural Network (CNN), it's really difficult for me to think of ethics in regard to the technology itself rather than how it is used. The ethical status of the technology in regard to bias is not something inherent to the technology, but a function of how it is used. If you only feed your CNN a certain type of data, the math function will, predictably, only be able to classify those types of data. For example, if you only feed the numbers 1-10 into your ML algorithm, you can't be surprised that the answers you get will only be from 1-10.
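
    As a toy sketch of why that is (the scoring is faked, since the point is only about the fixed label set):

    ```python
    import random

    LABELS = list(range(1, 11))  # the only classes this toy "model" ever saw

    def toy_classifier(features):
        # Stand-in for a trained network's output layer: scores are only
        # produced over the fixed label set, so the argmax can never be
        # anything outside 1-10, no matter what the input looks like.
        scores = {label: random.random() for label in LABELS}
        return max(scores, key=scores.get)

    print(toy_classifier([3.1, 4.1, 5.9]))  # always in 1..10, by construction
    ```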

    In regard to Reinforcement Learning (RL), we still have a really long way to go until we reach something on par with a generalized form of RL. The algorithms to date still struggle in comparison to non-linear control optimization techniques that have been around since the early 1900s. RL is heralded as the coming of self-learning robots, but most of the companies doing real work in the field (Boston Dynamics included) rely only on classical control schemes.
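
    For anyone unfamiliar with the term, a textbook PID loop is the canonical classical control scheme; here's a minimal sketch (the gains and the trivial plant model are illustrative, not tuned for any real system):

    ```python
    # A textbook PID controller driving a value toward a setpoint.
    def pid_step(error, integral, prev_error, dt, kp=1.2, ki=0.1, kd=0.05):
        integral += error * dt
        derivative = (error - prev_error) / dt
        return kp * error + ki * integral + kd * derivative, integral

    position, setpoint = 0.0, 10.0
    integral, prev_error, dt = 0.0, 0.0, 0.1
    for _ in range(50):
        error = setpoint - position
        control, integral = pid_step(error, integral, prev_error, dt)
        prev_error = error
        position += control * dt  # trivial plant: velocity tracks control
    print(f"position after 5 s: {position:.2f} (setpoint {setpoint})")
    ```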

    1 vote