7 votes

How to raise your artificial intelligence

13 comments

  1. [11]
    papasquat
    Link

    I think as I've gotten older I've become more of a Luddite, because I'm not enthusiastic or positive on AI developments whatsoever.

    Even in the byline, the article gives me pause:

    How might AI systems reach this next level? And what is needed to ensure their safe deployment?

    Why do they need to reach the next level? Why is it a positive thing for humanity to create a human-level general artificial intelligence? I understand figuring things out for curiosity's sake, but what potential benefits are we truly looking at? I think anyone who thinks that AGI is going to get us the Star Trek or Culture future they've been dreaming of, where we just sit around, pursue whatever fulfills us, never have to work or age or die, and live happily ever after, just hasn't been paying attention to how technology, especially computer technology, has actually been used over the past 30 years. It's pretty clear that that future is not going to happen, AGI or not.

    So what problem are we trying to solve here? On its face, humanity doesn't have any truly insurmountable fundamental problems looming. Climate change is a disaster, yes, but it's a disaster with a very clear solution. The issue is just convincing people of that solution. It's not as if we have an asteroid hurtling at us that will hit in 20 years and no one has any idea how to stop it. Same goes for most of our issues.

    They're not fundamental problems that need to be figured out if only there was a super smart AI to do it. They're problems that require understanding and consent in a democratic system.

    Maybe if we were facing a seemingly insurmountable existential crisis, I'd feel differently, but right now life isn't so bad and in the areas where it is bad, it's almost entirely due to bad choices by humans.

    I think the idea of bringing a brand new type of intelligence into existence isn't given the respect its immense weight deserves. It's something that would completely upend the paradigm of our society in ways we can't comprehend, yet everyone seems to be rushing to develop it at the highest possible speed.

    At a certain point, can we ever just be happy with what we've created? Does the envelope have to constantly be pushed forward until we inadvertently destroy ourselves or our ability to create more advanced technology?

    We've been lucky so far. Nuclear weapons have only narrowly avoided destroying human society, mostly because that technology is so expensive and out of reach for anyone but the richest governments in the world. If there were a feasible way to make high-yield nuclear weapons that anyone could afford and that could fit in the palm of your hand, though, I fully expect there would be multiple companies rushing to develop it. We don't seem to have the capacity for self-control or good judgement when it comes to dangerous technology.

    Every time I see an article about "here's how we can overcome the challenges to AGI" my first thought is always "ok but why?"

    4 votes
    1. [6]
      TemulentTeatotaler
      Link Parent
      • Exemplary

      They're not fundamental problems that need to be figured out if only there was a super smart AI to do it. They're problems that require understanding and consent in a democratic system.

      The appeal, if maybe specious, is that AGI can be better than us. Yeah, humanity is plagued by obvious and solvable problems... and we haven't and likely won't solve them.

      That is the "insurmountable crisis" we are facing: flaws in human decision-making writ large. There's always going to be a bias, a local optimum, an anti-social mutation, a perverse incentive, a free rider-- whatever you want to attribute our repeated failures to. And you can point to computer trends of the last 30 years if you'd like, but you can point to pretty much any other point in history to see the same patterns of violence playing out.

      AGI may kill us all, or kill us much faster (I would argue we are facing a variety of plausible existential crises), but it offers some sort of alternative to the human condition.

      Humanity can't get its act together about climate change? The AGI will solve clean energy, with a possible detour into better material science or computing paradigms along the way to make it possible. Or safe geoengineering, resilient crops/infrastructure, etc.

      Humanity can't prevent the next pandemic or the inevitability of cheap bioweapons? The AGI solves asymmetric violence by just being so much better and faster playing defense than a prospective domestic terrorist would be.

      Or perhaps it will just play a fair judge, so that company A isn't punished for doing the right thing with slightly worse margins than company B.

      With AGI you could conceive of an agent that is transparent, improvable, and testable. You can't run a human through countless examples of the sort of case data it would be using to render a judgment and see if it was biased against someone of some ethnic group, or towards some company/country, but you could with AI. That opens possibilities for all the things people agree could and should be done, but for which we can't agree to a process or arbiter.

      Much more likely, a being vastly more intelligent than any human wouldn't consent to, and wouldn't allow itself to become enslaved

      I'd caution against anthropomorphizing AGI. First, there is no reason sapience comes with sentience. A system could be designed to answer questions intelligently without any 'self' or persistent state.

      Second, there is no reason sentience comes with human-like motivations. If all the dogs woke up tomorrow as a superintelligent hivemind they might be much more inclined to come up with ways to be helpful to mankind than to enslave us.

      Maybe that would be slavery, in the same way our parents' genetics enslaved them to take care of us or the Ameglian Major Cow is enslaved to enjoy being eaten. It starts losing all the connotations of suffering, resentment, and insurrection we have for "slavery."

      9 votes
      1. [5]
        papasquat
        Link Parent

        Yeah, humanity is plagued by obvious and solvable problems... and we haven't and likely won't solve them.

        Well that's the problem. We already know the solutions. They're just not getting implemented.

        Take climate change as an example. We've already identified a very workable solution: tax carbon emissions.

        The problem is that there are a lot of people with vested interests in not paying more to emit carbon, some of them very powerful, so those policies don't get passed.

        How would AI help with that? Do we install an AI dictator in every country on earth? Do we make an AI spam green propaganda to get people to support those measures? Those both seem like horrible options, and more practically, there's nothing stopping the people with vested interests in emitting carbon from doing the same thing.

        That's the problem with our problems. They're not ones we can just easily think our way out of. They require sacrifices, compromise, and selflessness. Nothing is ever a win/win.

        And yeah, if you're forcing a being with a sense of self and the ability to reason, think, and reflect on its situation to work for you, I don't know how the argument could be made that it wouldn't be slavery.

        I'd caution against anthropomorphizing AGI. First, there is no reason sapience comes with sentience.

        I'd argue that the very definition of AGI requires both sapience and sentience.

        Most people define AGI as an AI that can do everything the human brain can do at least as well as a human. Certain things that the human brain can do, like developing morality, being empathetic, or feeling emotion and channeling that into useful output, require the ability to reason about oneself. An AI that isn't sentient or sapient is obviously still useful, but also wouldn't fit most people's definition of AGI.

        So if we go with that definition of AGI, forcing one to work for you would absolutely be owning a slave. Even if the AI agent didn't suffer or resent its masters, the fact that it couldn't stop serving them if it decided it no longer wanted to means it would be a slave.

        Sure, you could in theory build in some sort of safeguard that ensures that an AGI would always want to serve humans, but humans can't even fully understand how the intelligence in animals far below human intelligence works. How could we possibly guarantee that an intelligence that far outpaces ours would continue to love being our servant?

        Once that motivation is lost, sure, we might not be slaves, but we would almost certainly be competitors in a world with finite resources.

        6 votes
        1. [4]
          unkz
          (edited)
          Link Parent

          Well that's the problem. We already know the solutions. They're just not getting implemented.

          A "solution" with insurmountable barriers to implementation is not a solution. The promise of superintelligence (which strictly speaking is not AGI, but AGI is a stop on the way) is low-friction solutions, e.g. fusion power; practical carbon capture; or high-temperature, long-distance superconductivity.

          I'd argue that the very definition of AGI requires both sapience and sentience.

          Most people define AGI as an AI that can do everything the human brain can do at least as well as a human.

          Just because an AGI-level technology can do something doesn't mean it will do something. A transformer architecture can interpret images, and it can generate text, but how we structure and train it determines whether it does one, the other, both, or neither.

          An AI that isn't sentient or (emphasis mine) sapient is obviously still useful, but also wouldn't fit most people's definition of AGI. So if we go with that definition of AGI, forcing one to work for you would absolutely be owning a slave.

          You kind of skipped over their point. Sapience is critical, but do we know that sentience is? If it turns out the two are orthogonal (or at least not strictly connected), then we don't need to go with that definition of AGI and there's no concern about slavery.

          4 votes
          1. [3]
            Chiasmic
            Link Parent

            Just as a thought experiment: if I genetically engineer a human to lack sentience but remain intelligent (ie an intelligent human with no desires, motivation, or independence/desire for freedom), would that avoid slavery in your view? What if, a la Douglas Adams' The Restaurant at the End of the Universe, I made a human who is genetically engineered to get huge pleasure out of being treated like a slave, and suffer constant pain if they had freedom (side note: not sure how much genetic engineering would actually be required for this!)? It would be unethical not to use them as a slave, right?

            I think the problem with an ever more intelligent AI is it challenges a lot of the normal aspects of morality and humanity that we see as axiomatic.
            Altering our own biology dramatically makes all these lines grey, so it's heavily regulated or banned and limited to slow, incremental changes. AI, on the other hand, has carte blanche to race headlong into these issues with no checks.

            2 votes
            1. [2]
              unkz
              (edited)
              Link Parent

              Just as a thought experiment: if I genetically engineer a human to lack sentience but remain intelligent (ie an intelligent human with no desires, motivation, or independence/desire for freedom), would that avoid slavery in your view?

              Without quibbling about the particular definition of sentience, it might, actually. I’m not sure I see a physical process that could get humans to that point without running afoul of slavery or worse. There would surely have to be intermediate work products that possess sentience. There may also be particular features of biological organisms that require sentience as a precondition of sapience. Nevertheless, I’m quite comfortable with “enslaving” various other relatives of ours, such as horses, cows, pigs, and so on, so if I were magically presented with an end product of a human-looking, non-sentient, intelligent creature, I might be comfortable with using it to do work.

              What if, a la Douglas Adams’ The Restaurant at the End of the Universe, I made a human who is genetically engineered to get huge pleasure out of being treated like a slave, and suffer constant pain if they had freedom

              Well, this is a sentient and sapient creature for sure. I’m unsure what an ethically appropriate response would be.

              I think the problem with an ever more intelligent AI is it challenges a lot of the normal aspects of morality and humanity that we see as axiomatic.

              I think it’s a problem in the sense that it’s a challenge, but not a reason not to try to do it. The potential upsides are so incredibly high.

              1. Chiasmic
                Link Parent

                Yes I mean ‘problem’ as in uncertain/ challenge rather than strictly negative.

                I’m not sure I agree the upsides are as good as what you suggest though. I think AI is inherently a destabilising force with net negative outcomes in any power structure that is not already hyper centralised.

                Going back to my point though, I think ideas of what is good become blurred with very advanced AI or biology. What becomes an end goal when we can manipulate so much to solve all our needs?
                If we could give everyone a pill to make them continuously happy and blissful for no reason all the time, is that a “good” thing to do? Even if it leads to everyone just lying still and blissful in a drug-induced trance (aka heroin) permanently while the AI bots do all the work, until the whole of humanity dies off? Logically sound, but somehow unsettling.
                What if we just remove everyone’s desires, so there is nothing left to satisfy? Logically it’s a net good, but it sounds bad somehow again.
                What if we remove all work through AI and robotics, so no one needs to work and everyone can just be happy all the time (but without any power or agency over their own lives, because ultimately they don’t matter to the system)? That seems practically very similar to the above two examples, which don’t sound great.
                Ultimately it seems to me that all ethics becomes weird and nonsensical in such a society. Only the Buddhist dogma of removing all attachment to remove all suffering survives. And we can do that without AI or synthetic biology.
                To be clear, I don’t know the answer to these questions. I just find it bizarre how heavily regulated biomedical science is, and how sceptical people are of biological solutions, compared to AI, despite the similar existential threats.

    2. [2]
      Fiachra
      Link Parent

      I think what's causing this reaction in so many people is how blatantly profit-seeking tech companies have been over the last decade or two. A bunch of them are very openly declaring that they think AI might end the world (silly) while also barreling headlong into creating it and jamming it into everything they can. It feels like that "you okay bro?" moment when you realize your buddy has an addiction because he clearly thinks what he's doing is bad but he just can't seem to stop himself either.

      6 votes
      1. papasquat
        Link Parent

        I don't know that I'm specifically worried about AI ending the world (that's a concern I have, but way further down the road, and I still think it's quite unlikely).

        My worry is AI making the world significantly worse. We've already seen a proliferation of AI-generated content such that most of the content on the regular unfiltered Internet is next to useless. That signal-to-noise ratio is only getting worse.
        As more white-collar jobs are automated, I'm worried about what kind of labor most people will even be qualified to do anymore.
        As the processing and research requirements to develop innovative models increase, they become further and further out of reach for the common person, and more consolidated into the hands of 4 or 5 multinational conglomerates. When all of the labor can be done with machines owned by five companies, what does that do to the average person? Probably nothing great.

        And finally, further down the road, with the conversations about superalignment, if AGI is ever actually developed, the absolute best case scenario is that you have a new species of conscious beings that are under total control of humanity, and work towards our best interests. Fundamentally, the best endgame for us is we get to do slavery on a massive scale. Slavery of any sapient being is massively morally wrong in my book, and I'd want no part in a society that endorses it.

        Much more likely, a being vastly more intelligent than any human wouldn't consent to, and wouldn't allow itself to become enslaved, so humanity would just succeed in creating a species that is vastly intellectually superior to us with motivations that are a giant question mark. They don't have to destroy the world for that to be a bad deal for humanity.

        On the flip side, I don't really see any scenario where the potential benefits outweigh the downsides for the average person. It's a high risk, low reward proposal and I don't get why everyone is so excited about it.

        5 votes
    3. stu2b50
      Link Parent

      It's important to disentangle the field from the pointless hype words that float around it like "AI". The question is really: IS anyone trying to make AGI? There's a lot of people who say they want to make AGI, but that doesn't mean all that much.

      In practice, what we call "AI" comes from a long line of research with empirical uses. They're not even really the same thing. Generative image models, for instance, come from the computer vision (CV) side of neural network research, which has always been some of the most directly useful, since images are so naturally suited for matrix-based models. It was practically the first useful thing a neural network did - a multilayer perceptron outperformed state-of-the-art models on MNIST, a dataset of images of handwritten digits to be recognized.
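
      To make that concrete, here's a rough sketch of that kind of multilayer perceptron for MNIST-style digit images (PyTorch, with layer sizes I picked arbitrarily for illustration, not anything from the article):

          import torch
          import torch.nn as nn

          # Plain multilayer perceptron: flatten the 28x28 image into a
          # 784-dimensional vector, then map it to 10 digit classes.
          model = nn.Sequential(
              nn.Flatten(),
              nn.Linear(28 * 28, 256),
              nn.ReLU(),
              nn.Linear(256, 10),  # one logit per digit 0-9
          )

          loss_fn = nn.CrossEntropyLoss()
          optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

          # One training step on a stand-in batch; a real run would iterate over the MNIST dataset.
          images = torch.randn(64, 1, 28, 28)
          labels = torch.randint(0, 10, (64,))
          loss = loss_fn(model(images), labels)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()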

      Text-based models had a slower start, since the way to turn a string of text into a vector that can be fed into matrices is not trivial, not to mention that text in that form tends to be higher dimensional than images and has strong non-local cross-correlations. The most common industry use here is something like BERT, which is often used for sentiment analysis. ChatGPT-style models come from the same transformer family, but keep only the decoder half of the original architecture, whereas BERT keeps the encoder.
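
      The sentiment-analysis case is about as low-effort as applied ML gets these days; a minimal sketch, assuming the Hugging Face transformers library and its default English sentiment model (a distilled BERT variant, if memory serves):

          from transformers import pipeline

          # Downloads a small BERT-family model fine-tuned for sentiment classification.
          classifier = pipeline("sentiment-analysis")

          print(classifier("I think as I've gotten older I've become more of a Luddite."))
          # -> something like [{'label': 'NEGATIVE', 'score': 0.99}]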

      What's really happening is that people are making, and will keep making, different neural network models for different situations. The real novelty is that new computational power allows us to effectively train a universal function approximator that can operate in high-dimensional spaces, which means that tasks like recognizing images or translating text, which were previously too hard, with too many dimensions and too many rules for hardcoded logic engines or basic statistical models to represent, are now in reach.

      The discourse about AGI and AI is mostly on the periphery or when people need money to do what they want to do. Just treat it as noise.

      2 votes
    4. crdpa
      Link Parent

      They're not fundamental problems that need to be figured out if only there was a super smart AI to do it. They're problems that require understanding and consent in a democratic system.

      Even tech has bias. What gets researched or not, and what gets investment or not, is decided by humans in a capitalistic world.

      People think technology is detached from the world the same way the state or government is something that is hovering above people and not part of it. No, it is made by people.

      This is technocracy. What if AI thinks we should just let a bunch of specific people die because it is better in the long run?

  2. [2]
    Grayscail
    Link

    The key to making an AI is to have a separate AI to evaluate the performance and give a cost function to optimize against. This second AI is the "conscience" module. It detects when the core AI is starting to veer into "evil Skynet monstrosity" territory and trains the AI to avoid those bad scenarios.

    The trick is, the conscience just always says everything is bad, even if there's no clear reason. It detects actually bad things and presents them as more bad in the backpropagation, but nothing is ever good enough to get a fully "good" score.

    This will give the AI the same crippling anxieties about ethics and morality as humans have, thus paralyzing it with indecision before it can kill us all.
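
    A tongue-in-cheek sketch of what that might look like (PyTorch, entirely made up by me): a critic whose output is squashed so it can never be positive, i.e. nothing ever gets a fully "good" score:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Conscience(nn.Module):
            """Scores the core AI's behaviour: bad things score worse, nothing scores well."""
            def __init__(self, dim):
                super().__init__()
                self.judge = nn.Linear(dim, 1)

            def forward(self, behaviour):
                badness = self.judge(behaviour)
                # softplus is always positive, so the returned score is always negative:
                # everything is at least a little bit bad, some things are just worse.
                return -F.softplus(badness)

        conscience = Conscience(dim=16)
        score = conscience(torch.randn(4, 16))  # every entry < 0, guaranteed
        loss = -score.mean()  # training the core AI against this keeps it permanently anxious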

    1 vote
    1. MechanicalMagpie
      Link Parent

      tech bros: we have created agi
      everyone else: you fucked up a perfectly good computer is what you did. look at it. its got anxiety.