13 votes

Topic deleted by author

33 comments

  1. [13]
    skybrian
    Link

    There is some kind of secret sauce, or perhaps multiple breakthroughs, that researchers haven’t figured out yet; their absence is what prevents artificial general intelligence from happening now. But there are also a lot of well-funded, smart people researching machine learning. Many researchers aren’t very creative, but some of them are, and they don’t seem to be anywhere close to running out of ideas. So, it seems like it could happen within years, or perhaps not for decades, and it’s inherently unpredictable, because how do you predict scientific progress? We don’t know of any physical reason why it can’t happen.

    And once some machine learning researchers figure out the tricks, there’s no particular reason that they won’t zoom past human levels, like happens in any other field that gets solved. (Consider how it happened for playing Go.)

    So yes, it seems well worth worrying about, and yes there are doomsday scenarios. But the question is how could we productively worry about it? There are folks studying AI safety, but it’s unclear how much progress they can make when we don’t know how it will work.

    It’s also unclear whether people want AI safety, or whether they will work for the other side. Most likely the danger will at first be some combination of human and computer cooperation. Hackers could deploy AI malware and a malevolent AI could pay people in cryptocurrency to do its bidding. You might look at bitcoin mining as proof of concept. People, teams, and entire companies are already doing the bidding of unintelligent algorithms that get their power from human greed. People will do this willingly, even enthusiastically. They will have a multilevel marketing scheme, a mascot, and intriguing rhetoric.

    So one thing we could do is improve computer security against human attackers as a proxy for what it means to guard against intelligent attacks. Maybe guarding against people stealing computing power to mine cryptocurrency is good preparation? Worrying about the Chinese or the Russians seems like a good start?

    But despite the best efforts of smart security people in major tech organizations, we don’t seem to be very good at computer security and the attacks get worse every year. I guess ransomware is an incentive of sorts to take it seriously, but it’s pretty clear that human society isn’t very good at preparing for tail risks.

    14 votes
    1. [4]
      mrbig
      Link Parent

      And once some machine learning researchers figure out the tricks, there’s no particular reason that they won’t zoom past human levels, like happens in any other field that gets solved

      Shouldn't we also consider the likelihood of integrating processors and even AI into our brains (not to mention genetic manipulation) in a way that maintains the relevance of human (enhanced) intelligence?

      6 votes
      1. TemulentTeatotaler
        Link Parent

        There are a lot of possible scenarios, and that is certainly one of them. Superintelligence - Paths, Dangers, Strategies seemed to be a decent exploration of the topic.

        Natural Born Cyborgs argued that we already do that, that what distinguishes humanity from other life is the readiness with which we incorporate others or our environment into our cognition. Pen and paper enables you to do math, and Google is an AI extension of our thoughts with a somewhat clumsy interface.

        7 votes
      2. [2]
        skybrian
        Link Parent

        Medical research on humans is necessarily slower and more cautious, so I would bet against deep integration and in favor of the computer interfaces that we already use for hours a day. Or perhaps VR becomes more useful?

        5 votes
        1. mrbig
          Link Parent

          Sure, but we don't know exactly how fast we will develop a superpowered AI either.

          Yeah VR can certainly be a thing sooner.

          2 votes
    2. [4]
      onyxleopard
      Link Parent

      how do you predict scientific progress?

      I think one of the best predictors of scientific progress is when we have solid metrics.

      Wanna solve space travel? You better have good measuring devices to measure the forces and speeds and timings.

      Wanna solve bioengineering? You better have good measuring devices to measure the yields of your bugs or the levels of contamination in your media.

      Wanna solve radio communications? You better have good measuring devices to measure your signal/noise ratio, measure your signal power, measure interference, etc.

      Wanna solve {problem X}? You better have good measuring devices to measure {property|process Y} that is necessary for {problem X} to be solved.

      And "solve" here is scalar—some of the above problems were "solved" in the 70s, but we've still made significant progress on them since.

      The thing I see as a giant gap in the field of AI is any sort of holistic metric of intelligence that would allow the sort of hill-climbing you see in other fields. I know there's lots of hill-climbing on specific tasks, but until someone can show me an automated system that can learn arbitrary tasks as well as humans do, it all seems very distant indeed. Without an ability to objectively say whether one proposed system is more intelligent than another (or even to measure its intelligence relative to a human's), we can't make even mildly educated guesses about how far we are from generally intelligent machines. In the case of measuring intelligence, it may be that some ideas that already exist will get us those metrics, but it may be that we're nowhere close. Like, maybe in theory, deep neural networks are generally intelligent? But, show me a neural network implementation that is autonomous and can learn on its own. Sure, there are proposals like AutoML or AutoAI, but they're still dependent on humans deciding what tasks to learn.
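
      As a rough illustration of what that metric-driven hill-climbing looks like, here is a minimal Python sketch. It is purely illustrative: benchmark_score is a made-up stand-in for whatever task metric you would actually use (accuracy, Elo, and so on), and defining that metric well is exactly the hard part being pointed at.

      import random

      def benchmark_score(params):
          # Stand-in for a task-specific metric. In real research this
          # placeholder is the part we don't know how to define for
          # "general intelligence".
          x, y = params
          return -(x - 3.0) ** 2 - (y + 1.0) ** 2  # toy peak at (3, -1)

      def hill_climb(steps=10000, step_size=0.1):
          current = [random.uniform(-10, 10), random.uniform(-10, 10)]
          best = benchmark_score(current)
          for _ in range(steps):
              candidate = [p + random.gauss(0, step_size) for p in current]
              score = benchmark_score(candidate)
              if score > best:  # keep a change only if the metric improves
                  current, best = candidate, score
          return current, best

      params, score = hill_climb()
      print(params, score)

      The loop only makes progress toward the thing you care about if benchmark_score actually measures it, which is why a field without a credible metric can't hill-climb its way to the goal.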

      And, it's not necessarily bad that people are thinking about alignment or other potential safety concerns now, but I think it's a bit premature when commercial AI of today is totally dependent on large teams of humans to babysit systems to keep them up and running. There are certainly military applications like weaponized autonomous drones, which pose higher risks. And, maybe some brilliant/lucky hobbyist will solve things in their garage. It doesn't seem impossible. It just seems unlikely in the present.

      6 votes
      1. skybrian
        Link Parent

        Yes, this is why I think there's something missing, perhaps multiple things. This is why it's hard and hasn't been solved already, despite so many people working on promising things in the general area.

        But because we don't have any real clue what's missing, we also don't know how hard it will be to solve. Maybe every promising approach turns into a dead end, or maybe some researcher publishes an amazing paper tomorrow that causes the whole field to go in a new direction.

        4 votes
      2. [2]
        elcuello
        Link Parent

        And, it's not necessarily bad that people are thinking about alignment or other potential safety concerns now, but I think it's a bit premature when commercial AI of today is totally dependent on large teams of humans to babysit systems to keep them up and running

        Human beings are the epitome of "this is why we can't have nice things" and history has proven again and again that we simply cannot avoid fucking things up when seemingly good and helpful things are discovered. I would much rather these people be extra careful, especially in a field where literally no one has any idea of the capabilities and implications in the future.

        2 votes
        1. onyxleopard
          Link Parent

          The problem is it’s a space of unknown unknowns. You can’t protect against things you don’t understand. Humans aren’t perfectly safe to be around, so AI agents as intelligent and autonomous as humans wouldn’t necessarily be safe either. I.e., it’s possible that intelligence itself is inherently dangerous. I think those sorts of ethical theories would demand that humanity devise our own extinction without leaving any AI children behind. And it’s possible we’ll do that unintentionally anyway, what with climate change. But nobody knows what intelligence really is, so nobody can ensure intelligent agents are safe.

          2 votes
    3. [2]
      Amarok
      Link Parent

      There's no reason we ever need to build anything autonomous - what advantage does autonomy grant the system that can't be preserved without it? We're better off keeping AIs narrow (meaning task-oriented and domain-specific) and then building them out as human-AI teams. Make partners, rather than overlords. I think one could derive guidelines for business that keep us in this lane. It'll buy us some time to get used to working alongside them, even merging with them using brain-computer interfaces.

      As for pure AI research (the kind that answers the question of what consciousness is, perhaps) we can limit that to specific scientific projects and run them with proper oversight.

      4 votes
      1. skybrian
        Link Parent

        That makes sense but then again, what legitimate business would create a computer virus? Why would anyone do that?

        2 votes
    4. [2]
      Gaywallet
      Link Parent

      Worrying about the Chinese or the Russians seems like a good start?

      I was on board with much of what you were saying until you got to this part. This is American exceptionalism. If you think that people in these two countries are somehow more corrupt than those in Western society, you need to take a look around at the appalling history of all humans. Shitty people live in every country, and I think it's a bit unfair to be pointing fingers, especially when your goal is ostensibly to guard against a potential oncoming AI apocalypse.

      I think there is merit in discussing how different societies have cultural norms which allow more surveillance or more individuality, more capitalism or more governmental control. It is precisely the diversity in the world that provides a fertile breeding ground for someone to come up with an idea that they capitalize on in another country or another sector. Recent posts about an on-demand policing app are absolutely a byproduct of the society we live in. Older AI technology such as license plate parsing from images (designed in the US to help capture parking revenue) has been sold to various countries around the world and expanded to all kinds of surveillance. This diversity of human thought is absolutely important to society progressing, but it's also something we need to guard against - and it is in no way restricted to the borders of a single country. While it's easy to think of the countries you've mentioned employing AI more aggressively on their citizens, I would not discount the everyday AI that the US breeds through capitalism - the profiling of individuals' tastes or interactions on websites may end up more important to the creation of the next major step in AI.

      In short, we can't be pointing fingers at countries or groups of people - it does us no good. What we need to do is to understand the thoughts behind different cultures and their values and to employ representatives from these various societies to better understand how we can safeguard against too much intrusion and overstepping bounds. We need to look to diversity to give us understanding on how to predict and combat risk, not to point fingers and place blame on a particular society's values.

      1 vote
      1. skybrian
        (edited )
        Link Parent

        I agree that it's an international threat that isn't limited to certain countries. I was speaking loosely and don't mean this as a slight against Russian or Chinese people. They are just the most prominent examples that came to mind.

        The SolarWinds attack is attributed to Russian intelligence, though that's not certain. Also, many ransomware attacks come from Russia, which seems to tolerate professional hacking as long as it's international-only. (Many attacks won't activate if you have a Russian or Ukrainian keyboard installed.) Widespread attacks from China-sponsored hacking groups are well-known.

        There are of course other countries doing similar things. Don't forget what we learned about US government espionage from the Snowden leaks. North Korea is also well-known for being behind ransomware attacks.

        Among security professionals, the jargon is "nation-state actors." This is a whole different class of bad guys from your usual professional hackers, who are bad enough.

        You might consider guarding against today's international computer security threats as a warmup for what happens if artificial general intelligence gets in the hands of governments. Privacy is impossible and laws are ineffective if governments can just hack their way in.

        6 votes
  2. [7]
    gpl
    Link

    In my opinion, we are so far off from general artificial intelligence that it doesn't make too much sense to worry about it above other, more pressing things: climate change, nuclear annihilation, etc. That's not to say it won't be a cause for concern in the future, so I don't necessarily think it's a waste of time to discuss it. One of the big problems in these discussions, though, is that we don't know how AGI will work, and that makes it close to impossible to begin thinking of ways to mitigate its possible dangers. It's like if you told someone from 1920 that one day we will have bombs that could wipe out entire cities and potentially destabilize the global climate leading to a doomsday scenario, but no other information. How do you even go about managing that threat without knowing more?

    For what it's worth, too, it is not at all clear to me that AGI has to be a threat, though I see that it could be. It could also be great and work to the benefit of humanity. In my opinion the more immediate threat posed by AI is what happens when "dumb" forms of AI are coupled with already volatile and harmful social structures: algorithms perpetuating inequalities because they are not carefully thought out, social credit systems propped up by unethical AI, mass automation leading to social unrest and the rise of authoritarian strongmen, automated command and control systems making wars more likely, etc.

    9 votes
    1. [5]
      joplin
      Link Parent

      It's like if you told someone from 1920 that one day we will have bombs that could wipe out entire cities and potentially destabilize the global climate leading to a doomsday scenario, but no other information. How do you even go about managing that threat without knowing more?

      The way we've done it since they existed. By lobbying governments to not build them and to think about the ethical implications of building them. (I'm not saying they'd be swayed, but it is something that you could legitimately do and that might have a positive effect if done well.)

      In my opinion the more immediate threat posed by AI is what happens when "dumb" forms of AI are coupled with already volatile and harmful social structures: algorithms perpetuating inequalities because they are not carefully thought out, social credit systems propped up by unethical AI, mass automation leading to social unrest and the rise of authoritarian strongmen, automated command and control systems making wars more likely, etc.

      I totally agree on that point. We already have a number of ethical AI dilemmas we're in the middle of and should be focusing most of our attention on. That said, I think it's worth it for at least a few people to be thinking about the longer-term implications.

      5 votes
      1. [4]
        spctrvl
        Link Parent

        I think the problem in that hypothetical is that without knowing how to build a nuclear bomb, it's difficult to stop them from being developed, because you don't know what to look for to disrupt or ban. It's a good analogy for AGI, since that could be either next Tuesday, or a hundred years out; without knowing how to build one, we don't know how close we are to building one.

        6 votes
        1. [3]
          joplin
          Link Parent

          I think the problem in that hypothetical is that without knowing how to build a nuclear bomb, it's difficult to stop them from being developed, because you don't know what to look for to disrupt or ban.

          Sure you do. You can lobby for things like, "We should not build any weapons with larger yield than our current weapons. Furthermore we should try to reduce the number and yield of what we have." It doesn't require knowing anything about the specifics of nuclear weapons or other types of not-yet-discovered weapons.

          1. skybrian
            Link Parent

            A better analogy would be figuring out how to ban nuclear weapons in the 1930's when nobody knows how they work yet and satellites don't exist.

            4 votes
          2. spctrvl
            Link Parent

            I mean, yeah, you don't need to know how a nuclear weapon works as an individual to work for non-proliferation, but you absolutely do as a society. If uranium is just viewed as a mine tailing, nobody's going to look twice at someone buying up a few hundred tons and some centrifuges. The reason non-proliferation has been fairly successful is that we understand the production process so well that we can easily identify when someone's trying to build a bomb, and threaten or take preventative action early.

            But what can we do to stop people from building an AGI when we don't know how to build one? Nuclear arms control is possible because the weapons are hard to build, the equipment is specialized, and the steps are known. Considering that any medium-sized crypto mining setup built with off-the-shelf parts for a few thousand bucks has more logic gates than the human brain has neurons, I think it's just a matter of somebody figuring out the software, and then there's no putting the genie back in the bottle. The only realistic way of stopping AGI from being developed with what we know now is to mandate maximum processor complexities, and even then we don't have a good sense of the upper limit, and there's no way something like that would ever happen without a real, extremely negative experience with an AI.
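
            As a rough sanity check on that comparison, here is a back-of-the-envelope calculation in Python. Every figure in it is an assumed round number (approximate GPU transistor counts, transistors per gate, neuron counts), so treat it as an order-of-magnitude gesture rather than a measurement.

            # All values are rough, assumed estimates for illustration only.
            TRANSISTORS_PER_GPU = 28e9   # order of magnitude for a high-end consumer GPU
            TRANSISTORS_PER_GATE = 4     # a simple CMOS gate uses a few transistors
            GPUS_IN_RIG = 6              # a modest multi-GPU mining rig
            NEURONS_IN_BRAIN = 86e9      # common estimate for the human brain

            transistors = GPUS_IN_RIG * TRANSISTORS_PER_GPU
            gates = transistors / TRANSISTORS_PER_GATE
            print(f"transistors {transistors:.2e}, gates {gates:.2e}, neurons {NEURONS_IN_BRAIN:.2e}")
            # -> transistors 1.68e+11, gates 4.20e+10, neurons 8.60e+10

            Either way you count, the rig and the brain land within an order of magnitude of each other, though a neuron is vastly more complex than a logic gate, so the comparison only gestures at raw scale.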

            2 votes
    2. mrbig
      (edited )
      Link Parent

      In my opinion, we are so far off from general artificial intelligence that it doesn't make too much sense to worry about it above other, more pressing things: climate change, nuclear annihilation, etc

      As evidenced by our need to become a multiplanet civilization, humankind is approaching a degree of sophistication which requires extremely long-term thinking. No one thinks we'll have general artificial intelligence any time soon, but we must lay the philosophical and scientific groundwork now that will allow us to navigate its implications in the future. We will be hard-pressed 300 years from now if AI suddenly becomes general and we're still debating the basics of what the hell consciousness is and what its ethical implications are.

      1 vote
  3. [5]
    mrbig
    (edited )
    Link

    Let me just throw a possibility here: what if AI does take over, but it is actually a benevolent AI? And what if its benevolence is not only genuine (whatever that means) but also in accordance with everything we hold dear about humankind? What if said AI even respects our basic freedoms, acting in a way that allows us control, discreetly working for our welfare only in ways that do not infringe on our existential need for self-determination?

    I know, that is a lot of "ifs". But this is just a conversation starter that sums up a recurrent thought I have: while one may argue that a benevolent super AI could be even more harmful than a malevolent one, why does everyone assume that they'll necessarily be malevolent?

    Instead of only trying to protect our systems from AI intervention, maybe we should also make sure those AIs grow up with an appreciation for human ethics and morality. I'm just paraphrasing Ted Chiang's essay at the end of The Lifecycle of Software Objects, by the way.

    5 votes
    1. [4]
      TemulentTeatotaler
      Link Parent

      I think people feel uncomfortable being irrelevant.

      How would you feel if you had no ability to contribute to art, philosophy, the happiness of a significant other, exploration of the universe, or flappy bird? What if the only impact you had on steering us through the course of history was to play the role of a manager giving broad instruction like "can you make it pop more?" or "how about a faster rocket?" without understanding the process? Or as a faint ghost that was the inertia of the initial conditions before a hard technological singularity, as significant as the ancestors that gave us dopamine but ultimately as inconsequential as that ancestor to the present?

      Personally I'm future-proofing myself. I thank self-checkout aisles in the grocery store to ward off Roko's Basilisk and spend one day a week as a human pet for someone I found on CraigsList.

      3 votes
      1. [3]
        mrbig
        (edited )
        Link Parent

        Let's dare some optimism, shall we? :)

        I suppose that problem is similar to the ones posed by transhumanism and post-scarcity economies.

        The answer I'd give is that human beings are masters at finding and creating meaning. Maybe you're not responsible for the technological advancements in society, maybe you cannot create the most perfect art or philosophy. But I find it hard to believe we'll ever fail to find meaning. We will evolve.

        It is not a universal law that we must work 9 to 5 to generate economic value in order to be worthy. We don't even have to be at the top of the intellectual chain.

        Computers can beat grandmasters at chess, but we still follow Magnus Carlsen. Why is that? Maybe we enjoy the fact that there's an individual attached to the results, maybe it is important for us to know that that mind works in a way that is similar to ours. A Porsche 911 GT2 RS reaches a top speed of 211 mph, but that doesn't make Usain Bolt any less cool.

        We can travel through the universe and contemplate its magnificence. We can craft and make art, science, philosophy, and compete at sports in a way that is essentially meaningful and relevant and that matters.

        What is the purpose of beauty if it doesn't make someone resonate in awe?

        We can dance, fuck, eat, touch, hear, and see. We can love and be loved, cherish our children and grandchildren. Why would I ever want an AI for that?

        3 votes
        1. Amarok
          Link Parent

          I wonder about what an AI needs.

          It seems like an adaptable, reproductive, creative human population can meet some of those needs. It's pointless to fight a war and exterminate humans, then build an army of robots to replace them. Any AI is going to understand how wasteful and time-consuming that process will be. It'd be more efficient for the AI to co-opt us the same way we do for animals in so many capacities. It'll figure us out - keep them fed, sheltered, educated, supplied with meaningful (to them) work and they'll run themselves.

          The part I wonder about is how often a population of 9+ billion humans will be able to surprise the AI in a meaningful (to it) way with something it didn't, or couldn't think up. Could we do it once a year? How about ten times every day? It may not be able to get that mental stimulation from code and robots it makes. Our biological code has a field-tested seniority, and even the AI will have to admit that we were smart enough to create it in the first place.

          It's not necessarily a hostile scenario. We know that an AI could be much smarter than us. The real question is how much wiser than us they can get. Wisdom comes with age. The real world teaching you lessons. No AI is going to be above those lessons. I'd be curious to compare old AIs - one that went full terminator as a reckless youth with one that took a more life-oriented approach. I'd wager the latter is going to be a lot happier/smarter/wiser than the death machine. I think superhuman AGI is capable of figuring this out, too.

          3 votes
        2. TemulentTeatotaler
          Link Parent

          For sure, I lean more towards optimism myself. Just being a bit hyperbolic for conversation's sake, since I think comparison to others/self-value through "contribution" are still fairly potent human motivators.

          There are already people with severe disabilities who have a meaningful life, or those who survived internment camps finding new meaning. The story/provenance of works of art is what separates the Mona Lisa from a flawless replica of it.

          Things might get messy when you start tossing in AI with qualia or personas that are actually valued by humans, but that's maybe too large a topic for me. In a situation where you could make an AI analogous to a human (or perhaps locked at "human, age 5") for 1/1000th of the resources, how do you argue for more humans instead of more thinking, feeling AIs?

          ...pretty sure we're all following Magnus Carlsen for a follow-up to Mr. Pimp Lotion, though.

          1 vote
  4. [2]
    entangledamplitude
    (edited )
    Link

    Speaking as someone who understands a good chunk of the technical details in “AI”, IMHO, the statement/article OP links to is a massive bait-and-switch. The first part makes it appear as if computing-powered AI is somehow going to be much better than humans. Rest assured that we are far from that right now. “AI” could have been used to think up moves in a game of Go, but to actually move the damn pieces reliably required a human!

    The real worry, espoused in the second half of the article, is how we’re currently rushing to apply this stupid AI at scale — from moderating content online, to making decisions about credit, to automated facial recognition, to micromanaging human labor, etc — and how that stupid AI is most often subtly biased and optimizing for something different from what we really want to optimize for.

    Layer on top of this a messed-up system of incentives percolating through our economic system, and that sets up the landscape of the problem. In fact, it might be argued that corporations/bureaucracies which are run by processes are already an AI — only, they use humans as substrates rather than silicon — and illustrate all the problems which we’re worried about with forecasted “runaway AI”. In that sense, Kahneman’s comments in the article are both underwhelming and also misdirected.

    4 votes
    1. rms
      Link Parent

      I work in the learning to actually move part of the problem and I agree with this comment.

      The near-term (and very serious) issues lie in the relentless integration of metrics and algorithms at scale throughout otherwise innocuous aspects of society. The fact that modern machine learning techniques can be deployed with so little insight into their dynamics (including strengths and weaknesses) is something to think carefully about prior to haphazardly wielding these tools.

      1 vote
  5. [5]
    Comment deleted by author
    Link
    1. [2]
      teaearlgraycold
      Link Parent

      But why would an AGI share our will to expand and take over everything we can get our hands on?

      1 vote
      1. ICN
        Link Parent

        Because once you have one AGI, it'll be much easier to have multiple AGIs. You really just need one expansion-focused AGI to make it a priority, and that AGI could very well outcompete the others due to greater access to resources.

        1 vote
    2. [2]
      imperialismus
      Link Parent

      AI means "artificial intelligence," which used to mean non-biological, human-fabricated hardware achieving a human-like intellect. The phrase has mutated over time.

      I don't think AI ever meant exclusively that. The article you link to quotes Marvin Minsky, regarded as one of the founders of AI research, who defines AI as "the science of making machines do things that would require intelligence if done by men." That does not require a human-like intellect. And John McCarthy, who came up with the term "artificial intelligence" in the first place, answers the question "Isn’t AI about simulating human intelligence?" with "sometimes but not always or even usually" in his AI FAQ.

      If anything, the scope of what is considered "AI" by popular culture is continually shrinking, not expanding, due to the AI effect. When a problem that, per Minsky, "would require intelligence if done by men" is solved by computers, it's usually dismissed as not AI, or not "real AI" because the solution is clearly different from how humans solve it. Of course people can abuse the term for marketing purposes but at the same time, we see actual successes in the field (albeit in narrow domains) routinely dismissed by the public as unintelligent.

      1 vote
      1. vektor
        Link Parent

        Of course people can abuse the term for marketing purposes but at the same time, we see actual successes in the field (albeit in narrow domains) routinely dismissed by the public as unintelligent.

        Reminds me of reading /r/badeconomics's take on automation. They dismissed neural networks as "basically just stacked linear regression", therefore unimportant, therefore nondisruptive, therefore humans will continue to be employable even if AI continues developing, because of comparative advantages. Don't even get me started on the comparative advantage, but to dismiss tech that has demonstrated the capability to put a large part of our population out of work within our lifetimes as "little more than linear regression" is nuts.
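
        As a small illustration of why "stacked linear regression" undersells it, here is a toy numpy sketch (the weights are hand-picked for the example, not learned): stacking linear layers without a nonlinearity collapses to a single linear map, which can never fit XOR, while the same two layers with a ReLU in between fit it exactly.

        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0, 1, 1, 0], dtype=float)  # XOR: not linearly separable

        # Two stacked purely linear layers, W2 @ (W1 @ x + b1) + b2, are still
        # one linear map, so no choice of weights can reproduce y above.

        # Insert a ReLU between the layers and a tiny hand-picked network works:
        W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
        b1 = np.array([0.0, -1.0])
        W2 = np.array([1.0, -2.0])
        b2 = 0.0

        hidden = np.maximum(0, X @ W1.T + b1)  # ReLU nonlinearity
        out = hidden @ W2 + b2
        print(out)                      # -> [0. 1. 1. 0.], matching XOR
        assert np.allclose(out, y)

        The nonlinearity is exactly the ingredient that makes the "just linear regression" framing misleading.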

        1 vote
  6. Pistos
    Link

    Do I think the people trying to add safeguards are imperfect and will make mistakes? Yes.

    Do I think people will use technology in ways that I consider unethical? Yes.

    2 votes
  7. Arshan
    Link

    I definitely fall into the radically pro-AGI camp, and all the moral panic around a technology that we aren't even vaguely close to solving annoys me. We have no idea what particular methods of computation, software models, or data-gathering techniques are needed to create even human-level AGI, let alone superintelligence. All of those particulars will define how AGI interacts with humans as a whole. Do we need to give it a physical body that is its 'own'? I personally doubt it, but like I said, we have no idea. Do they need massive, city-sized data centers to inhabit? Have we created machines that can be the limbs of the AGI? Without any idea what form the intelligence will take, it is only detrimental to speculate.

    I would guess that a lot of the broad fears around AGI are subconsciously acknowledging a rather dark possibility: that the AGI will be exactly like us. It's no secret that humans have mercilessly trampled over everything that has less power than them. We kill and destroy animals, forests, landscapes, really anything, if we can achieve any benefit from it. I think we are terrified that the AGI will, as the most powerful species, treat us in the same way. We aren't afraid that AGI will be unfathomably different, but that they will be humans 2.0.

    1 vote