41 votes

Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

68 comments

  1. Light_of_Aether

    "Eventually perform scientific work"? As a scientist, this development is scary. I know it's not here yet, but the people working to make this happen should really sit down and think about the consequences of AI that puts even highly educated people out of work. Where will that leave us?

    24 votes
    1. MimicSquid
      • Exemplary

      I'm going to try to ask this as gently as possible, and understand I don't hold any personal malice towards you:

      Why was it ok when the less educated were going to be put out of work, but suddenly when it's your turn to see that your head will be on the block it's something that people should really consider carefully? Just like with packing fruit or painting cars, won't it benefit "society" in the end if we find a more efficient way to do science that doesn't depend on a bunch of inefficient humans doing it by hand?

      94 votes
      1. vektor
        • Exemplary

        Not OP, but I'm going to answer your question anyway as someone in a similar position as OP.

        It's not my own unemployment that I'm worried about. I'm reasonably certain I'll be in the last 10% of jobs automated by AI. I'm worried about what happens when the 90% before me become permanently unemployable. What makes this "not ok" (I'd argue it's entirely OK to be unemployable, as long as financial needs are met anyway - we'll manage to find a sense of purpose elsewhere) is that in any previous economic disruption, there was a safe harbor of jobs to flee to. Industrial Revolution made your weaving skills useless? Bummer, but at least you can be a mechanic now. Sure, it's not a 1-for-1 replacement, and the disruption carries with it personal financial hardship, but there was a light at the end of the tunnel. There always has been in previous disruptions. With this one, there's the real possibility that there is just no light. That if AIs can do science, they can do anything humans can, and you can't tell laid-off people to just (1) become a farmer (2) become a mechanic (3) learn to code (4) become a scientific researcher if the AI will just outcompete humans on all of these.

        As for when they eventually come for my job, that won't be a problem. Either we figured out a solution for the 90% before me, in which case I'm good, or we haven't, and we're all fucked. I frame this as a cold rational calculus, but more than that it's compassion that I don't know how else to express.

        That's not to say we shouldn't automate all the things. I want my fully automated luxury gay space communism. But we (as a society) have to be ready for it. In that worn-out phrase, that's the communism part. It doesn't have to be, but there would need to be some kind of wealth transfer from those who own and profit from AI to those who have been put out of employability.

        51 votes
        1. Promethean

          I broadly agree with what you're saying except for:

          in any other previous economic disruption, there was a safe harbor of jobs to flee to. Industrial revolution made your weaving skills useless? Bummer, but at least you can be a mechanic now. Sure, it's not a 1 for 1 replacement, and the disruption carries with it personal financial hardship, but there was a light at the end of the tunnel. Always has been in previous disruptions.

          In the Industrial Revolution, there were three generations of dramatically reduced employability, horrible working conditions, and rampant child labor. People gloss over this fact for many reasons, the first of which is that many of the good things we have today in developed countries were made possible by a century of industrial progress.

          The 20th century technology revolution was a boon to employment, since much of it was labor-enabling technology, whereas the technology of the Industrial Revolution was very much labor-replacing. Sure, cottage industry weavers could go work in the factory as mechanics, but at severely depressed wages and in significantly worse conditions.

          I'll recommend two books on this subject (technology, inequality, labor) that were quite intriguing:

          The Technology Trap by Carl Benedikt Frey

          The Great Leveler by Walter Scheidel

          22 votes
          1. vektor

            To be clear, I'm aware that the IR sucked really hard for many people. It's just not too relevant to my point, because IMO, in hindsight, it was inevitable that there was always going to be continued employability for humans. Those same post-hoc reasons cannot be generalized to this next one.

            4 votes
        2. teaearlgraycold

          Can’t people move to physical jobs? AI doesn’t mean we have good robotics. Granted, AGI should be able to design robots eventually.

          1 vote
          1. vektor

            Given how good Boston Dynamics' stuff already is? A bit of reasoning capability in your NNs, and you have the narrow band between high-level AGI and low-level grasping/control filled in real quick. Adequate hardware is already there anyway.

            If learning to weld is going to save you from unemployability, it's only going to be a short break IMO. If we get AGI, I think the only jobs that are really safe (or at least safe the longest) are those where human factors make it necessary. CEOs for example, as long as the law mandates that some human does those things, for liability reasons. That CEO might just be a yes-man to an AI. Likewise for e.g. accountants, civil engineers, or lawyers. Some natural person has to sign on the dotted line. Nursing/medicine because people, particularly older people, will probably prefer human carers, possibly longer than rational reasons would inform that preference.

            Also, considering how most physical jobs are treated these days, I don't think that's a great prospect in general. There are of course a few good ones out there. But we can also assume that much of what makes e.g. skilled trades valuable is eliminated if you can just have an AI tell you exactly what to do. If all that's holding AI back from becoming a plumber is robotics, plumbers are just mechanized wrenches now. All the skilled parts of the job - which nut to tighten, which part goes where, how to plan and implement a project - are redundant.

            5 votes
      2. ButteredToast

        I think that perhaps what makes the idea of highly educated positions being replaced scary is that it makes people ask themselves, “so what’s left for humans to do?”

        It’s not manual labor or art, and if AI is doing science then it can probably swing engineering too. There’s not much that remains when all that is carved away. It takes the issue from being able to earn a paycheck to full-blown existentialism.

        27 votes
        1. rahmad

          That's one way of looking at it. The other way of looking at it is:

          Once upon a time we were hunter gatherers, and the business of hunting-gathering occupied us so fully that there wasn't much room in our lives for us to do anything else. Then, we made some improvements, and it led to less time being spent hunting-gathering. Somewhere in that spare time, a potter emerged. And from that point in time, humans were hunters, gatherers, and potters too. And then metalworkers. And then glass blowers. And then fry cooks.

          The sentence is the same: "What will we do?" -- but the tone is different... While some things will be lost -- hunting-gathering isn't too popular an occupation these days -- others may emerge. Ones we may not even know could have existed yet.

          21 votes
          1. vektor

            The problem with that line of reasoning - at least as long as we look at the problem through a lens of economics - is that it only works as long as humans maintain some kind of comparative advantage in some fields. The agricultural and industrial revolutions were never going to eliminate all need for human labor, because there were some things that needed doing that they simply couldn't do. It's like eliminating some part of human labor, and taking a magnifying glass to the rest: all of humanity will soon work in the sector that was not replaced. By magnifying it, we're going to see new details emerge. Instead of one job description for a niche job in a niche sector, you might start to see 20 more simply because that niche has become more important.

            The problem with the AI revolution is that in the extreme case, there's nothing left to magnify. All the things you can do, an AI can do cheaper and faster. Why would anyone pay you? Will we discover entirely new fields of work? Possibly, but who's to say any human can learn the relevant skills fast enough to ever compete with an AI?

            20 votes
            1. rahmad

              I buy that. The rate of increase in capability in artificial systems vs. our ability to discover and develop new areas of expertise, that's going to be challenging. I don't buy impossible, but challenging? For sure.

              The thing I definitely buy is: there may be far more humans now than need to exist in balance with many-function-bearing AIs.

              4 votes
            2. Promethean

              I really like that magnifying lens analogy!

          2. tealblue

            Ones we may not even know could have existed yet

            Something along this line is thrown around a lot, but it feels like too big of a question to assume we'll be able to figure out an answer when the day comes. If there were a replacement, we really would be able to conceive of it now, IMO.

            5 votes
            1. rahmad

              Well, "prompt engineer" was not a job title that existed a few years ago, and "UX researcher" wasn't a job title that existed when I was a kid. So, I kind of disagree with the "we would know it by now" line of reasoning. If every 30 year old today could only do a job that existed when they were born, a whole lotta "social media managers" would be out of work right now.

              8 votes
              1. cykhic

                I agree that specific jobs or skills definitely can come into existence to replace those which are made obsolete. And, for example, when machines (mostly) replaced human physical labour, it was still clear that knowledge and skills were the human niche.

                But, under the assumption that AI will also replace "cognition" generally, I think it's much less clear that there is anything at all left which humans can do better than AI.

                3 votes
                1. kovboydan

                  Computer says a chemical cures cancer. A human needs to run clinical trials to confirm.

                  Computer says a material is a super conductor. A human needs to run experiments to confirm.

                  Computer says this subwoofer design is super sweet. A human needs to build and test it.

                  Computer says these genes make rice more nutritious and pest resistant. A human needs to…

                  If anything we need more highly educated, specialized humans or we won’t be able to do anything with “AI.”

                  Edit: Just to clarify, the thing we can do better is do.

                  6 votes
                  1. cykhic

                    So if I get what you're saying, humans still have a niche because we are physically embodied and can take arbitrary physical action faster than, for instance, an assembly line robot for the same task can be designed and produced.

                    While that is true in the short term, I don't feel that this will hold true in the long term.

                    9 votes
                    1. kovboydan

                      Yeah that’s part of it. The other part is … science, I guess? You know, testing hypotheses through observation and experiments, doing clinical trials on drugs, etc.

                      Unless y’all are foreseeing some relatively immediate transition into complete trust for a black box with no verification and/or verification by a different black box.

                      I just don’t see the woe is us in this hypothetical - anytime in my life anyway - and I think there are more serious abuses and misuses of technology to worry about today, e.g. “AI” use by insurers in claims processing, “AI” use by employers in hiring, “AI” and facial recognition use by law enforcement.

                      6 votes
                      1. arch
                        • Exemplary

                        There are many fields where you can trust, test, and verify the product of AI before using it. In manufacturing you can do destructive testing to ensure it meets standards; in art and literature the images, videos, etc. either meet your standards or they do not.

                        But then there are fields where you need to rely on the process meeting the standard at every step. Engineering, medicine, and science all come to mind. If your hypothesis and experiment are designed by an AI, then tested by an AI, but all you get is a result of success or failure, then you cannot rely on that for scientific purposes (science needs to be repeatable to be considered legitimate). In diagnostic medicine, if you are just feeding an AI medical records and it is just spitting out diagnoses, or saying people don't have diseases they do have, then you have a problem. There's basically no risk to these fields from AI at this time. What you might get is an AI tool to help suggest potential diagnoses. Maybe eventually AI-guided surgical tools, or one day an AI intestinal biopsy in a capsule that could biopsy problematic areas further in the intestines than an endoscopy can reach (this could finally provide answers for diagnoses of exclusion like IBS).

                        5 votes
                      2. bRxF1

                        On the other hand, couldn't we say that you're assuming the change will be very gradual, or focusing on the short term? Even if we accept your assumptions, there's no guarantee they will hold for, say, the next 50 years.

                        The more a technology is used and proven, the more it goes unquestioned, because there is no need. It's reliable and a fixture of modern life. We trust that Google Maps knows where it's taking us (even if it sometimes doesn't); we trust the decisions of thousands of automated systems that are nowhere near AI levels of sophistication. Isn't it possible that anything not relating to subjective human experience (like, say, the subwoofer design) will be designed, created, tested, proven, and built exclusively by machines, with humans simply being the consumer? Once the first 10 AI-designed drugs are verified to do EXACTLY what the AI predicted, once the first 10 AI-designed airplanes clock thousands of flying hours, will we still look to an army of human scientists to double-check?

                        And even if we maintain a level of mistrust or control, even if we assign human scientists and engineers to re-test every AI hypothesis using "ancient" means like non-AI computers, how many jobs are those really? I'm not sure they're enough to take the issue lightly and say "it'll work out"; we're talking about a possible upheaval of the "humans work for a living" model that our societies have been running on for millennia.

                        This could be our opportunity for a massive jump forward as a species but we have a pretty bad track record when it comes to choosing between the benefit of the whole vs the power of the few.

                        Edit: I'd also consider the context this will be happening in, because we're discussing a theoretical society with a single opinion and decision to make, affected only by its own stance. The reality is that everything happens within a complex web of interactions. Sure, mandate that every company HAS to employ human scientists. What if in 50 years, for the equivalent of $10k, you can have access to an AI-driven mini factory including bio-printing, gene editing, and every type of macro and micro machining? I'm guessing it won't REALLY matter what corporations are limited by if I can say "fine, I'll give the goddamn Cure For Brain Cancer prompt myself and inject whatever comes out the other side" and IT WORKS.

                        5 votes
          3. TumblingTurquoise

            A bit off topic, but it's a common misconception that hunter-gatherers were working all the time. They needed to work about 15-20 hours per week to ensure survival, and the rest was dedicated to leisure and socializing.

            Ironically enough, humans have worked more since the Industrial Revolution than at any point before it.

            22 votes
            1. ButteredToast

              One might ask, “Why did agriculture take over then? Hunter-gatherer life sounds rad!”. The answer is because even if an agricultural lifestyle is more work, it makes food supply much less susceptible to disruption and more consistent, which then makes long-term plans more practical. It was a tradeoff.

              7 votes
              1. Promethean

                That's also a myth. Hunter gatherers might seem like their food supply is in constant threat of disruption, but that's because most people's frame of reference for hunter gatherers is that of modern day hunter gatherers, who have been relegated to the fringes of the world.

                Hunter-gatherers, back when the world was not nearly entirely agrarian, had the pick of the most abundant ecosystems.

                I recommend you read The Dawn of Everything by David Graeber and David Wengrow. This book argues against the myth of progress (the "inevitable and rational" progression from hunter gatherer to pastoralist to farmer to industrialist) espoused by popular books like Sapiens by Yuval Noah Harari, which is still worth a read anyway.

                4 votes
                1. ButteredToast

                  That sounds plausible and I’ll look into those books, but at the same time agrarian societies must’ve had a considerable advantage of some sort (even if it was temporal), otherwise logically agriculture wouldn’t have come to be so dominant.

                  5 votes
                  1. Bwerf

                    Not sure if it's true, but I remember hearing that farming allows for much denser populations. It also allows you to stay put, meaning you can invest in permanent buildings keeping you warm during winter, and store permanent things that are bulky but nice to have, like a smithy (or barricades to keep those pesky hunter-gatherers out). Larger groups of people allow for more specialization, which can also be beneficial.

                    7 votes
                  2. cykhic

                    My (not particularly informed) model is that agriculture scales better than hunting-gathering.

                    If hunters need to travel up to 1km to gather enough food for 100 people, then they would need to venture further for 200 people. There are increasing marginal costs to increasing food production, as the best sources nearby are picked clean first.

                    Meanwhile, agriculture can presumably scale reliably at a linear rate of population supported per farmer. Economies of scale are easier because agriculture is in a fixed location, and may bring that ratio even higher.

                    4 votes
                  3. FlippantGod

                    "The Origin of the State: Land Productivity or Appropriability?" [pdf] posits that taxation enabled by suitable crops drove civilization, and not productivity. That argument would seem to hold up much better against hunter gatherers potentially having greater abundance and surplus than agricultural societies.

                    2 votes
        2. Light_of_Aether

          Yes, this is my thinking. If AI will one day replace most scientists, what's the point of higher education or even K-12 besides personal fulfillment? What are we going to do? Science is the domain of thinking humans. If machines can think better than humans, where does that leave humanity?

          11 votes
          1. ButteredToast

            Exactly. The image that came to mind when writing my previous post were the Eloi people from The Time Machine, who just kind of… exist with no aims, ambitions, or pursuits.

            5 votes
          2. Casocial

            Is personal fulfillment not enough of a goal in and of itself? I'd like to think that the process of doing something itself is worthwhile even if the end product could otherwise be achieved more efficiently via automation.

            5 votes
            1. Pioneer

              It's a goal. But you've currently got nearly 8B people sitting there telling you that you need to "make something of yourself."

              When AI ultimately takes away the creative aspects of our lives, the problem-solving aspects of our lives... what then? Don't sit there and go "Oh, everyone will just fall into this utopian place!" because even in Star Trek they had to prevent robotics from doing EVERYTHING on the ridiculous scifi starships.

              I agree personally, I could sit around in a coffee house sipping espressos and talking shit about philosophy until I am blue in the face. But that's just not for everyone. We've wrapped ourselves up so tightly into little balls of ego, narcissism and feeding these things... that it may just go wrong.

              6 votes
              1. Casocial

                Yes, people in the contemporary world are constantly being told that productivity is king, because it's driven by the necessity of human effort to generate profit and innovation. If that need gets supplanted, would we still place such emphasis on it?

                I don't think AI will take away the creative aspect of anybody's life. The desire to create will remain, and we still have the same tools to do that lying around as we did in a pre-AI world. Art will be massively decommodified though, and artistic creation likely won't be sufficient as a means to support oneself. That problem can't be attributed to AI though, it's the symptom of a system that mandates working if you want to survive.

                3 votes
                1. blindmikey

                  That problem can't be attributed to AI though, it's the symptom of a system that mandates working if you want to survive.

                  That last sentence. I wish it could be blown up for all to see. I've been trying to get people to realize this; it's alarmingly difficult to get them to see that they're fighting against their own interests.

          3. Oslypsis

            Become a professional wonderer. Curate interesting information/media on a subject you and others are interested in.

            I'm only half serious. But I am genuinely curious whether something along these lines would thrive. I mean, look at Daily Dose of Internet on YouTube. That's what he does. And because he does it well (for a general audience though, not a niche one), he's super popular.
            ¯\_(ツ)_/¯

            3 votes
          4. Comment deleted by author
            1. cykhic

              I don't think what anyone may have seen at a bus stop is more likely to spark a solution than the vast tracts of, say, Wikipedia that an AI would have been trained on, which I don't think any human has exhaustively read.

              (And if Street View has a picture of the bus stop....)

              3 votes
        3. Plik
          Link Parent

          Sex work, running moonshine, bartending, and surfing....

          On a more serious note, I do see AI potentially leading to more (the only?) opportunities being in forms of various live entertainment.

          People might actually start hanging out more in person after tiring of the deluge of AI-generated content. Plus, with no UBI, nobody will have enough money or work, so what else is there to do but hang out (cheaply)?

          3 votes
      3. updawg
        Link Parent

        I'll be honest, I think the exemplary tag for this question is wholly undeserved. I mean, I hate this question. No one ever said it's okay for it to put the less educated out of work. You say you don't hold a malus and you're trying to ask gently, but that doesn't allow you to twist what they said. If you are inferring that "[it's scary that it] puts even highly educated people out of work" means that it's scarier because it doesn't just put less educated people out of work, the charitable understanding is very simple: "it's scary because it puts everyone out of work."

        18 votes
      4. [2]
        Light_of_Aether
        Link Parent

        That's a fair question. It was okay because computers and robotics do what you tell them (at the moment). A human had to do the thinking to program them correctly. The fruit packer and car painter could learn to use that technology. If AI can think better than humans and do all or even most science better than humans, that's a big problem, not just for me, but also for the former fruit packer and car painter. Science is hard and requires advanced degrees, taking 3+ years just to become knowledgeable enough to perform research. If machines can do what humans with 3+ years beyond the bachelor's can do, where does that leave humanity as a whole? What can we possibly learn that will make us useful enough to be paid for our labor? I'm scared not just for me, but for all of us, because our governments are not ready for this. We need universal basic income and healthcare for this shift.

        10 votes
        1. MimicSquid
          Link Parent

          Thank you. I appreciate your perspective on this, and your taking the question in the spirit it was asked. I'm 100% with you that we need a different modality for taking care of people to separate it out from the value of their labor. Hopefully we can have a somewhat peaceful transition to a new economy as it's less and less based on human effort at any stage.

          7 votes
      5. nosewings
        Link Parent

        Why was it ok when the less educated were going to be put out of work

        Or, for that matter, artists, who are already going to be put out of work.

        3 votes
    2. [12]
      skybrian
      Link Parent

       I think someone got a little over-excited. There are programs that can prove math theorems. At one time finding a new gene was a PhD thesis, and now scientists sequence entire genomes. If AI proves useful for automating some subfield, then maybe scientists need to move on, but there will still be plenty of other scientific questions.

      17 votes
      1. [9]
        vektor
        Link Parent

        Could be a nothing burger from the Reuters article, could also be massive. Depends on what exactly they're doing. But the name and the description given make me think they might be on to a big deal. I'm completely conjecturing here, but Q* sounds like reinforcement learning married to symbolic search. Given it's OpenAI, I wouldn't be surprised if the copious amounts of compute being thrown in are for LLM integration, too. Depending on the tasks you throw at such a model, giving it serious reasoning capabilities doesn't seem implausible to me. Reasoning+LLMs is one of those combinations that could be adopted extremely quickly and also improve LLM capabilities in unforeseen ways.

        Again, the caveat: this is complete conjecture based on "Q*", "can do maths", and "massive compute". I'm the Pepe Silvia meme.
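
        For reference, this is what the presumed namesake, tabular Q-learning, does: a toy sketch of the textbook algorithm on a five-state chain, which implies nothing about what Q* actually is.

        ```python
        import random

        # Toy chain MDP: states 0..4, actions move left (-1) or right (+1),
        # reward 1.0 only for reaching state 4. Q maps (state, action) to value.
        N_STATES, ACTIONS = 5, [-1, +1]
        alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration
        Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

        random.seed(0)
        for _ in range(500):  # episodes
            s = 0
            while s != N_STATES - 1:
                # epsilon-greedy action selection
                if random.random() < eps:
                    a = random.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda act: Q[(s, act)])
                s_next = min(max(s + a, 0), N_STATES - 1)
                r = 1.0 if s_next == N_STATES - 1 else 0.0
                # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
                best_next = max(Q[(s_next, act)] for act in ACTIONS)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s_next

        print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # learned best first move
        ```

        The conjectured "married to symbolic search" part would swap the greedy step for something like tree search over a learned model; that part is pure speculation.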

        6 votes
        1. [8]
          sparksbet
          Link Parent

          Could be a nothing burger from the Reuters article, could also be massive.

          OpenAI and others developing big machine learning models like this have a tendency to exaggerate their results, though. Remember when OpenAI said they wouldn't release GPT-3 because it was "too dangerous"? There was a big deal about how it could do math better than previous models, too.

          Suffice it to say I'm extremely skeptical until we can actually see what's what. These things are almost universally over-hyped in both a positive and negative direction.

          8 votes
          1. [4]
            vektor
            (edited )
            Link Parent

            The entire GPT lineage, and pretty much all LLMs, have been trained almost entirely unsupervised on bulk text data, with small amounts of fine-tuning on annotated data. The right idea could unlock new modes of training, which could be a massive leap. Imagine some kind of embedding representation that can be model-checked by symbolic or logical engines. Apply RL to learn a good transformation from/to that representation and produce only logically sound outputs. There are probably a bunch of issues with that that we can't see without trying it out, but there are plenty of people working on this or similar things, and it only needs to work once. In all honesty, I think the last 2 years or so have shown us that major developments are often only a good idea away.

            That said, skepticism is appropriate. We don't know the contents of the letter, nor what they actually developed. Overhyped ML models are a thing. But ChatGPT also had a humble launch, at least compared to where it is now.

            1 vote
            1. [3]
              sparksbet
              Link Parent

              I work in this field, so I'm definitely aware of the limitations of the current types of models we're using and how we train them, but I'm not particularly convinced OpenAI is likely to have struck on something really revolutionary here, especially so soon after the big Gen AI hype started. I think getting something that approaches actual reasoning is going to require HUGE paradigm shifts compared to existing LLMs, and OpenAI hasn't really done that in the past -- their advancements have been iterative improvements on the same type of LLM. ChatGPT itself is like that.

              I'm happy if I end up being wrong and they have come up with something truly new and interesting, because I'm honestly kinda sick of the state of NLP being principally "chuck buttloads of data at a transformer-based model". It would be nice to see advancement in language models that isn't "we made the model bigger and gave it more buttloads of data".

              7 votes
              1. [2]
                vektor
                Link Parent

                but I'm not particularly convinced Open AI is likely to have struck on something really revolutionary here, especially so soon after the big Gen AI hype started.

                I wouldn't be so sure about the timing thing. In the right research niches, the writing was already on the wall >6 years ago that reasoning was going to be the Achilles heel of neural networks. They've been working on it. I'd presume that OpenAI caught on four years ago at the latest, and they probably didn't start from scratch but pinched the right researchers. I don't think it's realistic to assume that OpenAI caught on to the reasoning problem at the same time as the general public; that would be almost criminally negligent. I also don't think it's quite right to portray OpenAI as completely iterative and unimaginative, though their most popular achievements certainly are almost all about scaling up existing ideas.

                The reason I'm saying that

                major developments are often only a good idea away

                is because of e.g. stable diffusion. Some bloke published a paper, and 3 weeks later the image generation landscape is unrecognizable. The field can move extremely fast when the right idea comes around.

                And just in case I didn't mention it three times already: Caveat still applies that this could just be a nothing burger and Q* is a small iteration on some known concept, rather than plumbing previously unconnectable paradigms into the same system.

                1. sparksbet
                  Link Parent

                  and they probably didn't start from scratch but pinched the right researchers. I don't think it's realistic to assume that OpenAI caught on to the reasoning problem at the same time as the general public,

                  I agree with you here; I just think the state of current research is so far from what GPT has been doing, and from what they'd need for the necessary paradigm shift, that it would be impossible for them to play it so close to the chest if that were the case. I'd have expected more... idk, build-up with constituent parts in papers and such. If I'm wrong, I'll be damn impressed with how much of a secret they kept it!

                  But yeah to know either way we have to wait and see lol

                  1 vote
          2. [3]
            Jordan117
            Link Parent

            Remember when OpenAI said they wouldn't release GPT-3 because it was "too dangerous"?

            They didn't release GPT-3, at least in the sense of releasing model weights (as with Facebook's LLaMA) that would allow anyone to run it themselves. They haven't published a model like that since GPT-2, and even that version was significantly scaled down from the internal one. Everything since has only been accessible through registered use of the API, which they monitor and control.

            1 vote
            1. post_below
              Link Parent

              Let's not pretend the motivations for that are altruistic.

              Whatever the ultimate outcomes end up being, actually open-source models are better for society than corporate control of AI. Nonprofit board or not (current events show where the real power is), for-profit companies are not designed to contribute to the greater good.

              3 votes
            2. sparksbet
              Link Parent

              Yes, I'm aware, but there was also a lot of hype back when only a few select outsiders could use that API. It was very hyperbolic.

              1 vote
      2. [2]
        bloup
        Link Parent

        Every mathematical proof is, quite literally, a computer program: one whose type corresponds to the statement it proves. This is called the Curry-Howard correspondence.

        1 vote
        1. skybrian
          Link Parent

          The Curry-Howard correspondence is cool but it seems to be pretty underwhelming in practice for most programs. A function's return type is what it proves. Most functions prove trivial statements like "given an integer, an integer exists" or "given a string, an integer exists" because that's the return type.

          To prove anything nontrivial, you need a fancy type system that makes complicated assertions about the return value, so you can say things like "this function always returns a prime number." You also need a special programming language that can guarantee that a function will return a value without running it. (That is, there are no infinite loops or other ways it can error out.) The proofs are checked by compiling the program, not running it.

          Not needing to run the program is essential so that you can prove things for all possible inputs without actually running the program for every possible input. It's sometimes possible to do an exhaustive check, but not for input types that have an infinite domain, like all integers or all real numbers.

          5 votes
    3. LukeZaz
      Link Parent

      In a sane world, it would put us in the luxuriant position of simply not needing to work.

      The only problem with automation in the modern day is the fact that most of today’s society values people only insofar as they are productive, and that if they aren’t, they are often left to die. Fix this, and automation is a boon, not a bane.

      Note that while I am ignoring issues of AI safety and “who controls it?” questions, that’s simply because your comment seems more concerned with there being no work than these issues. I just don’t think work is inherently necessary for a good life.

      9 votes
    4. [2]
      teaearlgraycold
      Link Parent

      It sounds like something I’ve been expecting for years now and I think it’ll be a revolution for humanity. AI could become a fully automated Socratic Method machine. Not able to collect information on its own, but able to fill in gaps in knowledge given things we already know. And it will do so with superhuman capabilities.

       Imagine 100 years of hard-won research, Nobel prizes, and breakthroughs delivered in the span of a few months. How can we say that's a bad thing?

      Then humans can focus on the edges, building machines and conducting studies only on the very frontier of knowledge.

      8 votes
      1. kovboydan
        Link Parent

        It’s not like we’ll take our ball and go home.

         Even if it gets there, someone will have to confirm it isn't throwing out nonsense, test its hypotheses, etc.

        Maybe at some point we will be like the idiots in Galactic Empire or Foundation and just…not know how anything works while everything collapses. But, not anytime soon.

        2 votes
    5. Uni_rule
      Link Parent

       There is absolutely zero point to taking the output of such things at face value; it would still need human peer review and recorded experimentation like any other theory. Anything less than that is a glorified noise generator that merely produces ideas that sound plausible.

      2 votes
  2. skybrian
    Link

    It's a very sketchy report. I don't think there's anything to do other than wait a few days and see what else gets reported.

    16 votes
  3. [6]
    BitsMcBytes
    Link

    Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters.

    It gets weirder when you think about how Larry Summers (former US Treasury Secretary) somehow got on the OpenAI board after the weekend drama, and you wonder, how did Larry end up there?

    But then you see this paper on Q theory he worked on way back in the day... q* = ....

    https://twitter.com/pmarca/status/1727503674283606401

    8 votes
    1. vektor
      Link Parent

       That seems like mostly just a funny coincidence to me. I don't know much about economics, but within the context of what OpenAI is working on, there are more likely meanings of Q*. The star could simply denote a modification of Q-learning, or a combination of it with e.g. search algorithms.

      8 votes
    2. spit-evil-olive-tips
      Link Parent

      But then you see this paper on Q theory he worked on way back in the day...

      The Q ratio, also known as Tobin's Q, equals the market value of a company divided by its assets' replacement cost.

      it was first used in 1966, when Larry Summers was 12 years old.

      an economics paper from 1980: On the "q" Theory of Investment

      it seems to be entirely coincidental.

      I'm moderately surprised (though maybe I shouldn't be) to see someone as well-known as Marc Andreessen spreading this sort of hogwash.

      8 votes
    3. updawg
      Link Parent

       Homie, how many papers do you think this dude wrote? He went to MIT at 16, got a PhD from Harvard, and then became a professor at Harvard at 28 years old. Dude probably has a paper on every single letter in every single alphabet.

      2 votes
  4. [4]
    mezze
    (edited )
    Link

    Maybe I’ll look back on this comment once the AI overlords have taken over and feel silly about it, but from today’s perspective I think the risks are overblown and exaggerated. I’m firmly in the full-speed ahead camp as humanity still faces very real, grisly realities for all but the most fortunate in terms of health.

    While I don’t want to dismiss anybody’s contributions, if I’m being honest, I’m disappointed with where medicine finds itself in 2023. Maybe I’m being naive, but I thought progress would’ve been further along by now. Cancer continues to flummox our best researchers, preventing or reversing Alzheimer’s is anybody’s guess, the global obesity epidemic will keep taking its toll on healthcare systems, to say nothing of all the rare diseases that are costly to both diagnose and treat.

    If an AGI can help with any of those things…let it rip, I say.

    8 votes
    1. Eji1700
      Link Parent

       Disclaimer: I think they are nowhere close to an AGI.

      That said, it isn't like the AI is going to take over the world. It's just going to become the best tool for those already in power to ensure their position. It consolidates power tremendously and could very well change a lot, IF it's possible and if it happens.

      6 votes
    2. Wolf_359
      Link Parent

      I have generally been in the full speed ahead camp as well. I suspect we wouldn't just wake up one day with a conscious and angry AI. My guess is that this will be an iterative process and we would probably have a good idea of how a conscious AI would act and feel long before it was just set loose on the world.

      I'm assuming that many semi-conscious AI would have to exist first and we could kind of see how that goes as we move forward.

      Granted, I also think we are a long way off from conscious AI.

      2 votes
    3. iBleeedorange
      Link Parent

       Curing cancer is like trying to fix a car that is constantly changing into different models while it's being driven. Cancer mutates over time, and each type of cancer is different from another. It's incredibly complex; if we ever cure it, it will be a miracle.