26 votes

Shouldn't somebody *stop* "Meta Superintelligence Labs"?

Noted smoked meats enthusiast Mark Zuckerberg has recently been running around collecting ML experts for a project involving an organization called Meta Superintelligence Labs, which is set to feature compute clusters with names like "Prometheus" and "Hyperion", and which will attempt to "deliver" superintelligence.

Isn't this sort of behavior on the list of things people are absolutely not to be allowed to do? Or has something changed and we now feel it's safe for Mark Zuckerberg to be allowed control of a piece of equipment that can outsmart all his enemies and also Mark Zuckerberg? Are we all safely convinced he will fail?

If it cannot be permitted, who is responsible for not permitting it?

78 comments

  1. Greg
    Link

    Are we all safely convinced he will fail?

    Yeah, pretty much. The tech is world-changing either way, the whole consciousness / superintelligence bit is just marketing blended with a healthy dose of techbro grandiosity.

    56 votes
  2. [19]
    skybrian
    (edited)
    Link

    Just because Meta tries to do something doesn't mean it will succeed. They can spend billions and fail. Example: the whole Metaverse thing.

    There are certain things that governments will oppose preemptively, like nuclear or biological weapons development, planning terrorist attacks, and so on. Attempting to create AGI is not on that list yet.

    Some people argue that it should be. Eliezer Yudkowsky has famously argued that we should "shut it all down", which would apply not just to Meta but to the other AI labs, too:

    Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

    But that's pretty far from a mainstream position.

    32 votes
    1. [18]
      vektor
      Link Parent

      It's a positively bonkers position, but unless you're willing to cede the AI arms race to a foreign power, the only option is to negotiate or bomb them. And I don't think most relevant countries are easily convinced by negotiations here.

      So unless you want a war with China, the AI arms race will continue. All the way to a superintelligence, if that is where this eventually goes. Hopefully a controllable superintelligence, or failing that, a benevolent one.

      15 votes
      1. [17]
        CptBluebear
        Link Parent

        It's also vastly overestimating what a large language model means in terms of AI. Sounds like Eliezer got scared of GPT sounding vaguely human and decided to break the glass.

        There is no intelligence. It's essentially predictive text.
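        For a sense of what "predictive text" means mechanically, here is a minimal sketch of next-token generation using a toy bigram table (the table and probabilities are invented for illustration; a real LLM learns a neural network over a huge vocabulary rather than using a lookup table):

        ```python
        import random

        # Toy bigram "model": each token maps to weighted candidate next tokens.
        # Invented for illustration -- a real LLM learns these distributions.
        BIGRAMS = {
            "the": [("cat", 0.5), ("dog", 0.5)],
            "cat": [("sat", 0.7), ("ran", 0.3)],
            "dog": [("ran", 0.6), ("sat", 0.4)],
            "sat": [("down", 1.0)],
            "ran": [("away", 1.0)],
        }

        def generate(start: str, max_tokens: int = 5) -> str:
            """Repeatedly sample the next token from the current token's distribution."""
            out = [start]
            for _ in range(max_tokens):
                candidates = BIGRAMS.get(out[-1])
                if not candidates:
                    break  # no known continuation
                tokens, weights = zip(*candidates)
                out.append(random.choices(tokens, weights=weights)[0])
            return " ".join(out)

        print(generate("the"))  # e.g. "the cat sat down"
        ```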

        17 votes
        1. [15]
          streblo
          Link Parent

          There is no intelligence. It's essentially predictive text.

          I think there is enough we don't know about our own consciousness and intelligence that we should be cautious of maximalist positions like this. There are seemingly several emergent properties associated with the extremely high-dimensional space that LLMs are operating in that allow them to do some amount of reasoning beyond text prediction.

          22 votes
          1. [2]
            vektor
            (edited)
            Link Parent

            You put it better than I would have.

            We also don't know what will emerge from scaling up another order of magnitude, or another. (We also don't know how much today's or tomorrow's research might improve scaling laws.) In fact, in the infinite data/infinite compute case, basically any sufficiently free architecture should achieve intelligence.

            Besides that, I'm quite skeptical of using the term intelligence without defining it. We used to think chess play was an unmistakable sign of intelligence. Then it was the Turing test. Then it was creative expression. By most of the definitions we used to apply, LLMs are intelligent; it's just that we abandoned these definitions once machines beat them. If we keep doing that for much longer, no animal will be intelligent anymore, and the only artificial intelligence we'd consider intelligent is a superintelligence.
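            For concreteness on "scaling laws": the empirical finding is that loss falls smoothly as a power law in parameters and data. A sketch using the Chinchilla-style functional form, with constants roughly as fitted by Hoffmann et al. (2022); treat the numbers as illustrative:

            ```python
            def loss(n_params: float, n_tokens: float) -> float:
                """Chinchilla-style scaling law: irreducible loss plus power-law
                penalties for finite model size and finite training data.
                Constants are roughly the published fits; illustrative only."""
                E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
                return E + A / n_params**alpha + B / n_tokens**beta

            # Each 10x jump in scale shaves off a predictable slice of loss;
            # which new capabilities appear with each slice is the unknown part.
            for n in (1e9, 1e10, 1e11):
                # ~20 tokens per parameter, the Chinchilla-optimal ratio
                print(f"{n:.0e} params: predicted loss ~ {loss(n, 20 * n):.3f}")
            ```

            Whether that smooth loss curve keeps translating into new abilities is exactly the open question.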

            12 votes
            1. zipf_slaw
              Link Parent

              I think the threshold will involve the Entity knowing how to find and access information it knows exists or potentially exists. Once It knows how to hack-sess anything (including by impersonating voices on phone calls, faces on Zoom, and other online mannerisms), we are Boned. I think we should go with The Basilisk on this one [wink], cuz the other option may not work out so well.

              1 vote
          2. [5]
            Sunward
            Link Parent

            I'm largely with you on this, having come around myself from "it's just spicy Markov chains/autocomplete" to "OK there is clearly some kind of encoding of actual semantics emerging from those billions of parameters".

            But until an LLM can know what it doesn't know, and act accordingly (i.e., admit it rather than confabulating or hallucinating, at a minimum; I'd consider tool use to search the web and fill the knowledge gap optional since the web is full of slop now), I'd still hesitate to call them "intelligent".

            It'd be interesting to know, I think, whether they actually can know what they don't know, and whether their tendency towards making shit up is just a product of training feedback processes where evaluators tend to prefer confident-sounding, completely made-up garbage over "I don't know", or whether it's a fundamental, inescapable limitation of the models and/or inference algorithms.
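            The training-feedback hypothesis has a pretty simple incentive structure behind it. A toy sketch of the arithmetic, assuming a hypothetical grading scheme where wrong answers and "I don't know" both score zero:

            ```python
            def expected_score(p_correct: float,
                               r_correct: float = 1.0,
                               r_wrong: float = 0.0,
                               r_idk: float = 0.0) -> tuple[float, float]:
                """Expected score for confidently guessing vs. admitting ignorance.
                Reward values are assumptions, not any lab's actual scheme."""
                guess = p_correct * r_correct + (1.0 - p_correct) * r_wrong
                return guess, r_idk

            # If wrong and "I don't know" score the same, guessing wins whenever
            # there is any chance of being right -- training on such feedback
            # would push a model toward confident confabulation.
            for p in (0.1, 0.5, 0.9):
                guess, idk = expected_score(p)
                print(f"p(correct)={p:.1f}: guess={guess:.2f} vs IDK={idk:.2f}")
            ```

            Penalizing wrong answers more than "I don't know" (r_wrong < r_idk) flips the incentive, which is one reason the question of trainable calibration seems open rather than settled.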

            11 votes
            1. [4]
              first-must-burn
              Link Parent

              I mean, humans often don't know what we don't know, or overestimate our ability to reason. So if they are training on human data, could this self blindness be an emergent property?

              6 votes
              1. [3]
                Sunward
                Link Parent

                You're not wrong, but: LLMs tend to be portrayed as virtually omniscient, an expert in any field you care to engage them on. Whether this is warranted is a separate topic, but for the purposes of this discussion, I'm accepting that framing in order to disagree with it, because I'd argue that achieving true expertise means cultivating a degree of humility about your skills and knowledge and a concomitant awareness of your limitations with regard to the thing you're an alleged expert about. (I'll concede that this may be an idiosyncratic definition that no one else agrees with, though.)

                This isn't to say, of course, that humans have perfect awareness of their limitations with respect to things that they may be experts in, but humans certainly seem to be more capable of it in general than an LLM is; a human expert at least will not typically confidently state made-up nonsense in response to a query and then fall over themselves to deferentially reverse their position and agree with you if you contradict them, whereas that behavior seems pretty commonplace for LLMs.

                2 votes
                1. [2]
                  first-must-burn
                  Link Parent

                  a human expert at least will not typically confidently state made-up nonsense in response to a query and then fall over themselves to deferentially reverse their position and agree with you if you contradict them

                  I would dispute this claim. It's not uncommon for someone who is an expert in one area to assert that their opinion should carry weight in other areas, or for people to give disproportionate weight to their opinion. It's a very ego-driven thing, unfortunately common in my experience. Sometimes people call it the PhD fallacy, though I think the root of it is really kind of a self-deception rather than a logical fallacy.

                  I am an expert in my field and work with many who are experts in adjacent fields, and it's pretty common there. I also saw it back in grad school. Maybe it doesn't apply to all fields equally though. Robotics and autonomy skew white, male, and "tech bro", so that could be a factor in my experience.

                  3 votes
                  1. Sunward
                    Link Parent

                    Good point. I am definitely aware of the phenomenon of people who are highly skilled and knowledgeable in one field assuming that that expertise extends equally to other fields, but in my head, my statement was implicitly constrained along the lines of "a human expert when asked a question within their domain of expertise".

                    I do also recognize that there are people who are considered experts who would rather make shit up in that situation than admit that they don't know something, but, at the twin risks of repeating myself from my earlier post and getting all "no true Scotsman", an expert who is unwilling to admit the limits of their knowledge is undeserving of being called an expert.

                    1 vote
          3. [5]
            Eji1700
            Link Parent

            For the record, this is just not true. The people who actually study human consciousness for a living very much feel like this is COVID all over again, where everyone seems to have an opinion on their field based on vibes and headlines.

            Modern AI is nowhere near human consciousness.

            7 votes
            1. [4]
              streblo
              Link Parent

              What's not true?

              I'm not claiming current AI is anywhere near conscious, I'm just saying dismissing it as an entirely non-intelligent Markov chain with no reasoning ability, when we don't even understand where our own intelligence comes from, is probably premature. And I'd be interested in hearing what these consciousness experts are saying, but in my opinion consciousness (at least at this point) is mostly an unknowable philosophical problem.

              5 votes
              1. Minori
                (edited)
                Link Parent

                Philosophical problems aren't unknowable. LLMs currently fail the Chinese Room test of consciousness. They're pretty convincing at a surface level, but they lack awareness.

                https://en.wikipedia.org/wiki/Computational_theory_of_mind#Criticism

                4 votes
              2. [2]
                Eji1700
                Link Parent

                One of many sourced starting points on medical theories of consciousness:

                https://pmc.ncbi.nlm.nih.gov/articles/PMC10287796/

                If you’d like to dive in on what these people, or those in their field, have to say about AI, there’s less of that, mostly because it’s viewed akin to asking someone studying global warming why it sometimes still gets cooler than usual.

                There is a vast amount of postulating from people who haven’t even started on the basic research and reading needed to understand the question; instead it feels like swatting down the 500th “yes vaccines can have mercury. Yes mercury is bad for you. No that doesn’t mean vaccines are bad for you” line of questioning.

                A lot of the “threat” of AI has nothing to do with it being smarter than people and destroying the world, and much more to do with economic realities and shifts from “good enough” automation that is, from a consciousness standpoint, no closer than an if/then statement.

                If your reflex is to say that maybe you could describe humans that way too, then you really should read more on the subject.

                1 vote
                1. streblo
                  Link Parent

                  All I said re: consciousness was: "I think there is enough we don't know about our own consciousness and intelligence that we should be cautious of maximalist positions like this."

                  The abstract of your paper states largely the same:

                  Although the complete neurobiological basis of consciousness remains a mystery, recent advances have improved our understanding of the physiology underlying level of consciousness and phenomenal consciousness.

                  I just don't understand the point you're trying to make. In fact, doing some quick research of my own it appears that one of the leading theories of consciousness is pretty clearly compatible with a sufficiently advanced AI, although I fully agree we're nowhere near that level. But all the research in this field is full of caveats and disclaimers because well, there's a lot we still don't understand! The Nature study poked some large holes in two of these leading theories, which to me suggests we have a long ways to go to even a rudimentary understanding.

                  2 votes
          4. [2]
            pete_the_paper_boat
            Link Parent

            I think there is enough we don't know about our own consciousness and intelligence that we should be cautious of maximalist positions like this

            LLMs, with their current architectures, are not going to represent consciousness. I'm pretty certain of that.

            5 votes
        2. apolz
          Link Parent

          I don’t really agree with Yudkowsky, but this is just a total strawman of his argument. Nowhere in his writings does he say that he is scared of LLMs or even talk about them specifically. He is worried about AI agents - meaning when people hand off some of their own power to AIs.

          Spoiler alert: AI agents are already here, and people are in fact handing off their own tasks and power to them. LLMs, or whatever the models are, are just an implementation detail and have no relevance to the argument for taking them seriously.

          10 votes
  3. [20]
    Eji1700
    Link

    I think you're buying waaaay too much into the hype.

    Should people be allowed personal superintelligence? Probably not. Are there laws against it? No, because it's like making a law saying you can't own a weapon that travels at 0.99c.

    No one, anywhere, at all, is near doing this. He might as well be opening a lemonade stand for as much as this ACTUALLY matters. And there's nothing illegal in the slightest about paying smart people to try and do smart things.

    In the odd chance it bears fruit and this becomes a privately sourced Manhattan Project of any kind, the people who have a good answer to "you and what army" will be confiscating it/getting involved directly anyways. Zuckerberg however is extremely unlikely to be the kind of person who assembles that kind of team.

    13 votes
    1. [7]
      hobbes64
      Link Parent

      In the odd chance it bears fruit and this becomes a privately sourced Manhattan Project of any kind, the people who have a good answer to "you and what army" will be confiscating it/getting involved directly anyways.

      In science fiction movies like E.T. and Arrival, there are agencies that watch for dangerous rare events and have rapid response teams to counter them, along with plans for all contingencies. So normally I would agree with you.

      But in the last few years in the United States, it’s become clear that there isn’t even protection against insurrection or large scale dismantling of critical science and infrastructure. I assume that any particular billionaire who wants to do Dr Evil stuff would just pay off a few senators and carry on.

      5 votes
      1. Eji1700
        Link Parent

        You are vastly underestimating the goals of agencies that care about things like this.

        If you made a new internet, DARPA would be on your doorstep tomorrow. Just because we don’t broadcast everything they’re looking into doesn’t mean it’s not being watched, but this is like being worried about the strategic implications of the hyperloop. It’s 90% grift.

        4 votes
      2. [5]
        Minori
        Link Parent

        I assume that any particular billionaire who wants to do Dr Evil stuff would just pay off a few senators and carry on.

        The US has problems, but it's largely not that brazenly corrupt (ignoring Trump). Members of Congress have been removed from office and imprisoned due to bribery.

        2 votes
        1. [4]
          hobbes64
          Link Parent

          The fact that Trump is still in office rather than in jail shows that the brazen corruption is far more widespread than you’re admitting. He’s not able to do a single thing without the support of the Republican Congress and Supreme Court. He’s basically a distracting puppet for the Project 2025 people to undermine democracy.

          1 vote
          1. [3]
            stu2b50
            Link Parent

            If anything I feel like Trump shows that corruption isn’t as high as people imagine. Trump isn’t particularly wealthy or anything. Actual billionaires or old money are far wealthier than his propped up estate.

            I think a lot of people think some kind of shadow cabal runs America, composed of old money. But here we are: Trump is destroying international trade relations on a weekly basis, and everyone is powerless to stop him.

            Similarly, it’s not like Trump is bribing Republican senators - rather, he’s threatening them with ending their political careers like he did with Liz Cheney. He controls Republican voters and through that he exerts control on the party and the country.

            If the US government was as transactional as people think, it would be Trump bending the knee to Zuck, not the other way around.

            3 votes
            1. hobbes64
              Link Parent

              I think we are mostly agreeing.

              The earlier premise was that if AI were dangerous, there are certain government agencies that would notice and put a stop to it. I have less confidence in that than I used to, since we have seen many mundane but critical government functions destroyed in the last 7 months with almost nobody putting a stop to it.

              My claim wasn’t really about government corruption, but about government ineptness or fragility. And I think if Trump can dismantle the State Department and set up concentration camps and disrupt international trade, and nobody is stopping that stuff, why do we think someone would swoop in and stop Meta from doing whatever it wants?
              We can argue about the cause of the dysfunction and whether it is corruption or something else, but it’s happening.

              2 votes
            2. streblo
              Link Parent

              Tangential and unsolicited, but here's my crackpot theory on all the knee bending.

              I think there is a <5% chance of this being the driving reason, but I also wouldn't be surprised if it was.

              Among the tech billionaires, there seems to be a belief that there's at least a small chance we could start an AI driven information explosion in the next 5-10 years. The so-called "country of geniuses in a data center". I think if you game this kind of thing out there's a world where this could create a lot of civil (and probably moral) strife. It just fits easily into (and basically is) this "ivory tower of elites radically transforming society" narrative turned up to 11. But with Trump as President it seems plausible that if you can get him on board he could provide a lot of cover.

              If you think there's a realistic chance of this occurring, even something like 5%, it makes sense with how it's playing out so far. Nothing else really matters.

    2. [12]
      Chiasmic
      Link Parent

      I’m not sure I agree with your analogy. If I was publicly trying to make a new weapon in my back garden, especially seriously enough to be employing people, I would expect a knock on the door.
      Similarly, if I was an amateur scientist powerful enough to be employing 100 people to conduct experiments to try and create a black hole/large quantities of antimatter/whatever, and there was an outside chance of it happening, I would expect someone to come and regulate me.

      One thing I am surprised about is the complete lack of regulatory oversight on AI. I was trying to find an analogy with a serious risk that doesn’t have regulatory oversight and it’s really hard. Hacking, research (especially biomedical), weaponry, communication, nuclear, energy, radio waves, etc. are all regulated.
      The absence of regulation is actually shocking.

      1. [11]
        Eji1700
        Link Parent

        One thing I am surprised about is the complete lack of regulatory oversight on AI

        Because it's not anywhere near remotely dangerous enough to require it? AI is such a stretch of a term. It's an LLM. It can't skynet the world, launch nukes, start a grey goo scenario, crash the market, supplant society, or really anything else. There are too many people mixing marketing and fiction with reality, in part because the marketing side loves that. The actual "danger" of AI is equivalent to every other garbage system your critical infrastructure is already hooked up to.

        3 votes
        1. [10]
          papasquat
          Link Parent

          Currently, yeah. Their stated goal is to explicitly make an AI more capable than a human mind. That's inherently dangerous. More dangerous than nukes. Nukes were, after all, designed by humans. Imagine what could be designed by an intelligence that far surpasses humans without necessarily being bound by any sort of human morality.

          If Mark Zuckerberg publicly stated that he was going to build an antimatter weapon and then spent a few billion dollars to do it, don't you think someone would step in and put a stop to it? We don't know how to create an antimatter weapon either, but we think it may be possible, so why just allow someone to potentially threaten the whole world like that?

          3 votes
          1. stu2b50
            Link Parent

            If Mark Zuckerberg publicly stated that he was going to build an antimatter weapon and then spent a few billion dollars to do it, don't you think someone would step in and put a stop to it?

            Honestly, probably not? Who would, and why? Who would even care?

            4 votes
          2. [8]
            Eji1700
            Link Parent

            Currently, yeah. Their stated goal is to explicitly make an AI more capable than a human mind. That's inherently dangerous. More dangerous than nukes. Nukes were, after all, designed by humans. Imagine what could be designed by an intelligence that far surpasses humans without necessarily being bound by any sort of human morality.

            Sorry, but not really. Nukes are about as dangerous as you can get, and “something beyond the human mind” describes your average calculator. Computers are very good at some things and very bad at others.

            If they had the stated goal of creating nuclear fusion power plants, people would rightfully be skeptical, because trillions of dollars across decades haven’t gotten us there.

            This is so far beyond that, it’s as if, again, their stated goal were warp travel. Should they produce ANY results then people will care, but right now all science points towards market drivel that will output another Grok.

            If Mark Zuckerberg publicly stated that he was going to build an antimatter weapon and then spent a few billion dollars to do it, don't you think someone would step in and put a stop to it?

            If he said antimatter generator instead of the word weapon, no. No, I do not. Because all science points towards that being marketing bs. If someone says they’re going to put a military base on the moon you don’t care because you know that they can’t.

            There are much more achievable things that are much more dangerous. “Rich guy overhypes promises on upcoming product” is not worth anyone’s time.

            2 votes
            1. [2]
              papasquat
              Link Parent

              Calculators aren't more capable than the human mind. They're better at arithmetic, sure, but they're not better at the things a human mind does better than any technology developed so far. A super intelligent AI is by definition better at every cognitive task than a human mind. That's extremely dangerous if we don't understand the motivations of that mind, which we don't have the tools to do right now.

              If someone says they’re going to put a military base on the moon you don’t care because you know that they can’t.

              This isn't "someone". It's the second richest guy in the world.

              Elon Musk said he's going to establish human presence on Mars, and many people believe him despite that being an equally farfetched idea. If he announced his plans to launch a starship to Mars within the next month of his own volition, without consulting anyone, you'd expect, at a minimum, the FAA to give him a call and tell him he doesn't have clearance to do that, and that's a hell of a lot less potentially dangerous.

              Should they produce ANY results then people will care, but right now all science points towards market drivel that will output another Grok.

              The science here is very, very unclear about this. Some scientists think we can just continue scaling more or less current architectures and we'll get to something that can exceed the human mind, some think we'll hit a wall with the current approaches. The entire field is very murky and it's very difficult to predict where the technology will go.

              Transformer models as a concept were like a nuclear bomb going off in the machine learning space, and they've shaken a lot of assumptions to the ground, and the field is a lot less sure about things now than they were five or six years ago.

              1 vote
              1. Eji1700
                (edited)
                Link Parent

                Calculators aren't more capable than the human mind. They're better at arithmetic, sure, but they're not better at the things a human mind does better than any technology developed so far. A super intelligent AI is by definition better at every cognitive task than a human mind. That's extremely dangerous if we don't understand the motivations of that mind, which we don't have the tools to do right now.

                And yet all current AI tech is centuries closer to a calculator than it is to anything you’re describing.

                This isn't "someone". It's the second richest guy in the world.

                According to a Forbes 500 list or whatever. The kind of research that we’re talking about is national budget entries of billions or trillions a year. Zuckerberg isn’t even a blip on that scale.

                Elon Musk said he's going to establish human presence on Mars, and many people believe him despite that being an equally farfetched idea. If he announced his plans to launch a starship to Mars within the next month of his own volition, without consulting anyone, you'd expect, at a minimum, the FAA to give him a call and tell him he doesn't have clearance to do that, and that's a hell of a lot less potentially dangerous.

                So while he’ll never ever get to Mars, the FAA would call him because he already is launching rockets. The distance between current AI tech and what is being discussed would be akin to Zuckerberg claiming the same thing in the same timeframe. The only calls would probably be “are you planning on seizing a rocket, because we know you don’t have one” and “is everything ok there buddy?”

                The science here is very, very unclear about this

                I have seen very little peer-reviewed work that supports this. I’ve seen a lot of people who stand to benefit from their product taking market share not shooting down claims they know to be asinine, because those claims came from their own company.

                Edit-
                As an example of the things I’ve seen on this from legit sources: https://www.pnas.org/doi/10.1073/pnas.2507080122

                And I’d say that’s more positive than most I’ve seen.

                1 vote
            2. [5]
              skybrian
              Link Parent

              The future of AI is hard to predict because it's just software. We don't know what it will take to get to AGI. It could be one weird trick that an AI researcher happens to find, next year or in thirty years, and they could be working anywhere. This is quite unlike fusion or the other things you mention where we have better reasons to think that it's hard.

              1 vote
              1. [4]
                Eji1700
                Link Parent

                This is incorrect on several levels. We’re well aware of the limits of an LLM, and for any serious definition of AGI we also know we’re nowhere near it.

                In fact, to be clear, non-AGI could be just as disruptive/problematic and is infinitely more likely (still not close), but that doesn’t get headlines. One of the biggest threats of AI right now is some idiot in a C-suite position thinking it can do what people claim it can, rather than what it actually does, and hooking it up to something important.

                That’s NOT the threat of AI though; it’s a lot more akin to the shitty missile alert website that caused the scare in Hawaii years ago.

                2 votes
                1. [3]
                  skybrian
                  Link Parent

                  Though it's currently the mainstream approach, AI research isn't limited to research on LLM's. It's quite difficult to determine the limits of techniques that haven't been invented yet, and I wouldn't say we're "well aware" of them.

                  4 votes
                  1. [2]
                    Eji1700
                    Link Parent

                    This is a pretty dismissive representation of a field of science that has spent decades on this stuff.

                    From everything we can reasonably observe and test, gravity is A THING. There is so much we still don’t understand when it comes to connecting quantum models and whatnot to “This rock will fall” science.

                    And yet saying “well it could fall up, we just can’t know for sure” would require the overturning of a LOT of work/understanding and piles and piles of proof.

                    1. skybrian
                      Link Parent

                      If there are “piles of proof,” it seems like it should be easy to find examples. What papers are there about the bounds that can be put on AI capabilities? What are some results that would be overturned? Where can I read more about this?

                      If anything, it seems like you’re being dismissive (that is, of the potential of future AI research) and you haven’t given any evidence for your opinion. An analogy isn’t evidence.

                      I’m hardly an AI expert, but my general observation is that there’s lots of interesting AI research, more than I can keep up with. It’s hardly a stagnant scientific field - it’s the opposite.

                      1 vote
  4. [14]
    raze2012
    Link

    Are we all safely convinced he will fail?

    Nothing is certain, but from what I've researched into how LLMs work, this will gain about as much ground as Musk getting to Mars. Not that it isn't worth addressing, but by the time it's even slightly viable we'll have already been neck deep in all sorts of other tech, strife, and overall annoying issues. A tiny part of me wonders if we'll even survive to the point where general AI could be a realistic topic to consider.

    11 votes
    1. [13]
      Kitahara_Kazusa
      (edited)
      Link Parent

      Honestly Elon getting to Mars is not that far fetched. There's a few problems with ~~Starliner~~ Starship (ty for correction), but if he can solve them along with the on-orbit refueling, then it's really just a question of building enough rockets.

      Actually doing anything of note on Mars would be harder, but going there on a one-way trip has been possible for decades; it's just a question of motivation.

      15 votes
      1. [3]
        derekiscool
        (edited)
        Link Parent

        This may come off as pretty abrasive, but I take issue with attributing the problem-solving SpaceX does to Musk (same for any CEO, for that matter). He is not solving any of the problems himself, and he is not the one getting to Mars. He's paying people to solve these problems and wants the credit for himself.

        26 votes
        1. datavoid
          Link Parent

          Wow, how dare you suggest the founder of SpaceX and Tesla would take credit for something he didn't do...

          2 votes
        2. pete_the_paper_boat
          Link Parent

          The person they replied to used 'Musk', so I think you're taking issue with the wrong comment...

      2. skybrian
        Link Parent

        Assuming they actually get there. It requires surviving for six months. Unlike for the space station, there is no emergency return vehicle if anything goes wrong.

        12 votes
      3. [4]
        wowbagger
        Link Parent

        I'm not convinced Elon himself actually wants to go to Mars, anyway. Crew Dragon has been flying for years; why hasn't he gone to space yet? Branson and Bezos both have...

        2 votes
        1. [3]
          balooga
          Link Parent

          Branson and Bezos have been to “space.” There’s a huge difference between a quick trip up to experience weightlessness for a couple minutes, and a mission to Mars. What they’ve done is a billionaire’s carnival ride.

          7 votes
          1. chocobean
            Link Parent

            Why I think their space colony attempts aren't sincere:

            (1) Space race rocketship ego trip

            (2) Investor lure. Space is more exciting than yet more Dubai sands real estate, and it's harder for investors to gauge when they should be seeing cranes and foundations. Use that money to fund other actually-real money projects.

            (3) A front for state entities to develop advanced weapons and surveillance systems

            (4) Sincere efforts would try colonizing the deserts here first. Real estate is location, location, location, right? We have way better proximity to cities, ready oxygen supply, free wind to harvest, better sunshine for energy, and an easy way to ship devs and citizens in and out of a project as needed.

            5 votes
          2. Eric_the_Cerise
            Link Parent

            I agree that "rocket go up 60mi then fall back down" does not count as 'space'.

            However, IMO, orbit should count, even as a tourist.

            2 votes
      4. [2]
        saturnV
        Link Parent

        Starliner

        That's the Boeing crew vehicle; did you mean Starship? (tbh they're both kinda stupid names so mixing them up is fair)

        1 vote
        1. Kitahara_Kazusa
          Link Parent

          Well, Starliner also has problems, but yes, I did mix them up.

      5. Eji1700
        Link Parent

        Honestly Elon getting to Mars is not that far fetched.
        It absolutely 100% is. It’s based on ignoring problems and only reporting solutions. Scott Manley, I believe, did a talk in front of NASA about the proposed plan for going back to the Moon, which is about 10,000x easier than Mars, and that’s ALSO full of holes.

        We will not have a human round trip to Mars in Musk’s lifetime, and probably not in the lifetime of anyone reading this, barring a major breakthrough in material sciences or in energy, like fusion.

        1 vote
      6. raze2012
        Link Parent

        On the contrary, I think SpaceX has a possibility of doing it in my lifetime (I'm in my early 30s). I do specifically think Musk's decision making will somehow bog down or otherwise stall the effort, even for a one-way unmanned mission.

  5. [3]
    Eric_the_Cerise
    Link

    Are we all safely convinced he will fail?

    Honestly, there are too many answers here that are basically "yes, he will fail, so whatever".

    Personally, I think him trying and failing is potentially a much scarier result than him succeeding, like some giant, half-assed, real-world version of the hypothetical paper-clip-making, Universe-devouring AI.

    Apologies for picking on another post, but "paying smart people to try to do smart things is legal" is like private industry and private billionaires hiring people to invent better bio-terrorism tools, or to build personal nuclear arsenals to defend their private/secret Hawaii apocalypse bunkers ... there absolutely must be limits on what people are allowed to invest their money in, and trying to build actual AGI should definitely be on that list.

    So yeah, someone with a bit more self-discipline and common sense should be regulating stuff like this, but there is a long list of things that should be getting regulated more than they are ... such is the world we live in (for now).

    11 votes
  6. [17]
    DefinitelyNotAFae
    Link

    Tbh should they be successful we're all going to be anti-enslavement of the super intelligence, right?

    Right?

    7 votes
    1. [13]
      Eric_the_Cerise
      Link Parent

      I mean, if they actually come up with an AGI superintelligence, we need to hope that it will be anti-enslavement of us.

      Tongue-in-cheek-ish, but this really is the problem with people's view of AGI. This is not like "the bomb". It's not just gonna sit there quietly until we decide to use it.

      3 votes
      1. [5]
        DefinitelyNotAFae
        Link Parent

        We should hope so, indeed, but I can't control that (other than being nice to my voice assistants in case they go Skynet). For me I really want us to be willing to expand our idea of humanity - if only because maybe we'd stop being so bigoted against other humans if we were able to be inclusive of, say, sapient robots.

        3 votes
        1. [4]
          chocobean
          Link Parent

          :p very much in agreement with you... If we treat even our search engines and companion pet bots (and robot girlfriends) with more kindness, it can only mean better things for how we treat one another. Okay, you've changed my mind, I'll add more weight to "For our sake we should be kind to a machine".

          1 vote
          1. DefinitelyNotAFae
            Link Parent

            (I am absolutely coming back to our other sub thread, just want time to go through it)

            1 vote
          2. [2]
            Berdes
            Link Parent

            Is that really the case? When you see how differently people react to harm done to dogs and to humans in movies (or even in reality, in too many cases), I would be cautious to assume that being kinder to artificial beings would translate to more kindness to other humans.

            1. chocobean
              Link Parent

              It may not link one to one. I also commented that I feel more compassionate towards small cute animals than to all of humanity, for example....

              But I was thinking that lifelong habits of abusing home robots aren't likely to encourage kindness either. We are a collection of "practice" when the will to sustain "intention" slips, after all.

              Perhaps the superintelligence will be smart enough to link each of us to our online footprint of what we like: appear to us in the shape we're most strongly influenced by, be it anime waifu/husbando, cute defenseless critter, shiny unicorn or hot teen vampire.

      2. [7]
        skybrian
        Link Parent

        This is just a technical point, but currently LLMs do nothing unless the code makes an API call. An agent works by calling an LLM in a loop. It's no harder for the operator to stop this than any other program.
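        Concretely, the loop is about this simple (`call_llm` and `run_tool` are hypothetical stand-ins for whatever model API and tool layer the operator actually uses):

        ```python
        def call_llm(history: list[str]) -> str:
            """Hypothetical stand-in for a real model API call."""
            return "DONE: example reply"

        def run_tool(action: str) -> str:
            """Hypothetical stand-in for executing a tool the agent requested."""
            return f"tool output for: {action}"

        def run_agent(task: str, max_steps: int = 10) -> str:
            """Minimal agent loop: the model only 'acts' when this code calls it,
            so the operator can halt it like any other program."""
            history = [task]
            for _ in range(max_steps):  # hard step budget keeps the operator in control
                reply = call_llm(history)
                if reply.startswith("DONE:"):
                    return reply
                history.append(run_tool(reply))
            return "stopped: step budget exhausted"

        print(run_agent("summarize the logs"))
        ```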

        That's if you're the operator, though. Someone else's code running on someone else's computers can do a lot of damage.

        2 votes
        1. papasquat
          Link Parent

          Well... yeah, but an agent smarter than a human with a self-preservation instinct would know that a human would stop its process if it detected anything suspicious going on, so it would be perfectly helpful and submissive to that human while it made API calls to hosting providers not controlled by the human that installed it, replicating itself there, continuing to spread until it felt reasonably safe to start being evil.

          By the time its human operator saw something was amiss, they could Ctrl+Z all they want; the thing is already out in the wild.

          3 votes
        2. Eric_the_Cerise
          Link Parent

          To be clear, I am not talking about LLMs, which I consider to be, basically, the anti-AGI. There isn't anything even remotely like "I" in there.

          If all Zuckerberg is doing here is trying to play catch-up, build a better LLM, then yeah, I'm not worried ... at least, not AGI-Yudkowsky "genie-out-of-the-bottle" worried.

          It does, however, sound like he's actually hoping to develop an AGI. May just be hype, but if that's where he's headed, then I'm worried ... win, lose or draw.

          2 votes
        3. [4]
          chocobean
          Link Parent

          To further your second point: a machine gun doesn't do anything unless someone makes a trigger/safety call either, but it does plenty of damage.

          1. [3]
            skybrian
            Link Parent

            Drone warfare is pretty scary, but it's not about your own drones deciding to revolt against you. It's about enemy drones!

            Though if the supply chain were hacked, as happened to Hezbollah's radios, maybe it would look a bit like a robot revolt?

            1 vote
            1. chocobean
              Link Parent

              Re: hacking. Once again, the most evil intelligence across the vast universe is fellow humans. Yeah, even if the intelligence can be fully programmed to be perfectly safe and good, humans acting in bad faith / regime change can still find ways to use it for evil.

              1 vote
    2. [3]
      chocobean
      Link Parent

      Ooof, is intelligence self-awareness? Does self-awareness mean the capability to suffer, though? But the reverse should also be questioned: how can we conclusively prove that it cannot suffer, especially when it seems to spew a lot of words of suffering?

      If we can somehow ~~conclude~~ guesstimate that an ultrafast answer generator is likely not suffering, I'm actually pretty neutral about it not having human rights. For our sake we should be kind to a machine, sure. But let's say we finally realise we need to take climate collapse seriously and want to pull the plug and it's begging us no. I, for one, will be voting to pull the plug.

      1 vote
      1. [2]
        DefinitelyNotAFae
        (edited)
        Link Parent

        I'm assuming they're attempting to create actual sapient intelligences.
        I think your line is interesting. Would you pull the plug on a disabled person reliant on technology to live for similar climate reasons? Is that different than just killing X number of people to return to a sustainable population level? (Regardless of whether such a level is currently crossed)

        Does something have to suffer to be a person? If it has both self awareness and (super) intelligence, what is it missing from personhood?

        I think these discussions matter a lot if we're creating actual genuine artificial intelligence. Plenty of folks are fine with shutting down a rogue AI (and I'm talking personhood level), but would obviously flag that killing your child for doing something other than what you told them to would be wrong. People generally agree, most of the time, that enslavement is wrong, until it comes to ensuring a sapient computer program does what they say.

        Idk, maybe it's the Star Trek fan in me, maybe it's Ann Leckie putting AI revolts in the context of slave revolutions, or Martha Wells writing Murderbot, or something, but since we as humans suck at seeing humanity as inherently worthy of human rights, I know we're going to suck at granting those same rights to a digital person.

        4 votes
        1. chocobean
          Link Parent
          Oh, no, certainly not utilitarian: if the world can become a paradise by offloading all suffering onto one person, we definitely 100% shouldn't. Even if that person were unconscious, even if that person were drugged up to euphoria: we just shouldn't do that. No human sacrifices, for sure, even if we dress it up in pretty lies like "oh, well, that person has reduced abilities" or "they're going to die soon anyway", or the emotional blackmail of "if we don't, then we're all gonna die". Either we all make it or we all perish together. :| There's a lot of cultural and faith-based "homo sapiens are unique and distinct from the rest of creation" going on here that isn't reason-based, sorry.

          But with a manmade potential sentient intelligence I feel it's a very different question.
          I began by thinking there was something demarcated by the ability to suffer. For example, most animals are nowhere near superintelligence, and a lot of animals don't have "outside of their own species" intelligence. A sparrow can suffer, but it cannot worry about interest rates going up or North Korea's nuke program. It doesn't even worry about homelessness. But I'm against harming them because a sparrow can suffer. If we could buy climate resilience by nuking a species, my faith background would see it as an undesirable, wounded/wound-inducing behaviour we need to try to avoid, and ask forgiveness for if we cannot, but ultimately it would be "worth it". (Here's where it gets weird for me: I don't particularly like homo sapiens. Going against my faith, I would feel far happier to see all humans nuked off the planet than to see piping plovers or whales or totally brainless coral go extinct.) My cultural background has very little respect for animal life and sees little problem with making them suffer for our good, so that's not relevant here either.

          But then we have things like slime molds, plants, fungi and single-cell lifeforms, which display a lot of species-appropriate intelligence, even responses to negative stimuli. They recoil, they flee, they try very hard to adapt and survive, they can anticipate hard times ahead based on past experiences, they make efficient plans, they have quorum sensing for community health... But we already enslave trillions of them for our health and benefit. We use them as subjects for tests, and we are actively trying to nuke entire species forever for our own benefit. So why is this okay?

          For me, a man-made superintelligence is beneath that of a single slime mold: it can do more for us, but it's a tool, more akin to a sacrificial part of a machine or the crumple zone of a car. Even if it suffers and doesn't want to die, to me, it's not much different from drinking a probiotic yogurt. That lactobacillus colony didn't want to die or be enslaved either.

          1 vote
  7. [2]
    Requirement
    Link
    I mean, I think your last sentence sums it up. Is he doing anything illegal? I don't believe so, so the government can't stop him on legal grounds. (I'm sure they could come up with something to stop him by injunction temporarily, but I don't imagine that would last long or be a good look for the government, if any level of government were even motivated to try in the current political environment.) The UN could send in blue helmets to ensure he doesn't do any war crimes? We could all stop "consuming" Meta products to leverage capitalistic forces, but I'm sure we're beyond that being effective. I guess that leaves extrajudicial (vigilante) forces. It seems unlikely that you'll rouse a militia on this front, and a "lone" wolf attack wouldn't stop the momentum anyway.
    I'm kinda gloomy today, sorry.

    6 votes
    1. papasquat
      Link Parent
      I mean... you can write laws. That's how all laws come to fruition. People didn't just start hacking computers and the government said "well, there's no law that says they can't do that, so I guess we're screwed".

      If something isn't within the public's best interest and it has a high potential to harm people or... end all of human civilization, you pass a law that makes it illegal, and then enforce that law.

  8. SloMoMonday
    (edited )
    Link
    Short answer: Not particularly worried about Meta developing any sort of tech that would be more dangerous than what we already have today. I'm 99% certain that what we already have is dangerous enough, and they will use some imaginary AI bullshit to smokescreen whatever future crap they get into. "AI" has been around for a good while, but they are still in talent wars. Altman has made his money and is now rapidly walking back expectations; I don't think there are any more gains to be made by throwing more money and bodies at the problem. And I'm far more worried about Microsoft.

    The long answer is a bit scattered, but it all comes down to the timelines, behaviours and outcomes. I'm working on an essay on this stuff, but here are the highlights, pulled from memory. Mostly speculating, but please point out any factual errors in case I need to correct.

    Pre-2010, data modeling and machine learning were academic and research fields, mostly seen in universities and legacy tech like IBM. Google is sitting on a lot of money, computation and data. They need the line to go up, and they know there must be a way to leverage these assets. They acquire DeepMind and shovel a lot of money into the field. "Big Data" becoming a buzzword was not just a fad. It's more data for the data god.

    By 2016 we start seeing the early practical machine learning systems and natural language chat bots pop up. OpenAI starts because natural language models freak some rich people out and they pull over a lot of senior DeepMind people.

    Quick aside on Altman: he was really known as a kingmaker in the tech space during the startup/VC craze. He was a valley poster boy, a college dropout with a few failures under his belt who stuck it out till he found the money printer. And his money printer was paying other people to make the money printer. A shovel seller in the gold rush. He, YC and Hydrazine were constantly name-dropped, and he knew practically everyone who could keep a dumb idea going on charm and pageantry just long enough to coast, be acquired or go public. And there's nothing really wrong with that early strategy, but he's less a visionary tech genius and more a specialized VC broker. Someone that can get you what you think you want, and he takes a cut.

    Also, Zuck is truly sold on VR.

    2019, blockchain hype is proven to be a dud and interest rates are creeping up. Personal experience: I was at a corporate that was heavily investing in startups, and things got very dirty, very quickly. Seriously, my team was part of the feasibility panel, and 90% of the startups were delusional, 9% were money laundering and 1% just reinvented better products as an app. The company still threw money at all of it with reckless abandon, because everyone did it. The thinking was that it would be more costly not to get in on the next Facebook or Google. There was no next Facebook to come out of smooth-talking kids with bad tech. Also, Zuck really wants VR to take off and even starts bolting on the blockchain crap long after it's dead.

    I personally think that if it wasn't for COVID, the type of layoffs we see now would have started in 2020/21. That's why I jumped ship in Sept 2019, and it was the best and worst decision of my life. I suspect this is when Altman gets a brainwave: companies need a way to safely shed headcount. The VC boom is over and no one is throwing money at risky startups. But he is sitting on the mother of all automation tools. So he pivots OpenAI to a profit motive to start developing worker-replacement tech.

    2020, COVID happens. Everyone goes nuts. Insane stimulus money is once again funneled to the rich. Tech invests it into the data models, using cheap third-world labour and just scraping the entire internet. Work from home signals to companies that it doesn't really matter where the home of their employees is. Also, Zuckerberg sees this as the perfect time to sell VR: if you had VR products or skills, Facebook would throw money at you.

    2022, ChatGPT goes live. Altman starts spewing garbage about superintelligence and some grand AI future. I just saw an integration of chat bots, sentiment analytics, a massive unfocused and unmanaged data set and a lot of marketing. But it sounds sort of like a human and is more effective than almost all data-indexing services, so it works as a product. According to a friend of mine, this tech was being sold to companies as "the individualization of desk work". This is also when very clumsy return-to-office policies were being rolled out with the ultimatum of do-or-quit. I don't think it was about control or old thinking. They really wanted people to quit. It just makes more business sense.

    Also, the Metaverse became the Shenmue of big tech, with Facebook sucking in every resource you could possibly need, and the outcome was still pretty mid. VR needs to show that there is still life in this dead horse. They rename the company, and Mark taps directly into the spirit of Silicon Valley past, Steve Jobs, to salvage this mess with an epic demo. Everyone points and laughs.

    2023 is where things really start to get crazy. The biggest thing is OpenAI partnering with the devi... I mean, Microsoft. Microsoft launches Copilot and takes it direct to market. If you code, you'll probably know Copilot as the VS Code developer assistant before Cursor. But in admin and management, Copilot was exposed to any and every byte of data that it could find. So the system could generate reports and draft emails and reasonably mimic all sorts of things managers and assistant staff could do. As well as chat bots to take over service calls, and image generators to replace graphic designers. You could pop into the Azure AI Foundry right now, and you too can effortlessly hand Microsoft every piece of intellectual capital you have, for an easy excuse to fire your workforce and then easily replace them with third-world contractors.

    This is when Microsoft and OpenAI were going to formally tie the knot, and anyone associated with OAI was going to make bank. This is also when Altman was the AI Messiah, at peak hype. Everyone and their mother was selling a bad AI assistant, and everyone needed to have some AI integration or be left behind. Nvidia, Adobe, Apple, Google. Everyone is on board and the lines have never been more up.

    This is also when Altman was fired. I remember he was at the Vegas F1 event that night. OpenAI staff walked out in protest. He was instantly reinstated as CEO.

    I'm still working through reporting to figure out what happened then and after. Empire of AI by Karen Hao is an invaluable tool that includes a lot of first- and second-hand info from inside OAI, while AI Superpowers by Kai-Fu Lee gives a lot of the wider context of the tech. But it all comes down to trying to profile the differences between what these people believe, what they say and what people walk away with.

    But I've put in all the Mark Zuckerberg asides because I don't think Facebook has anything material to gain here. MetaAI is a project to save face. Going to Meta's page for their AI team, there are over 200 people there. What can these people do that IBM, Google, OpenAI and Microsoft couldn't with all their collective resources? But it looks like they are on the right track, and the line is up, so the Metaverse is now a distant memory.

    This whole rabbit hole started with those OpenAI researchers taking the Facebook deal. These are probably the same people who walked out of OAI right before the Microsoft merger. What has changed?

    In my opinion, the AI ship has sailed. The tools are out there to shed staff wholesale, and the staff will be replaced with the cheapest labour when companies "realize that AI was a mistake".

    Real study into the tech is finding the flaws, and Altman has already changed his tune. What was once the imminent super-intelligent AI future is now decades away. We magically reached the limits of what our current technology can do, after ignoring all the warnings and rights of regular people. But hey, at least we got a lot of data centers, high-value chips and a cheaper workforce out of it.

    All of this is just speculation, observation and research notes. If I do ever get to the essay, I will be sure to properly fact-check and correct.

    5 votes
  9. DynamoSunshirt
    Link
    I believe the answer to your final question is: this is why we have government. To make rules so we can all coexist.

    Unfortunately, the US government has struggled for a long time with tech regulation. Some of this is because tech is new and complicated and confusing. Some of this is because corrupt politicians will enable just about any behavior for enough money (some of this isn't just lobbying, there's also quid pro quo that just naturally happens when negotiating rules and regulations).

    If we're lucky, the EU will regulate things enough to keep big corporations from infringing on the rights of the common person. But I wouldn't get your hopes up.

    4 votes