52 votes

Sam Altman will join Microsoft to lead a new advanced AI research team following his ouster from OpenAI, CEO Satya Nadella said

40 comments

  1. [11]
    Handshape

    This is a new variation on "embrace, extend, extinguish", but not off-brand for Microsoft. You know what's likely not coming over to Microsoft?

    The superalignment commitment.

    I have a little trepidation. This twist feels like Hollywood wrote it.

    26 votes
    1. [9]
      sparksbet

      Given the circumstances around Sam leaving OpenAI, it's hard to really blame Microsoft or claim it's a variation on "embrace, extend, extinguish". OpenAI's board did this to themselves (though whether their reasons were good is hard to tell since there's so little solid information on exactly why they ousted Altman).

      24 votes
      1. [8]
        Eji1700
        (edited)

        Yeah, everyone is calling the board stupid, but without knowing the reasoning behind their behavior it’s hard to judge.

        I do think Sam falls pretty squarely in the category of charismatic promoters and has done a ton of shifty shit.

        I also wonder how much of this is post-firing propaganda. The “signatures” seem to be typed names on a Google Doc?

        12 votes
        1. [5]
          sparksbet

          I think Ilya Sutskever's recent tweet apologizing for his participation in what happened and the calls from staff at OpenAI for the board to resign is better evidence that this was not a great decision for them to have made (though both of those are pretty recent, this stuff is moving fast).

          I agree that from what I know of Sam Altman he's kinda shady, but tbqh I would've said the same of OpenAI as an organization and of several of their other board members, so I don't have a rat in this race. It's possible they had good reasons, ones I would have agreed with, for not wanting him to stay on as CEO (and I'm still very curious about the details), but it's hard to say, as I'm meh on OpenAI's overall ethos as an organization. In retrospect, though, it's certainly clear this was a tactical misstep if nothing else, since they're losing out in pretty much every way atm.

          I'm a data scientist (at a different company ofc lol) and we've been posting the hot gossip about this in our Slack all day. Luckily the company I work for is pretty drama-free, and watching everything implode vicariously ends up just being pure popcorn material since it doesn't really affect me directly.

          9 votes
          1. [4]
            Eji1700

            Yeah, I think just about everyone involved is shady as hell, just in different degrees. I think the board believes too much of their own hype and Sam is a grifter. It's just been odd to watch so many people swing around so violently on this issue.

            4 votes
            1. sparksbet

              Yeah I prefer my spot in the cheap seats for sure 🍿🍿🍿

              3 votes
            2. [2]
              unkz

              Why is Sam a grifter? Sure seems like he is delivering on some pretty amazing technology.

              1 vote
              1. Eji1700

                Yeah, and so did Musk. They're hype men first and all else second. No one seems to remember Sam's Worldcoin, which is concerning as hell in all sorts of directions, with a million red flags. Just like people seem to forget all the various timelines, projects, and BS Musk has put out over the years, which is only now finally starting to catch up with him.

                14 votes
        2. [2]
          babypuncher

          Yeah, everyone is calling the board stupid, but without knowing the reasoning behind their behavior it’s hard to judge.

          Their unwillingness to reveal this information, coupled with their panicked attempt to undo the whole thing, is cause enough for me to think they are dumb.

          If they have any information that would change my opinion, then they are more than welcome to share it.

          7 votes
          1. Eji1700

            While I personally think everyone involved is dumb, just in different ways, I can see situations where either their lack of experience (because this is NOT a board with real credentials in this kind of situation), naivety, or the severity of what's occurred (like say if going public also means tanking the project), keeps them from coming out right away.

            I just don't like guessing on things like this, especially in the Twitter news era where people just run with stuff that's barely verified.

            3 votes
    2. Beowulf

      The situation at OpenAI wasn’t perfect but it was probably the best that would have ever come out of our capitalist society. The OpenAI board should have continued trying to work with Altman instead of kicking him to the curb. What did they think MS (and key OpenAI employees for that matter) would do?

      I’m no fan of MS but I don’t think it’s fair to say they orchestrated this at all. Now instead of limited oversight the OpenAI board will have literally no oversight. They did this to themselves.

      15 votes
  2. [4]
    Lucid

    This is confusing: didn't Microsoft invest heavily in OpenAI and integrate Bing with DALL-E? Would this signify a split from OpenAI?

    10 votes
    1. [2]
      vczf

      I think yes, this is a sign that Microsoft intends to compete with (what's going to be left of) OpenAI and build rival generative models.

      20 votes
      1. Pioneer

        Yay, AI in the hands of powerful tech companies. We know they know what's best to use these tools for in the world...

        The sheer lack of governance and oversight leadership on these topics is terrifying. We're barrelling towards intellectual-property problems already, and now Microsoft is going to jam even more 'AI' into its Power-everything suite. It's going to be so frustrating socially.

        11 votes
    2. Macil

      I assume at least in the short term they're going to build on top of what they get from the OpenAI partnership. It could evolve into replacing OpenAI, but I don't think enough is known or planned yet to say whether that's how it will work out.

      5 votes
  3. [10]
    Grumble4681

    It will be interesting to see how OpenAI responds after all this. From what I had read, the board members of the non-profit entity that manages the capped-profit entity are very cautious about the potential dangers of AI.

    The problem with investment and selling products is that in a capitalist economy, being responsible in some undefined social way isn't key to survival. Doing things that make other people money is. The non-profit OpenAI was best positioned to exercise its theoretical ethical responsibility to society when it gave itself control over an entity whose leader wasn't as constrained as the non-profit itself. I imagine that with the cautious leadership the board has, they will probably end up losing ground in the long run from this. Clearly it's also difficult to thread the needle: if you can't hold the leader accountable then you aren't really fulfilling your goals, but if you hold the leader accountable so much that you neuter yourself, then you're not really in a position to fulfill your goals either.

    Whether they would have succeeded at their theoretical goal is not something we can know. Maybe if society crumbles as the result of all this unrestrained growth, and OpenAI has lost ground to its competitors, they may end up on the right side of whatever history remains after said crumbling. Or maybe they could have been the one to lead by example if they had stayed at the forefront.

    I just see it as a very bad sign for OpenAI if Microsoft bails because what we have seen in our time is that these small independent companies very rarely survive in markets where the big established tech companies can come in and throw their weight around. They often just end up selling themselves to the big established tech companies instead.

    If Microsoft bails on OpenAI, who at this point even needs them? Meta doesn't. Google doesn't. Apple maybe could benefit but at the same time Apple prefers to do everything in-house whenever possible (either by developing the solution themselves or buying it), but partnering with OpenAI doesn't seem like it would be a long-term Apple move. I'm not sure where Amazon is at in this market but I don't know if they would have a need for OpenAI either.

    8 votes
    1. [8]
      post_below
      • Exemplary

      maybe if society crumbles

      I don't know how serious you're being there, but assuming the crumbling in question would be the result of an AGI mishandled... it's not whatsoever clear that LLMs are a major step in that direction. They feel like it, I get that, that's why that guy tried to get Google's chatbot a lawyer, but their complete lack of contextual understanding is a bit of a drawback if the goal is AGI. Currently there is no practical path from where LLMs are now to anything like contextual understanding. They can make words, but they don't have a clue what letters are. Moreover they don't have the basic plumbing that would be required for the concept of having a clue to have any meaning in the first place.
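
      (To make the "they don't have a clue what letters are" point concrete: these models consume and emit subword token IDs, not characters. A minimal sketch, assuming the tiktoken package and one of OpenAI's published encoding names; the example string is my own.)

      ```python
      # Tokenizers hand the model opaque integer IDs, never letters.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

      ids = enc.encode("strawberry")
      print(ids)                             # a short list of integers
      print([enc.decode([i]) for i in ids])  # the subword chunks the model "sees"
      # Letter-level facts (say, how many r's appear) are never directly
      # visible to the model; it only ever sees whole-token statistics.
      ```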

      We're still at a place where it's anyone's guess when, or if, AGI will happen. And what branch of AI research it will come from. Personally I doubt it will be LLMs, though the things we learn about training and large datasets will probably play an important part.

      Disclaimer: I'm not saying LLMs aren't a remarkable achievement, or that they won't change things, or that ethics aren't an important consideration. The ideal time to create ethics around new tech is definitely before that tech proliferates. I just don't think we need to worry about society crumbling any time soon as a result of AI.

      Seems likely we'll have to figure out how to stop society from crumbling from the non-theoretical threat of climate change long before we need to worry about AGI.

      27 votes
      1. tauon
        (edited)

        maybe if society crumbles

        I don't know how serious you're being there, but assuming the crumbling in question would be the result of an AGI mishandled... it's not whatsoever clear that LLMs are a major step in that direction.

        […]

        I just don't think we need to worry about society crumbling any time soon as a result of AI.

        Seems likely we'll have to figure out how to stop society from crumbling from the non-theoretical threat of climate change long before we need to worry about AGI.

        You might want to check out this essay by Bruce Schneier, who has been advocating against e.g. digital voting since like 2005, on some of the risks (and opportunities!) of AI on society, here in the context of democracy.

        When (most serious) people talk about dangers of AI, they don’t mean a Terminator-style scenario. First there will be misuse by human bad actors, loooong before machines themselves might ever want to harm humans.

        I think the Tildes crowd (I’m including myself) has a tendency to under-hype – as opposed to the general population/media right now, which over-hype – the day-to-day “AI” focus as the public understands it, i.e. nine times out of ten a discussion about some current chatbot. And probably rightfully so. We’re not all getting fired, and learning programming will not be useless by 2027.
        But – and Schneier makes this point as well – we should start thinking now about the potential dangers of a future technology that lies somewhere in between “obviously too incapable of real-world harm at scale” and “rogue AGI, welp, it’s too late now”.

        Erosion of government trust and power is already a very real issue in quite a lot of regions of the world, and historically, government and peace have been rather fragile. That’s something we take for granted these days, when we shouldn’t. The next US presidential election is already set to be a major shit show, with a lot of shenanigans enabled by AI (think deepfaked video statements paired with micro-targeted personalized messaging: “I, candidate of the {D,R} party, think you and your [insert family name] relatives are all incapable losers!”).

        But again, it’s less about discussing what is currently possible and more about the future capabilities such systems could realistically achieve.

        (I still agree that climate change is the more pressing issue here. My point is rather that maybe “the other topic” isn’t quite as laughably unimportant, or as free of danger, as you made it seem.)

        14 votes
      2. Grumble4681
        (edited)

        I don't know how serious you're being there, but assuming the crumbling in question would be the result of an AGI mishandled... it's not whatsoever clear that LLMs are a major step in that direction.

        Current LLMs are not a concern. I get that some people overestimate their capabilities so it might make sense to address that, but that isn't what I was basing my statements on.

        I'm being serious, but in different ways depending on the context. I was making an argument from the perspective of the non-profit OpenAI entity, so within that context it was serious: if you take both the mission of that entity and the perspectives offered by people within and around it, then clearly they too see something serious in the situation and in the potential dangers of AGI.

        Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together.

        That, for example, is taken from
        https://openai.com/blog/our-approach-to-alignment-research

        So from their perspective, I'd say "substantial risks to humanity" could cover society crumbling, or anything else for that matter.

        In the context of my own perspective, I am being serious in that our societies are already fairly delicate, and drastic economic shifts could be enough to bring down the whole house of cards, especially considering wealth inequality as it is. When the vast majority of humans, who hold little to no wealth, become obsolete, society crumbling isn't some wild conspiracy to me. There have been numerous people within the field sounding the alarm about AGI, and surely it's possible they are wrong, but I think it clearly indicates that this is not some storyline for sci-fi movies but a problem we could face in the future.

        You also have someone like Geoffrey Hinton who said this

        “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose A.I. And now I think it may be 20 years or less.”

        That could come before climate change drastically alters our societies to the point of crumbling.

        For me, I'm not even talking about the misuse of AGI, or the loss of control over AGI, as the threat to humanity. I think the threat to humanity in this scenario is humans themselves: the moment when the capital-controlling humans have no need for, or benefit from, the rest of humanity anymore. When all that humans are is a resource sink, providing no other value, I don't think humans will be able to address that problem. Within this context, it's not as though AGI would necessarily have to do anything harmful; its existence alone creates a problem humans can't solve (from my perspective), a problem that will lead to humans destroying their societies themselves.

        6 votes
      3. [5]
        skybrian

        [Their] complete lack of contextual understanding is a bit of a drawback if the goal is AGI. Currently there is no practical path from where LLMs are now to anything like contextual understanding.

        I'm not sure what you mean by contextual understanding, but to me it seems the opposite: GPT-4 is very good at guessing words and phrases that are contextually appropriate, even if conceptually it's not quite right. It's sort of like context is all they know. Understanding things from context substitutes for a deeper understanding.
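
        (As a toy illustration of "guessing from context": mechanically, a model maps the context so far to a probability distribution over next tokens and samples from it. Everything below, the vocabulary and the scores alike, is invented purely for illustration.)

        ```python
        # Toy next-token step: context -> scores -> probabilities -> sample.
        import numpy as np

        vocab  = ["Paris", "London", "banana", "the"]
        logits = np.array([4.2, 2.1, -3.0, 0.5])  # pretend scores the model
                                                  # produced for some context

        probs = np.exp(logits - logits.max())     # numerically stable softmax
        probs /= probs.sum()

        rng = np.random.default_rng(0)
        print(dict(zip(vocab, probs.round(3))))   # the distribution over options
        print("sampled:", rng.choice(vocab, p=probs))
        ```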

        1. [4]
          post_below

          Understanding is the key word, they don't. Surely my point is apparent even if my word choice is imperfect?

          1 vote
          1. [3]
            skybrian

            I’m uncomfortable with treating understanding as binary, but I think it’s fair to say that there’s a lot they don’t understand.

            2 votes
            1. [2]
              TumblingTurquoise

              But that's the thing, they don't understand at all. It's not a capability that LLMs have, so we can't refer to their understanding in either binary or gradual terms. Their outputs are simply based on some very complex and smart statistical analysis above all else.

              1. skybrian

                I don't agree, but perhaps it has to do with different definitions of "understands."

                For example, I think it's very likely that LLMs evolve circuitry for keeping track of who is speaking in a conversation, and I think that would count as a limited form of understanding.

                What they actually do is an active area of research.

                1 vote
    2. Beowulf

      The AI genie is out of the bottle. OpenAI had a very tenuous setup, but it was one of the best setups possible, unfortunately. They had an opportunity to be the long-term market leader in LLMs and perhaps some day AGI as well. They could have used that position to somewhat limit risk and advocate for ethical standards that the rest of the industry would hopefully follow. Now they don’t have a seat at the table; they’ve really shot themselves in the foot.

      3 votes
  4. [4]
    DefiantEmbassy
    Ilya Sutskever: I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

    8 votes
    1. moocow1452

      This is a cautionary tale on why all parties should agree whether your shocking boardroom coup is “takebacksies” or “no takebacksies” before there is any change of hands in power.

      6 votes
    2. JoshuaJ

      Ha, the audacity of that statement. All we learn is that Ilya is a turncoat. Pathetic.

      1 vote
    3. DefiantEmbassy

      We’ve reached full meltdown (of the company), folks.

      5 votes
  5. [2]
    DefiantEmbassy

    Who Controls OpenAI? by Matt Levine.

    Some choice quotes:

    I mean here's a diagram. And then here's a slightly annotated diagram

    It is not difficult to know what role money plays in the current world! The role money plays is: OpenAI (still) needs a lot of it, and investors have it. If you are a promising tech startup (and OpenAI very much is) then you can raise a lot of money from investors (and OpenAI very much has) while giving them little in the way of formal governance rights (and OpenAI very much does). You can even say “write me a $13 billion check, but view it in the spirit of a donation,” and they’ll do it.

    You just can’t mean that! There are limits! You can’t just call up Microsoft and be like “hey you know that CEO you like, the one who negotiated your $13 billion investment? We decided he was a little too commercial, a little too focused on making a profitable product for investors. So we fired him. The press release goes out in one minute. Have a nice day.”

    7 votes
    1. DawnPaladin

      Wow, this guy is funny. (Your link is broken--there's an extra ] on the end.)

      1 vote
  6. stu2b50

    There’s going to be a lot of staff moving over to Microsoft with Altman, and I suspect many investors will pull out of OpenAI. I wouldn’t be surprised if a year or two down the line, Microsoft buys the corporate part of OpenAI for a bargain, mainly for the now very valuable “GPT” brand.

    6 votes
  7. [5]
    Sodliddesu

    I know we're all predicting real world stuff that's going to come from this but I can only see two outcomes.

    1. They're going to add more "AI" features to Cortana, unironically, because they don't know/remember that they made her basically go crazy and begin trying to use force to impose her will on the universe in the Halo series.

    2. They're not going to add to Cortana and are instead going to shove ANOTHER 'virtual assistant' into Windows and hide half the settings behind it, because not even an LLM can withstand the mighty force of Microsoft and half-baked UX.

    3 votes
    1. [2]
      mezze

      I think Cortana is deprecated and going forward all the “AI” stuff will live under the Copilot branding, instead.

      For the time being, Microsoft will continue partnering with OpenAI while they spin up their own competitor in the background.

      I’m a bit surprised (and disappointed) that Apple didn’t make a play to bring Sam, Greg Brockman, and co. into their Siri LLM team — feels like a missed opportunity.

      8 votes
      1. tauon

        I’m a bit surprised (and disappointed) that Apple didn’t make a play to bring Sam, Greg Brockman, and co. into their Siri LLM team — feels like a missed opportunity.

        Apple typically buys for very, very concrete purposes and less so “general research” capabilities. In addition to that I believe Altman is way too much of a public figure for Apple’s liking.

        Last I heard, they even had some issues hiring, because top-level ML researchers typically want to publish their work when it’s state of the art, and Apple’s secrecy isn’t too compatible with that.

        5 votes
  8. rosco

    As per usual I'm strapping on my tinfoil hat with this one and hoping someone will point out all the flaws in my theory.

    OpenAI always felt like a way for Microsoft to do really experimental, taxpayer-financed R&D. Most of the funding Microsoft donated to OpenAI ended up being spent on Azure credits anyway, so the tax write-off they got on the hundreds of millions of dollars given to OpenAI came back in the form of cloud-processing credit spending. It was a closed loop. OpenAI reached its target outcome: the industry-leading LLM, and bringing AI to the world in the softer, more positive light of a non-profit.

    Now Microsoft is splitting the development tree. OpenAI will continue to be open source and free, while Microsoft takes this development and continues with proprietary work. As a number of folks have pointed out here, the open version will be kneecapped without Microsoft financing, and its development will slow. Meanwhile, Microsoft is going to take the foundational team, all of the former heads and a good deal of staff, and ramp up production of the proprietary tree. Mapbox did this a few years ago when they took their mapping tech proprietary and split off from the open, community-developed tech that then became MapLibre.

    2 votes
  9. streblo

    I know they're paid obscene salaries, but if you were an early or even mid employee of OpenAI sitting on a literal fortune of stock options, it would be a very bitter pill to swallow to have to zero that out, pack your bags, and head to Microsoft to duplicate most of your last few years' work. I guess you just have to hope Microsoft absorbs OpenAI's valuation?

    2 votes