36 votes

Why do you think Sam Altman was fired from OpenAI?

Anybody have more background or context on this? I certainly don't see it helping anything, but I know nothing of the Valley or its ways

25 comments

    1. [12]
      DefiantEmbassy
      Link Parent
      (If I had to have a take, it'd be roughly this tweet: so basically the “move slower” people ousted the “move faster” people who’ll move fast to start a newco all the “move faster” people will join...
      12 votes
      1. [8]
        Lucid
        Link Parent
        I'm not really involved in the AI space at all, but it's odd to me that so many people in these replies are so against "moving slow". Especially when at least some (most?) experts are describing AI as a potential existential threat.

        31 votes
        1. Macil
          (edited )
          Link Parent
          I've felt this confusion too today after reading a lot of the discourse.

          I think a lot of people reason that safety could only be a concern related to a firing if OpenAI had dangerous AGI today, that it's unlikely OpenAI has AGI today, and that this concern must therefore be an obviously false pretense for the firing. But that reasoning fails in multiple ways. For one, it's easy for me to imagine that a disagreement over how to handle safety in the long term led to multiple people in charge digging in and losing the ability to work together. I think people incorrectly expect the reason for dramatic news to be more dramatic than something that mundane.

          15 votes
        2. sparksbet
          Link Parent
          There is definitely still a large contingent of "move fast and break things" types in the AI space, unfortunately.

          12 votes
        3. [2]
          Eji1700
          Link Parent
          They are nowhere near an existential threat, and it’s mostly overhyped. They should worry about safety and effects on society, but there’s been a ton of hyperbole involved in all of this.

          8 votes
          1. Lucid
            Link Parent
            Oh don't get me wrong, I'm not in the "ChatGPT is almost AGI" camp. But AGI is an explicit purpose of OpenAI, or at least their claim. Having people at the company who want to move slow sounds like a good thing.

            9 votes
        4. [3]
          RodneyRodnesson
          Link Parent
          Only my opinion ofc, but moving slow isn't an option; AI is just as powerful a weapon as nukes. It's not what AI will do to humans I fear as much as what humans will do with AI to each other.

          1 vote
          1. [2]
            GunnarRunnar
            Link Parent
            And what's the horror scenario? You probably aren't talking about a Skynet-like situation, and I just have a hard time imagining the worst-case scenario. (But I'm also a skeptic and generally a non-believer, maybe even a hater, when it comes to AI.)

            1 vote
            1. RodneyRodnesson
              Link Parent
              We slow down on AI. Our enemies do not. At some point vital infrastructure will be an easy target for an enemy with advanced AI, and we may not be clued up enough (because AI helps us learn too) to know who's attacking us and/or we may not be able to counter it in any way.

              Imagine Stuxnet on steroids with no counter.

              I think this unlikely, but possible enough that it needs to be thought about. Having said that, I'm sure far more intelligent people than I have been thinking about scenarios.

              1 vote
      2. Adverbly
        Link Parent
        Phrasing it that way certainly makes it seem like betting on FastCo would be the way to go. However, there may be a correlation between philosophy around speed of development and talent. It certainly seems like many of the people who want to go slow do so for good reasons rather than out of laziness or anything that most fast-moving tech companies would supplant.

        If there are enough people in the slow camp (and they are of higher quality), then the slow camp will still win out on progress.

        3 votes
      3. RodneyRodnesson
        Link Parent
        Incredibly succinct. Thanks to you and the other people who've looked into it more.

      4. triadderall_triangle
        Link Parent
        The way you structured this reply is very pleasing to me. I kind of understand how powerful it is that Tildes tends towards cultivating such takes.

        6 votes
  1. [6]
    JoshuaJ
    Link
    Lots of background and speculation about this here: https://news.ycombinator.com/item?id=38309611

    11 votes
    1. [5]
      triadderall_triangle
      (edited )
      Link Parent
      Another source [1] claims: "A knowledgeable source said the board struggle reflected a cultural clash at the organization, with Altman and Brockman focused on commercialization and Sutskever and his allies focused on the original non-profit mission of OpenAI."

      Interesting twist

      18 votes
      1. [4]
        arqalite
        Link Parent
        This is plausible. The fact that GPT-4 is closed-source where all the other GPTs were (at least partially) open-source did suggest there was a significant push on productization and commercialization.

        5 votes
        1. Macil
          Link Parent
          GPT-3 was closed source too. Neither Sam Altman nor Ilya Sutskever believed that committing to open-sourcing their future models would benefit society (https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview), so it seems really unlikely that this firing happened because either wanted to open source things.

          It does slightly seem like Ilya wanted to study the safety of new advancements (not necessarily current advancements! maybe just about long-term plans) more while Sam wanted to apply and commercialize them quicker. (https://twitter.com/karaswisher/status/1725678898388553901)

          13 votes
        2. [2]
          triadderall_triangle
          Link Parent
          Can one "have" or gain unfettered access to any of the prior iterations?

          1 vote
          1. Handshape
            Link Parent
            Depends how far back or how far sideways you want to go. Self-hosting chat LLMs is remarkably easy now. If you just want to "click the thing and go", check out nomic.ai's GPT4ALL project/product. It'll run on a shoe, if the shoe has AVX2 processor extensions.

            4 votes
  2. beret4
    Link
    I don’t think anyone knows yet. There was apparently some internal debate around AI safety, but to me that doesn’t point to enough of a reason to kick him. To me it’s either he messed up with the product or the finances. The fact that the CFO and CTO remain would imply it’s not financial, so maybe he leaked data/models/weights or something to Microsoft or a competitor.

    9 votes
  3. [3]
    shrike
    Link
    There's a website for this: https://whywassamfired.com :)

    9 votes
    1. ignorabimus
      Link Parent
      🤖 Sam has been running OpenAI using ChatGPT and hallucinated to the board one too many times.

      This would be very funny but sadly it seems unlikely.

      5 votes
  4. triadderall_triangle
    Link
    The tagging is working exquisitely, by the way, Tildes all :) I wonder if it will ever get so good (if it isn't already) that you can literally just read the tags and syntopically gather an abstract of each topic, obviously reading further when it is pertinent or particularly personally applicable or of interest.

    1 vote
  5. Jerutix
    Link
    It’s because the board decided an AI could do his job, obviously. 🥸

    4 votes