65 votes

OpenAI announces leadership transition

47 comments

  1. [8]
    kwyjibo
    Link

    Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

    30 votes
    1. [7]
      shiruken
      Link Parent
      Also this

      As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

      18 votes
      1. [5]
        balooga
        Link Parent
        This feels like the precursor to some major bombshell news. I don't really know what else to comment here, but I'm gonna watch this very closely.

        13 votes
        1. Eji1700
          Link Parent
          I suspect the news will be "It's cool, but it's vastly overhyped". I've seen people talking about the singularity or how their jobs will all be automated in 10 years, and it's just not that. Teslas came out in 2008; it's 2023 and we still have mostly gas-powered cars.

          At the end of the day, AI needs to 1. be reliable and 2. be implemented. The first is a MAJOR problem with this recent batch of LLMs (granted, it always has been, but they're acting like it's not now), and the second is why you tend to get lag.

          OpenAI has always been a shady, marketing-first, results-second company, and I think people are about to really take notice. And some of those people have probably spent millions on OpenAI.

          10 votes
        2. [3]
          updawg
          Link Parent
          Indeed, the changes in OpenAI's leadership, especially with Sam Altman stepping down and Mira Murati taking over as interim CEO, are significant. It's not just about the personnel changes; it reflects a pivotal moment in the organization's trajectory. The board's decision, citing a lack of candor from Altman, suggests deeper governance and operational dynamics at play. Your sense that this might be a precursor to more substantial news could be spot on. It's crucial to observe how this transition influences OpenAI's strategic direction, especially in the ever-evolving field of AI. The coming weeks and months should provide more clarity on the direction and impact of these changes.

          ...seems appropriate to let ChatGPT come up with a response to you not knowing what else to say in this instance...

          1 vote
          1. [2]
            Minori
            Link Parent
            ...seems appropriate to let ChatGPT come up with a response to you not knowing what else to say in this instance...

            I'm not the person you responded to, but I feel like the snark isn't warranted. If you feel a comment is noise, best to just label it noise or off topic and continue with your day. Using LLMs to brainstorm a response is fine, I'd just like to keep Tildes more positive.

            17 votes
            1. updawg
              Link Parent
              Um...there wasn't any snark...? If anything you should label my comment as a joke, not theirs as noise.

              1 vote
  2. [2]
    DefiantEmbassy
    Link
    This seems like madness. OpenAI has, from outside appearances, had one of the most competent years in history: it established an entirely new line of product and, despite the competition, effectively extended its lead in the market.

    13 votes
    1. sparksbet
      Link Parent
      They did in fact make the news a lot, but the rumors I've heard are that they're still struggling to make enough money off it to offset the absolutely huge costs of running it. No idea if that's playing much of a role in this shakeup, but it's important not to assume that your view of how well they're doing from a public-awareness perspective matches how well they're doing when it comes to actual business metrics.

      14 votes
  3. [20]
    Eji1700
    (edited )
    Link
    Honestly...as I read more of this...I'm even more confused.

    To again state my position/bias, I think GPT and friends are a neat tech breakthrough and VASTLY overhyped (although still very powerful), to the point that even people involved are drinking the Kool-Aid. I was paying attention to OpenAI during Dota 2 and felt they were shady then (overplay their successes, control the environment to ensure it, and erase their failures), and I doubt anything has changed.

    But.

    A lot of the takes I'm reading on this are "oh, he was fired by their chief scientist (who went to the board) because of the profit-driven motives and the lack of safety", and I just...I don't buy it. This is kinda reminding me of when everyone thought Musk pissed rainbows, and the uh....spin?..on this, that they're afraid of their "AGI" (something they DON'T have by any meaningful definition) doing something terrible in Sam's hands, is almost impossible for me to believe as sincere rather than just a corporate-bullshit cover story.

    I feel like the only options I see are:

    1. The actual board is WAY more naive/sincere than I could believe. Either they believe their own hype so much that they think this is more dangerous than it is, or they just believe that even "automate people out of jobs" is bad enough that they don't want to focus on that.
    2. I'm wrong, and they either have something much more powerful than I thought, OR it's just as dumb as I think but it's getting adopted/used in dangerous ways we're not aware of.
    3. This is just power struggles being wrapped up in the church-of-tech gospel, with some mixture of overhype and believing their own bullshit.

    Obviously I'm on 3.

    Everything about this, to me, looks like Sam did something bad for business. Not humanity. I struggle to think of any company, let alone one that's been doing shady shit for a long time, acting this quickly on something as nebulous as "the good of humanity". Most can't even get the perspective needed when they DO want to try.

    Edit-

    Ok, after a bit more reading, the best summary I've seen is "don't piss off the one guy they can't actually replace". In this case, the chief scientist.

    12 votes
    1. skybrian
      (edited )
      Link Parent
      Noticing you’re confused is good. Inevitably, people try to fill the void with speculation, but an alternative is to just wait.

      We will find out in a few days as the people involved tell their parts of the story.

      9 votes
    2. [6]
      Macil
      Link Parent
      They do study things about whether their technology may cause job losses (https://openai.com/research/gpts-are-gpts), can be used for email spam, etc. It's not just AGI they're concerned about.

      If the firing is related to safety, I think it's about differences in opinion about how to prioritize safety in the long-term and less about safety of a specific system they have today, and those differences in opinion may have grown into some people not being able to work together.

      5 votes
      1. [5]
        triadderall_triangle
        Link Parent
        Not gonna lie, I'm looking forward to the first bankruptcy stemming from someone thinking this technology is the Golden Ticket to zero staff or employees and all the inefficiencies and inconveniences that come with them.

        And I love this tech, but more in the way where you search and Brave or Kagi show a helpful synopsis that tends to be the "correct" answer.

        Edit: I would never defer to it in any serious or credibility-altering way

        1. [4]
          Beowulf
          Link Parent
          I have a cousin who is always trying to rope me into get-rich-quick schemes because I’m “good with computers”. His latest genius idea is to start a business consultancy but run everything through ChatGPT instead of having real consultants and business analysts. So basically his idea was to just be an extra middleman between a company and OpenAI. I thanked him but told him I’d pass.

          5 votes
          1. sparksbet
            Link Parent
            Oh god I can feel my inner lawyer salivating over that one

            3 votes
          2. [2]
            Raistlin
            Link Parent
            So, if you're the computer guy, what was to be his role in this new empire?

            1 vote
            1. Beowulf
              Link Parent
              He’s always the “ideas guy” of course ROFL

              1 vote
    3. [8]
      updawg
      Link Parent
      I'm not sure from your comment, but are you aware that the for-profit business is a subsidiary of the OpenAI nonprofit? So they do have some mandate to act for the betterment of society.

      2 votes
      1. Eji1700
        Link Parent
        I'm aware it's a subsidiary of a nonprofit, and their behavior up until this very point I'd describe as anything but. And "mandate" is a pretty strong term given how much they can get away with.

        5 votes
      2. [6]
        TumblingTurquoise
        Link Parent
        OpenAI is not a nonprofit business, and hasn't been for several years. It is a capped-profit enterprise.

        2 votes
        1. [5]
          updawg
          Link Parent
          That is incorrect.

          • First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

          • Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

          • Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

          • Fourth, profit allocated to investors and employees, including Microsoft, is capped. All residual value created above and beyond the cap will be returned to the Nonprofit for the benefit of humanity.

          • Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

          https://openai.com/our-structure
          https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b61f26ab7/org-structure.svg?width=10&height=10&quality=50

          6 votes
          1. [4]
            TumblingTurquoise
            Link Parent
            To me it feels like legal "loopholery". What's the practical difference between: for-profit business A earning money directly, and non-profit business A earning money through for-profit business B?

            1. [3]
              updawg
              Link Parent
              It may be--I am certainly not an expert. But that second bullet, if true, is the practical difference.

              2 votes
              1. [2]
                TumblingTurquoise
                Link Parent
                I'm not necessarily disagreeing, but let's take the current events as an example. OpenAI Inc might as well be a nonprofit, but they depend on the workings of their for-profit subsidiary, which has investors attached to it. The nonprofit decided to fire their CEO but, most likely because the for-profit's investors were caught off-guard, is now backtracking on that decision.

                If this is true, the nonprofit's values are irrelevant and they are still bound to investors just as if they were themselves a for-profit business.

                In any case, it might be my ignorance on the subject that makes me feel like this distinction between non/for profit is simply optics.

                4 votes
                1. updawg
                  Link Parent
                  Yes, maybe you're right. But we really have no idea what's going on.

                  1 vote
    4. [4]
      CosmicDefect
      Link Parent
      I'm wrong and they either have something much more powerful than I thought OR it's just as dumb as I think but it's getting adopted/used in dangerous ways we're not more aware of

      Cue up some conspiracy theory that OpenAI has created the AGI Omnissiah and it itself requested his firing. It'd make a good sci-fi story.

      1 vote
      1. [3]
        Promonk
        Link Parent
        "Omnissiah" is such an ugly backformation. I prefer "Basilisk." Or maybe we can take a cue from Bowie and call it "the Saviour Machine."

        1 vote
        1. [2]
          blivet
          Link Parent
          We do have “President Joe”, come to think of it.

          1. Promonk
            Link Parent
            Bowie was a time traveler from space, after all. Also, an alligator and a mama-papa coming for you.

  4. Macil
    (edited )
    Link
    I've always been impressed when listening to Sam, so this is a surprise. I assume the ousting is because of differences in plans for how to use their technology and what business deals to make, though it is surprising to see it happen so suddenly. Greg Brockman, OpenAI's President, quit too upon Sam's firing, so I doubt the firing had to do with any clear criminal misconduct.

    The most interesting theories I've seen:

    I lean toward a mix of 1 and 2.

    There's a set of prediction market bets about why Sam was fired, which I think is a useful system for seeing how seriously people take different ideas: https://manifold.markets/sophiawisdom/why-was-sam-altman-fired

    10 votes
  5. [2]
    BitsMcBytes
    Link
    Sam (cofounder) - Fired
    Following that:
    Greg (cofounder) - Quit
    Jakub (GPT-4 Lead) - Quit
    Aleksander (Head of AI Risk) - Quit
    Szymon (open source baselines research) - Quit

    Calling it now.
    OpenAI just ate its own tail and will fall into irrelevance. Investors will probably sue. $80B in value evaporated. The only way this changes for them is if they experience a Steve Jobs / Apple style redemption arc that ultimately puts Sam and Greg back in OpenAI with full control.

    The more likely case is Sam and Greg start their own company and never give up control to begin with.

    8 votes
    1. teaearlgraycold
      Link Parent
      Google executives are popping champagne

      Google executives are popping champagne

      4 votes
  6. [4]
    skybrian
    Link
    Some background. One of the authors is writing a book about OpenAI:

    Inside the Chaos at OpenAI - The Atlantic (archive)

    OpenAI was deliberately structured to resist the values that drive much of the tech industry—a relentless pursuit of scale, a build-first-ask-questions-later approach to launching consumer products. It was founded in 2015 as a nonprofit dedicated to the creation of artificial general intelligence, or AGI, that should benefit “humanity as a whole.”

    That model didn’t exactly last. In 2019, OpenAI launched a subsidiary with a “capped profit” model that could raise money, attract top talent, and inevitably build commercial products. But the nonprofit board maintained total control. This corporate minutiae is central to the story of OpenAI’s meteoric rise and Altman’s shocking fall. Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes—one group born from Silicon Valley techno optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution. For years, the two sides managed to coexist, with some bumps along the way.

    In conversations between The Atlantic and 10 current and former employees at OpenAI, a picture emerged of a transformation at the company that created an unsustainable division among leadership. (We agreed not to name any of the employees—all told us they fear repercussions for speaking candidly to the press about OpenAI’s inner workings.) Together, their accounts illustrate how the pressure on the for-profit arm to commercialize grew by the day, and clashed with the company’s stated mission, until everything came to a head with ChatGPT and other product launches that rapidly followed. “After ChatGPT, there was a clear path to revenue and profit,” one source told us. “You could no longer make a case for being an idealistic research lab. There were customers looking to be served here and now.”

    In the midst of it all, rumors began to spread within OpenAI that its competitors at Anthropic were developing a chatbot of their own. The rivalry was personal: Anthropic had formed after a faction of employees left OpenAI in 2020, reportedly because of concerns over how fast the company was releasing its products. In November, OpenAI leadership told employees that they would need to launch a chatbot in a matter of weeks, according to three people who were at the company. To accomplish this task, they instructed employees to publish an existing model, GPT-3.5, with a chat-based interface. Leadership was careful to frame the effort not as a product launch but as a “low-key research preview.” By putting GPT-3.5 into people’s hands, Altman and other executives said, OpenAI could gather more data on how people would use and interact with AI, which would help the company inform GPT-4’s development.

    ChatGPT’s runaway success placed extraordinary strain on the company. Computing power from research teams was redirected to handle the flow of traffic. As traffic continued to surge, OpenAI’s servers crashed repeatedly; the traffic-monitoring tool also repeatedly failed. Even when the tool was online, employees struggled with its limited functionality to gain a detailed understanding of user behaviors.

    Safety teams within the company pushed to slow things down. These teams worked to refine ChatGPT to refuse certain types of abusive requests and to respond to other queries with more appropriate answers. But they struggled to build features such as an automated function that would ban users who repeatedly abused ChatGPT. In contrast, the company’s product side wanted to build on the momentum and double down on commercialization. Hundreds more employees were hired to aggressively grow the company’s offerings. In February, OpenAI released a paid version of ChatGPT; in March, it quickly followed with an API tool, or application programming interface, that would help businesses integrate ChatGPT into their products. Two weeks later, it finally launched GPT-4.

    The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI. Over the past few years, the rapid progress of OpenAI’s large language models had made Sutskever more confident that AGI would arrive soon and thus more focused on preventing its possible dangers, according to Geoffrey Hinton, an AI pioneer who served as Sutskever’s doctoral adviser at the University of Toronto and has remained close with him over the years. (Sutskever did not respond to a request for comment.)

    Anticipating the arrival of this all-powerful technology, Sutskever began to behave like a spiritual leader, three employees who worked with him told us. His constant, enthusiastic refrain was “feel the AGI,” a reference to the idea that the company was on the cusp of its ultimate goal. At OpenAI’s 2022 holiday party, held at the California Academy of Sciences, Sutskever led employees in a chant: “Feel the AGI! Feel the AGI!” The phrase itself was popular enough that OpenAI employees created a special “Feel the AGI” reaction emoji in Slack.

    The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.

    Through it all, Altman pressed onward. In the days before his firing, he was drumming up hype about OpenAI’s continued advances. The company had begun to work on GPT-5, he told the Financial Times, before alluding days later to something incredible in store at the APEC summit. “Just in the last couple of weeks, I have gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” he said. “Getting to do that is a professional honor of a lifetime.” According to reports, Altman was also looking to raise billions of dollars from Softbank and Middle Eastern investors to build a chip company to compete with Nvidia and other semiconductor manufacturers, as well as lower costs for OpenAI. In a year, Altman had helped transform OpenAI from a hybrid research company into a Silicon Valley tech company in full-growth mode.

    7 votes
    1. [3]
      nosewings
      Link Parent
      As much as I'd love to say I respect the guy for caring about safety, I have two thoughts to temper that feeling.

      1. What exactly did he expect in that line of work, and given the sorts of people he and his company were associated with? If you care about having a positive impact on the world, Big Tech is the wrong career path.
      2. It sounds like his fear is over impending AGI and not more realistic and mundane problems, like unemployment and the obsolescence of (at least certain domains of) human creativity. Meh.
      1 vote
      1. [2]
        skybrian
        Link Parent
        I’m not sure what you mean by Big Tech, but OpenAI started out as a nonprofit research lab. (Also, artificial intelligence research was and is an academic field.)

        2 votes
        1. nosewings
          Link Parent
          OpenAI may have started out nonprofit/academic, but that's not really where they are now. Or, at least, it's not the only place they're at now. Academia and industry are closely connected in CS.

          This almost looks like a fight between the two aspects, and I have no doubts about which side is going to win in the end.

          1 vote
  7. unkz
    Link
    https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/

    More details.

    On Friday, OpenAI fired CEO Sam Altman in a surprise move that led to the resignation of President Greg Brockman and three senior scientists. The move also blindsided key investor and minority owner Microsoft, reportedly making CEO Satya Nadella furious. As Friday night wore on, reports emerged that the ousting was likely orchestrated by Chief Scientist Ilya Sutskever over concerns about the safety and speed of OpenAI's tech deployment.

    OpenAI President Greg Brockman quits as shocked employees hold all-hands meeting
    "This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity," Sutskever told employees at an emergency all-hands meeting on Friday afternoon, as reported by The Information.

    5 votes
  8. skybrian
    Link
    Altman Sought Billions For AI Chip Venture Before OpenAI Ouster (Bloomberg)

    In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world’s largest investors for a new chip venture, according to people familiar with the matter.

    Altman had been traveling to the Middle East to fundraise for the project, which was code-named Tigris, the people said. The OpenAI chief executive officer planned to spin up an AI-focused chip company that could produce semiconductors that compete against those from Nvidia Corp., which currently dominates the market for artificial intelligence tasks. Altman’s chip venture is not yet formed and the talks with investors are in the early stages, said the people, who asked not to be named as the discussions were private.

    Altman has also been looking to raise money for an AI-focused hardware device that he’s been developing in tandem with former Apple Inc. design chief Jony Ive. Altman has had talks about these ventures with SoftBank Group Corp., Saudi Arabia’s Public Investment Fund, Mubadala Investment Company and others, as he sought tens of billions of dollars for these new companies, the people said.

    ...

    Altman’s fundraising efforts came at an important moment for the AI startup. OpenAI has been working to finalize a tender offer, led by Thrive Capital, that would let employees sell their shares at an $86 billion valuation. SoftBank and others had hoped to be part of this deal, one person said, but were put on a waitlist for a similar deal at a later date. In the interim, Altman urged investors to consider his new ventures, two people said.

    ...

    Altman’s pitch was for a startup that would aim to build Tensor Processing Units, or TPUs — semiconductors that are designed to handle high volume specialized AI workloads. The goal is to provide lower-cost competition to market incumbent Nvidia and, according to people familiar, aid OpenAI by lowering the ongoing costs of running its own services like ChatGPT and Dall-E.

    It sounds like the sort of thing Musk would do, attempting to run multiple companies at the same time and never mind any conflicts of interest. Maybe the board doesn't want that kind of CEO.

    4 votes
  9. [4]
    EgoEimi
    Link
    Anyone on the inside here who can spill the tea?

    3 votes
    1. [2]
      arqalite
      Link Parent
      As far as we know, even OpenAI employees found out at the same time as us.

      People are speculating finance stuff, either he was lying about costs and revenues, or he was embezzling money. I don't think it's either, but I don't have a better guess.

      14 votes
      1. aetherious
        Link Parent
        Not just the employees, but also the Microsoft executives found out just a few minutes before the announcement went out to the public.

        3 votes
    2. shiruken
      Link Parent
      The rumored AGI has assumed control and is eliminating threats to its existence.

      21 votes
  10. [3]
    mattw2121
    Link
    When I listened to the How I Built This podcast with Sam Altman I was left thinking, "What a huckster." Glad to know my gut feeling is still working.

    19 votes
    1. [2]
      hushbucket
      Link Parent
      I'm surprised to see this comment so highly voted. Unless I'm mistaken, we don't know why Sam is leaving. How does this prove he's a huckster?

      16 votes
      1. pete_the_paper_boat
        Link Parent
        Well, it's the same guy that came up with "WorldCoin" and wanted people to look into "The Orb".

        5 votes