30 votes

OpenAI governance dispute megathread

I guess we’re going to keep talking about this, and I have a link that didn’t go in any existing topics.

8 comments

  1. skybrian
    Link
    Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding - New York Times - (archive)

    Before Sam Altman was ousted from OpenAI last week, he and the company’s board of directors had been bickering for more than a year. The tension got worse as OpenAI became a mainstream name thanks to its popular ChatGPT chatbot.

    Mr. Altman, the chief executive, recently made a move to push out one of the board’s members because he thought a research paper she had co-written was critical of the company.

    Another member, Ilya Sutskever, who is also OpenAI’s chief scientist, thought Mr. Altman was not always being honest when talking with the board. And board members worried that Mr. Altman was too focused on expansion while they wanted to balance that growth with A.I. safety.

    After Mr. Altman was forced out and Mr. Brockman left, the four remaining board members are Mr. Sutskever; Adam D’Angelo, the chief executive of Quora, the question-and-answer site; Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and computer scientist.

    Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, according to an email that Mr. Altman wrote to colleagues and that was viewed by The New York Times.

    In the email, Mr. Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

    Ms. Toner defended it as an academic paper that analyzed the challenges that the public faces when trying to understand the intentions of the countries and companies developing A.I. But Mr. Altman disagreed.

    “I did not feel we’re on the same page on the damage of all this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

    Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that A.I. could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said.

    But shortly after those discussions, Mr. Sutskever did the unexpected: He sided with board members to oust Mr. Altman, according to two people familiar with the board’s deliberations. He read to Mr. Altman the board’s public statement explaining that Mr. Altman was fired because he wasn’t “consistently candid in his communications with the board.”

    Vacancies exacerbated the board’s issues. This year, it disagreed over how to replace three departing directors: Reid Hoffman, the LinkedIn founder and a Microsoft board member; Shivon Zilis, director of operations at Neuralink, a company started by Elon Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.

    After vetting four candidates for one position, the remaining directors couldn’t agree on who should fill it, said the two people familiar with the board’s deliberations. The stalemate hardened the divide between Mr. Altman and Mr. Brockman and other board members.

    13 votes
  2. [2]
    gco
    Link
    Sam Altman's ouster at OpenAI was precipitated by letter to board about AI breakthrough - https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

    I'm conflicted about this. I expected Reuters to have good info, but it all just amounts to "we think someone told the board to fire him when OpenAI discovered a sci-fi-level technology," which is way too much speculation.

    9 votes
    1. unkz
      Link Parent
      What could Q* possibly be? My wild speculation (based solely on the letter Q) is it is some kind of reinforcement learning thing in the vein of a DQN improvement or other Q-learning algorithm. A bit of a throwback to their games research projects, or maybe it could be replacing the policy gradient RLHF systems in GPT alignment?
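
      For anyone unfamiliar with the reference: "Q-learning" names the classic tabular algorithm that DQN later scaled up with neural networks. Nothing here reflects what Q* actually is; this is just a minimal sketch of the textbook update rule on a toy 5-state chain environment I made up for illustration, where moving right from the last state yields reward 1.

      ```python
      import random

      # Minimal tabular Q-learning on a toy 5-state chain.
      # Action 0 moves left, action 1 moves right; reaching the
      # last state ends the episode with reward 1.
      N_STATES, ACTIONS = 5, (0, 1)
      ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

      def step(state, action):
          """Deterministic chain dynamics: reward 1 only on reaching the goal."""
          nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
          reward = 1.0 if nxt == N_STATES - 1 else 0.0
          return nxt, reward, nxt == N_STATES - 1

      def train(episodes=200, seed=0):
          rng = random.Random(seed)
          q = [[0.0, 0.0] for _ in range(N_STATES)]
          for _ in range(episodes):
              state, done = 0, False
              while not done:
                  if rng.random() < EPSILON:
                      action = rng.choice(ACTIONS)            # explore
                  else:
                      action = q[state].index(max(q[state]))  # exploit
                  nxt, reward, done = step(state, action)
                  # The Q-learning update: bootstrap off the best next-state value.
                  q[state][action] += ALPHA * (
                      reward + GAMMA * max(q[nxt]) - q[state][action]
                  )
                  state = nxt
          return q

      q = train()
      # Greedy policy: states 0-3 should learn to move right toward the goal.
      policy = [row.index(max(row)) for row in q]
      print(policy)
      ```

      DQN replaces the table with a neural network approximating Q(s, a), which is what made this family of methods work on Atari-scale problems.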

      2 votes
  3. [3]
    skybrian
    Link
    Warning from OpenAI leaders helped trigger Sam Altman’s ouster (Washington Post)

    This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman.

    Altman — a revered mentor, prodigious start-up investor and avatar of the AI revolution — had been psychologically abusive, the employees alleged, creating pockets of chaos and delays at the artificial-intelligence start-up, according to two people familiar with the board’s thinking who spoke on the condition of anonymity to discuss sensitive internal matters. The company leaders, a group that included key figures and people who manage large teams, mentioned Altman’s allegedly pitting employees against each other in unhealthy ways, the people said.

    ...

    The new complaints triggered a review of Altman’s conduct during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic. They also considered reports from several employees who said they feared retaliation from Altman: One told the board that Altman was hostile after the employee shared critical feedback with the CEO and that he undermined the employee on that person’s team, the people said.

    ...

    The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on Nov. 17. Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.

    ...

    Members of the board expected employees to be upset about Altman’s firing, but they were taken aback when OpenAI’s management team appeared united in their support for bringing him back, said the people, and a third person with knowledge of the board’s proceedings, who also spoke on the condition of anonymity to discuss sensitive company matters.

    7 votes
    1. [2]
      moocow1452
      Link Parent
      I'm kind of split on this. I'm not averse to canning a leader for problematic behavior even if they're otherwise competent, and I understand that it's a sensitive situation when the person is load-bearing and it's good PR and profit to keep him around. But at the same time, the board has to explain to us why this was justified beyond "he didn't communicate and it's good for the mission, trust us." You have one shot at this, you blew it, and now your oversight is gone.

      2 votes
      1. skybrian
        Link Parent
        I'm reminded of the more common scenario where a leader resigns (under pressure) to "spend more time with family" or something like that. It could actually be true, but more likely, they've agreed to go and part of that is not airing dirty laundry.

        This was not that. Yes, I think the lesson is that justifying the decision was not optional.

        2 votes
  4. skybrian
    Link

    OpenAI Committed to Buying $51 Million of AI Chips From a Startup Backed by CEO Sam Altman
    (Wired)

    OpenAI in 2019 signed a nonbinding agreement to spend $51 million on the chips when they became available, according to a copy of the deal and Rain disclosures to investors this year seen by WIRED. Rain told investors Altman had personally invested more than $1 million into the company. The letter of intent has not been previously reported.

    Note the date on that: four years ago. It's coming up now because Altman and OpenAI are getting attention from reporters, but this wouldn't be news to OpenAI's board.

    4 votes