Why do you think Sam Altman was fired from OpenAI?
Anybody have more background or context on this? I certainly don't see it helping anything, but I know nothing of the Valley or its ways.
The picture seems to be getting clearer; it looks like a philosophical conflict.
(If I had to have a take, it'd be roughly this tweet: so basically the “move slower” people ousted the “move faster” people who’ll move fast to start a newco all the “move faster” people will join to move faster, and all that will be left are the “move slower” people moving slow to move slower together)
I'm not really involved in the AI space at all, but it's odd to me that so many people in these replies are so against "moving slow". Especially when at least some (most?) experts are describing AI as a potential existential threat.
I've felt this confusion too today after reading a lot of the discourse.
I think a lot of people reason that safety could only be a concern related to a firing if OpenAI had dangerous AGI today, and that it's unlikely OpenAI has AGI today, therefore this concern must be an obviously false pretense for the firing. But there are multiple ways this reasoning can fail. For one, it's easy for me to imagine that a disagreement over how to handle safety in the long term led to multiple people in charge digging in and losing the ability to work together. I think people incorrectly expect the reason for dramatic news to be more dramatic than something that mundane.
There is definitely still a large contingent of "move fast and break things" types in the AI space, unfortunately.
They are nowhere near an existential threat, and it's mostly overhyped. They should worry about safety and effects on society, but there's been a ton of hyperbole involved in all of this.
Oh, don't get me wrong, I'm not in the "ChatGPT is almost AGI" camp. But AGI is an explicit purpose of OpenAI, or at least their claim. Having people at the company who want to move slow sounds like a good thing.
Only my opinion, of course, but moving slow isn't an option. AI is just as powerful a weapon as nukes; it's not what AI will do to humans I fear so much as what humans will do with AI to each other.
And what's the horror scenario? You probably aren't talking about a Skynet-like situation, and I just have a hard time imagining the worst case. (But I'm also a skeptic and generally a non-believer, maybe even a hater, when it comes to AI.)
We slow down on AI. Our adversaries do not. At some point vital infrastructure becomes an easy target for an enemy with advanced AI, and we may not be clued up enough (because AI helps us learn too) to know who's attacking us and/or be able to counter it in any way.
Imagine Stuxnet on steroids with no counter.
I think this is unlikely, but possible enough that it needs to be thought about. That said, I'm sure far more intelligent people than I have been thinking about these scenarios.
Phrasing it that way certainly makes it seem like betting on FastCo would be the way to go. However, there may be a correlation between philosophy around speed of development and talent. It certainly seems like many of the people who want to go slow do so for good reasons, not out of laziness or anything else that most fast-moving tech companies would weed out.
If there are enough people in the slow camp (and they are higher quality), then the slow camp will still win out on progress.
Incredibly succinct. Thanks to you and the other people who've looked into it more.
The way you structured this reply is very pleasing to me. I'm starting to understand how powerful it is that Tildes tends towards cultivating such takes.
Lots of background and speculation about this here: https://news.ycombinator.com/item?id=38309611
Interesting twist
This is plausible. The fact that GPT-4 is closed-source where all the other GPTs were (at least partially) open-source did suggest there was a significant push on productization and commercialization.
GPT-3 was closed source too. Neither Sam Altman nor Ilya Sutskever believed that committing to open sourcing their future models was good for the benefit of society (https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview), so it seems really unlikely that this firing is because either wanted to open source things.
It does slightly seem like Ilya wanted to study the safety of new advancements (not necessarily current advancements! maybe just about long-term plans) more while Sam wanted to apply and commercialize them quicker. (https://twitter.com/karaswisher/status/1725678898388553901)
Can one "have" or gain unfettered access to any of the prior iterations?
Depends how far back or how far sideways you want to go. Self-hosting chat LLMs is remarkably easy now. If you just want to "click the thing and go", check out Nomic AI's GPT4All project/product. It'll run on a shoe, if the shoe has AVX2 processor extensions.
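If you'd rather script it than click, here's a minimal sketch using GPT4All's Python bindings (`pip install gpt4all`). The model filename below is just an example; it would need to match a name in their current model catalog, and the first run downloads the weights (a few GB):

```python
# Minimal local-inference sketch using the gpt4all Python bindings.
# Runs entirely on CPU -- no API key, no network calls after download.
from gpt4all import GPT4All

# Illustrative model name; swap in whatever their catalog currently lists.
# The first call downloads the weights to a local cache.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

# chat_session() keeps conversational context between prompts.
with model.chat_session():
    reply = model.generate("Explain AVX2 in one sentence.", max_tokens=100)
    print(reply)
```

Everything stays on your machine, which is rather the point of self-hosting (shoe and AVX2 permitting).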
I don’t think anyone knows yet. There was apparently some internal debate around AI safety, but to me that doesn’t point to enough of a reason to kick him out. To me it’s either that he messed up with the product or the finances. The fact that the CFO and CTO remain would imply it’s not financial, so maybe he leaked data/model/weights or something to Microsoft or a competitor.
There's a website for this: https://whywassamfired.com :)
This would be very funny but sadly it seems unlikely.
Holy turnaround time, Batman!
The tagging is working exquisitely, by the way. Well done, Tildes :) I wonder if it will ever get so good (if it isn't already) that you can literally just read the tags and syntopically gather an abstract of each topic, obviously reading further when something is particularly pertinent, personally applicable, or of interest.
It’s because the board decided an AI could do his job, obviously. 🥸