Nvidia CEO declares AI could start, grow, and run a successful technology company worth more than a billion dollars—excerpt from Lex Fridman Podcast
- Title
- Jensen Huang: NVIDIA - The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494
- Authors
- Lex Fridman
- Duration
- 2:25:59
- Published
- Mar 23 2026
Transcript:
https://lexfridman.com/jensen-huang-transcript#chapter17_agi_timeline
This is one of the most "person said a thing" stories I've posted here, but a lot of press and chatter has come out of it. And while Fridman set the terms for defining AGI, and Huang immediately undercut himself, the sitting CEO of Nvidia did say "we achieved AGI," so I'm torn on whether to count this one in 2026 predictions.
AGI hasn't been achieved, from the practical perspective of an LLM being able to trivially perform all human tasks better than a human. As a trivial counterexample, FoodTruck Bench -- a benchmark where an LLM needs to run a simulated food truck -- still has leaderboards dominated by humans.
Jensen Huang is a C-suite executive who is compensated primarily in shares. Putting oneself in his shoes: if my net worth relied on spitting lies to a non-technical public, would I do so, or take the moral high road? Then iterate the scenario further: if many other people -- on, say, a board of directors -- also had incomes reliant on my performing this mass social manipulation, would they keep me on board if I made a habit of telling people "we have no idea what AGI means. It's a buzzword some lunatic at Google said a decade ago, and now we're all forced to keep using it. We're in a cyclic debt-loop spiral. In order to keep the money printers running, everyone has to clap and believe that there's a light at the end of this tunnel. Yes, it performs some tasks very well now; no, it isn't simultaneously a neurosurgeon, rocket scientist, world leader, and Nvidia CEO"? No, because that's too wordy, and C-suite execs need to be quotable!
Please correct me if I'm wrong, but Jensen Huang is not a very credible source, is he?
Huang is the president and CEO of Nvidia, and if the headline is "this person said this thing," then he's a pretty credible source for it. His company invests in and sells a whole lot of hardware to software companies, so he could very well know what he's talking about, but I guess it would be like an asphalt company talking about road planning. Knowledgeable in some areas and not in others.
Edit: But he's also a salesman, so it would be reasonable to take claims with a grain of salt.
I don't mean that he lacks knowledge. I'm pretty sure that he likes to boast a lot, that's all.
Was it clear who/what he was referring to when he said "we"?
"We" is ambiguous, so I'm assuming humanity?
What a ridiculous statement. If it's "now", then where are the billion dollar companies started by AI agents?
If the technology is capable of it, the market should absolutely be flooded with AI companies. There isn't even a single successful company started by an AI agent, let alone a billion-dollar one.
Well, it is flooded. But it's not making anyone except the shovel seller (Nvidia) richer. I don't know why we should take the word of someone so incentivized to lie (and who faces so little consequence for lying).
Thanks for the transcript. I would agree that he is likely referring to the collective of humanity, and specifically the AI companies like Anthropic et al. All the companies that use Nvidia GPUs, more or less ;).
Interesting, thanks. Personally I get the impression the CEO of Nvidia has a lot to gain from bolstering the impression that AGI has been achieved, regardless of whether it has or not, but you never know!
I don't see why you're twisting yourself around this... what do you mean by "whether to count this one in 2026 predictions"? What prediction? That AGI is achieved, or that somebody in the industry will claim AGI? If the former, I don't see how this has any bearing on the matter (especially as Kacey points out, given Jensen stands to profit from such a claim being made, even if untrue). If the latter I'm pretty sure there have been random hypemen claiming AGI for a while now...?
The most generous reading I have of the interaction you quoted from the podcast is "Nvidia CEO says CEOs can be replaced by LLMs": Fridman says: "an AI system that’s able to essentially do your job. ... start, grow, and run a successful technology company" (emphasis mine).
In that case we've had AGI for years. The job of a modern tech CEO mostly involves sitting in meetings, listening and choosing between different strategies, and these days plenty of podcast/twitter/media shitposting to drive your stock price up. ELIZA was capable of all of that quite some time ago.
I'm sure you're just being facetious (tone is hard to read on the internet), but if ELIZA were genuinely capable of running a company better than a human, someone would have tried it, others would have copied it, and it would slowly be replacing humans running companies. No board would pay seven-figure-plus salaries to execs if a chatbot could do the same job just as effectively.
Artificial general intelligence, as the name suggests, is in an entirely different class from current LLMs. I haven't watched this yet, but "we achieved AGI" is an extraordinary claim that needs extraordinary evidence. It's on the level of cold fusion or a room-temperature, standard-pressure superconductor.
Without that evidence, I'm more likely to believe we'll see a redefinition of AGI, similar to what marketing did to "AI".
Oh god, that's exactly what's going to happen isn't it? I think you hit the nail on the head
You can tell it's not true because you wouldn't bother to even announce it; you'd just run the damn world.
A cursory googling tells me Nvidia are currently hiring humans for engineering and PR roles, among others.
Hmm. Must just be old job postings that are due to be cleared out.
Lol, another Lex Fridman episode where a guest makes a wild claim and he just nods along and tries to validate it. If there is a boot, he'll find a way to lick it, and if there is a butt, his nose is already there. Can we get some content where the people making these claims actually get some pushback? The era of the sycophantic "yes man" tech podcast needs to die.
The interviewers who push back just don't get people like this as their guests.
I cannot for the life of me understand why anyone takes that guy seriously. He's a deeply unserious person.
I guess no AI peddler is going to go on a podcast that doesn't let them pronounce their hype unchallenged.
Modern LLMs are artificial, they are general, and it’s not crazy to call them intelligent. So if you want to throw out the old definition of AGI you could call them that.
Here's where they discuss AGI in the transcript and video.
Specifically, Lex defines AGI as "able to essentially do your job", then clarifies he means Jensen's job by saying "run, no, start, grow, and run a successful technology company that’s worth...more than a billion." And Jensen says yes, then..."It is not out of the question that a Claude was able to create a web service, some interesting little app that all of a sudden, you know, a few billion people used for 50 cents, and then it went out of business again shortly after". Then Jensen explains that AI won't take people's jobs but assist them, because it can do some but not all of their tasks.
I think this touches the deeper issue on debating whether LLMs are AGI. AGI isn't clearly defined, and any specific definition can be subverted by something that technically fits but doesn't feel like it's AGI or vice versa. I think it makes more sense to instead figure out and debate what LLMs can and can't do.
For example, LLMs seem good at short tasks that involve surface-level concepts even across disparate domains, but struggle with long tasks that involve niche concepts (i.e. not in training data). I don't think the latter makes LLMs "not AGI", because the former tasks can require some generalization and novelty in output (LLMs can generate output that isn't verbatim from their training data); but it makes them unable to automate most real-world tasks, which require unwritten knowledge learned through experience.
I mean, in a sense, they're not wrong...
.....the AI just has to be stupendously lucky, is all. Just like any other human trying to run a business. I don't see it as a flex of AI's capability so much as a quiet confession that anyone can be successful given enough seed funding.
"Sirens of Titan" by Vonnegut had a wonderful scenario illustrating this.
Reminds me of the saying of, in the long run we will all be dead anyway.
To the extent that Jensen's job atm is principally to make wild, untrue claims about his products to whip up a media frenzy, then yes, an LLM could easily perform his job. Not that that's a high bar.
An alternate take: Jensen kinda sorta believes it. There's this phenomenon with LLMs where, early on in the process of starting to really see evidence of what they're capable of, people lose their minds. Some people freak out "AI is going to replace everyone and we're all doomed", some people get overly bullish "holy shit this changes everything and makes me a superhero", sometimes it's sort of personal "wow this thing is a genius, it gets me and has arcane knowledge about all the things". There are a variety of options but all of them come with a reality distortion field. It looks a little bit like an intense crush or the early stages of love.
Sometimes it even is love, according to the afflicted.
We've seen this play out in articles, blog posts and podcasts countless times in recent years. A lot of it is intentional hype of course but there's a true believer piece that the hype overshadows. LLMs have an interesting (if dystopian) psychological effect that will no doubt eventually be studied and named.
There's no question that Jensen is on the podcast to sow hype, but he might also be in the butterflies stage of LLM salience. Which is to say, a little crazy.
Would love to watch this but don’t want to support the fraudster that is Lex Fridman. Guess I’ll look at the transcript.
I’m not keen on Lex and find his questions limited to agreeing with his guests, but I don’t know much about any fraud or other negatives beyond that. Can you expand?
Why is he a fraudster?
I'm tired of this guy and his jacket, either fix your GPU drivers or go away.