This whole "coding is dead, long live AI" brings to mind the priests in Asimov's Foundation. What do people think is going to happen? AI will write everything and MBAs will just tell their computer what system needs to be built? If something doesn't work, OpenAI and Nvidia will send out their priests to exorcise demons from within the machine?
People will need to code. Maybe we don't need the legions of devs that we currently have. But coding is also an invaluable tool. The styles of thinking in math, physics, and biology are mirrored in writing code. Code is a place where you can be "hands on" with concepts and test your ideas. Not everyone needs to be a professional code jockey. But I think most people would benefit from exposure to writing it.
Maybe that's exactly what the Nvidia CEO is thinking. Those priests had immense influence in the Foundation series. At one point they even used their influence to stop a major interstellar war. As the CEO of Nvidia, if we go the route of tech priests, that would make him tech pope.
Oh believe me, I bet Nvidia would love debugging to be as mystical as possible.
"Oh yes, the incense is definitely necessary and BTW it was 40 thousand units of spice to get our holy armada on site."
It's even more insidious than that.
The people who make decisions on where to invest money are MBA management types. Those people detest the fact that they have to pay the dorks that are good at computer programming so much in order to attract them.
Their ultimate fantasy with regard to technology is to fire all of them and pocket all that money for themselves. Huang knows his audience so well.
The idea that programming (a skill that virtually none of those management types have, yet all of them covet so highly) will become worthless and obsolete might as well be mainlining heroin right into their Harvard educated veins. It's the most amazing sounding thing ever to that type of person.
Not only is Huang positioned as arguably the single person in the entire world with the most to gain from the insane AI hype gripping the world right now, he also knows how to play into the fears and ultimate fantasies of the people who make those decisions.
It's no coincidence that the rhetoric around AI is that while it can replace those pesky, entitled, expensive artists, programmers and engineers, there's no WAY it could replace you, Mr. insightful and oh so uniquely wise business person.
On one hand I understand that these Harvard educated people are mostly just using standard tools in combination with nepotism and second-degree nepotism ("networking") to get rich. They aren't special geniuses. They're just inheriting wealth and means to wealth. So maybe they won't be replaceable without the AI itself being able to take that nepotistic role.
But on the other hand AI could be enough of a means to move power that even generational wealth can't stop it.
Exactly this. It’s in Nvidia’s best interest to add to the AI hype bubble by convincing everyone that it’s only a matter of time until AI fully replaces humans. This keeps the VC money flowing to AI companies, who turn around and spend it on NVIDIA hardware to run their models.
Personally I’m not sure if he’s right or wrong, but I’m not gonna put much stake in someone’s opinion when they have such a clear conflict of interest, same as I’m not going to listen to a crypto CEO saying that crypto/blockchain is the future. They win by pumping up the hype bubble, so why should I put value in their public opinion.
Talk is cheap, and at least this piece of data shows that AI is just another coding tool, not a replacement. Someone who understands the code is still necessary to judge if it's safe and correct, to interface with it and debug it. Maybe it's a matter of time, but this time is not yet.
It's akin to saying that fancier telescopes will eliminate the need for astronomers. Writing code is a small part of what a software engineer does, hardly even the most substantial. Every new tool that has accelerated software development and reduced the need to reinvent wheels has only increased productivity and been met with an increasing demand for software engineers.
Fancy Markov chains are fundamentally incapable of solving novel problems. A machine that probabilistically predicts the next token in a chain, by definition, won't arrive at something that isn't well-trodden in its training set. What they are good at is regurgitating Stack Overflow and documentation samples for things that are well understood. Neither of which helps you write novel software, only boilerplate for common patterns. (So does using a framework like Spring instead of writing an entire socket server yourself.)
If you're an intern doing superficial work for CRUD applications, I guess GPT type AI is a threat. But from the standpoint of someone experienced with software development, it looks like people thinking a Magic 8-Ball is revealing divine insight. The ability to speak is a neat trick, but it doesn't imply reasoning ability.
So far I've mostly found them annoying. The autocomplete covers whatever else I was looking at, then I have to stop and check to see if it's doing what I want. If it isn't, I now have to extract my original code out and try to remember where I was 2 minutes ago.
All the really helpful examples I've seen are things like asking it to quickly fetch from an API. That's awesome, but it's not the hard part. I can see stuff like that expanding to save time on tediousness. I'd still want to write tests for it, I certainly wouldn't trust it to test itself.
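To be concrete about the kind of API-fetch boilerplate meant here, a minimal sketch in Python (illustrative only, not code from this thread; it happens to hit GitHub's public issues endpoint):

import requests

def fetch_open_issues(repo: str) -> list[dict]:
    # Return the open issues for a GitHub "owner/repo" as a list of dicts.
    response = requests.get(
        f"https://api.github.com/repos/{repo}/issues",
        params={"state": "open"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

Handy to have generated, but it's exactly the easy part: the tests and the judgment around it still fall to a person.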
Have you tried chatGPT itself instead of copilot? Because I can totally see where autocompletion tools might not be as beneficial. In fact, I don't trust current LLMs to write code directly for me. They are pretty good at being a pretty advanced rubber ducky assistant.
Helping me go through obscure errors and stack traces, decipher spaghetti code someone else wrote, and handle a lot of the tedious Google searches.
The benefit of a chat style LLM is that you can follow up on it and ask it questions. I don't expect it to be perfect, but it gives me a jumping off point for a lot of things that previously would have cost me more time.
To give a few slightly more concrete examples.
Deciphering spaghetti code: LLMs are generally pretty good at picking apart code blocks and explaining the functional parts. A while ago I was dealing with code that had lots of methods on single lines with tons of conditions. I put it in chatGPT, asked it to go over it, and it gave me a point-by-point explanation of all the logic in there. Again, I don't expect it to be perfect here; it doesn't need to be. The way my mind works, once I have the explanation I can much more easily go to the single-line mess and follow it along. If chatGPT messed up I will see that, but I will also be much further along with deciphering than I would have been doing it manually.
Tedious google searches: Lots of stuff related to specific implementations, errors, etc. Where you often would go over multiple, often outdated, stackoverflow threads.
Picking up new technologies faster: Recently I had to figure out Kubernetes for the first time, in combination with google cloud. Previous experience was very limited and the person within the team who had knowledge left before I joined. While I still had documentation and all that on hand, chatGPT allowed me to just quickly get insights on the current configuration of things. Just giving it snippets of the Kubernetes deployment yaml and various events I saw.
Again, it isn't perfect. In fact, I can only use it like this because I have knowledge and experience myself that allows me to ask the right questions and validate answers.
You also need to be aware of other limitations. For example, I don't expect it to be up-to-date with cutting edge technology. In fact, I always assume the answers about frameworks/libraries/tools are based on older versions. For example, I recently asked it some things about Grafana. While largely the answers were helpful, some specifics about configuring graphs made it clear that it was giving answers for a different version. But that was fine, as I now had much more specific terminology to work with to dive into the documentation for the version I needed help with and get to the answer.
And all things considered, it isn't as different from encountering answers online for older versions of the software or language you are using. Although as the answers look much more tailored to you, it is much more important to be aware that they can be faulty.
So to tie it into the bigger discussion, I am not worried about it replacing me in the near future. If anything, more junior people will need training in how to use it in a way that is actually productive. In the meanwhile, it is a very nice QoL improvement for my day-to-day work.
I appreciate you taking the time to put this together. I haven't tried using ChatGPT like this, mostly because it isn't allowed at work and I've been trying to rest my hands by not typing in my free time.
I like your suggestions a lot. One thing I've become convinced of over time is that the hardest part of learning a new language or framework is understanding how to read the error stack - I could see this being incredibly helpful in speeding that up.
I use it mainly for a few things: writing SQL, Splunk queries, and GQL. Also things like regex. Terminal invocations as well.
And also using it as a reference for more abstract concepts. For instance, I asked an LLM how something akin to dependency injection is done in modern JavaScript. It’d be a tricky thing to google - I don’t necessarily need literally an OO dependency injection framework or anything, just how that kind of abstraction is usually done. And I want to do as the Romans do.
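For a sense of the regex-type tasks meant here, a minimal illustrative sketch (an example invented for illustration, not a prompt from this thread) of the kind of one-off pattern that's quicker to ask an LLM for than to hand-write:

import re

# Pull ISO-8601 dates (YYYY-MM-DD) out of free-form log lines.
ISO_DATE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

def extract_dates(log_line: str) -> list[str]:
    return [f"{y}-{m}-{d}" for y, m, d in ISO_DATE.findall(log_line)]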
I haven't used it to generate code for an actual product. Do you have experience with tools like Visual Studio, MVC, and Entity Framework, where you could build the database first, and it would generate all the CRUD interfaces, the C# REST API, and the HTML and client-side stuff? That would be my benchmark: can the LLM do better than the procedural generation in terms of getting the CRUD cruft out of the way?
I haven't used it to do anything like that. Mostly I just use it in neovim, where I can write a comment that describes what I want and it frames in the code, e.g.
# a view that loads a FloorControlImage and returns a png to the user out of the image field
and it writes out code:
from django.http import HttpResponse
from django.shortcuts import get_object_or_404

from .models import FloorControlImage


def floor_control_image(request, pk):
    # Retrieve the model instance based on its primary key
    instance = get_object_or_404(FloorControlImage, pk=pk)

    # Check if the instance has an image associated with it
    if instance.image:
        # Retrieve the image data from the ImageField
        image_data = instance.image.read()

        # Set the content type to JPEG
        response = HttpResponse(image_data, content_type='image/png')
        return response
    else:
        # Return a default image or a placeholder if no image is found
        return HttpResponse(status=404)  # or return a default image
I can ask it to do other things like, "write the urls.py entry for this view" or "make a django admin view that overrides the change form to show the image in-line" and it'll do that as well. Or, instead of looking through documentation I can do things like "my image is being displayed with the wrong color scheme in jupyter when I use imshow, write an appropriate image conversion to show it in RGB" and it can just do it.
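For instance, the urls.py entry it produces for that view is typically along these lines (a representative sketch, not the exact output; the path and name here are guesses):

from django.urls import path

from . import views

urlpatterns = [
    # Route requests like /floor-control-image/42/ to the view above.
    path('floor-control-image/<int:pk>/', views.floor_control_image, name='floor_control_image'),
]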
Has this really improved your efficiency by hundreds of percent? I’m a full time web developer. I find it is at best a 15% improvement. And that’s only when I’m writing a ton of code, which isn’t every day, and only when the code I need to write is fairly simple and I know what needs to be done.
It’s been the most helpful when, as an expert in one system, I need to learn a very similar but completely independent system. So when I needed to write Angular code as someone that is experienced in React. The two map almost 1:1 so I could describe exactly what I’d do in React and get decent Angular code out. Compared to me struggling through docs and trying to adapt existing Angular code it was perhaps 2-3x faster.
If you're comfortable sharing, what is your job title and level of experience? How much of your day is spent programming?
I would say yes, hundreds of percent. I spend a lot of time exploring new code or using obscure libraries, so having the LLM as a shortcut to digging through documentation is a huge benefit.
I would say that it is, for me, not just an efficiency improvement in coding, but for many tasks that I might not normally code a project for. Lowering the barrier for putting together a quick and dirty app to track and optimize some random thing means I'm more willing to automate things that I might normally do by hand.
I've been programming for almost 40 years now -- mostly building startups, currently just working on a bunch of hobby projects. I probably program about 30-40 hours/week.
Ah, are you doing a ton of different small projects then? I can see LLMs being a massive time saver for that case. They’re very good at boilerplate and implicit documentation lookups.
Hit and miss in my experience. I’ve had them hallucinate the perfect-sounding API operation for a problem I was working on more than once. Sounds and looks right, but had no basis in reality.
It was niche stuff I suppose, so not really surprising all I got was hallucinations. A concrete example is when I was looking for ways to calculate azimuth between 2 points on a geodesic sphere using a specific library. ChatGPT gave me this helpful snippet (nestled among boilerplate):
// Calculate the azimuth from point1 to point2
double azimuth = spatialContext.getAzimuthCalculator().azimuth(point1, point2);
Trouble is, the "getAzimuthCalculator()" function and the class it supposedly returns doesn't exist and never has :-/. A similar case where I don't have the original snippet any longer was for converting between UTF-8 and the ancient HP Roman 8 charset. Helpfully I was told to just call a toCharSet("HP Roman 8") function or something along those lines.
Both examples are just using public ChatGPT (3.5 I guess), so maybe more specialized LLMs would have given better results.
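For reference, the math that hallucinated getAzimuthCalculator() call would actually have had to implement is the standard great-circle initial-bearing formula. A library-free sketch (written in Python here for brevity, even though the original context was Java):

import math

def initial_bearing(lat1, lon1, lat2, lon2):
    # Initial bearing (azimuth) in degrees, 0-360, from point 1 to point 2
    # on a sphere, using the standard atan2 formulation.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360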
That's neat. Similar to the scaffolding I've experienced, but allows for method by method generation. How complete or functional is it normally? I.e., how much do you typically need to add to make it minimally work in the most generic way, not necessarily exactly how you end up wanting it?
At least with the database-driven functionality, everything worked as generated, but might be ugly or require additional business logic to be added.
I'd say it depends on a lot of factors, but frequently the code just works as is. If it's something complex, it will probably take a bit of editing. Simple CRUD-like views are maybe 95% likely to be exactly correct in my experience.
I already made a comment but would like to separately say the code you provided as an example has issues:
There are too many comments. This ruins the signal-to-noise ratio of comments. The comments are also wrong, e.g. "Set the content type to JPEG" when the next line says content_type='image/png'.
The code should use an early-return pattern. Maybe this is just a style matter, but I find in HTTP handler functions you really want to rely on early returns. There are usually a handful of error cases (auth, permissions, parameter invariants, not-found issues). Nesting ifs in that situation will get out of control fast (see the sketch below).
In my experience having an LLM write too much code in one shot will cause issues because most of the training data comes from mediocre developers. But if you're getting it to write just a couple lines in the context of a file in which you, as a human, are being careful and thoughtful about what you write it will be well guided to continue your patterns.
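A minimal sketch of the early-return version being argued for above (a rewrite of the generated view for illustration, not the commenter's exact code):

from django.http import HttpResponse
from django.shortcuts import get_object_or_404

from .models import FloorControlImage


def floor_control_image(request, pk):
    instance = get_object_or_404(FloorControlImage, pk=pk)

    # Bail out early if there's no image, so the happy path stays unnested.
    if not instance.image:
        return HttpResponse(status=404)

    return HttpResponse(instance.image.read(), content_type='image/png')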
The comments are also wrong
Yeah, I manually edited it to PNG -- this is just the most recent thing I asked it to do for me, so it was already sitting on my screen and I just pasted it here half way through editing it. Initially I asked it for a JPEG, but realized a few minutes later that I actually stored PNGs.
The benefit to this code is: I did not remember how to get the actual binary data out of a Django ImageField (just instance.image.read as it turns out), and I didn't want to look it up. So I wrote the above prompt and I got that working code, and it also saved me the trouble of getting the necessary imports and setting the content type/handling 404s.
But that's a rather big if. How would you know if the code produced is actually good, and the results are correct, if you don't understand it? People copied from Stack Overflow since its inception, and it didn't kill coding. Even if "programming language is human" you still need someone good at analytic thinking to phrase things correctly, to state the requirements. Kind of the point behind business analysts I guess.
I did not use any of my childhood programming knowledge in adulthood. The languages, environments, methods, practices all changed. I did use my analytic thinking skills that I trained on my childhood programming.
I applied them to physics. Because we'll always need analytical thinkers in science, engineering, and, yes, even biology mentioned by Huang. All the biologists I know would not survive without writing scripts, and it's not the only part of the job that requires the same mindset.
How would you know if the code produced is actually good, and the results are correct, if you don't understand it?
That's the cool thing about AI. You don't have to. Sure, it's not that great now, but we're only a year into the ChatGPT era and it's already way better than it was a year ago. As more products reach maturity and are integrated, the average person will be completely worthless compared to LLMs when programming. Sure, experts will be needed until we have something approximating AGI, but the average shitty dev won't be needed within a handful of years.
Yeah, because those positions might not exist anymore by the time those junior devs would have the opportunity to become senior devs. Remember, we are talking about literal children here. They won't be at that point for multiple decades.
That, and a lot of the double-checking can also be done by AI I presume. No reason an AI can't take requirements, convert them to tests, write code that solves the problem, run the tests to ensure the code satisfies them and then validate it all against the original requirements, producing a report that highlights potential issues. Only slightly optimistically, that's just scaling what's already there - building models with better reasoning capabilities, gathering more data and some integration work to put it all together into a usable product. It's IMO entirely feasible within the time frame that's relevant for what kids should learn today. If you're going to use your coding skills in 15-20 years, my guess for their marketability is quite grim, considering the hordes of coders we have educated the last 10 years, a lot of which will probably be made redundant if my crystal ball isn't wrong.
No reason an AI can't take requirements, convert them to tests, write code that solves the problem, run the tests to ensure the code satisfies them and then validate it all against the original requirements, producing a report that highlights potential issues.
You're absolutely correct, but this is one of the reasons I think there are going to be issues. Have you ever worked with a contracting team that was handed requirements and then turned in their code at the end of the project? In my experience, the resulting code will do exactly what the requirements say.
The code is put into production and now all the issues pop up. I wouldn't even call a lot of them bugs. These are things that weren't addressed in the requirements, usually because they were so obvious to the person who created them.
Here are some examples I've seen:
No logout button (logging in works great, though!)
People can't figure out how to navigate the software
The basic functionality is so hard to use that folks either go back to what they were doing before or come up with some new manual process
Logical fallacies
This is how we get a lot of government software, and why it's so expensive.
I remember a time in my career where I was updating a web page. This particular web page was dynamically generated with assembler running in a CICS region on a z/OS mainframe. The data was queried through a Natural program that interfaced with an Adabas database and a C routine in an OMVS segment running a UNIX environment, which queried the central identity system. When the data came back, I used assembler to construct the HTML that would serve up people's pay statements.
Just that one little task required detailed knowledge of the hardware architecture, OS principles, and all sorts of things I'm glad I only vaguely remember. (I've been down the gullet of an interstellar cockroach; that's one of a thousand memories I don't want).
Do I think kids should learn programming? Yes. Do I think they should learn what I learned? No. I think they should focus on Python, and SciKit-Learn, and how to apply it to problem domains, and only get into the deep technical if that is where their interest and career takes them.
I don't see on the horizon yet LLMs being able to write the next version of LLMs in the same way that we used machine code to write an assembler compiler, which we then used to write a C compiler, and on and on. Until that happens, someone has to code, even if it is LLMs and other machine learning platforms.
I think this CEO is trying to appear visionary, but I don't think the comment will age well as we go 5, 10, 15 years with AI still not driving code development. I suspect that for at least that long you will need developers to drive the prompts used to scaffold the different elements of a system, as well as to architect the system to even know what to ask for.
Edit: amusingly my phone keeps replacing LLMs with Llama, which is nicely confusing.
Depends what you mean by coding.
The dull as hell busywork like setting up a basic website? Sure, an AI can webpage your bicycle shop in five minutes. You won't find a single living developer who actually wants to do that job, either. It's below the level of burger flipping, it's cleaning toilets with a toothbrush. Good riddance to this stuff once it's automated.
Real coding? The kind you need a master's degree in pure mathematics with a helping of several other hard domain sciences just to put a toe in the door? The kind you need to build something like an operating system or precision real time weapons systems or particle models at a collider? That's not going away.
I can imagine an AI is going to make certain things like deployment, debugging, prototyping, and refactoring (time sinks everyone hates) into little more than progress bars. Worry more about what one human developer with a sharp mind can do when the automation multiplies their effectiveness tenfold. Should take a lot of the gruntwork out of it all and leave the humans with more design and experimentation time, where the fun is.
I'd expect programmers to automate CEOs, business management, marketing, and even most HR work out of existence long before they manage to truly obsolete themselves. :P
No one plays chess for some practical goal. If it were important that people play chess well (eg, let’s say that in a parallel universe god rewarded humans by how many angels we could defeat in chess), then absolutely only chess algorithms would play chess in that context.
Wasn't it like 15 minutes ago that tech CEOs were running around shouting that everyone should learn to code ASAP? It's almost like we shouldn't pay attention to CEOs because what they say is just what's best for them at that particular moment in time.
Compsci is not coding.
I think kids should be given the opportunity to learn a whole boatload of things. The "should" in this sentence is what is problematic about the trend that this CEO is talking about. Is he saying kids "should" not be forced to learn to code, or that there "should" be no need for humans of their generation to code? The former I would agree with, and the latter is just industry "buy my cereal" hype.
I agree that kids should not be forced to learn how to code. If they're talking about a graduation requirement level of "should", kids should definitely NOT be forced to learn how to code. Maybe they want to do networking or hardware or a billion different compsci things. Heck, maybe they want to learn HVAC and plumbing and growing food, which I would argue are even better things to learn than compsci. All kids should be given exposure to compsci, given a mile-high view of how computers and programming work, and basic ready-to-play sandboxes to mess around in for an afternoon. Any more than that and we are veering towards "leetcode summer camp" territory. Do they need to recite a sort or demonstrate how to balance a binary tree? Absolutely not, unless they want to as an elective. They don't need coding any more than they need to know how to change the oil of their car: it's good to know at least in theory how it's done, but honestly they're far far better off learning how to read a topographic map, evaluate climate change, how investments and real estate work, how to change the air filter in the furnace, and how to unclog a toilet.
Kids' extremely limited time is better served by teaching them how to teach themselves anything, but that's a whole other rant..
As for what it sounds like he's saying here, that AI has "closed the gap", one needs to look no further than how folks living in lower income countries contribute to make AI work, and how little they're paid for their effort, to know it's a bunch of pfoowie.
The common mindset seems to be that we teach kids to code so that they can code as a job later on. So should we stop teaching kids math if the job of actuary is taken by AI? Writers usually aren't paid very well and I have a pile of obviously AI-written books in my Kindle ads, so we should probably stop teaching kids to write coherently. Taking this mindset to an extreme, do we even need school now that we suspect we've either reached the singularity or are on the cusp of it?
Learning to code teaches a style of problem solving. Part of that is breaking a problem down into a series of simpler problems until you reach a point that you can move forward. Another part is understanding not just what is literally asked for, but what is needed.
I posit the opposite:
We live in an era where damn near every job on the planet can benefit from some degree of coding. Everyone should be able to code at least a little so they don't strictly need to rely on somebody else to make their lives easier.
The difficult part of programming is understanding the requirements within the business logic: handling all the edge cases and the interfaces to other internal and external systems, while ensuring that whatever new stuff you add doesn't break anything existing or have unintended side effects. I don't see AI handling all of that soon. It seems good at closed environments, but how well will it handle the bigger picture, so to speak? And who will ensure that the stuff it produces is correct and be able to fix its errors?
I think the Nvidia CEO is dumbing down his message to create nice headlines.
Most kids do not need to learn to code in order to succeed in life. Many jobs that require light queries or occasional scripts will not require the effort that they do now.
Computer Scientists will still need to know how to code. They may code less, the same way that math is done using calculators and spreadsheets instead of hand calculations, but they will still need to know how to do everything current Computer Scientists do.
I think that learning to do arithmetic yourself is useful even if you expect to use a calculator or spreadsheet, and similarly, an introduction to coding (which is all many kids will learn anyway) is useful background knowledge even if you don’t do a lot of it by hand.
It might be more useful to look at this in a less binary way: how much about programming should schools teach? I think it should be enough that kids get a taste for it, and the ones who are really into it will likely learn a lot more on their own.
Regarding AI, the future is hard to predict, particularly in a rapidly changing field. LLMs haven’t been around for very long and it seems too soon to say what AI-based tools will really be like. Programming jobs might still be around (in some form) when current LLMs become obsolete.
I don’t think he’s wrong, he’s just too early by maybe 10-20 years. Right now, AI is a really useful tool in the hands of developers who know what they’re trying to achieve with it. It will write decent code if you prompt it well, which itself is a technical skill. Then you’ve got to review the generated code for bugs, which is tricky because AI-written bugs tend to be more subtle than those written by people.
After that, you’ve got working code but you have to know what to do with it. You need familiarity with your versioning process, your build tools, your CI/CD pipeline, deployment workflow, whatever. Just writing code is only one step toward delivering functioning software. As far as I’m aware, AI can’t do any of that other stuff yet. It will. But not today. I don’t think it’s able to architect more complicated modules with many files and dependencies either, not without significant hand-holding.
In 2024, AI is a tool… not an automaton. I fully expect that to change as the technology matures. This guy is jumping the gun though.
Started my career too late for "Webmaster", too soon for "Technopriest". sigh
Haha that's what I was getting at. Kind of an inversion of the Foundation story.
Was Warhammer 40K prophetic? Hope not!
AI hardware company tells you to bet it all on AI.
How else are they going to carry the S&P 500 next quarter?
The death of coding has been predicted many times over the past years and even decades, but now we have AI. I'm curious what people think about this.
Still, it's a shockingly powerful coding tool. I feel like my productivity has increased by several multiples.
Would you mind sharing some specific examples of how your productivity has increased? I haven't found AI tools useful at all in my day to day job.
I describe some stuff here:
https://tildes.net/~comp/1ej6/nvidia_ceo_says_kids_shouldnt_learn_to_code#comment-c4jp
https://tildes.net/~comp/1ej6/nvidia_ceo_says_kids_shouldnt_learn_to_code#comment-c4ll
I expanded on my use case in this comment, which you might find interesting.
If you can give a comparison that would be cool!
What context was this? For me it's pretty good with React, NodeJS, Python. Basically any time there's a lot of training data.
Edit: All old-as-rocks Java btw.
And thus we've further kicked the ladder from a path for junior devs to become senior devs.
Yeah, because those positions might not exist anymore by the time those junior devs would have the opportunity to become senior devs. Remember, we are talking about literal children here. They won't be at that point for multiple decades.
That, and a lot of the double-checking can also be done by AI I presume. No reason an AI can't take requirements, convert them to tests, write code that solves the problem, run the tests to ensure the code satisfies them and then validate it all against the original requirements, producing a report that highlights potential issues. Only slightly optimistically, that's just scaling what's already there - building models with better reasoning capabilities, gathering more data and some integration work to put it all together into a usable product. It's IMO entirely feasible within the time frame that's relevant for what kids should learn today. If you're going to use your coding skills in 15-20 years, my guess for their marketability is quite grim, considering the hordes of coders we have educated the last 10 years, a lot of which will probably be made redundant if my crystal ball isn't wrong.
You're absolutely correct, but this is one of the reasons I think there are going to be issues. Have you ever worked with a contracting team that was handed requirements and then turned in their code at the end of the project? In my experience, the resulting code will do exactly what the requirements say.
The code is put into production and now all the issues pop up. I wouldn't even call a lot of them bugs. These are things that weren't addressed in the requirements, usually because they were so obvious to the person who created them.
Here are some examples I've seen:
This is how we get a lot of government software, and why it's so expensive.
I remember a time in my career where I was updating a web page. This particular web page was dynamically generated with assembler running in a CICS region on a zOS mainframe. The data was queried through a Natural program that interfaced with an Adabase database and a C routine running in an OMVS segment running a UNIX environment which queried the central identity system. When the data came back, I used assembler to construct the HTML that would serve up people's pay statements.
Just that one little task required detailed knowledge of the hardware architecture, OS principles, and all sorts of things I'm glad I only vaguely remember. (I've been down the gullet of an interstellar cockroach; that's one of a thousand memories I don't want).
Do I think kids should learn programming? Yes. Do I think they should learn what I learned? No. I think they should focus on Python, and SciKit-Learn, and how to apply it to problem domains, and only get into the deep technical if that is where their interest and career takes them.
I don't yet see on the horizon LLMs being able to write the next version of LLMs, in the same way that we used machine code to write an assembler, which we then used to write a C compiler, and on and on. Until that happens, someone has to code, even if it is LLMs and other machine learning platforms.
I think this CEO is trying to appear visionary, but I don't think the comment will age well as we go 5, 10, 15 years with AI still not driving code development. I suspect that for at least that long you'll need developers to drive the prompts used to scaffold the different elements of a system, and to architect the system well enough to even know what to ask for.
Edit: amusingly my phone keeps replacing LLMs with Llama, which is nicely confusing.
Maybe Facebook hacked your autocorrect
There is no ChatGPT, only Llama now. 🦙
Depends what you mean by coding.
The dull as hell busywork like setting up a basic website? Sure, an AI can webpage your bicycle shop in five minutes. You won't find a single living developer who actually wants to do that job, either. It's below the level of burger flipping, it's cleaning toilets with a toothbrush. Good riddance to this stuff once it's automated.
Real coding? The kind you need a master's degree in pure mathematics with a helping of several other hard domain sciences just to put a toe in the door? The kind you need to build something like an operating system or precision real time weapons systems or particle models at a collider? That's not going away.
I can imagine AI turning things like deployment, debugging, prototyping, and refactoring (the time sinks everyone hates) into little more than progress bars. Worry more about what one human developer with a sharp mind can do when the automation multiplies their effectiveness tenfold. It should take a lot of the gruntwork out of it all and leave the humans with more design and experimentation time, where the fun is.
I'd expect programmers to automate CEOs, business management, marketing, and even most HR work out of existence long before they manage to truly obsolete themselves. :P
No one plays chess for some practical goal. If it were important that people play chess well (e.g., say that in a parallel universe god rewarded humans by how many angels we could defeat in chess), then absolutely only chess algorithms would play chess in that context.
Wasn't it like 15 minutes ago that tech CEOs were running around shouting that everyone should learn to code ASAP? It's almost like we shouldn't pay attention to CEOs because what they say is just what's best for them at that particular moment in time.
Compsci is not coding.
I think kids should be given the opportunity to learn a whole boatload of things. The "should" in this sentence is what is problematic about the trend this CEO is talking about. Is he saying kids "should" not be forced to learn to code, or that there "should" be no need for humans of their generation to code? The former I would agree with; the latter is just industry "buy my cereal" hype.
I agree that kids should not be forced to learn how to code. If they're talking about a graduation-requirement level of "should", kids should definitely NOT be forced to learn how to code. Maybe they want to do networking or hardware or a billion other compsci things. Heck, maybe they want to learn HVAC and plumbing and growing food, which I would argue are even better things to learn than compsci. All kids should be given exposure to compsci: a mile-high view of how computers and programming work, and basic ready-to-play sandboxes to mess around in for an afternoon. Any more than that and we are veering toward "leetcode summer camp" territory. Do they need to recite a sorting algorithm or demonstrate how to balance a binary tree? Absolutely not, unless they want to as an elective. They don't need coding any more than they need to know how to change the oil in their car: it's good to know at least in theory how it's done, but honestly they're far, far better off learning how to read a topographic map, evaluate climate change, understand how investments and real estate work, change the air filter in the furnace, and unclog a toilet.
Kids' extremely limited time is better served by teaching them how to teach themselves anything, but that's a whole other rant...
As for what it sounds like he's saying here, that AI has "closed the gap", one need look no further than how folks living in lower-income countries contribute to making AI work, and how little they're paid for their effort, to know it's a bunch of pfoowie.
The common mindset seems to be that we teach kids to code so that they can code as a job later on. So should we stop teaching kids math if the job of actuary is taken by AI? Writers usually aren't paid very well and I have a pile of obviously AI-written books in my Kindle ads, so we should probably stop teaching kids to write coherently. Taking this mindset to its extreme, do we even need school now that we suspect we've either reached the singularity or are on the cusp of it?
Learning to code teaches a style of problem solving. Part of that is breaking a problem down into a series of simpler problems until you reach a point where you can move forward. Another part is understanding not just what is literally asked for, but what is needed.
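As a toy illustration of that decomposition (my own example, with a made-up file name): "which word shows up most in this file?" breaks into "read the text", "split it into words", and "count them":

    from collections import Counter

    def read_text(path):
        # step 1: get the raw text
        with open(path, encoding="utf-8") as f:
            return f.read()

    def words_in(text):
        # step 2: turn the text into words
        return text.lower().split()

    def most_common_word(path):
        # step 3: count the words and pick the winner
        counts = Counter(words_in(read_text(path)))
        return counts.most_common(1)[0]

    print(most_common_word("essay.txt"))  # "essay.txt" is just a placeholder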
I'm pretty sure these are skills we want.
I posit the opposite:
We live in an era where damn near every job on the planet can benefit from some degree of coding. Everyone should be able to code at least a little so they don't strictly need to rely on somebody else to make their lives easier.
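I mean something as small as this (folder name made up, obviously): a few lines that save an afternoon of clicking, and that anyone could learn to write.

    import os

    # rename every .jpeg in a folder to .jpg so some picky tool will accept them
    folder = "holiday_photos"  # placeholder folder name
    for name in os.listdir(folder):
        if name.lower().endswith(".jpeg"):
            new_name = name[: -len(".jpeg")] + ".jpg"
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, new_name))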
The difficult part of programming is understanding the requirements and the business logic: handling all the edge cases and the interfaces to other internal and external systems, while ensuring that whatever new stuff you add doesn't break anything existing or have unintended side effects. I don't see AI handling all of that any time soon. It seems good in closed environments, but how well will it handle the bigger picture, so to speak? And who will ensure that the stuff it produces is correct, and be able to fix its errors?
I think the Nvidia CEO is dumbing down his message to create nice headlines.
Most kids do not need to learn to code in order to succeed in life. Many jobs that require light queries or occasional scripts will not require the effort that they do now.
Computer Scientists will still need to know how to code. They may code less, the same way that math is done using calculators and spreadsheets instead of hand calculations, but they will still need to know how to do everything current Computer Scientists do.
I think that learning to do arithmetic yourself is useful even if you expect to use a calculator or spreadsheet, and similarly, an introduction to coding (which is all many kids will learn anyway) is useful background knowledge even if you don’t do a lot of it by hand.
It might be more useful to look at this in a less binary way: how much about programming should schools teach? I think it should be enough that kids get a taste for it, and the ones who are really into it will likely learn a lot more on their own.
Regarding AI, the future is hard to predict, particularly in a rapidly changing field. LLMs haven't been around for very long, and it seems too soon to say what AI-based tools will really be like. Programming jobs might still be around (in some form) when current LLMs become obsolete.
I don’t think he’s wrong, he’s just too early by maybe 10-20 years. Right now, AI is a really useful tool in the hands of developers who know what they’re trying to achieve with it. It will write decent code if you prompt it well, which itself is a technical skill. Then you’ve got to review the generated code for bugs, which is tricky because AI-written bugs tend to be more subtle than those written by people.
After that, you’ve got working code but you have to know what to do with it. You need familiarity with your versioning process, your build tools, your CI/CD pipeline, deployment workflow, whatever. Just writing code is only one step toward delivering functioning software. As far as I’m aware, AI can’t do any of that other stuff yet. It will. But not today. I don’t think it’s able to architect more complicated modules with many files and dependencies either, not without significant hand-holding.
In 2024, AI is a tool… not an automaton. I fully expect that to change as the technology matures. This guy is jumping the gun though.