Feeling weird about my career with respect to AI
I’m a software engineer. I graduated in 2021, so I’ve only been one for around 4.5 years and definitely still feel fairly entry-level, and it feels like companies don’t particularly want anyone without a lot of experience anymore (every time I look at new jobs, the number of years of experience required for “senior” positions seems to have increased by one). Meanwhile, AI is everywhere. I think it has its uses, but I don’t actually enjoy using it. I want to solve problems and think and write code, not talk to an AI and become a full-time code reviewer. My company is rebranding to have AI in the name shortly and, since early December, has been forcing us into 2+ hour AI trainings once or twice a week. A lot of my coworkers seem like they’ve drank the Kool-Aid and are talking about new models and shit all the time, and I just don’t get it.
I guess I’m kind of rambling, but I just feel weird about all of it. I want to program, but I don’t want to just use (or be forced to use) LLMs for everything, yet it seems like companies are trying to get rid of actual human software engineers as fast as they can. I’ll even admit Claude is way better than I expected, but I don’t actually enjoy sitting there typing “do this for me” and then having to just spend time reviewing code. I don’t know. I don’t think this is really even me asking for advice, just a rant, but yeah, I just felt like I had to get something out there, I guess.
The depressing thing about AI is that regardless of the long term viability of the strategy companies are taking right now, it's only bad news.
If AI's long-term capabilities are overstated, we've wasted a whole shit ton of money, effort, and time forcing something to be used, which will just result in tens of thousands of companies going bankrupt and millions losing their jobs.
If they're not overstated, human beings have lost the competitive edge to be given any sort of actual, rewarding work. Solving problems is interesting. Making art is interesting. Writing code is interesting. The long-term path of AI, if everything pans out the way they say it will, is that humans are really only there to provide some sort of intent to the AI.
User: "Make me a useful product"
AI: alright. Here are the 10 most profitable ideas I have. Which would you like to pursue?
User: whatever. The one that will make me the most money.
Any sort of actual interesting work is totally stripped away. That's a best case scenario if this stuff pans out too.
The whole thing makes me really cynical about the future, and has turned me into sort of a luddite.
In the industrial revolution, there was some refuge against those feelings. Namely, "yes, my sweat and manual labor are no longer valuable, but I, or my kids, can become skilled laborers instead."
When the information revolution and automation started eliminating skilled labor, it was "yes, my skill with my hands is no longer valuable, but I or my kids can use our minds to solve problems instead".
Now that it seems like our minds aren't valuable anymore, what's even left?
In my perfect world, not only would this technology not exist; it wouldn't even be possible for it to exist.
Your point about meaningful work being automated away reminded me of the second point I made in a thread I posted here on Tildes about two years ago. The more directly relevant counterargument I got in response:
...the latter path treads perilously toward a paradigm of users serving the machines rather than the inverse...
The rest of the discussion might not be directly relevant, but you might find some arguments tangentially helpful.
I'd be somewhat open to the idea of it all working out like they say it will, if there was some kind of intent or plan to use this technology to create surplus to enable leisure time or optional work for the majority of the population.
Just as previous productivity gains from the industrial or information revolutions didn't translate into giving any time back to workers, I don't see any indication of this time being different. And as you say, the ultimate effect of previous labor revolutions was moving work of the hands to work of the mind. If that's gone, we're left with leisure and self-betterment, which in theory is awesome. (The key caveat: political leaders and AI evangelists aren't proposing any provisions for the end of knowledge work, but let's say it magically happens, in a best-case scenario.)
If there was a plan to give us all a universal basic income and let us chill, in theory I think I'd be perfectly fine sitting on my ass for the rest of my life pursuing hobbies for the sake of it. But that's just me, and in practice I don't think that'd satisfy even a lazy person like me in the long term.
Our current societal bonds are formed through education and labor. I've got people to talk to and things to do because of that. Without the social connections I formed in school, college, vocational training, and work, I think I'd be pretty miserable. These might last for a decade or so, but I can see them trailing off. Future generations might have different social structures, but like it or not (and I don't, really), ours centres around human beings doing work.
If nothing else, that creates a sense that most of us - given the massive shrinking of the middle class - are at least in the same trench, even if some of us have nicer cots, blankets, and equipment.
I think the majority of people like to create some kind of meaning beyond just hanging out and having a good time.
If we were talking about actual machine intelligences that cared about our welfare and were making provisions for societal upheaval, I might be open to it. But the current iteration of 'AI' is a productivity tool, so it's hard for me to see it as anything other than a new kind of power loom or automated production line deployed without care for the people it impacts.
Yeah, I think an Iain Banks-esque Culture future is the absolute most positive scenario if the technology pans out. I don't know if it's feasible to get from where we are as a society now to there, though. I don't even know if it's feasible without radically changing how the human mind works.
We evolved in a world with scarcity, where labor was valuable. It's not only how we were brought up, it's how our DNA has shaped the physical structure of our bodies and brains. I'm not even sure we are compatible with the idea of having no real "utility" or "purpose", and I definitely know that our political systems aren't.
I’m taking a sabbatical and forming social bonds by making art, teaching people, learning from people, and solving problems of all kinds to make things easier for others. I don’t see why we wouldn’t all do stuff like that if we didn’t need to work.
That sounds awesome, and if I could see a viable pathway to us all taking sabbaticals, I'd be all in. As I tried to say, I don't love that my most important social relationships outside of family were formed in work.
I just can't see the people in charge of this technology giving us all the equivalent of a permanent sabbatical. I'd love to be wrong, but there seems to be no evidence for plans to provision workers made obsolete by AI with essential needs.
No, definitely not.
I never understood this reasoning. The fact that AI can do some things does not make every problem magically solvable by AI. Some problems will disappear, others will pop up. Because the following is never going to happen:
For a very short period this might work, but soon all your competitors will have learned to ask their models the same question, and suddenly the AI is unable to execute on it, because it did not factor in that everyone would jump on the same idea. In other words: AI ideas are worthless; everyone can generate them in the blink of an eye. What is left is actual problem-solving and actual care about a solution.
I don't exactly know what that will look like, but I am certain humans will be needed for a long time. If anything we are making the world more complicated with it and thus we will need more brains to steer it in the right direction.
Yeah, I mean that would align with the first scenario, where the promise of AI's long-term capabilities is overstated.
All of the insane investment and speculation is built on at least some hope that that'll be the future, and that all of the actual problem-solving will be accomplished by AI alone.
I hope you're right, but even if you are, that would coincide with a major market crash and a ton of unemployment and poverty once the reality of the limitations is made obvious.
I'm in a similar boat. My company has been all about AI for the last few years, but especially in the last year. It's not quite as shoved down our throats as it sounds like it is at your job, but it's not far off. We had some training, and we've got a blank check to grab AI-powered IDEs and other tooling. We're using it fairly responsibly, with it being used to generate release documents, preliminary PRs, PR summaries, etc. Stuff that AI's really good at and that is easily checked by a human. Devs also use it, but it's not mandated.
But then every now and then we get some blatant AI propaganda meme about how it won't take our jobs but elevate them... and that just doesn't pass the smell test. I can absolutely see how in 5-10 years AI could (but certainly not should) replace entry-level developers. And it's terrifying because, as you suggested, we could also get to a point where we're just glorified PR factories for the AI. And I'm not saying that's a sane choice, or that AI will actually be ready for that, but it's not developers who pull the purse strings in corporations; it's the business dorks up top. And all they see is dollar signs, with little regard for how their choices impact the people who do the actual work.
Years ago a Senior Dev warned me against using git UIs because they "make you lazy and forget the command line" and I kind of laughed it off, but he was right and I'm seeing that happen with AI. It's making me lazy and forgetful because I can just let the stupid robot do it for me.
At first it was a huge benefit because I could use it to quickly ramp up on new projects and start making an impact immediately, but those accomplishments now feel hollow and I want to just... write code myself. I want to go back to the old days of nothing but my wits and an IDE. Even if my wits let me down from time to time!
Yeah, for us, early last year they added some LLM PR-reviewer thing, and it started off pretty crap, giving us suggestions that were blatantly incorrect and bad summaries, but it’s gotten better lately (not sure if they upgraded the model it’s using or just integrated it with our codebase better, because the whole repo is something like 70M LOC and the stuff that we actually work on day-to-day is around 1M). Even at its worst, it’s just a quick comment I have to read to see if it’s worth exploring more, and at its best it’s helped me fix a couple of edge case bugs I missed.
I think this is kind of the thing for me. I actually had Claude help me figure out and fix a bug I’d been stuck on for a week or two, but the explanation it gave for the fix was just wrong/incomplete, so it took me another three or four days of looking at it before I could actually articulate to my coworkers why the fix worked. Yeah, it still probably sped me up, but at the same time, if I keep using Claude for every change, I feel like I’d just keep getting lazier and forgetting how the whole codebase works.
I think this is the biggest part of my existential dread with AI.
For most people who've drank the kool-aid on it, this doesn't seem to be a step they even think about. They're not worried about understanding, because "the AI does that" (even if it demonstrably does not), so they will put the wrong explanation in the ticket / PR and move on. It's a lot of blind sprinting and assuming the road will still be under your feet, and it's generating a lot of rust and dead ends in a lot of codebases.
Treating LLMs as magic cognition machines feels like it's going to absolutely wreck every pipeline to every senior position in anything remotely white collar.
This is one of the reasons I am wary of going back into almost any part of the tech industry that isn't physical in some way.
I want nothing to do with any institution that legitimizes the use of those systems. I will not work with people that use it. I will not be in a position that requires it. (Don't ask me why; no need to rehash the pro/anti AI discussion for the thousandth time. Part of it is personal and emotional in addition to political and social.)
Mainly I just want to say that, even as someone who isn't a programmer but has done adjacent tech work, I feel you.
When I left my most recent position, it was already starting to infect systems administration: coworkers were using it in ways that left them with scripts they couldn't fully explain, or that would fail oddly in ways they couldn't figure out, and this was not an industry where I would want people to take such an approach to extremely vital systems.
Though I will say: utility is secondary for me. I would still be against these systems even if they were 100x more reliable than they are now; in fact, I'd oppose them even more strongly, because they would be an even bigger threat. Whether they work or are useful doesn't matter to me, or at least doesn't move the needle of my opposition. Things can be useful and also bad for us at the same time.
I’m a software engineer and I’ve witnessed LLM usage explode at work too, though not to the degree you’ve described. I just want to say I totally understand how you feel. It’s tough working through years of school only to have the career you envisioned change in the blink of an eye. I’m sorry that all this nonsense is happening to you and our field right now. Programming is a wonderful craft and it sucks that your company is taking the joy out of it.
LLMs are almost certainly here to stay. However I don’t think this is the end of software engineers as programmers. The industry goes through cycles and right now we’re in a hype bubble. Even if we have to endure a few years of this nonsense, there will be a light at the end of the tunnel. I encourage you to find joy in programming outside of work, if you have the time and energy.
Again, I’m sorry this is happening and I wish you the best moving forward in your career.
Thank you! I definitely try to do stuff outside of work when I can. I usually spend more time working on my home lab, so it’s not directly programming-related, but it’s still something I enjoy a lot. The big thing I’ve wanted to do is work on a window manager for the new rwm protocol for the River Wayland compositor, but yeah, it’s hard sometimes to have the energy to get off work and then program more, haha.
Awesome, glad to hear you’re able to tinker outside of work. I was just reading about the rwm protocol last week! If you do build anything with it I’d love to hear about it, I’m a sucker for anything written in Zig.
Ignore the hype and give it time; the rapid pace will slow down eventually, and along the way the software industry will figure out how to balance people and agents.
In the short to medium term, provided you're a good engineer (or even just above average), you won't become redundant. It's people graduating now, and over the next few years, who are going to have a rough time. And a lot of them will figure out how to adapt because they have no baggage to bring into the process of learning a completely new technology that almost no one has experience with at this point.
Whatever you do, don't avoid learning how to use coding agents. There is no future where they aren't an integral part of software development.
You mentioned using Claude Code... there are lots of ways to use it that don't involve just telling it what to do. For example, let's say you're writing core functionality for a new feature and you realize there are some helper functions you're going to need that aren't in the spec. Highlight relevant code and tell the agent what you need, then go back to writing the interesting code while it whips up some boilerplate. If by Claude you mean Opus 4.5, and it has codebase patterns and conventions in context, 80%+ chance it gives you what you need with no issues while you're busy making progress on the important bits.
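To make that concrete, here's a sketch of the kind of hand-off I mean. The helper and the prompt below are made up, not from anyone's actual codebase; the point is just that it's boilerplate you can review in thirty seconds while you stay on the core logic.

```python
# Hypothetical example of agent-drafted boilerplate. The ask was roughly:
# "Write a retry helper with exponential backoff, following our typing conventions."
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry_with_backoff(
    fn: Callable[[], T],
    max_attempts: int = 5,
    base_delay: float = 0.5,
) -> T:
    """Call fn, retrying on any exception with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
    raise AssertionError("unreachable")  # loop always returns or raises; keeps type checkers happy
```

None of that is the interesting part of the feature, which is exactly why it's worth delegating.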
I get your frustration, and the frustration of most everyone else who's posted so far. None (well almost none) of us asked for AI, and even if we had we wouldn't have asked for it to be unloaded onto society via a trillion dollar firehose of hype and FOMO in service of capital and little else. But being angry at it is akin to being angry at the weather. Unless you're going to move to a different climate, find a way to enjoy it.
Hi OP,
I'm in the same boat. With similar years of experience, but working in data science and machine learning. I can relate to your feelings.
Fair warning: this is a rather grim AI doom post. Please do not read this if you're looking for comfort.
The release of ChatGPT was one of the worst days of my life. Since then, I've been unable to escape the AI fever that's gripped my job and my life. AI is now firmly part of the public zeitgeist, and I get no respite from it. I never imagined I'd have to explain image/video generation models to my parents, but here we are. I now deeply hate this field.
I have no idea what my role is now. In the past, my job involved creating and training machine learning models for various business predictions. That's no longer the case. No one seems interested in those solutions anymore, even when they're the right approach. Now, my job primarily involves implementing agentic or LLM-based solutions. But isn't that what every software engineer does now? Isn't every random corpo an expert in AI now? Or at least, that's what they want people to believe. So, where does that leave me? The skill set that once made me somewhat unique is now the bare minimum for developers. Not to mention that tools like automated PR reviews are being used to monitor developers, which just makes me feel icky.
To be honest, I've accepted that I'm a dead man walking. I don't think my current job will exist in the near future. It certainly won't exist in its previous form or in a form that I find enjoyable/rewarding. I'm trying to transition into governance, machine learning operations, and automated testing suites for generative AI solutions. You know, stuff related to building the guard rails around these AI systems. I have no idea if this will work out, but I'm hoping for the best. If that fails, I might consider becoming an electrician or something similar. I really don't know. I've thought a lot about dropping out of my grad degree and using the time to pursue an entirely different field. The constant uncertainty and pressure are taking a toll on my mental health, to say the least. It's exhausting to constantly adapt and wonder if my skills will remain relevant. The fear of obsolescence follows me like a shadow.
Regardless, I'm trying my best to outrun this mess. I'm saving money like crazy. I'm investing as much as possible and trying to pay off my living expenses in case my career disappears tomorrow. I have a large emergency fund. My advice to you would be to do the same, since these are the only things you can control. Personally, I'm actively exploring backup careers. There aren't many great prospects at the moment, but I'm hoping to figure out a plan soon.
Unfortunately, this is just the beginning. AI will undoubtedly improve over time; the question is how quickly. I'm not sure that even matters, considering the current models are more than enough to disrupt large portions of society. The software field and the world at large have changed forever. There is no going back.
This is really just my take on the professional world. Even if we find a new field, AI will continue to permeate every aspect of modern life. Unless something is done quickly (and I doubt it will be), AI will be used as a tool by the upper class to oppress, control, and monitor the lower class, facilitating a further transfer of wealth in that direction. Let's just hope that it is also used to find new cures for diseases and for other altruistic purposes. I guess we're just destined to live in virtual insanity.
Best of luck, OP. I pray that we can look at this post in the future and laugh at my paranoia.
I have a lot of concerns about AI too, but with respect to programming, here is a different perspective.
When I first became a programmer in the late '90s, I had a few books from college that I took with me everywhere. These books had algorithms and patterns. One of the books was "The C Programming Language", which showed the implementation of various low-level functions. I would refer to the books as needed.
A few years later, when Java and then C# came out, programming became somewhat different. There were more high-level languages and things were moving a bit faster. I didn't ever have to figure out how to write a sort algorithm anymore; I just had to find which library had the best one. At this time, my colleagues and I would frequently go to bookshops and buy new books. We bought a lot of those O'Reilly books that have the animals on the covers. These books would usually be good and accurate for six months to a year. Things were speeding up, though, and you had to keep learning the new libraries, etc.
A few years after that, things were speeding up again. JavaScript was becoming a big thing. We stopped buying books to keep up; we would just go to sites like Stack Overflow (née Experts Exchange).
Now we aren't searching the internet for answers. We are using Copilot in VS Code or IntelliJ.
But it's really all the same thing, just sped up. It's slightly different in that Copilot can kind of write some code for you, but that's hardly different from pasting a block of code from a book or from Stack Overflow. It's just happening faster. Also, a lot of the world thinks AI is more powerful than it is and that it can replace people. That's probably the dangerous part.
Early in my career I was kind of jealous of engineers who came before me when things were lower level. They often had to carefully write very low level code in assembly or whatever. It seems more satisfying than just calling someone else's libraries. I guess that is kind of what you are thinking, but magnified.
I was going to write the same thing. I graduated college in 2002 and have been a software engineer since. I used to keep my college notebook along with all my reference books in my desk drawers. The older engineers I worked with had shelves of books behind them. I felt fancy when I got an HTML version of the O'Reilly books to make searching for things faster.
But here we are in 2026, and half the languages I used aren't used anywhere, or I'm at a place that doesn't use them; the software I used is dead; many of the day-to-day issues that I figured out how to solve on my own are solved by the tooling and IDEs; many of the base-level techniques are wrapped in APIs and frameworks. Some days I still feel like a beginner as I step into a new language and try to make use of it, and it gets harder to switch gears.
So for me, AI is a super helpful way to investigate things and then drill down for details and get custom explanations without having to look everything up. Most of the stuff I need to find is easily available, so I can mostly trust the AI results, and if they're shady, asking for explanations plus my own internal smell tests have worked pretty well. I generally know what I want to do, but I'm pretty tired of writing another XYZ along with all the support that makes it maintainable and good (which you never get credit for, but makes my life easier), or I just forget basic things like how to open a file in whatever language I'm using.
The AI has also really helped me learn a lot more than Stack Overflow was teaching me. When I ask it how to do XYZ, it sometimes comes up with libraries or ways of doing things that I didn't know existed in my language. Generally, all the things I'm coding with now were self-taught as needed for the job as requirements changed. I plowed through tutorials and the docs for the language, but generally just noted the things that were like C++ or Java (what I learned in) and didn't get too into the idioms of each language. I would then research how to do stuff as I worked through a program, then carry those solutions forward into every project I did.
This way of learning gets things done, but it leaves a lot of holes if you don't go back and see what has been added to the language or keep track of what is available. With the AI solutions dumping stuff at me, I keep seeing new things that I didn't know were available; I then ask it about them, learn about the thing, and decide if I want it or not. It's super helpful if used that way, IMHO.
The big downsides are that it doesn't really know what you are building and what you need, and that it isn't deterministic. I think it's dangerous how it's being pushed to provide data and summaries for information that should be 100% correct. It's also a terrible trap for the beginner who just won't learn, and a weapon for the fraudster to inject crazy AI stuff into codebases in ways that are very hard to detect.
I guess what I'm saying is that things are changing, and yeah, programming will always get more accessible and higher level, and hard-won knowledge will become meaningless. I won't lie: it used to be 'easier' to be good at things as new tech emerged. Just knowing HTML, you'd have been considered a wizard at one point in time. As fields get more saturated and common, the ability to stick out gets harder and harder. However, while more people know about getting computers to do what they want, there are still so many people who just don't understand and don't care to, so the knowledge will still be valuable. But at the end of the day, only my immediate coworkers care if I write good code, and they aren't the ones who keep me hired. The people at the top have always only cared about results and solutions, so you have always had to be a person who can produce the correct results and solve problems, and you still will be.
I feel the same way.
It used to be that you could pick a niche that you’re good at and enjoy and make that your career; for example, I was really good at making microservices that actually made sense.
But now everything is just AI. From top to bottom, all engineering is about knowing your models and AI tech and working with that. Everything I’ve made in the past 3 years is proprietary to AI. It used to be that I could build from the bottom up, no third parties, but now I’m just an AI engineer. The latest API architecting standards are old news; now I have to know the latest AI standards. I was going to build a whole career off my knowledge before, and now I’m basically a mid-level SWE again with 3 years of experience in “AI”, which would be fine if I had gone to school for something like machine learning, but I didn’t; I wanted to design data pipelines. I was a data architect.
4.5 years is a fair amount for software engineering; you're certainly not entry-level anymore.
Personally, I don't think it changes all that much. I already didn't do that much actual coding anymore. The more you go up in level, the more your responsibilities shift from writing code to executing projects holistically. It's a common joke that you go from writing code in an IDE to writing code in Google Docs as you develop as an engineer; that is to say, you spend more time defining objectives, scoping projects, and aligning other people on what you're going to build than actually heads-down writing code.
In that respect, tools like Claude Code just shrink that ever-smaller sliver a bit more. Which doesn't change all that much. Against the backdrop of "I'm measured by the amount of impactful work I can execute", I don't particularly mind using any tools that speed things up.
I guess all that is to say, you were going to eventually move away from doing all that much hands on coding regardless, if you were to progress your career.
This is not necessarily true anymore. I’ve heard more and more, and seen firsthand, principal engineers operating at the individual contributor level because they’re better suited to it than management. The principal architect at my company contributes just as much as any other team member, while simultaneously driving the direction of all our projects. The opportunities are there, you just need to take advantage of them.
I will admit, I’ve felt a lot better at my job over the past year or so than in the first couple of years, but just looking at new jobs, everything says 5-7 years required for any higher-level positions, which probably batters my self-confidence a bit.
That’s definitely something I’ve thought about. The architect on my team has been here for like 20 years and he essentially never writes code anymore. Some of the other Sr Staff engineers who have been here a decade or more still write code, but they’re constantly getting pulled off either to review PRs (for humans, at least, lol) or to help with planning new projects and features. But at the same time, it still feels like they get to actually think about the code and try to figure out what we need. Maybe it’s just my company going a little hellbent on AI (or the trainings from NVIDIA, Anthropic, etc., so of course they want us to use it more), but it seems like they don’t want us to really think at all for ourselves.
I think you’re ultimately right, but I still feel weird about it (and am probably pretty terrible at expressing my feelings haha).
In the end, if I have to use LLMs to help me, I’m okay with that, but maybe it’s just something I have to spend more time getting used to.
That feels like a bit of an overreaction to a whopping 0.5 years, lol. The experience requirements aren't that strict. Generally, <1 is where you're only qualified for new-grad / entry-level roles. 1-3 is junior engineers. 3-5+ is where you can qualify for terminal-position roles (equivalent to an L6 at Google). Senior can be something like 6-7 YoE, but often it's more about the quality and level at which you're executing than YoE at that point.
Oh yeah, for sure, lol. I’ve still applied to stuff that says 5 minimum anyways, but I think it’s just that, like I said, it feels like the minimum years of experience goes up every time I look at jobs and I’m always just shy of it ¯\_(ツ)_/¯
But yeah, when the requirement is something like 6 months more than I have, I’ve usually still applied (and I’ve had some interviews before, but nothing so far this round of applying, though it’s also been the holidays, so who knows).
Most companies that rebranded to have dot-com in their name didn't do well. The few that did well were founded with dot-com in the name from the get-go (Amazon.com, Flowers.com, Booking.com).
What sort of training is relevant in AI these days? Classic AI is largely dead to most companies. LLMs are so new the best training is to simply roll your sleeves up and get coding.
The need to think logically is not going away, but I fear the need to write anything more than pseudocode is mostly going away.
Early on in the dot-com boom, things moved a lot more slowly, and so I spent most of my time enhancing other people's code. You think reading code generated by an LLM is soul-sucking? Try reading code hand-written by one human and edited by ten others, back when vi & SCCS were state of the art.
Those were the days. Working on new stuff meant a one-to-three-hundred-page requirements document and 6-18 months to code, test, and QA the beast. Except perhaps those weren't the days. Because the guy who wrote the specs specified the how, not the what or the why. Worse, he was in no way qualified to define the how. Even worse, he had not spent enough time thinking about the why to even begin to specify the what.
Perhaps I prefer the days when I learned to write pure hex? Punching in machine language one hex code at a time. Those were the days, where bits made bytes but nibbles turned me on (old machine coder's joke).
Ultimately I think the other posters are correct. Change is sadly inevitable. Client-server. HTML. JavaScript. Frameworks. Agile. Scrum. Cloud. AI. Things change. You will change. The change is just happening crazy fast right now. Eventually AI might get so good you never want to go back to coding by hand. You might become so senior that your days will be filled with useless meetings, architectural discussions, code you don't want to review, or simply figuring out why that one enterprise customer who is loading 100 million transactions a day is not getting the sub-hour performance they demand.
GL.
I hear this pessimism about AI all around, and I don’t quite get it (from my perspective).
I use AI often for my work (and outside of it), but I enjoy the help. I use it as a kind of rubber duck, the same way I’d talk to a peer to figure out the solution to a problem, or iterate over architecture, or even code. But ultimately, I’m in charge and I write the code (based on some AI suggestions sometimes, and based on our discussions).
I find this much more pleasant; the AI becomes just a (very fancy) tool rather than me becoming its agent. I really see it the same as a bicycle: you can still walk if you want, but by using a bicycle you go much further, much faster, with less effort. Someone still has to steer the bicycle, shift the gears, decide which route to take, and watch out for cars and pedestrians and trees, etc.
The pessimism for me comes from seeing how others use it and what management thinks it can do. I personally use LLMs regularly for coding and sometimes get good results with a little massaging. But I'm a senior dev and I like to believe I've read enough code to pick sensible solutions that match code style of a project, and above all, solutions that won't introduce a massive amount of technical debt.
Looking at the code diarrhea that some of my coworkers pull out of LLMs is just extremely infuriating. I'm the one reviewing that crap and I'm the one that needs to nag and explain and take time out of my day to read and understand it all. It's just so frustrating to deal with. My fear is that this will only get worse with time. New devs won't be required to hone their skills. The amount of technical debt will just grow exponentially. It all just seems so short sighted and management is totally to blame. They jump on the hype train and push it without understanding the implications. The feeling of "these people have no idea what their company is actually doing" has never been greater for me, and that's after the "BigData, NoSQL, everything cloud, blockchain" nonsense we've already lived through.
I think you should use the AI to do better work than you otherwise could. It’s upsetting to me when people produce mediocre work. There are always constraints. You don’t have infinite time or money. But don’t use AI to produce way more at lower quality. Use it to produce a little more at a higher quality. You can spend that efficiency budget on both output and quality.
Use AI to write tests (always review them). More and better test cases than you normally would have because most people don’t like writing them. Use them to raise potential issues with your last commit. You’ll have to discount 80% of their suggestions but it doesn’t take long to pick through the list. Use them to find where in a library something is implemented. Use them to get you unstuck when debugging.
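For a concrete (and entirely made-up) sketch of what I mean by more and better test cases: the value is mostly in the breadth of the case table, which takes seconds to review even if you throw half of it out.

```python
# Hypothetical sketch: a small function under test and the kind of
# edge-case table an agent will happily draft for a human to review.
import pytest

def normalize_whitespace(s: str) -> str:
    """Collapse runs of whitespace to single spaces and strip the ends."""
    return " ".join(s.split())

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("hello  world", "hello world"),
        ("  padded  ", "padded"),
        ("tabs\tand\nnewlines", "tabs and newlines"),
        ("", ""),              # empty input
        ("   ", ""),           # whitespace only
        ("already clean", "already clean"),
    ],
)
def test_normalize_whitespace(raw: str, expected: str) -> None:
    assert normalize_whitespace(raw) == expected
```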
For me it’s not too upsetting to use these tools when I can see them help me do better work. The users are happier and so am I.
I want to write a blog post about this, but briefly, I think video games are a useful metaphor for speculating about the future of programming.
Traditional programming is like a first-person shooter. Sure, you might have nice tools, like maybe an auto-aimer or a really big gun, but you’re fundamentally driving one character. Or if you’re multi-tasking then it’s like a turn-based RPG where you directly control all the members of your party.
There are also games like Lemmings or The Sims or RimWorld where you have somewhat indirect control over multiple characters, by giving them tasks or controlling their environment. They might interfere with each other and won’t do quite what you want, and that’s part of the challenge. Fortunately, you can restart the level if you need to. This is what it’s like to write software using coding agents. I am writing software with one coding agent and I can report that it’s fun and educational. It probably helps that it’s a personal side project. I’m still wary about running more than one at a time; it seems like it would be spinning plates.
There are also RTS games where you control a small army in real time and frantically scroll around giving them orders. People are trying to write orchestrators to make software development like an RTS, but this is currently a crazy science project. Maybe it will be practical in a year or two. It seems stressful to me; I prefer turn-based games.
Zooming out a bit more, there are games like SimCity or strategy games where you’re managing large populations of NPCs (which may or may not be explicitly modeled). There’s no equivalent to this yet, but maybe it will happen if they can get the coding agents to coordinate well enough?
Writing code in assembly is still needed in certain niches. I wrote a bit of assembly as a kid and took a course in college where we wrote assembly. Even back then, it was taught as something you should understand rather than something you’re likely to do much of at work.
Similarly, I expect that hobbyist programmers will be able to play the programming game at whichever zoom level they like and people will do some software development at each level as part of their education. Commercially, I expect that there will be lots of demand for people who are comfortable managing coding agents and cleaning up their messes. It’s a different game, but it’s still software development. You are giving the coding agents tasks by giving orders (essentially, writing bug reports) and attempting to control how they do it by editing AGENTS.md and other documents that the agents refer to.
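As a made-up illustration (none of this is from a real project), the AGENTS.md side of that ends up reading less like code and more like house rules for a new contractor:

```markdown
# Agent guidelines (hypothetical example)

## Build and test
- Run `make test` before proposing a change; all tests must pass.

## Conventions
- Follow the existing module layout under `src/`; don't add new top-level packages.
- Keep diffs small and explain *why* in the summary, not just *what*.

## Boundaries
- Never touch files under `migrations/` unless the task explicitly asks for it.
- If a task is ambiguous, stop and ask instead of guessing.
```

The bug-report side of the job looks about the same as writing a good ticket for a human: expected behavior, actual behavior, and steps to reproduce.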