Any software engineers considering a career switch due to AI?
I've grown increasingly unsure whether I'll stay with this profession long term, thanks to the AI "revolution". Not because I think I'll be replaced; I have an extremely wide set of skills from working over a decade in small startups, so I think I'm safe for a long while to come.
No, I've grown weary because an ever-larger share of the code we produce is expected to be AI-generated, with shorter timelines to match, and I just plain don't like it. I think we reached a tipping point around Claude Opus 4.5, where it really is capable, and that's only going to keep getting better. But dammit, I like coding. I enjoy the problem solving, and I feel that's getting stripped away from me basically overnight. Also, as these models become more and more capable, I think the number of companies vibe coding their way to a product with armies of junior-level engineers is going to grow, which is going to push down senior job opportunities and wages.
So now I'm left wondering if it's time to start pointing toward a new career. I really love building stuff and solving problems, so maybe I go back to school and switch to some other flavor of engineering? Idk. Curious where others' heads are at with this.
Idk, I've been integrating more of the LLM tooling into my workflow and honestly it's been fairly enjoyable. I feel like I get to do more of the fun coding, and less of the "fuck around with Dagger injects until it does what you want" coding or the "spend an hour writing whatever bullshit boilerplate someone invented two years ago that now has no docs" coding.
A side effect of the prevalence of LLM tooling is that I feel people have gotten much more rigorous about writing docs and ticket descriptions, because the audience now includes LLMs that have no inherent idea what anything is.
Even greybeards like Linus Torvalds are using LLM tools for their hobby projects. I think it's fairly energizing, if anything.
edit:
To put more detail on how I've been using it:
For every new thing I want to use an LLM for, I spin up a new EC2 instance with the dev environment set up. This is existing infrastructure where I work, because monorepo things.
Then I go into Claude, and I have a prompt which instructs it to take a problem statement and turn it into a plan in a markdown file. This is step 1, and it can be useful even if I never use the LLM for anything else. Claude will take the time to find relevant code references, the order things should be done in, etc. Anyhow, this is the first stage where I weigh in. I read the plan and tell Claude to modify it until it looks like something reasonable.
Then it depends on the type of problem. If it's something fairly simple, like deleting feature flags, adding metrics, and so forth, I just let Claude give it the good ol' college try. One lesson I've learned is to just let it compile things and fail. It will often hallucinate things, especially with the amount of custom infrastructure, but it'll usually figure it out after a few compile loops. These are the kinds of tickets I dread doing anyway - most of the "difficulty" is figuring out which of the 5 different mocking libraries is standard in this part of the monorepo.
If it's something more interesting - a complicated piece of business logic, for instance - I will modify the plan so that Claude instead essentially does all the drudgery for me. It will set up unit tests, do all the necessary research, muck around with Dagger until all the required imports are ready, and create all the autogenerated classes and other boilerplate.
Then I jump in, do the fun parts, make the PR, done.
Linus Torvalds isn't using Gemini for his guitar side project because management is breathing down his neck - no such person could exist. He's using it because the fun part is writing the code that deals with the instruments, not the Python code to visualize it. So he does the former and Gemini does the latter.
I have to agree with you to an extent. I've recently started incorporating agentic coding workflows into my projects, and it's been a positive experience. It has helped me tremendously with my personal project and significantly accelerated my development time. I'm now optimistic about finishing a long-term project that simply wasn't feasible before, given my day-to-day life.
I view LLMs more as tools that accelerate software engineering, not replace it. Like compilers, linting, static analysis, APMs, or IDEs.
Business and sales people still need someone to use the tools: interpret their desires in a sensible way, avoid pitfalls, debug issues, etc. A product owner or technical project manager or something might be able to do a lot of this, but will still probably fall short in areas.
I don't know a single engineer who has lost their job by being replaced with AI. I have heard of engineers who lost their jobs because of the budget companies are spending on AI - but that's different from being replaced by it.
I also suspect many companies are using AI as a convenient excuse for layoffs that are actually driven by the economy, politics, or offshoring.
This in general, but absolutely this when it comes to software development.
I’m pretty skeptical of this. Companies don’t need an excuse to lay people off. They can just do it.
If you can announce publicly "We are laying off 1,000 people because we can pay someone in another country 10% of their wage." versus "We are laying off 1,000 people because of our innovative use of artificial intelligence." which do you think they'd choose for framing the message?
Or: "we're laying off 1,000 people because the economy is bad and we're going to try to wring more pennies out of our existing business instead of investing in expansion. Oh wait, *waves hands*, AI."
That should be a major red flag for investors, but that's basically the last few years.
I mean that gets down to what you mean by "using".
Is it possible companies do a search and replace for "AI" when the Wall Street Journal emails them to ask why they're doing the layoff? Potentially. But that also has no impact on anything.
Is it possible that companies are purchasing million-dollar ChatGPT/Claude Code/whatever deals just so they have something slightly spiffier to say to the journalists emailing them for comment? No, that makes no sense economically. It ultimately just doesn't matter what a company says when it does a layoff. Companies don't really "say" anything to begin with; they're not people.
I'm not talking exclusively about journalists. I'm also talking about investors and boards. A pretty large amount of time is often spent gathering data, framing data, and doing presentations to boards and/or investors. Depends on the company structure and size. I get your point though.
They almost always give an excuse though... Usually it's "current economic factors" or stuff like that.
I think that's also a mischaracterization. Tens of thousands of people get laid off every month, but most of these are at small-to-medium-sized companies no one has ever heard of, and they say nothing, and no one asks them anything. It's just an undulation of the labor market.
You only notice layoffs from companies that are such pillars of the economy that when they have a layoff, reporters swarm over it. And in that case they usually peel off something about the economy. But ultimately it doesn’t really matter what they say, and certainly spending tens of millions of dollars on ChatGPT credits just to tell the Wall Street Journal a different one liner is not a course of action that passes Occam’s razor.
I noticed the last 5 layoffs at my company
For publicly traded companies, how stock owners perceive the current performance trend directly affects stock price. High level managers are significantly compensated with stock, and have an interest in keeping stock price high.
"We overhired in the weird pandemic economy (projected growth that didn't happen) and now need to right size" calls into question manager judgements (like projecting growth that failed to materialize)
"Tarriffs ate our capital project budget, and regulatory uncertainty makes now seem like a bad time to commit on a long-term direction anyway, so we're laying off the capital project team" calls into question the future growth of the company
"AI magic will let us grow with fewer people" covers up other explanations and feeds the stock price as a spin. AI spending can be any level the company believes is valid for other reasons - part of the appeal of this shtick is that it works with even minimal AI purchases. It's a stock-price-support trick for any publicly traded company in the current investor environment.
No way. If anything I’m doubling down. I’m not gonna let some MBAs and upjumped chatbots force me out of anything.
And just imagine the consulting opportunities that’ll pop up as companies with vibecoded products need real engineers to come clean up their mess.
However, I have to acknowledge I’m in a privileged position. My company isn’t mandating LLM usage at all. And if they did I’d just lie to their faces.
I found some vibe coded SQL this week that I took from a 14 minute query to a second or so. I am expecting the consulting opportunities to be fantastic.
I've experienced LLMs frequently writing unit tests where the mocking/stubbing they do in setting up the test effectively means they are only testing the mocking and stubbing. The test passes, it looks great, but it's not actually testing anything.
I've also seen some pretty significant security issues show up in code review processes because of AI generated code.
I've had a coworker try to generate unit tests and pass them off to me in PR review. The tests were exactly as you described: they validated nothing. It's as if the training data contained the empty tests generated by project init tooling.
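For anyone who hasn't seen this failure mode up close, here's a minimal sketch of what it tends to look like (all names are hypothetical, not from any real codebase). The first test mocks out the very method under test and then asserts on the mock, so it passes without running any real logic; the second is what it should have been, mocking only the dependency.

```python
import unittest
from unittest.mock import MagicMock


class PriceService:
    """Hypothetical class whose logic we supposedly want to test."""

    def __init__(self, repo):
        self.repo = repo

    def total_with_tax(self, order_id: str) -> float:
        subtotal = self.repo.get_subtotal(order_id)
        return round(subtotal * 1.08, 2)  # the actual logic worth testing


class TestPriceService(unittest.TestCase):
    def test_total_with_tax_mock_only(self):
        # The LLM-style test: the service itself is mocked, and the assertion
        # just checks that the mock returns what we told it to return.
        # It passes, but exercises none of the real code.
        service = MagicMock(spec=PriceService)
        service.total_with_tax.return_value = 10.80
        self.assertEqual(service.total_with_tax("order-1"), 10.80)

    def test_total_with_tax_real_logic(self):
        # What it should look like: only the dependency is mocked,
        # and the real tax calculation actually runs.
        repo = MagicMock()
        repo.get_subtotal.return_value = 10.00
        service = PriceService(repo)
        self.assertEqual(service.total_with_tax("order-1"), 10.80)


if __name__ == "__main__":
    unittest.main()
```

Both tests go green, which is exactly why these slip through review if you only skim the assertions.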
Can you say more? I've worked in software dev, but for the last 5-6 years I have been somewhat removed from classical software dev projects, mostly working on VR development using game engines. Your statement makes me feel really sad; it seems incredibly short-sighted to mandate the use of AI like that.
I'm considering changing jobs, but that's due to leadership and internal communication problems. That would most likely mean me going back into more classical software development, which is unfortunate, because I really enjoy the kind of projects I currently work on.
My company is... more optimistic about AI than I would like, but isn't forcing it on employees who don't find it useful. These jobs still exist, at least for now.
I have heard horror stories from friends, though. Several of them have AI tool adoption as a metric they've been told to optimize for, which is maddeningly backwards. Some of their bosses have "engineers don't read code anymore" as an explicit goal.
I miss when blockchains were the hot new thing everyone was trying to shove into their products.
I can't say it hasn't crossed my mind, but I haven't given it too much thought either. It's certainly annoying just how much they've been pushing AI usage onto us and expecting us to do more. One of the pushes I've seen in my company is to essentially trial replacing whole teams with AI, where 2-3 engineers take on the roles of a Project Manager, Data Scientist, and Software Engineer to rapidly prototype new features and demo them, with AI being used at every step along the way. It takes away from the joy of software engineering, since you outsource your thinking and labor to a machine.
Having studied a combined math and computer science degree at university, I have thought about doing something more math related out of passion for the subject. I've also been thinking about just doing more math for fun in general since it feels like my brain has gone to mush with all this AI usage.
Maybe you've accidentally hit on the reason behind my recent math/physics hyperfixation. I don't have an academic background in math - I was very good at it and have great intuition when it comes to numbers, but that made it really difficult for me to write proofs or just show my work in general. Which I understand now is a very important part of math (and science in general) but when I was younger, it was hard to see that.
As I, like many others, began to be pressured to do more for less at work, I did start outsourcing some of my brainpower to LLMs. And I felt dumber for it - previously I could instantly recall why I wrote things the way I did and give detailed breakdowns. Now I have to struggle for a few minutes to remember, and sometimes I have almost no recollection of writing things at all. I do still maintain my personal ethos of typing everything in by hand (no copy-pasting em-dashes or Unicode crap for me), though I mostly do it just to feel like I'm doing something; it also helps me take pause and identify issues.
I even let myself fall further out of my neurons' graces by spending far too much time watching YouTube shorts.
At some point, the annoying YouTube games showed up and I wrote a quick uBlock rule to block them. Apparently that also inadvertently blocked YouTube Shorts from appearing on my home page. I started watching more math videos instead of garbage. I picked up a hard sci-fi book dealing with topology and differential geometry and found it absolutely fascinating. I'm now watching the MIT OCW lectures on linear algebra to refresh my memory of it. I think I'll probably even do some fun math stuff in the near future, whatever that may entail. One idea is porting a simple "4D maze" game from Swift to Rust, so I'll get to learn some math implementations I've not touched before.
While I still do use LLMs a bit, I feel like taking back that portion of my brain from the machines has been quite freeing. I've been finding math and physics to be liberating in a sense, perhaps because there's so much there and they're also subjects that current LLMs are not too good at (yet).
Viva la revolución humana
I am so very tired of this nonsense. It has crossed my mind to leave the field, but I don't know what else I'd do. I got lucky in that I am very good at something that people will give me lots of money for. I don't expect to have high chances at doing that for a second career.
It's frustrating to watch, because I empathize with the boosters. I like the idea of a development tool so high-level that non-engineers can use it effectively. But every time it's been tried, the result has been another thing engineers use, and usually don't particularly like using: COBOL, UML, low/no-code platforms, and now this. I like the idea of a tool that automates stuff I don't feel like doing, but it has to actually do that effectively. If I wanted another tech stack I had to babysit, I'd just spin up another Home Assistant instance.
I try again every few months, but I still haven't gotten results out of any of these tools that reach my standards of "I am comfortable submitting this as finished work". The most I'd trust them with is scripts I run once, verify the output, and throw away. The quality of the fully-vibe-coded software I've seen suggests that other people aren't getting significantly better results, they just have lower standards.
So I am optimistic that the fad will last just long enough that when the bubble pops and it's no longer cost-effective to run these things, I'll be one of the relatively few who remembers how to write code the normal way. Until then, I shall suffer through it.
Besides it not being fun, I just can't get over the ethical side of this.
Maybe the LLMs are getting to the point of actually being useful. Maybe they are getting people excited about building ambitious projects. Maybe I will get left behind for not taking a part in this.
Maybe. I don't know.
Regardless, I don't think it is right to fund this machine. Humanity should not be using our resources to build these new data centers. We should not be outsourcing our thinking. We should be doing less, not more. Using less energy, not more. I don't want to take any part in this. I will not start paying for Claude/whatever tokens, or recommend that my employer do so. To me this feels obvious.
Yet most of my colleagues seem to be completely fine with all this. None of the negatives seem to matter to anyone as long as the bots can generate reasonable-looking lines of code fast enough. It makes me feel like I'm going insane when listening to them.
So yes, I am considering. Not sure what else I could do, if this keeps going on.
My head's current position is as such:
@kacey is tired and remembers the bad old days
Even if models stopped improving tomorrow, they're going to continue changing the industry irreparably. We're never going back to a pre-Stack Overflow world, where it was a PIA to find helpful forums or resources, and there were twenty different ways of solving any one problem. We're never going back to an era where reference books, stocked on the shelf of your local library, are a useful tool for understanding a new technology. We're well past the point where the majority of developers understand how the computer is executing what they asked it to do, and we'll soon cross the threshold where most do not understand what they are asking it to do.
I will take at face value the statements, from every major tech firm, that the majority of code will soon be written by LLMs -- if it has not already happened. That in mind, I figure that my traditional career paths have now been winnowed down to three directions: (1) "wrestle several dozen modestly competent LLMs into writing slop code", (2) "become an expert slop code debugger, skills finely honed by reading millions of lines of wrestling matches gone awry", or (3) spend all my time arguing with management, because now my job is LLM whisperer. What was once a beautiful clockwork, crafted by human-centuries of careful curation (and perhaps, here and there, a few years of care-lite, caffeine-fueled curation), is now being replaced by slop churned out by opaque blobs of linear algebra. Colleagues that once balked at the notion of understanding how a computer works at all can rejoice -- the playing field is leveled; there will no longer be experts, for we will all be held at the whims of 200 GB of 4 bit floats.
This job was tolerable when it involved sitting in a corner, deeply understanding a complex, technical machine, then making it do a backflip. It was at its peak when I had the opportunity to do that, in a mid-sized team, working on projects I cared about and with a management team that -- although, uh, competent in their own way -- still stayed mostly at the sidelines. Then I had the most recent gig, which involved learning about the wonders of LLMs. Looking at the trendline, I get the impression that the ride is now over ... or at the very least, my stomach isn't prepared for the next dip.
Breaking free from the AIs by escaping into another discipline, where surely the grass is greener
So, uh, I hate to break it to you, but most other disciplines will be feeling the heat soon enough. If they aren't already. Although the path to get here has been costly -- on the order of trillions of US dollars (in imperial units, approximately a stack of $100 bills, one foot deep, laid out over 17 football fields) -- we now have a repeatable method of automating most work that is done by a human in front of a computer. There are absolutely some societal benefits to this, but because this is a depressing post borne from my misery, I will only underline the downsides. You cannot escape the LLMs. Even in attempting to get your degree in a hypothetical field unaffected by our present slop era, you'll still be contending with an educational system that is increasingly strained by LLM-generated slop essays, slop coursework, and slop grading.
That said, if you can find something life critical where licensure matters, you stand a better chance of keeping your job. But as long as it could conceivably be done in front of a computer -- and you are working for someone, and are not a freelancer (or own a business) -- you'll likely be put under increased pressure to do it faster anyhow. Especially if your coworkers are aware, and are slop tamers themselves, which they are extremely incentivized to do.
Everything is worse now, but that's OK. AKA answering your actual question
Hopefully you have gotten as much of your bag as you could have during the boom era of technology. Doubly hopefully, you have the stomach to ride the AI wave directly into the brick wall it appears to be headed towards, with one hand on a grappling hook to bail out at the last minute. That metaphor got away from me. But the crux is that, since software development has been very lucrative for a while, you're hopefully sitting on enough of a stockpile to weather the career course change your comment suggested.
My thought process is as follows: LLMs (and ML more broadly) are extraordinarily good at a few things, and middling to bad at most. However! There are also trillions of dollars being poured into them, so identifying those few things represents an incredible opportunity -- and some of them may be untapped. The ideal disciplines are spots where I figure there's an overlap between (1) the things our current AI toolsets excel at but which aren't currently being exploited, and (2) subjects I could ramp up in fast enough to become profitable before becoming destitute. Broadly, that strategy (expand software solutions into new industries) seems to have been successful over the last couple of decades, and AI tech represents new possibilities for several fields that probably merit being explored.
^ that's only really a helpful perspective if you're OK with continuing to use LLMs/AI tech/ML/etc. in order to accomplish something in another technical discipline, though. I feel that the genie is out of the bottle and we're stuck in the post-LLM world, now, but capitalizing on it in whatever way I can feels like the right way to take a little control back and maybe make a difference.
And pay the bills. That part is also quite critical.
I've been using Claude Code a fair bit at work, and it's a useful tool, but it takes a lot of goading to keep it on track and stop it from making stupid mistakes or breaking things. To use it effectively you need a thorough understanding of both what the product is supposed to do and the mechanical underpinnings that make it do those things. One can vibe code something that technically runs without that, but unless it's extremely narrow in scope/features it will quickly collapse under its own weight and become a liability.
So I'm not looking at changing careers, at least for now. I am however starting to seriously consider starting some kind of software business of my own, because there's still opportunity to be had and although I'd estimate myself as just average among my peers, it won't take a lot to stand out against the masses of poorly built vibeware that the world will be flooded with.
My experience is that most companies are desperate to hire seniors. I think it will continue to be that way because they need people who can bring the bigger picture to the LLM output. Even the really good LLMs lose the plot pretty regularly, so you have to be on top of it.
I also agree with @stu2b50: I think it takes the drudgery out of coding, and I can spend more time thinking about the structure of the code without holding so much of the syntax minutiae in my head. It also makes it easier to do things right. If I realized after the fact that the code would be better structured differently, I might have said, "well, this ticket is due today, so that will have to be good enough", but now I can actually fix it. The LLM is pretty good at understanding the existing code and applying patterns, so a transformation that maintains the existing functionality is going to come out mostly correct.
I just started a new job, and it's a fast-moving team with a big codebase. I have been able to get up to speed a lot faster than I expected, because instead of spending days grubbing through the code or begging time off people who don't have it, I can ask Claude to "find me the place where X happens in the code" or even "trace the data from this component all the way to the database and give me a summary".
The thing I'm afraid of, which will 100% drive me to some kind of change if it comes to pass, is that even if we're not expected to use AI tools to help us code, the vast majority of work that we're expected to do may shift from building bespoke applications for humans to use toward building more generic tooling to power AI applications.
Like if you're working with any kind of data management system, instead of getting requirements and making decisions around how the data needs to be manipulated and what kind of reporting you need to support and what the UI/UX should be, you instead get to build more generic APIs and database schemas that get plugged into an MCP server so that users can do whatever they want through an LLM chatbot.
Not only is that less interesting and less fulfilling work, but the end result is going to be such a clusterfuck to support and debug--when users complain that something's not working right, instead of figuring out why and fixing the code, your job will boil down to writing LLM prompts like a teacher explaining to gradeschoolers what they should and shouldn't be doing with all the data they have access to. I don't want to be doing that. If that's where our industry is headed even for the short-to-medium term (until users and, more importantly, the C-level execs pushing all this AI nonsense realize that chatbots and LLMs are the absolute worst way to interact with most applications) I will probably nope out.
Option 2 is to build your own thing, which you can get started on any time, even keeping your current job and income. It's not for everyone but you'd get to decide exactly how much hands on building and problem solving you'd get to do. The trick IMO is finding a problem you really care about solving, rather than solving a problem just to make money.
It took me a few months to feel my way through it, but I’m no longer worried. My time will now be proportionally more design and product than engineering. That should let me produce a higher quality deliverable, which is really what matters. And I can still write code manually often enough to enjoy that part of the job.
My main concern is with how future coworkers might misuse these tools to write a lot of bad code quickly. I’m currently job searching and will try to determine if the culture at my future company cares enough about quality to prevent that.
This topic has been discussed some before. In fact, here is my comment on that thread.
I have been thinking a lot about this recently.
For context, A.I. is basically my life and has been for a long time. I have spent over a decade dedicated to the subjects of machine learning, statistics, and neural networks. I watched the development of ELMo, GPT-2, and BERT in real time. I have implemented agents in LangGraph and other frameworks. To be clear, there are many who have studied the subject longer and who know more than me, but I just want to highlight how much the topic of A.I. has been a direct and personal interest of mine.
It is very hard to decipher what's true regarding A.I. because of all the money and feelings involved. There are a lot of people whose livelihoods are dedicated to convincing folks like you and me that these tools are legit. That LLMs and agents are revolutionary, life-changing, and every other corporate buzzword. I recently lost a lot of passion for the field due to the wave of generative A.I. in the corporate world. There are also a lot of people who refuse to use anything A.I. related and want to convince you that it is the spawn of Satan. Both could be correct.
I'm gonna give you my straight opinion. In my experience, the recent waves of models are legit. Claude Code, Codex, Cursor, and OpenCode are all insanely capable. The coding world has shifted immensely because of this. Not using A.I. tooling is going to limit you. If you have a moral conundrum, then you might consider using some of the open-source models in combination with a tool like OpenCode. I believe this combo can utilize your computer's GPU, which may help alleviate concerns related to privacy, energy consumption (to some extent), and the funding of big tech, while still offering you a powerful tool.
On the positive side, the technology is certainly interesting and has the potential to speed up a lot of coding work if integrated into your workflow correctly. Anecdotally, these models have helped me in my personal projects. Take that for what it's worth. They also might be used to cure diseases or make other rapid advancements in medicine and science.
Regardless, this stuff isn't going anywhere. Ultimately, to reject A.I. entirely will likely be a very challenging thing in the same way that it is hard to avoid the internet and cellphones. At the individual level, we can do our best to try and work within the confines of our situations by utilizing open source tech and local tools or at the very least, voting with our wallet to fund the decent A.I. use cases and companies.
Of course, you and I have the right to decide if we want to continue in the field given the direction it is heading. Whether the field will still want us in X amount of years is anyone's guess. There is certainly a reality where CS jobs become more valuable because they need to clean up poor A.I. implementations. By the same token, there is a reality where white collar work continues to be decimated and wealth continues its rapid movement to the rich. The reality of the situation is that the world is changing very quickly. All we can do is run with it, or run from it.
Here is a paywalled article about the subject that has a lot of good discussion.
Hope this helps. Best of luck out there!