Any software engineers considering a career switch due to AI?
I've grown increasingly unsure about whether I'll stay with this profession long term, thanks to the AI "revolution". Not because I think I'll be replaced; I have an extremely wide set of skills from working for over a decade in small startups, so I think I'm safe for a long while to come.
No, I've grown weary because a growing share of the code we produce is expected to be AI generated, with shorter timelines to match, and I just plain don't like it. I think we reached a tipping point around Claude Opus 4.5, where it really is capable, and it's only going to get better. But dammit, I like coding. I enjoy the problem solving, and I feel that's getting stripped away from me basically overnight. Also, as these models become more and more capable, I think the number of companies vibe coding their way to a product with fields of junior-level engineers is going to grow, which will push down senior job opportunities and wages.
So now I'm left wondering if it's time to start pointing toward a new career. I really love building stuff and solving problems, so maybe I go back to school and switch to some other flavor of engineering? Idk. Curious where others' heads are at with this.
My head's current position is as such:
@kacey is tired and remembers the bad old days
Even if models stopped improving tomorrow, they're going to continue changing the industry irreparably. We're never going back to a pre-Stack Overflow world, where it was a PIA to find helpful forums or resources, and there were twenty different ways of solving any one problem. We're never going back to an era where reference books, stocked on the shelf of your local library, are a useful tool for understanding a new technology. We're well past the point where the majority of developers understand how the computer is executing what they asked it to do, and we'll soon cross the threshold where most do not understand what they are asking it to do.
I will take at face value the statements, from every major tech firm, that the majority of code will soon be written by LLMs -- if it has not already happened. That in mind, I figure that my traditional career paths have now been winnowed down to three directions: (1) "wrestle several dozen modestly competent LLMs into writing slop code", (2) "become an expert slop code debugger, skills finely honed by reading millions of lines of wrestling matches gone awry", or (3) spend all my time arguing with management, because now my job is LLM whisperer. What was once a beautiful clockwork, crafted by human-centuries of careful curation (and perhaps, here and there, a few years of care-lite, caffeine-fueled curation), is now being replaced by slop churned out by opaque blobs of linear algebra. Colleagues that once balked at the notion of understanding how a computer works at all can rejoice -- the playing field is leveled; there will no longer be experts, for we will all be held at the whims of 200 GB of 4 bit floats.
This job was tolerable when it involved sitting in a corner, deeply understanding a complex, technical machine, then making it do a backflip. It was at its peak when I had the opportunity to do that, in a mid-sized team, working on projects I cared about and with a management team that -- although, uh, competent in their own way -- still stayed mostly at the sidelines. Then I had the most recent gig, which involved learning about the wonders of LLMs. Looking at the trendline, I get the impression that the ride is now over ... or at the very least, my stomach isn't prepared for the next dip.
Breaking free from the AIs by escaping into another discipline, where surely the grass is greener
So, uh, I hate to break it to you, but most other disciplines will be feeling the heat soon enough. If they aren't already. Although the path to get here has been costly -- on the order of trillions of US dollars (in imperial units, approximately a stack of $100 bills, one foot deep, laid out over 17 football fields) -- we now have a repeatable method of automating most work that is done by a human in front of a computer. There are absolutely some societal benefits to this, but because this is a depressing post borne from my misery, I will only underline the downsides. You cannot escape the LLMs. Even in attempting to get your degree in a hypothetical field unaffected by our present slop era, you'll still be contending with an educational system that is increasingly strained by LLM-generated slop essays, slop coursework, and slop grading.
That said, if you can find something life critical where licensure matters, you stand a better chance of keeping your job. But as long as it could conceivably be done in front of a computer -- and you are working for someone, rather than freelancing or owning a business -- you'll likely be put under increased pressure to do it faster anyhow. Especially if your coworkers are aware, and are slop tamers themselves, which they are strongly incentivized to be.
Everything is worse now, but that's OK. AKA answering your actual question
Hopefully you have gotten as much of your bag as you could have during the boom era of technology. Doubly hopefully, you have the stomach to ride the AI wave directly into the brick wall it appears to be headed towards, with one hand on a grappling hook to bail out at the last minute. That metaphor got away from me. But the crux is that, since software development has been very lucrative for a while, you're hopefully sitting on enough of a stockpile to weather the career course change your comment suggested.
My thought process is as follows: LLMs (and ML more broadly) are extraordinarily good at a few things, and middling to bad at most. However! There are also trillions of dollars being poured into them, so identifying those few things represents an incredible opportunity -- and some of them may be untapped. The ideal disciplines are spots where I figure there's an overlap between (1) our current AI toolsets' ability to excel, where they aren't currently being exploited, and (2) subjects I could ramp up in fast enough to become profitable before becoming destitute. Broadly, that strategy (expand software solutions into new industries) seems to have been successful over the last couple of decades, and AI tech represents new possibilities for several fields that probably merit exploring.
^ that's only really a helpful perspective if you're OK with continuing to use LLMs/AI tech/ML/etc. in order to accomplish something in another technical discipline, though. I feel that the genie is out of the bottle and we're stuck in the post-LLM world, now, but capitalizing on it in whatever way I can feels like the right way to take a little control back and maybe make a difference.
And pay the bills. That part is also quite critical.
...yeah? I can't get it to replicate any complex spreadsheets or data analysis. They don't understand the business context nor what makes for rigorous, engaging presentation. Unsurprising considering they're not sentient.
That's because you're probably a smart person who cares about things being correct. If you were the kind of person who would copy and paste a Wikipedia page for a writing assignment, or think that you should get credit for writing some nonsensical wrong numbers on a math assignment (because you put in effort!)...then LLMs are perfect. The bullshit accelerator generates plausibly correct-looking junk that takes more effort to prove isn't what's required than it takes to make, so one can DDoS the people who care and hope they can slide by without being called out.
I've found some decent use of LLMs (my employer is paying for them, anyway) as an aid for searching for information when I don't have solid terminology to use, but that's been far eclipsed by the egregious junk I've seen generated and foisted upon everyone else.
Working with the coding agent to write better documentation (explaining the context) might help as an intermediate step.
You’re saying it as if there’s only one LLM. In a constantly evolving landscape, what did you try and when? What’s stopped you from trying again?
It’s also important to share the data and chat history for us to truly evaluate.
^ apologies, I can't comment on your spreadsheets or data analysis, so engaging on that part of your comment isn't possible.
However! I'm curious why you figure that sentience is a necessary part of competence in this area? For context, this drew me back to the bygone years of my youth, listening to a respected, tenured Linguistics professor explain how statistical machine translation could never replace the true breadth of knowledge and learned skill of a real human. This was a person without sufficient context into the tectonic shifts in the ML community at the time, and who had a vested interest in believing that something they held dear would not be lost in the upcoming quake. But I never got the chance to ask them more about their perspective.
I guess I'd challenge the notion that any of us in this thread (except for maybe RobertJohnson) are really competent enough in the AI field to understand whether a specialized subsection of another field is truly under threat. Similarly -- especially for something critical to your job, or a task that you enjoy -- we're all human, and suffering from biases which make it difficult to analyze emotionally divisive problems, such as "how does this differentiate me from an LLM", or "what happens when all the engaging bits are automated away".
Thankfully, I'm an unfeeling, uncaring flesh machine, so I have no objectivity issues whatsoever and can predict with perfect clarity what is, and isn't, subject to automation by LLMs :3 (sarcasm, in case that wasn't conveyed well)
I'll offer you a fourth option, which is to write well-architected code using an LLM. I don't think there's an inherent reason why LLM-generated code has to be slop. I regularly send the LLM back to refactor code, change interfaces, etc. This is one of the reasons why I like tools like Cline -- you can see all the changes as they come in, say "no, not like that", and really shape the process of creation. Compare that to Claude Code, which is much more YOLO since you're not gatekeeping each edit it makes.
I've seen plenty of badly architected, poorly tested code written by human engineers way before LLMs were in the picture. We didn't call it slop, but that's what it was. And we had to train people how to write better code, how to have a process, how to look at the big picture. In this sense, LLMs are no different than compilers. A tool used badly will produce bad code, but that's a result of the process as much as it is a result of the tool.
Have LLMs disrupted the process? Yes. Do they appeal to human laziness? Yes. But bad code still has costs down the line, and I believe that eventually the industry (or at least, certain sectors of it) is going to grow up and face that.
Thank you for the fourth option; that is very kind 😅 I personally doubt that there's a future in what you're describing, for me, in my market, but it's plausible that it would be in yours! The world is vast.
Same! But it was always an uphill battle to convince management that training was necessary. Now that costly AI tooling is being crammed down everyone's throats in the name of productivity, the force pressing against "craftsmanship" (kinda hate that word but w/e) has redoubled.
LLMs promise us a world of more, worse quality code. When your colleague can vibe on their laptop overnight to slop out a fresh, new coat of paint onto your line of business app while you're asleep (or grinding through the evening to clean up the codebase), I feel that the traditional American workplace will reward the former rather than the latter, and dramatically so.
I'm admittedly trying to throw the dart several years into the future by guessing at dynamics which are only starting to arise now, though, so hopefully I'm wrong!
... not really agreed with this metaphor, since I've only seen poor compiler output for unusual definitions of quality (e.g. dealing with unsafe, shared memory access; issues involving timing frames; etc.). But I think that, broadly, you're making the garbage-in-garbage-out argument? Kinda disagreed with that in the case of LLMs, but to each their own.
Honestly I think this rubs me the wrongest way XD the industry has never paid for bad code! As a large corporation, one could, for example, leak sensitive information on one half of all Americans, and receive nothing more than a slap on the wrist. Or release massive security vulnerabilities into cameras deployed across wide swathes of the country. "The industry" has had the better part of fifty years to get its stuff together and put its big boy boots on. Instead of doing so, it's successfully pulled the rug out from under every large scale attempt at worker organization and industry regulation possible, then bought and paid for several successive complicit American governments.
With LLMs, some of the most irresponsible, morally dubious humans on Earth have been given the ability to fire large swathes of their educated and highly paid staff, at the mere cost of harming everyone else in society by ram scooping slop down their throats. I cannot fathom them doing anything but growing much, much worse.
The thing that still gives me a bit of hope is that open source projects aren't seeing a huge increase in useful contributions generated by AI. If LLMs are now at a state where they are actually good at understanding code bases and contributing in a meaningful way, I would expect to see at least some high profile open source projects begin to embrace them and see benefits. But instead we see the opposite--projects have to explicitly ban AI-assisted code because they become overwhelmed with (let's assume well-meaning) contributions that don't fix anything, break more than they fix, and/or waste the time of real contributors who have to wade through the pools of slop.
Maybe the corporate world is just leagues ahead of the open source world because the actually good AI tools are priced beyond the means of your typical open source contributor, but I'm skeptical. I still think there's a chance that much of the hype over generative AI is a bunch of over-eager C-execs and their cults being blinded to reality by all these dollar signs in their eyes.
I think it’s still too soon to see widespread impact on existing projects. But I expect that we’re going to see a lot of open source projects that were written largely with coding agents from the beginning. This is a bit meta, but an example I can point to is the Shelley coding agent.
I haven't done a whole lot of digging, but I wouldn't be too surprised if AI tooling remains wholly unique in the open source world as the only category of project that openly embraces and encourages AI-generated code.
A lot of the discussions I see around the subject of AI coding assistants (and generative AI in general) feel very reminiscent of the blockchain mania we saw 5-10 years ago, where there were certain echo chambers in which blockchains were the answer to everything, and blockchains were creeping their way into all kinds of products, solving problems that nobody had in ways that they were wholly unsuited for. A really big part of me thinks generative AI is going through a similar hype period right now, and is also due for a drastic decline in enthusiasm where the gap between what it's capable of and what people are using it for gets closed.
I am also fully aware that it could just be my own personal biases coloring it that way for me. I suppose only time will tell.
I haven’t seen a large codebase that I admired that much. Most legacy code is crap, because cleaning it up was too much effort and too much risk. Now we have power tools. Using them skillfully will require good taste - fix the test suite first. But there’s never been a better time to work on improving code health, provided that you know what you’re doing and get the leeway to do it.
I think we have different work experiences, and different perspectives, which is driving this disagreement! Personally: the larger codebases I've worked on (~200-300kloc for the part I was responsible for) certainly had their ups and downs, but I always appreciated how so many people could collaborate on a single machine and still have it function (mostly). Further, that stack was merely the tip of an iceberg spanning down through cloud orchestration services, database servers, distributed filesystems, and more obscure networking solutions than you could shake a stick at -- that, for me, was beautiful.
Somewhat tangentially, my favourite of the natural sciences was always biology, a field where literal random chance has driven the development of every living thing on Earth. There's ... a little bit of legacy code in there, and a non-trivial amount of spaghetti, but the fact that it functions (mostly) is part of what makes it so awesome. To me, at least; I'm not going to even pretend that my opinion is commonplace or rational!
That was a great read, thank you! I differ with you on how quickly you think the take-over will come in other fields, though. Professions that operate over gobs of text, and whose end product is, again, gobs of text, are just so, so ripe for AI takeover. I'm trying to think of some "basic" tasks in other fields, like a civil engineer needing to make some sense of a road network PDF or a plastics engineer reviewing some 3D-modeled component for improvements -- LLMs, multimodal or not, fail so very hard at these kinds of tasks. Compare that against a software engineer who can often one-shot copy-pasting a ticket for a small feature into some agentic AI. Scary.
CAD is definitely not there yet but in a couple of years, who knows?
I'm curious about that too! We've seen some significant improvement in 3D model generation recently, so I'm curious to what extent that could be grounded in e.g. software CAD solutions in order to provide dimensionally accurate, simulation ready components. Assuming the AI bubble holds out long enough to see research in that area, we'll be in for interesting times.
Hah, no worries; thank you for giving me the opportunity to have a little rant :3 it was cathartic to get it off my chest. If you wind up making the leap to a different career, I hope you're able to find something more up your alley!
I view LLMs more as tools that accelerate software engineering, not replace it. Like compilers, linting, static analysis, APMs, or IDEs.
Business and sales people still need someone to use the tools: interpret their desires in a sensible way, avoid pitfalls, debug issues, etc. A product owner or technical project manager or something might be able to do a lot of this, but will still probably fall short in areas.
I don’t know a single engineer that has lost their job to be replaced by AI. I have heard of engineers that lost their jobs because of the budget companies are spending on AI - but that’s different from being replaced by it.
I also suspect many companies are using AI as a convenient excuse for layoffs that are actually driven by the economy, politics, or offshoring.
This in general, but absolutely this when it comes to software development.
I’m pretty skeptical of this. Companies don’t need an excuse to lay people off. They can just do it.
If you can announce publicly "We are laying off 1,000 people because we can pay someone in another country 10% of their wage." versus "We are laying off 1,000 people because of our innovative use of artificial intelligence." which do you think they'd choose for framing the message?
Or: "we're laying off 1,000 people because the economy is bad and we're going to try to wring more pennies out of our existing business instead of investing in expansion. Oh wait, *waves hands*, AI."
That should be a major red flag for investors, but that's basically the last few years.
I mean that gets down to what you mean by "using".
Is it possible companies do a search and replace for "AI" when the wall street journal emails them to ask why they're doing the layoff? Potentially. But that also has no impact on anything.
Is it possible that companies are purchasing million-dollar ChatGPT/Claude Code/whatever deals just so they have something slightly spiffier to say to the journalists emailing them for comment? No, that makes no sense economically. It ultimately just doesn't matter what a company says when it does a layoff. Companies don't really "say" anything to begin with; they're not people.
I'm not talking exclusively about journalists. I'm also talking about investors and boards. A pretty large amount of time is often spent gathering data, framing data, and doing presentations to boards and/or investors. Depends on the company structure and size. I get your point though.
That depends WILDLY on where you’re at and who you’re dealing with.
Even ignoring normal employment law in places like California, AI was an excuse to nuke entire departments in the hopes of rehiring for cheaper.
If you lay people off with a clear signal of "fuck you, we're moving all the jobs to India", then you may also lose the people you still want to keep (even temporarily) if they see which way the wind is blowing. Similarly, if people had been overworking themselves because they felt they would be rewarded for "giving 110%", that kind of messaging may discourage it. On the other hand, maybe the execs want to make their employees feel under threat, with fewer options, to make it easier to take unpopular actions like RTO or reduced bonuses. That's all to say that there's a messaging game about how you present layoffs to the remaining employees, too.
They almost always give an excuse though... Usually it's "current economic factors" or stuff like that though
I think that’s also a mischaracterization. Tens of thousands of people get laid off every month, but most of these are small-medium sized companies no one has ever heard of, and they say nothing, and no one asks them anything. It’s just an undulation of the labor market.
You only notice layoffs from companies that are such pillars of the economy that when they have a layoff, reporters swarm over it. And in that case they usually peel off something about the economy. But ultimately it doesn’t really matter what they say, and certainly spending tens of millions of dollars on ChatGPT credits just to tell the Wall Street Journal a different one liner is not a course of action that passes Occam’s razor.
For publicly traded companies, how stock owners perceive the current performance trend directly affects stock price. High level managers are significantly compensated with stock, and have an interest in keeping stock price high.
"We overhired in the weird pandemic economy (projected growth that didn't happen) and now need to right size" calls into question manager judgements (like projecting growth that failed to materialize)
"Tarriffs ate our capital project budget, and regulatory uncertainty makes now seem like a bad time to commit on a long-term direction anyway, so we're laying off the capital project team" calls into question the future growth of the company
"AI magic will let us grow with fewer people" covers up other explanations and feeds the stock price as a spin. AI spending can be any level the company believes is valid for other reasons - part of the appeal of this shtick is that it works with even minimal AI purchases. It's a stock-price-support trick for any publicly traded company in the current investor environment.
I noticed the last 5 layoffs at my company
No way. If anything I’m doubling down. I’m not gonna let some MBAs and upjumped chatbots force me out of anything.
And just imagine the consulting opportunities that’ll pop up as companies with vibecoded products need real engineers to come clean up their mess.
However, I have to acknowledge I’m in a privileged position. My company isn’t mandating LLM usage at all. And if they did I’d just lie to their faces.
I found some vibe coded SQL this week that I took from a 14 minute query to a second or so. I am expecting the consulting opportunities to be fantastic.
I don't even understand how that's possible... was it querying every single table in the db or something?
It was wild. Let's say that they needed to get a bunch of information from a table and then did a ton of joins to get additional information. A lot of the joins were not used in the final select and were there as vestigial joins from earlier iterations of the query or something. The table had like a dozen values in a column that they cared about.
They made a dozen separate queries where they did all of the joins and filtered to one of the specific values they wanted, wrote all of the results to a temp table, then repeated 11 times and finally returned the whole temp table.
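To give a rough sense of the shape (table and column names are hypothetical, and it's sketched in Python against SQLite rather than the actual database):

```python
# Sketch of the vibe-coded pattern vs. a single-pass rewrite.
# Table/column names are made up; this is the shape, not the real query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, status TEXT);
    CREATE TABLE customers (id INTEGER, region TEXT);
    CREATE TEMP TABLE results (id INTEGER, region TEXT, status TEXT);
""")

STATUSES = ["new", "paid", "shipped"]  # stand-in for the ~dozen values they cared about

# Slow version: one full set of joins per value, each result dumped into a
# temp table, then the whole temp table returned at the end.
for status in STATUSES:
    conn.execute("""
        INSERT INTO results
        SELECT o.id, c.region, o.status
        FROM orders o
        JOIN customers c ON c.id = o.customer_id  -- plus the unused, vestigial joins
        WHERE o.status = ?
    """, (status,))
slow = conn.execute("SELECT * FROM results").fetchall()

# Rewrite: keep only the joins the SELECT actually uses and filter once.
fast = conn.execute("""
    SELECT o.id, c.region, o.status
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status IN ('new', 'paid', 'shipped')
""").fetchall()
```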
That is wild, thanks for sharing. I suddenly feel quite a bit more confident in my SQL skills.
I'd guess an awfully over-engineered (AI-style) query with loads of cross-products. Link 4 tables together and it can become hell real quick.
I've experienced LLMs frequently writing unit tests where the mocking/stubbing they do in setting up the test effectively means they are only testing the mocking and stubbing. The test passes, it looks great, but it's not actually testing anything.
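To make it concrete, here's a minimal sketch of the pattern (hypothetical names, Python with unittest.mock, not any specific test I've reviewed):

```python
# A "unit test" that only exercises its own mock (hypothetical example).
from unittest import mock

def apply_discount(order, pricing_service):
    # The real logic we'd actually want covered.
    return pricing_service.discounted_total(order)

def test_apply_discount():
    pricing_service = mock.Mock()
    pricing_service.discounted_total.return_value = 90  # canned answer

    result = apply_discount({"total": 100}, pricing_service)

    # Passes, but it only proves the mock returned what we told it to return;
    # none of the actual discount logic is exercised.
    assert result == 90
    pricing_service.discounted_total.assert_called_once()
```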
I've also seen some pretty significant security issues show up in code review processes because of AI generated code.
I’ve had a coworker try to generate unit tests and pass it off to me in PR review. The tests were exactly as you described, they validated nothing. It’s as if the training data contained the empty tests generated by project init tooling.
I've seen this an extremely frustrating amount. Even a senior engineer's live demo had fake tests!
Hah, I hadn't considered this! I've had thoughts of moving into consulting at some point if I want to start winding down the number of hours I work.
I work at a consulting company right now. Our deal is building high-quality replacements for low-quality software, and sometimes modernizing existing codebases that aren’t too far gone. The thing I love about this job is that we’re given time to develop a deep understanding of the domain and design a system that works elegantly.
Right now we have more businesses requesting our services than we have manpower to actually do the work. I can only imagine this market will grow as the effects of LLM code ripple throughout the industry.
This actually sounds fun for a sicko like me. If you don't mind me asking, was there anything specific you did to find/get a position at a company like that, or did you just apply like any other dev job?
Idk, I've been integrating some more of the LLM tooling into my workflow and honestly it's been fairly enjoyable. I feel like I get to do more of the fun coding, and less of the "fuck around with dagger injects until it does what you want" coding or "spend an hour making whatever bullshit boilerplate someone made two years ago and now there's no docs for".
A side effect of the prevalence of LLM tooling is that I feel people have gotten much more rigorous about writing docs and ticket descriptions, because now the audience includes LLMs that have no inherent idea what anything is.
Even greybeards like Linus Torvalds are using LLM tools for their hobby projects. I think it's fairly energizing, if anything.
edit:
To put a bit more detail on how I've been using it:
For every new thing I want to use an LLM for, I spin up a new EC2 instance with the dev environment set up. This is already existing infrastructure where I work, because monorepo things.
Then I go into Claude and have a prompt which instructs it to take a problem statement and turn it into a plan in a markdown file. This is step 1, and can be useful even if I never use the LLM for anything else. Claude will take the time to find relevant code references, the order things should be done, etc. But anyhow, this is the first stage where I weigh in. I read the plan and tell Claude to modify it until it looks like something reasonable.
Then it depends on the type of problem. If it's something fairly simple, like deleting feature flags, adding metrics, and so forth, I just let Claude give it a good ol' college try. One lesson I've learned is to just let it compile things and fail. It will often hallucinate things, especially with the amount of custom infrastructure, but it'll usually figure it out after a few compile loops. These kinds of tickets are the kind that I dread doing anyway -- most of the "difficulty" is figuring out which of the 5 different mocking libraries is standard in this part of the monorepo.
If it's something more interesting -- a complicated piece of business logic, for instance -- I will modify the plan so that Claude instead does all the drudgery for me. It will set up unit tests, it will do all the necessary research, it'll muck around with dagger until all the required imports are ready, and it'll create all the autogenerated classes and boilerplate.
Then I jump in, do the fun parts, make the PR, done.
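If you wanted to script that plan-first step outside of Claude Code itself, a minimal sketch could look something like this (assumes the anthropic Python SDK and an API key in the environment; the prompt wording and model name are placeholders, not my actual setup):

```python
# Minimal sketch of the "problem statement -> plan.md" step.
# Assumes the `anthropic` Python SDK and ANTHROPIC_API_KEY in the environment;
# the prompt text and model name below are placeholders, not an exact recipe.
import anthropic

PLAN_PROMPT = """You are working in an existing monorepo.
Given the problem statement below, write a step-by-step implementation plan:
list the relevant files and code references, the order the changes should be
made in, and the tests that need to be added or updated. Output Markdown only.

Problem statement:
{problem}
"""

def draft_plan(problem: str, model: str = "claude-sonnet-4-20250514") -> str:
    client = anthropic.Anthropic()
    message = client.messages.create(
        model=model,
        max_tokens=2000,
        messages=[{"role": "user", "content": PLAN_PROMPT.format(problem=problem)}],
    )
    return message.content[0].text

if __name__ == "__main__":
    plan = draft_plan("Delete the stale `new_checkout_flow` feature flag.")
    with open("plan.md", "w") as f:
        f.write(plan)  # review and edit this plan before letting the agent loose
```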
Linus Torvalds isn't using Gemini for his guitar side project because management is breathing down his neck -- no such person could exist. He's using it because the fun part for him is writing the code that deals with the instruments, not the Python code to visualize it. So he does the former and Gemini does the latter.
I have to agree with you to an extent. I've recently started incorporating agentic coding workflows into my projects, and it's been a positive experience. It has helped me tremendously with my personal project and significantly accelerated my development time. I'm now optimistic about finishing a long-term project that simply wasn't feasible before, given my day-to-day life.
This topic has been discussed some before. In fact, here is my comment on that thread.
I have been thinking a lot about this recently.
For context, A.I. is basically my life and has been for a long time. I have spent over a decade dedicated to the subjects of machine learning, statistics, and neural networks. I watched the development of ELMo, GPT-2, and BERT in real time. I have implemented agents made in langgraph and other frameworks. To be clear, there are many who have studied the subject longer and who know more than me, but I just want to highlight how much the topic of A.I. has been a direct and personal interest of mine.
It is very hard to decipher what's true regarding A.I. because of all the money and feelings involved. There are a lot of people whose livelihoods are dedicated to convincing folks like you and me that these tools are legit. That LLMs and agents are revolutionary, life changing, corporate buzzwords. I recently lost a lot of passion for the field due to the wave of generative A.I. in the corporate world. There are also a lot of people who refuse to use anything A.I. related and want to convince you that it is the spawn of Satan. Both could be correct.
I'm gonna give you my straight opinion. In my experience, the recent waves of models are legit. Claude Code, Codex, Cursor, and Open Code are all insanely capable. The coding world has shifted immensely because of this. Not using A.I. tooling is going to limit you. If you have a moral conundrum, then you might consider using some of the open source models in combination with a tool like Open Code. I believe this combo can utilize your computer's GPU, which may help alleviate concerns related to privacy, energy consumption (to some extent), and the funding of big tech, while still offering you a powerful tool.
On the positive side, the technology is certainly interesting and has the potential to speed up a lot of coding aspects if implemented into your workflow correctly. Anecdotally, these models have helped me in my personal projects. Take that for what it's worth. They also might be used to cure diseases or make other rapid advancements in medicine and science.
Regardless, this stuff isn't going anywhere. Ultimately, to reject A.I. entirely will likely be a very challenging thing in the same way that it is hard to avoid the internet and cellphones. At the individual level, we can do our best to try and work within the confines of our situations by utilizing open source tech and local tools or at the very least, voting with our wallet to fund the decent A.I. use cases and companies.
Of course, you and I have the right to decide if we want to continue in the field given the direction it is heading. Whether the field will still want us in X amount of years is anyone's guess. There is certainly a reality where CS jobs become more valuable because they need to clean up poor A.I. implementations. By the same token, there is a reality where white collar work continues to be decimated and wealth continues its rapid movement to the rich. The reality of the situation is that the world is changing very quickly. All we can do is run with it, or run from it.
Here is a paywalled article about the subject that has a lot of good discussion.
Hope this helps. Best of luck out there!
I am so very tired of this nonsense. It has crossed my mind to leave the field, but I don't know what else I'd do. I got lucky in that I am very good at something that people will give me lots of money for. I don't expect to have high chances at doing that for a second career.
It's frustrating to watch, because I empathize with the boosters. I like the idea of a development tool so high-level that non-engineers can use it effectively. But every time it's been tried, the result has been another thing engineers use, and usually don't particularly like using: COBOL, UML, low/no-code platforms, and now this. I like the idea of a tool that automates stuff I don't feel like doing, but it has to actually effectively do that. If I wanted another tech stack I had to babysit I'd just spin up another homeassistant instance.
I try again every few months, but I still haven't gotten results out of any of these tools that reach my standards of "I am comfortable submitting this as finished work". The most I'd trust them with is scripts I run once, verify the output, and throw away. The quality of the fully-vibe-coded software I've seen suggests that other people aren't getting significantly better results, they just have lower standards.
So I am optimistic that the fad will last just long enough that when the bubble pops and it's no longer cost-effective to run these things, I'll be one of the relatively few who remembers how to write code the normal way. Until then, I shall suffer through it.
I feel you buddy.. I wish I shared your optimism. The bubble will pop but I think we're stuck with AI indefinitely.
Besides it not being fun, I just can't get over the ethical side of this.
Maybe the LLMs are getting to the point of actually being useful. Maybe they are getting people excited about building ambitious projects. Maybe I will get left behind for not taking a part in this.
Maybe. I don't know.
Regardless, I don't think it is right to fund this machine. Humanity should not be using our resources to build these new data centers. We should not be outsourcing our thinking. We should be doing less, not more. Using less energy, not more. I don't want to take any part in this. I will not start paying for Claude/whatever tokens or recommending my employer to do so. To me this feels obvious.
Yet most of my colleagues seem to be completely fine with all this. None of the negatives seem to matter to anyone as long as the bots can generate reasonable-looking lines of code fast enough. It makes me feel like I'm going insane when listening to them.
So yes, I am considering. Not sure what else I could do, if this keeps going on.
I can't say it hasn't crossed my mind but I haven't given it too much thought either. It's certainly annoying just how much they've been pushing AI usage onto us and expecting us to do more. One of the pushes with AI I've seen in my company is to essentially trial replacing whole teams with AI where 2-3 engineers take on the role of a Project Manager, Data Scientist, and Software Engineer to rapidly prototype new features and demo them, with AI being used at every step along the way. It takes away from the joy of software engineering since you outsource your thinking and labor to a machine.
Having studied a combined math and computer science degree at university, I have thought about doing something more math related out of passion for the subject. I've also been thinking about just doing more math for fun in general since it feels like my brain has gone to mush with all this AI usage.
Maybe you've accidentally hit on the reason behind my recent math/physics hyperfixation. I don't have an academic background in math - I was very good at it and have great intuition when it comes to numbers, but that made it really difficult for me to write proofs or just show my work in general. Which I understand now is a very important part of math (and science in general) but when I was younger, it was hard to see that.
As I, like many others, began to be pressured to do more for less at work, I did start outsourcing some of my brainpower to LLMs. And I felt dumber for it -- previously I could instantly recall why I wrote things the way I did and give detailed breakdowns; now I have to struggle for a few minutes to remember, and sometimes I have almost no recollection of writing things. I do still maintain my personal ethos of typing everything in by hand (no copy-pasting em-dashes or Unicode crap for me), though I mostly do it just to feel like I'm doing something, and it also helps me take pause and identify issues.
I even let myself fall further out of my neurons' graces by spending far too much time watching YouTube shorts.
At some point, the annoying YouTube games showed up and I wrote a quick uBlock rule to block them. Apparently that also inadvertently blocked YouTube Shorts from appearing on my home page. I started watching more math videos instead of garbage. Picked up a hard sci-fi book dealing with topology and differential geometry and found it absolutely fascinating. I'm now watching the MIT OCW lectures on linear algebra to refresh my memory of it. I think I'll probably even do some fun math stuff in the near future, whatever that may entail. One idea is porting a simple "4D maze" game from Swift to Rust, so I'll get to learn some math implementations I've not touched before.
While I still do use LLMs a bit, I feel like taking back that portion of my brain from the machines has been quite freeing. I've been finding math and physics to be liberating in a sense, perhaps because there's so much there and they're also subjects that current LLMs are not too good at (yet).
Viva la revolución humana
I think if I shared your passion for math I'd be making a pivot to machine learning. If you can't beat 'em, join 'em :)
I've thought about it before and even had an opportunity to join one of the AI/ML teams at my current company before I locked in a different team. Unfortunately, those orgs have some horrendous work life balance, with long hours and things constantly breaking and requiring firefighting. Even if the work is interesting, I can't see myself dedicating that sort of time to any big corporation haha.
I have to say that I mostly agree with your feeling, except that I am not going to let this bullshit push me out. I do not like the experience of any of the LLM coding tools. It absolutely strips away the best parts of being a programmer, and all that I am left with is reviewing questionable code written by an LLM, which may as well be reviewing code written by a semi-competent other person. That is generally a necessary, but wholly unpleasant, part of the job. If my entire job gets turned into that, then maybe I will finally accept the push to management or take a career turn. There is a reason I never took a position "higher" than Lead Developer, despite the higher pay. All of the people I personally know who are programmers and like LLMs have a very different relationship with programming. I have literally been writing code since I was 4 or 5 on my TI-99/4A. I'm not about to just offload the best part of my job to a tool, any more than a mathematician would start coasting when graphing calculators and Mathematica came out.
I've been using Claude Code a fair bit at work, and it's a useful tool but it takes a lot of goading to keep it on track and not making stupid mistakes or breaking things. To effectively use it you need a thorough understanding of both what the product is supposed to do and the mechanical underpinnings that make it do those things. One can vibe code something that technically runs without that, but unless it's extremely narrow in scope/features it will quickly collapse under its own weight and become a liability.
So I'm not looking at changing careers, at least for now. I am however starting to seriously consider starting some kind of software business of my own, because there's still opportunity to be had and although I'd estimate myself as just average among my peers, it won't take a lot to stand out against the masses of poorly built vibeware that the world will be flooded with.
My experience is that most companies are desperate to hire seniors. I think it will continue to be that way because they need people who can bring the bigger picture to the LLM output. Even the really good LLMs lose the plot pretty regularly, so you have to be on top of it.
I also agree with @stu2b50, I think it takes the drudgery out of coding, and I can spend more time thinking about the structure of the code without holding so much of the syntax minutia in my head. It also makes it easier to do things right. If I realize after the fact the code would be better if it was structured differently, I might have said, "well, this ticket is due today so that will have to be good enough", but now I can actually fix it. The LLM is pretty good at understanding the existing code and applying patterns, so a transformation that maintains the existing functionality is going to come out mostly correct.
I just started a new job, and it's a fast-moving team with a big codebase. I have been able to get up to speed a lot faster than I expected because instead of spending days grubbing through the code or begging time off people who don't have it, I can ask Claude to "find me the place where X happens in the code" or even "trace the data from this component all the way to the database and give me a summary".
This is one of the best use-cases in my experience. It's not perfect because if it takes a wrong turn at the start then it might lead you astray, but it works pretty well as an "advanced fuzzy search" that can run some nested queries.
I agree this is the case for now, but I worry for the future. Many apps don't have to be well coded to ship. I foresee a shift in leadership that will (knowingly or not) make the trade for fast, dirt-cheap, vibe-coded solutions over a team of seniors doing it "right".
There's still a healthy chunk of our codebase that is in an ancient framework and contains some 10k-line megafiles I don't dare touch. Literally my favorite use for AI is setting it loose on those to answer my questions. One of the rare times I am truly happy for AI lol.
Can you say more? I've worked in software dev but for the last 5-6 years have been somewhat removed from classical software dev projects, mostly working on VR development using game engines. Your statement makes me feel really sad, it seems incredibly short sighted to mandate the use of AI like that.
I'm considering changing jobs, but that's due to leadership and internal communication problems. It would most likely mean me going back into more classical software development, which is unfortunate; I really enjoy the kind of projects I currently work on.
My company is... more optimistic about AI than I would like, but isn't forcing it on employees who don't find it useful. These jobs still exist, at least for now.
I have heard horror stories from friends, though. Several of them have AI tool adoption as a metric they've been told to optimize for, which is maddeningly backwards. Some of their bosses have "engineers don't read code anymore" as an explicit goal.
I miss when blockchains were the hot new thing everyone was trying to shove into their products.
Maybe I'm in the minority, but technical management has drunk the Kool-Aid and is pushing the use of AI very hard. They were obviously upset recently at not seeing enough productivity gains and had all engineers send in a write-up on all the ways we use AI. Any pointing out of areas where AI struggles is seen as being "difficult" or "too negative". Mind you, this is coming at a time when we've lost all our project managers and designers, so all devs are taking on extra work (I wonder why we're not seeing productivity gains? 🤔)
It took me a few months to feel my way through it, but I’m no longer worried. My time will now be proportionally more design and product than engineering. That should let me produce a higher quality deliverable, which is really what matters. And I can still write code manually often enough to enjoy that part of the job.
My main concern is with how future coworkers might misuse these tools to write a lot of bad code quickly. I’m currently job searching and will try to determine if the culture at my future company cares enough about quality to prevent that.
Good luck! I'm curious how you're broaching that quality-culture question with them. And if you end up in a few interviews with answers, I'd be curious what the vibes are like with regards to all this.
I don’t think it’s the kind of thing you need to ask. In my experience the culture is pretty obvious once you meet the team. I also try to see their code before signing.
More like vice versa. Tech has become an increasingly boring field for me over the last 10 years, and coding has become a real chore. The appearance of LLMs has brought new fuel, for me at least, as coding is one of the things I don't really have to do anymore.
A smart person once said in some IRC channel that "everything interesting is always happening at the limits of what's possible". AI is stretching that limit by quite a lot. There's gonna be a lot of slop out there in the near future, but there's gonna also be lots of very interesting new things.
I knew software engineering as a profession was dead the moment OpenAI released ChatGPT in late 2022 and gave normies the power of LLMs, like Prometheus giving fire to man.
It was only a matter of time before companies saw LLMs as a way to reduce overhead and maximize profits. And, as we all know, they were looking to reverse the employers' market that developed during the pandemic. If I were to make a career shift, it'd be on my own terms, not forced onto me by a humiliating layoff email or video call from a bunch of ghouls in human resources.
I spent the next year enrolled in trade school, and when I had enough money saved up from my job to offset the massive pay cut I'd suffer while doing a full-time apprenticeship, I quit. In the years since, I've found steady work. I'm not earning close to the salary of my previous tech job, but I'm making excellent money, with the potential to make even more should I start my own thing.
I like my boss and co-workers and the time away from the computer. I enjoy not constantly stressing about layoffs or spending months fighting against thousands of people for one tech position that required five days a week in an office. I learned last summer my old company laid off nearly everyone in its engineering team as part of an "all-in" bet on artificial intelligence. It's February 2026, nearly seven months later, and no one, literally no one, found a job that matched the salary, flexibility, and benefits of our previous company. Instead, most are taking on student debt for secondary school, and a few others found work doing something else. I heard one started dabbling in social media influencing.
Unfortunately, the time to get in the trades has come and gone. Spots at credible schools are constantly filling up (or have months-long waiting lists), and there aren't enough opportunities to get hands-on experience. If someone can get into a credible school and land an apprenticeship, the competition for high-paying work with good bosses is fierce. I only found my current role because my current boss and I go to the same church, and he wasn't interested in being down a man on his crew while he waited for his neighbor's high school to graduate the following year.
The thing I'm afraid of, which will 100% drive me to some kind of change if it comes to pass, is that even if we're not expected to use AI tools to help us code, the vast majority of work that we're expected to do may shift from building bespoke applications for humans to use toward building more generic tooling to power AI applications.
Like if you're working with any kind of data management system, instead of getting requirements and making decisions around how the data needs to be manipulated and what kind of reporting you need to support and what the UI/UX should be, you instead get to build more generic APIs and database schemas that get plugged into an MCP server so that users can do whatever they want through an LLM chatbot.
Not only is that less interesting and less fulfilling work, but the end result is going to be such a clusterfuck to support and debug--when users complain that something's not working right, instead of figuring out why and fixing the code, your job will boil down to writing LLM prompts like a teacher explaining to gradeschoolers what they should and shouldn't be doing with all the data they have access to. I don't want to be doing that. If that's where our industry is headed even for the short-to-medium term (until users and, more importantly, the C-level execs pushing all this AI nonsense realize that chatbots and LLMs are the absolute worst way to interact with most applications) I will probably nope out.
There was a Google engineer who worked on Google Search and wrote against using machine-learning approaches because they're inherently less predictable. It's easy to add a few training and test cases that the model is optimized against, but models have always struggled in the real world.
Here's a Hacker News discussion about the slow death of rules-based rankings: https://news.ycombinator.com/item?id=40136741
The gradually declining quality of Google search might hint at the future of other software that relies on similar tech.
Option 2 is to build your own thing, which you can get started on any time, even keeping your current job and income. It's not for everyone but you'd get to decide exactly how much hands on building and problem solving you'd get to do. The trick IMO is finding a problem you really care about solving, rather than solving a problem just to make money.