What is your opinion whenever you see news/opinion that tech companies are relying more on chatbots rather than junior developers/interns?
I see that in headlines from time to time. I'm not really sure how prevalent it is, and it's pretty disappointing news.
but I also can't help but think:
- the news articles are probably overblowing it and it's probably not as prevalent as it's being portrayed
- that any tech company doing that is shooting themselves in the foot. In total, I was an intern at various companies for a little under 3 years. I don't doubt that the work I did for the majority of my co-ops could all have been done by a chatbot: writing unit tests, small scripts, etc. But those co-ops were invaluable to me because they (1) taught me what is expected of me in a professional environment, (2) gave me a basic idea of how to code in a professional environment, and (3) gave me a lot of perspective on what technologies and tools I should spend spare time learning, because my university very much focused on dinosaur-era languages in the classes that taught any coding-related skills. Same for the friends I went to uni with. So all I think is: maybe in the short term they are saving money by not hiring interns/co-ops/junior devs to do work that can be done by a bot, but I feel like in the long term that will reduce the number of intermediate/senior devs on the market, which means they'll be in higher demand and cost more money.
Shortsighted greed led by gullible managers who have completely lost touch with what actual work is like.
I find the modern tech 'AI' obsession completely soul-crushing. It can be useful for generating ideas and rubber ducking. Most vibe coders are self-promoting lying charlatans or are just bad enough at coding that they think 'AI' is producing a decent result.
We've lost sight of what makes work worthwhile. We've lost sight of what makes a product or company worthwhile.
Hopefully the 'AI' (no, it is not intelligent, nor is it the AI I know from sci-fi stories, hence the quotes) craze starts to collapse under its own weight soon. At this point I hope it takes the whole corrupt, bloated, immoral tech industry down with it. Which I suspect it might either way, if these companies keep threatening to fire people who don't vibe code their way to 'efficiency'.
I think this is too simple. I know some very competent developers who find "agentic" coding (where most of the code is written/tested/committed/etc by the LLM calling out to agents) a massive help to them and a way to write the same quality of code significantly faster by letting them focus on the high-level concepts.
I know of other competent devs who don't find this helpful at all, and there is definitely the charlatan contingent in there as well, but I think dismissing this sort of approach entirely is like telling the inventor of the car that their machine doesn't work. That's not a good criticism of the car. It has many flaws, but "it doesn't work" isn't one of them.
I've not really tried out this style of programming yet, so I can't really pass judgement on it myself, but my impression is that these sorts of tools need to be learned before they can be useful, which may be why a lot of developers are bouncing off them or finding them kind of useless. If you don't know how to use a tool properly, you won't really appreciate it. Also, it seems like, particularly with agentic programming, the quality of the output scales with the experience of the developer using it. A junior developer will produce a lot of bad code - they'd have mostly produced bad code anyway, but less of it, so the LLM is basically just scaling their production up without making them a better programmer. On the other hand, someone who is already very experienced is going to be more discerning when deciding which suggestions to take and which to ignore, and will therefore produce much higher quality results.
I can only disagree. Sure, LLMs can be useful, especially in experienced hands. But I tried to keep an open mind and use them and I just do not find them particularly helpful.
The problem isn't the tool: anyone who wants to use LLMs should be able to. The problem is management at many companies, who increasingly believe the lies that LLMs are equivalent to a junior developer and that all of your devs need to embrace LLMs to 10x their output. Mandating and forcing LLM usage is making my job (and I suspect many others) totally miserable.
If I was an author, and my publisher tried to force me to use LLMs to make writing books go 5x faster, I would get a new publisher. I feel exactly the same way about my current employer.
I'd recommend trying out the agent mode in Copilot, or especially Cursor. The edit mode and JetBrains AI are quite bad.
I find it very capable when you instruct it to do one task at a time. For example, map/filter operations in Go are very boilerplatey: you can just describe the whole transformation and let the LLM write it.
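To show what I mean by boilerplatey, here's a made-up sketch (not code from any real project): a plain filter-and-map over a slice in Go, which is exactly the kind of loop I'll describe in one sentence and let the LLM write out.

```go
package main

import (
	"fmt"
	"strings"
)

type User struct {
	Name   string
	Active bool
}

// activeNamesUpper is the filter-then-map loop Go makes you spell out by hand:
// keep the active users, then transform each kept name.
func activeNamesUpper(users []User) []string {
	out := make([]string, 0, len(users))
	for _, u := range users {
		if !u.Active {
			continue
		}
		out = append(out, strings.ToUpper(u.Name))
	}
	return out
}

func main() {
	users := []User{{"ada", true}, {"bob", false}, {"cleo", true}}
	fmt.Println(activeNamesUpper(users)) // [ADA CLEO]
}
```

Describing that in a prompt ("give me the active users' names, uppercased") is faster than typing the loop, and the result is trivial to review.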
Thank you for the recommendation, but I have already tried agent mode with a number of different models. For my work, it simply doesn't meet my needs. I generate most boilerplate automatically. The rest of my code tends to require attention and deep thinking.
I think a lot of cookie-cutter REST API jobs can benefit from LLMs just to ease the monotony, especially test writing. But when you write something composed mostly of algorithms, where the code needs to be highly optimized, I find LLMs come up short.
FYI, this 'but you need to try AGENTIC mode in <trendy new IDE>' mentality is exactly the kind of spammy nonsense I'm talking about. I have tried Cursor. I have tried VSCode with copilot. Extensively in the last few weeks. Why can't LLM enthusiasts understand that it just isn't that useful to me in my workflows? I'm glad it works for you. But LLM enthusiasts really need to understand that not everyone needs or wants to use the same workflow as you. Diversity is good.
That also gets me: in general, the whole array of "You are not using the right workflow. You are not prompting the right way. You aren't using the right model. You should roll the dice correctly." I have been hearing some variation of this for at least two years now (GPT-4 came out in 2023, which is when I feel the hype really took off).
Specifically, I keep seeing the exact same people say you shouldn't use "X" because you can only get the right results with "Y", even though a month earlier they were saying the same thing when "X" was new and "Z" was the old news. It is this constant grift of moving goalposts.
That's not to say I don't get any value out of these tools and models. It is just not the mind-blowing game changer people make it out to be. And to be frank, I honestly don't think I am wrong here.
If the majority of AI work is supposed to be done by agents capable of handling the entire process, including making PRs, then why isn't there an explosion of such PRs across a large number of open source projects? Even more so, why am I not seeing these PRs on AI-related open source projects? And to target it even more directly, why am I not seeing hints of this being applied on the coding agents' own repositories?
Call me naive, but you'd think these projects in particular would want to demonstrate how well their product works, making an effort to distinguish PRs that are largely the work of their own agents. Yet I am not seeing that.
What programming languages do you use? I have a personal theory that the disconnect between « agentic LLMs are fantastic » and « agentic LLMs are kinda bad actually » is mostly down to a difference in the programming languages used. AI agents seem to be really good at JavaScript and React, especially if you are using TypeScript. But I am learning Swift now, and the agentic LLMs seem pretty bad at it. It took me many rounds of agentic coding to get a functioning script, and I am in the process of rewriting it by hand because the code wasn't very good or extensible.
Great point. I tend to use Kotlin, Python, and Swift a lot, and my frontend work is very basic compared to React.
I've extensively tested the agentic tools with Java, but they completely suck with proprietary frameworks and packages. I haven't found AI helpful at all when working on legacy systems with undocumented APIs.
Some devs say they use AI to understand the code base and get detailed explanations. I've mostly found those "explanations" to be bunk, filled with misleading assumptions that'll cause everything from subtle bugs to catastrophic failures.
I've used the tools as recursive search engines to find where a value is set in a huge code base, so it's not that I never use them. It's just that they're only rarely helpful.
(They also give vibe-coding junior devs a lot of rope to hang themselves with.)
I think that people do oversell its capabilities. It works pretty well for one of my boilerplate-heavy projects, but pretty poorly for my real-time application project that cares about things like reducing allocations and minimizing cache misses.
To bash Copilot specifically: the models that are implicitly recommended for agent mode due to being the defaults are garbage. In my experience all of the GPT models available in Copilot are effectively worthless in agent mode. Frankly, this is a bad/weird marketing decision from Microsoft. "Here, try this thing that sucks and that'll totally make you want to pay for more of it, right?"
None of this is meant to try to convince you to use it. I just had a very similar opinion until a couple weeks ago when I tried Claude 4 specifically. That's when I realized the people using it and I were talking past each other, because I was giving opinions on what I had access to (non-premium Copilot) without realizing that even though it was a newish model (GPT-4.1), it wasn't even playing the same sport as the models those people were talking about.
I have not experienced that, nor have I really experienced the sort of workplace where that might happen, so I can't really comment on that sort of mandated LLM usage, other than to say that it sounds bad.
I should note that vibe coding is different from agentic coding. Although I do have my reservations about both. Vibe coding very much assumes the person "vibing" doesn't need to know code. I have talked a lot about this on tildes already, so I hope you are okay with me mostly linking to relevant comments about this subject.
As far as vibe coding goes, I think it can only end in trouble except for the most simple projects; I expanded on that view a while ago. So yeah, anyone claiming with a straight face that you can get high quality software out of vibe coding is either a charlatan or ill-informed.
As far as agentic coding goes, I don't think these tools result in higher quality output, for a variety of reasons. What they might result in is higher throughput, which @SloMoMonday had an excellent comment about last month.
As for why I think agentic coding doesn't result in higher quality code, I'll have to refer to what I said here in relation to people being asked to do writing assignments with and without the help of ChatGPT. Agentic coding for a large part falls under the "lazy approach", as you let the agents handle a lot of the work and only review the end result. It is slightly more involved than vibe coding, and the advocates will (very loudly) say they still review all the outcomes. Which might very well be true. But reviewing code you haven't written yourself is different from writing it yourself; any programmer knows that bad code slips through the cracks of the review process from time to time. And if you are no longer keeping your skills as sharp as you previously did (the agents are doing a lot of the work), that only becomes more likely over time.
And to be very clear: I still do think LLMs can be valuable tools for developers, and I use them for all sorts of things. But as external tools to the process, and in a way that doesn't atrophy my brain and skills.
The definition of vibe coding is very loose and varies a lot between different people. The original example of vibe coding from Karpathy was someone who could code making simple things quickly, although it quickly got expanded to mean non-coders getting the computer to write code for them. These days I see a lot of people describing any sort of hands-off or agentic coding (i.e. more than just autocomplete or search-via-LLM) as vibe coding.
Also, to be clear, I have not tried out agentic coding myself. I tried out GH Copilot in the early days and found it useful, but got roughly the same QoL improvement when switching to the Helix editor and learning about modal editing. I've also used ChatGPT & friends for various things, but seldom to write code. Now that agentic coding is becoming a thing, I'm on parental leave and don't have enough time on my hands to try it out properly. So this isn't necessarily coming from experience, but from the things I've heard from programmers that I trust online.
I don't think a human using agentic coding is likely to produce higher-quality code than that same human writing the code directly, but I don't think that's the only relevant metric here. One of the things I've had to learn as a developer is that my standards of quality can't be absolute. If I'm writing a quick script that's only going to be used a few times, I can't spend a week perfecting it. On the other hand, if I'm refactoring a heavily used part of a major codebase, then I need to be very careful about everything that I'm doing. Figuring out that balance (when to go high speed, when to go high quality) is part of what makes one a good developer. And my impression is that agentic coding can be a good tool for a significant portion of the speed/quality spectrum. Not all of it, but enough of it that it could be useful in the everyday life of an experienced software engineer.
It is often said that reading code is the hardest part of coding. Using LLMs to generate code can be helpful, but the mental burden of fully reading and understanding every line it spits out can be much higher than writing the code from scratch.
Possibly, but I think this is also overblown to a certain extent. Reading code can be harder than writing it, but it's also a lot quicker than writing it. And with agentic coding, at least as I understand it, it's not quite like reviewing a patch that someone else has made independently, because you're also the person who decided what the patch should contain and what needs to be changed. So you're reviewing the code with the context already largely in your head.
But how often do you write code from scratch that you don't understand?
It's much easier to read something and misunderstand what you read than write something without understanding it. I've seen people make a lot of assumptions about LLM code based on their own intentions and get bitten real hard.
My company is doing that, but it's not the way people would describe it.
I'm working with a few teams to teach them AI, and I think it is changing the way people work. If you're familiar with ChatGPT, my company's performance reviews explicitly exclude that - you should look at agentic AI now, where the agent can perform actual tasks rather than just telling you what to do and leaving you to copy & paste.
For example, previously security would ask us to run kubectl queries for them, so you ended up doing that yourself. Since security doesn't actually have kubectl experience, I teach them Kube MCP so that they can do that themselves. (As for the growing pains of me adapting to this new world, well, I think I talked about a few problems in my post history.)
If you're in the US this makes sense with the tax code change that makes software engineer salaries a 4-year (25% at a time) tax write-off, but other expenses can be written off 100% in the first year.
I believe this just changed in the Trump budget bill.
Over the last few decades, as jobs have become increasingly insecure and thus mercenary and short-term, corporations have abdicated their role in training workers.
In other words, how is this new?
I don't think I understand what's really going on and I don't think sharing uninformed opinions is all that valuable. We need more in-depth reporting. What are some good articles about this?
This is happening where I work (new eng leadership). They are estimating a 30 to 50% increase in efficiency by using AI. They are leaving guardrails like testing and code reviews in place, and do acknowledge the limitations of AI code.
The thing is, I think it's working. I'm definitely more productive. It makes it easier to climb the learning curve on a new library or AWS service integration. I can add tons of automation to our dev toolchain very easily. For example, "Write a bash script that polls until this container has been created then opens a terminal."
We have some difficulty running Cursor on Linux, so right now we are using Cline and Claude Code, and we still have Copilot. So far, I like Claude Code the best for "agentic work" and Copilot for "1-3 lines that do a specific thing in a specific place in the code". Both are running Sonnet 4 under the hood.
Example of the agentic code - "Add an S3 and Athena deployment to my SAM template" followed by "the deployment is not working, please run tests to figure out what needs to be fixed." The latter led to 10-15 min of it running deployment commands, tweaking the deployments, trying smaller subsets, and converging on a working deployment without more intervention from me than sanity checking commands it wanted to run and clarifying requirements.
For the latter case, an example: I had a data structure with a list of objects defined as Python dictionaries, but we're moving to fully typed Python. I said, "Create a class that corresponds to the fields of these objects and convert the data to use the new class." Worked on the first try.
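For anyone curious, the shape of that change is roughly the following; the field names here are invented for illustration, not our actual data.

```python
from dataclasses import dataclass

# Before: records kept as a list of plain dicts (illustrative fields only).
raw_records = [
    {"name": "sensor-a", "count": 3, "enabled": True},
    {"name": "sensor-b", "count": 7, "enabled": False},
]

# After: a class whose fields mirror the dict keys.
@dataclass
class Record:
    name: str
    count: int
    enabled: bool

# Convert the existing data to use the new class.
records = [Record(**r) for r in raw_records]

print(records[0].name)  # "sensor-a", now with type checking and IDE completion
```

A mechanical transformation like this is exactly where the tool shines: the spec is unambiguous and the result is easy to verify.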
My past experience working with junior developers has been that they need a similarly detailed level of guidance, so much so that I would rather do it myself than spend the time explaining it to them.
I don't think the model is sustainable though. It only works because I have the experience to monitor the AI and plan the bigger picture, but when this generation of coders is gone, we'll have a bunch of people who can't put in place the guardrails the AI needs.
I am mostly perplexed at what kind of work junior engineers are apparently doing at some companies. It sounds like many are just writing boilerplate or "spec-to-PR" code without much need to work with a bigger picture or apply any critical thinking to the solution. That is not how I have worked with junior people, at least. Maybe it will replace the type of people who have done a 12-week Udemy course in Python, but I hope even newly educated computer scientists and software engineers have learned more valuable skills than what can be done with AI tools. In my opinion, a good software engineer isn't defined so much by their programming skills as by their ability to apply critical thinking to a business logic problem, see the potential side effects and compatibility issues, and generally apply good system design. Of course these are skills that improve with experience, but they are also something I would expect at some level from freshly educated engineers.
By definition, a junior engineer is not going to be good at system design. They haven't had the experience yet to know what works and what doesn't or even to know what they don't know. This is natural, but it is a reality. That's why spec-to-PR is basically how juniors work. If you give a junior a vague, large-sized task and they can implement it well, you need to be promoting them to a mid immediately.
I think there are levels between mindless spec-to-PR and fully autonomous large system design. And while everyone gets better with experience, the same skills can also come from proper education and a basic personal trait of well-founded critical thinking. I am especially looking for the last part: people who are able to stop and think for themselves about whether something is missing in the spec, whether something might have unintended side effects, or whether the proposed solution might not be the best one.
Maybe there is a difference in working in mostly smaller companies, but I have always been part of teams where engineers of all experience levels participated in those kinds of things.
Juniors just implementing spec is pretty common. The software levels at Amazon very roughly equate to:
It can be broken down even more roughly with just a level of influence:
And this is why the lack of an SE3 position at a lot of companies is so frustrating. There is clearly a level of competence between SE2 and senior: highly competent engineers you can trust end-to-end with any team-scale project.
I use AI a lot in my day-to-day work as a frontend engineer. It has helped me ramp up quickly with new projects/frameworks and I'm easily able to throw simple, but time-consuming stuff at it. Which I would describe as being the kind of work that's a good candidate for a Junior to take on to get their feet wet. So I think there's some merit to being able to use AI to replace Junior devs, but I don't know why you'd do that. If you don't have juniors, then eventually you won't have mids, and then later on seniors.
But even if you can get an AI to just do the easy stuff, it doesn't mean you know how to get it to do the easy stuff. The current batch of AI bots seem to need a lot of hand holding (like a Junior). Or to put it another way, the reason I'm so effective with using AI is because I have years of experience writing this crap by hand so I already know how I want something implemented. What's it going to look like in 10 years when all of the juniors/mids learned on AI and never had to write this stuff by hand? There's bound to be huge gaps in fundamental knowledge.
To cap it off, I think in a few years the bar for a Junior is going to be set substantially higher than it is today. I'm already seeing it at my company. The way we hire Juniors is via paid internships, and those that make the cut get hired as a Junior. The bar for getting into the program is already pretty high, but the caliber of the interns seems to have skyrocketed in the last year or two. And I think it's because the company raised the bar on what they consider a Junior to be. The pessimist in me thinks it's because they want high-quality talent at Junior dev salaries for a year or two before they inevitably get bumped to a mid. But all this is to say that I think AI makes for a convenient cover for companies to experiment with higher standards for devs, laying people off, etc.
What I fear will happen is an extreme stagnation of software development.
I think this has already begun happening with younger generations who didn't grow up having to figure out computers. They learned to use consumption devices like phones and tablets, rather than traditional computers with the tools and paradigms used to create software. Many college freshmen don't know what a file system or a command line is. Some younger teens don't know how to use a keyboard and mouse.
Relying on GenAI further abstracts away the reality of software development. Write out an idea and delegate the understanding, the handiwork, the craftsmanship to an LLM. Do that too much, and the art of software will be lost to humanity. Machines will iterate over the same ideas forever, at our command. But never innovating. Never creating.
Given my industry: it's pretty much a big lie to try and grift the current hype market. All while laying off people in the thousands and cancelling dozens of projects. If we were "being more productive", the latter wouldn't happen. This is partially to hide the fact that we're pretty much in a recession.
Today (not bothering to speculate "well maybe in 5 years it'll change everything!"), AI cannot do the job of even a junior programmer. It certainly cannot make game-ready assets, nor even fool consumers as of now. At best we have fancier autocorrect. It cannot do those jobs better, and any company that tries will only face an even worse disaster of a project.
I don't think any effects like that have manifested yet. The market for college grads is bad, but that's mainly due to volatile economic conditions. Low level hires are an investment, and it's a tough sell when markets can swing wildly.
"Burn them all to the ground, please and thank you".
There's the thought.
Everyone tends to have their reticle fixed on what AI can do. And I understand why: it's an impressive step change in a short period of time. No one seems to ask, "What can AI never do?"
It can't ever take responsibility for anything. Which places a very hard upper limit on how useful it can be in any serious engineering environment. There is no neck to wring, no reputation to keep.
It can't ever buy anything. Which limits its economic impact. It can create things people value (again, amazing, and I understand why people lose their religion over this), but economics is a loop, and until Google gives Gemini an allowance for doing its chores, it cannot surpass an 8-year-old child in terms of closing that loop.