46 votes

AI coding agents are the opposite of what I want

I've been thinking a lot about LLM assisted development, and in particular why I keep dropping the available tools after a few attempts at using them.

I realized recently that it's taking away the part of software development I enjoy: the creative problem solving that comes with writing code. What's left is code review, testing, security checks, etc. - important tasks, but they all primarily involve heavy concentration and much less creativity.

Why aren't agents focused on handling the mundane tasks instead? Tell me if I've just introduced a security vulnerability or a runtime bug. Generate realistic test data and give me info on what the likely output would be. Tell me that the algorithm I just wrote is O(n^2).
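To make that last one concrete, here's a toy Python example of the kind of quadratic pattern I'd want flagged as I write it (the function names are made up for illustration):

```python
def common_items(a, b):
    # O(n^2): each `x in b` rescans the whole list `b`
    return [x for x in a if x in b]

def common_items_fast(a, b):
    # O(n) on average: set membership tests are constant time
    b_set = set(b)
    return [x for x in a if x in b_set]
```

The second version is the fix I'd want suggested automatically - that's pattern matching, not creative work.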

Those tasks are much better suited to matching against existing data, something LLMs should be extremely good at, than trying to get them to write something novel, which so far they've been mostly bad at, at least in my experience.

47 comments

  1. [12]
    stu2b50
    (edited )
    Link

    Why aren't agents focused on handling the mundane tasks instead? Tell me if I've just introduced a security vulnerability or a runtime bug. Generate realistic test data and give me info on what the likely output would be. Tell me that the algorithm I just wrote is O(n^2).

    They are? Those all exist.

    Personally I’m of the opposite opinion. The current suite of LLM tooling allows me to do the fun part - designing and thinking of systems - and skip the boring minutia.

The fun part of programming is analyzing a problem and realizing: oh, this is a perfect opportunity for a bloom filter; or, I should represent this as a finite state machine; or, this can be modeled as a monad.

What’s not fun is mucking around with Dagger or AutoValue, or writing out boilerplate value classes, or figuring out which of 7 different mocking libraries is used conventionally.

    You still do the former, you don’t have to do the latter.

One fun thing Claude and the like have unlocked is that I’m back to writing code in vim. I essentially just bang out code, without worrying about syntax errors in the moment, since Claude can fix those afterwards. I can also do things like add a comment and tell Claude to do some common boilerplate (for instance, error propagation in Go, or telling it to catch all checked exceptions, log the exception, and throw a wrapper exception). It’s fun, and nice to not have to deal with bulky IDEs.

    42 votes
    1. [4]
      post_below
      Link Parent

      They are? Those all exist.

Indeed. I've said this before, but it's going to bear repeating for a while: in practice, coding agents aren't simple tools. In theory they're simple, but two different people will get very different results for the same task unless they're using identical scaffolding and similar prompts.

It's not an all-or-nothing choice between pure vibecoding and "just autocomplete". You can decide what the tool is for and when to use it and when not to. You can absolutely use agents just to audit for security issues, or just to write tests, or whatever.

      But the key point is that, unless you're doing something very straightforward or someone has dropped you into pre-made scaffolding that provides a specified workflow, you're going to need to play around and iterate to get it working the way you want reliably.

      Possibly part of the problem many have is that a lot of the conversation and marketing around agents is some version of "ask for a thing, get the thing". Which only works sometimes, and pretty rarely for certain kinds of tasks. People read hype pieces and get a distorted idea of how to use the tools.

One of the things I love about building software is the space where the best answer isn't immediately obvious and there isn't an established canonical way to handle it. That's the most satisfying kind of problem to solve, and making agents do what you want is very much that kind of problem. Everyone is still figuring it out in real time, the model providers included. They didn't build the models to deterministically solve particular problems; that wasn't even really an option. They just built general models, paid close attention to how people ended up using them, and then worked to support that via fine tuning and harness design.

It's uncharted territory.

      16 votes
      1. [2]
        rich_27
        Link Parent

        One of the things I love about building software is the space where the best answer isn't immediately obvious and there isn't an established canonical way to handle it.

Aside from instructing agents being this kind of problem, have you worked with an agent on solving this type of problem? I haven't done any AI assisted dev since January, which puts me horrendously out of date I'm sure, but my experience was that this is the type of problem that AI agents cannot make heads nor tails of, because there is nothing to regurgitate. Late last year, when I was trying to create a novel solution to a problem I could find no evidence on the internet of anyone having identified before - let alone written a solution for - Claude and ChatGPT both simply could not approach it, because they kept trying to default back to known solutions for similar problems that weren't applicable. Even when I gave a really clear structure and pseudocode for the solution, they could not write it because it was, as far as I could tell, novel.

        4 votes
        1. post_below
          (edited )
          Link Parent

Not really that out of date; November 2025 was the last big leap in capability. Which is odd, because if you gave a frontier coding agent clear structure and pseudocode in January 2026, it could generally turn that into working code even if that specific pattern wasn't in the pre-training. The top models are really good at translating from one language to another, even if the target language is code and the source is well-structured English. They have a huge corpus of patterns to match against in both languages. The exception might be obscure programming languages; I haven't tried to use an agent for a language that wasn't in training, but I imagine it would be more trouble than it's worth.

Context matters a lot in this case: if you were to prompt "write code to solve X problem" and there were no solutions even vaguely similar in the training, the agent might struggle.

If you said... "Our goal is to solve this [problem]. We want to normalize X data and then do Y with it, based on its relationship with Z. The result needs to conform to [criteria]. Let's talk about how to accomplish that," you'd be on your way to a solution. Or to put it another way, on your way to creating sufficiently well-structured English that the model could reliably translate into code. Note that the above is an overly simplified example.

          As for myself, one of the first steps in solving a novel problem is often gathering information. Agents are really good at speeding up that process. By an order of magnitude sometimes.

          2 votes
      2. shrike
        Link Parent

        Using agents is a skill and there are many ways of using them correctly.

At work I use them mostly to figure out answers to questions. I can just shove a bug or stack trace at Claude, tell it to figure out what the root cause is - and go do something else.

        At home I use agents to build tiny bespoke tools on a whim. Like just last night I was watching a streamer do a 50/10 pomodoro coding session and whipped out a SwiftUI pomodoro timer for myself in maybe an hour or two of wall-clock time while I watched Jacks Black & White on SNL.

        Some people build massive multi-agent systems or structured workflows from specs to code etc. Not wrong either, but that's not the way I use agents.

        3 votes
    2. davek804
      Link Parent

      I strongly enjoy designing things more rapidly than I might be able to without LLMs.

      8 votes
    3. [6]
      karsaroth
      Link Parent

I appreciate that you and I might just find different things enjoyable in the dev space. I used to consider certain things "boilerplate", but that problem was already solved by auto-completions in IDEs or syntactic sugar in other languages. And I've also realised how important it can be to consider certain choices when writing a class or function, because it can be the difference between a streamlined piece of code and a future piece of technical debt.

But all that said, I'm not disagreeing with you outright; I may simply not have found the right tool yet. In my mind, the tool would be similar to a language server in an IDE: something that would add context, indicators and other data to an open project and/or source file as I type - having to prompt takes you out of the workflow, in my experience. Is there anything like that? Sounds like you're using Java, so even something specific to just that language would be a good start.

      8 votes
      1. shrike
        Link Parent

        Artisanal coding vs AI assisted is a big divide in the professional world currently.

Artisan coders who call everything "AI Slop" and have VERY distinct styles and requirements for what code should look like will be in trouble in a few years if they refuse to budge. Some of them will be needed for the hard stuff - we can't outsource everything to AI assistance - but not at the scale we employ them currently.

        But the majority of solutions will be produced by "Vibe Engineering", which is different from Vibe Coding in that the person directing the agent knows how to do what they're asking the agent to do, but is using the agent to make the process faster.

CRUD API number 12387 for a SQL database doesn't need artisanal practices or careful code crafting; it's just boilerplate for the most part, with tiny flecks of actual business logic sprinkled on top.

        6 votes
      2. [4]
        rich_27
        Link Parent

        Do you develop professionally, if you don't mind me asking? The bits of programming you mentioned are also what I really enjoy in programming; I used to work in R&D and enjoyed that, but I've been thinking about moving into more formal development and have so far got the impression that it is almost all high level design or boilerplate and testing, with almost no writing algorithms or focus on small scale, very optimised code.

        I'd be interested to hear if you knew of a part of 'proper' software engineering that would scratch that itch for me. It might just be my friends' experiences working in software development are quite dry, but I haven't seen much of the joy I find in writing code in their jobs!

        4 votes
        1. [3]
          karsaroth
          Link Parent

          I do yeah, I can give you some advice, but mind you the LLM surge is definitely shaking the whole industry up at the moment, so this advice might age like milk. I think my last point is probably the most important.

I've worked as a technical consultant for close to 20 years now; that means I've spent anywhere from a few months to a few years working on projects for various companies in various industries. A lot of my friends have worked for in-house dev teams instead, and although that does come with certain advantages (being able to iterate and improve on the software you build over time), it's likely to be quite mundane most of the time. Although consulting is much higher pressure, it also allows you to step into new interesting challenges much more frequently, with very clear goals. You frequently have to learn new technologies or languages, be quick at understanding the problems found in other industries, and then learn how to translate that into software that will do what the customer actually needs.

          But even within that context, most customers (companies) will just need you to help upgrade a price tracking system, or move an old HR system into a new cloud provider. Sometimes you do get to work on a project that feels like it will really change the lives of the people who use it, but that's rare.

I guess I'd say, I don't think it's reasonable to expect to enjoy every moment of software development in a professional context. Precisely because it's professional, there's a lot of extra non-technical work that's necessary, much like any job. Still, with the right company (e.g. one that works on embedded software), or in the right industry (e.g. aerospace, healthcare or specific technologies), you could find a place that gives you opportunities to write meaningful code.

          But above all, if all you're really looking for is the sparks of joy from building software that does something useful, then I'd actually recommend making it a hobby. Help with, or create an open source project, build a video game, or simply implement something at a low level to understand it better.

          7 votes
          1. DaveJarvis
            (edited )
            Link Parent

            Sometimes you do get to work on a project that feels like it will really change the lives of the people who use it, but that's rare.

            In my experience, it depends on whether you make that a priority. I've worked on an incident management system for one of the world's largest ferry organizations, crafted a passenger manifest system to ensure no lives are lost in a sinking, ushered high school graduates away from expensive paper-based transcripts to inexpensive electronic post-secondary applications, revamped a kidney transplant system, helped a ministry track cut blocks for sustainably managed forests, developed radio communications software used by first responders, and worked on a critical event resolution system used by Fortune 500 companies. (I've had some schlep software contracts, too, from time-to-time, but I tend to pass them by when offered.)

            These days, I'm reveling in the advent of AI/LLMs to take away the grunt work of software both professionally and personally. Take for instance my vibe-coded, self-hosted, PHP-based, dependency-free, FOSS Git repository viewer that was made in about three weeks:

            https://repo.autonoma.ca/treetrek

            My favourite part was telling the AI to write the syntax highlighter for the various languages and file formats across all my repos:

            https://repo.autonoma.ca/repo/treetrek/tree/HEAD/render/rules

            Coding those rules and the highlighter class by hand would have taken me weeks alone and brought very little joy. With the LLM, it took about 10 minutes.

            4 votes
          2. rich_27
            Link Parent

            Thank you! That's really useful insight 😊

            3 votes
  2. [8]
    karsaroth
    Link

    A little bit of a rant, but I thought it worth writing out this thought and seeing what other people think.

    13 votes
    1. [7]
      Darkflux
      Link Parent

      Wanted to reply and let you know, against the sea of voices who enjoy these tools and insist you're using them wrong, that you're not alone.

I see it in friends who are forced to use tokens for their job, who I regard as competent engineers that care about the quality of what they produce. Forced to use LLMs which are producing homogeneous, bland paste that is "good enough". Sitting there for ten minutes watching it have a conversation with itself, because that's how someone jury-rigged the probabilistic word generator into producing something approaching correctness.

      I'm sure there are people who are enjoying being able to focus on higher-level architectural systems, or those who found the writing code bit to be one of the least interesting parts of their jobs. But even as someone who enjoys solving people's problems and views code as a means to an end, the creativity of using code to solve those problems is one of the parts I enjoy the most. Communicating through my code, to other developers, to myself in the future.

      Anyway yeah, LLMs suck all the fun out of my job and I'm not sure why more people don't care.

      22 votes
      1. karsaroth
        Link Parent

Cheers - it took me a while to realise that a lot of people seem to get into software development in spite of coding, rather than because of it. It's good to find people who see it the same way as me; they do seem to be pretty rare.

        14 votes
      2. [5]
        shrike
        Link Parent

        Anyway yeah, LLMs suck all the fun out of my job and I'm not sure why more people don't care.

It's the question of whether you enjoy solved problems or the process of problem solving.

Do you get satisfaction from the program you created solving a problem, or do you enjoy writing the program more, with the actual problem being solved as a side effect?

        2 votes
        1. [4]
          Darkflux
          Link Parent

Both! I could talk about my concerns about AI tool usage, but we would get bogged down in the weeds of whether I'm using them properly, when ultimately it's as simple as this: the way I solve these problems is less fun when I get AI to do it for me.

          Although I do also think it doesn't do as good a job as I can at solving those problems. Based on how the technology works, I fail to see how it ever can.

          1. [3]
            shrike
            Link Parent

            Agents use tools in a loop.

            If you can create a tool that defines "good job" in a deterministic way, the Agent can do as well as you can.

But for some things it's really hard, since it's more about feels than actual mechanical checks.
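As a sketch of what I mean (stub names, no real model here - the point is just the shape of the loop):

```python
def run_agent(propose, check, max_turns=5):
    """Tools in a loop: `propose(feedback)` stands in for the model,
    `check(candidate)` is the deterministic definition of "good job"."""
    feedback = None
    for _ in range(max_turns):
        candidate = propose(feedback)
        ok, feedback = check(candidate)
        if ok:
            return candidate  # the check passed, we're done
    return None  # gave up: the fuzzy stuff is still on you

# A deterministic check: here, "good job" means the output equals 42.
def check(candidate):
    if candidate == 42:
        return True, None
    return False, f"{candidate} failed the test, try again"
```

When the check is a real test suite or linter, the agent can grind against it unattended; when "good job" is about feels, there's nothing for the loop to converge on.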

            1. [2]
              Darkflux
              Link Parent

              I guess to get bogged down in the weeds then, my understanding is that there's no way to get these agents to do things deterministically. Even if they use deterministic tools, there's no tool that says "this meets the business requirements" or "this is well designed, refactorable and easy to read and maintain". Those are hard problems to solve, so that's pretty expected.

              For a technology that can predict reasonably well what the next sequence of words should be based on a given context and training data, I would expect it to be able to produce roughly the average of what I might find from a relevant search engine query. So for tightly scoped examples where the problem has been solved many times already, or greenfield, small codebases, it should do pretty well, and it seems to.

              For anything more complicated I've yet to see compelling evidence that it's worth using.

              1. shrike
                Link Parent

                "Business requirements" is a wishy-washy goal at best to measure.

                You CAN measure, for example, if that function there returns the correct value with the correct input.

                Or that process there returns the right data given a starting data set of A and ruleset B.

We do have tools for some code quality checks, but "well-designed" is more about feels than absolute data; it also depends on the problem context and whoever wrote it.

AI Agents in general are not experienced coworkers where you can trust them to adhere to fuzzy goals like "readability" and "easy to maintain". Think of it more like a really, really cheap subcontractor who gets paid when the job is done.

                Now it's 100% on you if you assume things and don't write them down and they produce utter shit that takes in A and returns B, but when you put in 2A it fails. A "good programmer" would have managed that case based on life experience, but the contract said to only handle A.

                Iterate on small, testable and confirmable bits. Do the fuzzy and hard to measure stuff yourself.

  3. davek804
    Link

    Yeesh - sorry to hear that. Genuinely.

    My POV:

I can't say I love the experience of driving all of my development through Claude on a daily basis. But it's kinda what I have to do right now to stay abreast of the times.

    Professionally, I only have to stay current for a certain period of time: until I am financially independent.

While I love to write code, and I love to plumb distinct systems together into an emergent solution, I have come to the conclusion that I mostly like a paycheck.

Once I have enough funds to cover my needs for the future, I will stop peddling my labor for a salary. At that point, I'll happily use LLMs in the ways that make me happiest during the pursuit of my hobbyist passion for development.

    Until then? I'm not writing code for much of a reason other than to pay my mortgage.

    9 votes
  4. [7]
    skybrian
    Link

    Are you using a tool that limits what you can ask for? At a prompt, I can ask the coding agent to do whatever task I want.

    6 votes
    1. [6]
      karsaroth
      Link Parent

No, I've primarily been interacting with GitHub Copilot, either directly or through VS Code. In both cases, though, I'm expected to prompt the LLM and chat with it, which helps in some limited circumstances, but doesn't fit very well into a "traditional" developer's workflow.

The chat interface makes plenty of sense if you're aiming to get the LLM to go away and do something specific, like build a series of files - but that comes back to my original point: I want it to analyse as I type, essentially the opposite interaction; it should prompt me. Make sense?

      1 vote
      1. skybrian
        Link Parent

        Yes. The coding agent I use is more like a command-line interface, so that’s “traditional” in an old-school sense. It’s not “as you type.” One way to think of it is that I can imagine a command that does what I want, and then I can ask the AI to do that (in English). It will use the appropriate tools.

        But I can turn the tables by asking it if it has any questions or suggestions for improvements. I commonly ask it to review design docs.

        3 votes
      2. [4]
        DistractionRectangle
        Link Parent

Right now, low-latency, pair-programming-style use just isn't there, and trying to analyze/prompt you as you type requires implicit understanding from the AI; it has to intuitively discern what you're doing/trying to do. Turn-based flows with explicit context work better with the current models/tooling.

        The closest thing to what you want, where it prompts you, is probably PR/commit code review. There it gets explicit context on what you're doing/trying to do via commit messages and diffs, and can take its sweet time looking for:

        • bugs
        • regressions
        • test coverage
        • algorithmic complexity
        • approach
        • security issues
        • style/code smells
        • etc

        And prompt you about what it finds.

        1 vote
        1. [3]
          karsaroth
          Link Parent

          My suspicion is that what I'm looking for might be possible with local models that are much smaller than the general ones - but perhaps there isn't much money in building them.

          Perhaps what I need to do is work on some prompts that fit what I'm looking for, similar to what you mention, and automate frequent calls to them - but that likely will quickly run into token limits, at least that's my guess.

          1 vote
          1. post_below
            Link Parent

            That's the right direction (meaning automation, not local models, they aren't quite there yet). You can call agents programmatically so if you wanted a collection of agents watching over your shoulder, ready to head off in various predetermined directions, you could definitely build it.

            But really there are already fairly low friction ways to handle those things without starting from scratch. Custom agents, slash commands, skills, hooks, more recently loops and scheduling. Most of the tools you need to customize the environment any way you want are already built into Claude Code and (always just a little behind) Codex. For anything that's missing, both CLIs have a headless option.

            1 vote
          2. shrike
            Link Parent

It's not the models - they're perfectly fine (Gemma 4 and Qwen models specifically) - you just need the correct tools for them, as they all support tool calling.

It's maybe 50 lines of Python to whip up a basic agent harness that runs on top of Ollama/LM Studio/llama.cpp. You give it a "git diff" tool or something, then at the prompt ask it to evaluate the code just written and request improvements.
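Something like this, with the model call stubbed out as an injected `chat` function (a real one would POST to your local Ollama/LM Studio/llama.cpp server; the message shapes here are assumptions for the sketch):

```python
import subprocess

def git_diff():
    # The one tool: show the model the uncommitted changes.
    return subprocess.run(["git", "diff"], capture_output=True, text=True).stdout

TOOLS = {"git_diff": git_diff}

def review_loop(chat, prompt, max_turns=5):
    """`chat(messages)` returns either {"tool": name} or {"answer": text}.
    It's injected so the loop works the same against any local model."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = chat(messages)
        if "answer" in reply:
            return reply["answer"]  # the model's review
        name = reply["tool"]
        # Run the requested tool and feed its output back as context.
        messages.append({"role": "tool", "name": name, "content": TOOLS[name]()})
    return None
```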

            You can run that in a loop or trigger it via fsnotify or something. Integrating that into your IDE is left as an exercise to the reader :)

            1 vote
  5. [7]
    teaearlgraycold
    Link

    I like the act of coding. I still do it when Claude Opus struggles, which it can on surprisingly simple tasks. If it fails for two attempts I just write the thing myself.

    Right now I'm working on things that are intrinsically interesting to me. I'm making things I want to see made, so the fact that I cause them to exist with an LLM isn't a problem. But I can see if you're working on some random bullshit for someone else, deep in a system, then the joy of coding might have been the primary source of enjoyment. Without that there wouldn't be much left to love about the work.

    5 votes
    1. [2]
      karsaroth
      Link Parent

      Yeah, I get a lot more joy out of building things as a hobby in whatever area I'm currently interested in. Especially in those cases though, I'm usually trying to do something that has only rarely been done before, in which case the LLMs struggle and hallucinate a lot, so they feel even less useful in their current form.

      4 votes
      1. teaearlgraycold
        Link Parent

        I'm always shocked when I leave my normal domain of web development and try to get these "PhD level" artificial intelligences to write some simple code in, say, PyQt for a Linux desktop app, or embedded code. It's a very different experience. It reveals how much the AI is really just a big search engine. They can do a little bit of novel mixing of ideas, but it's mostly just stochastic downloading.

        6 votes
    2. [4]
      em-dash
      Link Parent

      Interesting. I think it's actually the opposite for me: the turning point for me becoming okay with LLM coding was realizing that I don't really enjoy writing code when it's work stuff I don't care about.

      My personal projects are still all hand-written because those are the things I'm choosing to work on for fun.

      2 votes
      1. [3]
        teaearlgraycold
        Link Parent

        I don't really enjoy writing code when it's work stuff I don't care about.

        What do you care about with a job? For me if a job ever becomes just about the money my mental health plummets and I need to quit.

        2 votes
        1. [2]
          em-dash
          Link Parent

           My job is the thing I do to make people give me money, which I then spend on doing the things I actually want to do. I optimize heavily for a low (time+stress):money ratio, and seek enjoyment and fulfillment elsewhere.

          I do prefer certain types of work (interesting puzzles > arguing with CSS) but that's a secondary goal within a job, not something I would switch jobs over.

          on burnout, depression, and how I got here

          The shortest tenure I've ever had at a job was also the only job I've actively cared about beyond this purely transactional mindset. I left finance tech and moved over to education tech. But that care led me to accept far more of a time and stress commitment than I would have for any other job, and I did not handle it well. By the end of the one year I worked there, I burned out hard. I quit and took a few months off, and now I'm back to relatively boring business-y software. There's only so much emotion I can put into it, and that limits how badly it can hurt me.

          This is, of course, a terribly depressing way to look at things. I certainly don't claim anyone else should actively try to feel this way. If you don't, I'm legitimately happy for you and I hope you stay that way.

          But for me specifically, it's a mental health self-preservation thing.

          3 votes
          1. teaearlgraycold
            (edited )
            Link Parent

             I’m pretty good at setting limits even if I’m interested in the work. You have to be ready to let things fail, have things occasionally slip through the cracks, to live healthily.

            It’s not really just the domain of the labor I find interesting. It’s the proximity to reality, the classes of problems, high expectations of quality under a realistic time constraint. Learning from smart coworkers. Learning people skills. Learning a new market, the customer needs, the possibilities and the past mistakes.

             I worked at a fairly boring home equity startup. But it was run really well, with a well-tended culture and smart people everywhere. Experienced founders. It was my best job. I learned a ton from engineer #1. I told one of the founders during my interview that if I wasn’t learning on the job I’d need to quit. He was smart enough to recognize that as a green flag. Two and a half years later I wasn’t learning much anymore, so I left.

  6. [3]
    googs
    Link

    I'm someone who has loved writing code since I was a kid. I'm 28 now and I've probably been coding since I was 13, so about 15 years of at least some exposure to code. I've been doing it professionally for the past 6 years. I would say I really like coding. A few years ago, while working on a hobby project, I managed to implement a maze generation algorithm in Godot using GDScript. It took a lot of learning and iterating before I was able to get mazes generating. I would say getting that to work was very gratifying for me. I'm not just someone who cares only about "a means to an end" or "good enough" code. I've written some Java, C++, C#, Rust, Haskell, PHP, Lua, SQL, etc. I'm very interested in creating clean solutions to problems. I don't say any of this to brag, just trying to communicate that I'm not some careless slouch.

    With that said, not every problem demands functional purity, and at the end of the day, code is meant to do something, whether that's solving a problem, providing entertainment, etc. (my opinion). And the truth is, at least for me, using Claude Code is the path of least resistance for going from problem/idea to solution. I can stand up that maze generation code in Godot and have a working prototype in a day, something that took me a minimum of a week to learn and create before. Is it as gratifying as doing it myself? Maybe not, but the results are as good if not better. I can have it set up a test scene for me to try out 10 different maze gen algorithms for no extra effort. If I wanted to do that myself, I'd have to write 10 implementations. It probably wouldn't even be worth doing. But when I can do it for no extra effort, why not? I have dozens of other ways it has genuinely saved me time, both in work and personal projects, but there's no reason to list those out since it has nothing to do with "the joy of coding" :)

    But it feels to me like the goalposts have shifted a little bit. A few years ago, there were a lot more people complaining about "broken AI slop code", and far fewer saying "ok, the AI makes decent code, but I don't enjoy using it".

    5 votes
    1. [2]
      karsaroth
      Link Parent

      I think the first complaint was loudest, but the second has been there from the beginning too, and is becoming more obvious now that the tools are getting better, producing less slop.

      It's not surprising to me, though, that people complain about their job expectations shifting. How many F1 drivers would enjoy switching to an overview position for an AI driver? How many airline pilots want to sit in a 99-100% automated cockpit? How many chefs want to manage an assembly line of robot cooks?

      And that's just how technological progress works of course, but it shouldn't surprise you that people will dislike their key skills being automated away.

      4 votes
      1. DrStone
        Link Parent

        For commercial airplanes, autopilot does handle almost the entire typical flight, getting turned on shortly after takeoff and turned off shortly before landing. The pilot's job while it's on (unless they choose to do more) is basically to monitor everything, input data for the autopilot and make adjustments, and handle the edge/failure cases. It actually sounds a lot like coding with AI.

        5 votes
  7. [4]
    Narry
    Link

    Until the AI models are powerful enough to run entirely locally and give me 99% good results, I will probably stick with hand coding my little hobby projects and using snippets and syntactic sugar to accomplish my goals. For me it’s not that I wouldn’t use an AI agent, it’s that I don’t wanna have to pay for it. I’m frugal, let’s say.

    2 votes
    1. [3]
      shrike
      Link Parent

      Just define "99% good" first or you'll keep moving the goal post indefinitely. =)

      The latest/largest qwen and gemma4 models are pretty good, but YMMV based on how you define 99% good. They do need a pretty beefy M-series Mac to run them properly (it's mostly the memory that counts), but you'll own the hardware and the software of it.

      For a harness, both pi.dev and opencode support local models out of the box; the others can be wrapped.

      2 votes
      1. [2]
        kari
        Link Parent

        I've tried both qwen3.5 and gemma4 on my 6800XT with 16GB of VRAM, so it's not amazing, but for whatever reason they both quit working as soon as I try to use opencode. 🤷‍♂️

        1. DistractionRectangle
          Link Parent

          Qwen3.5 has quirks that need addressing, and Gemma 4 is new enough that there are still plenty of bugs in support/quantization. Also, if you have older qwen3.5 quants, those had issues which required you to download updated quants.

          Recent discussion with code addressing outstanding issues with qwen3.5: https://old.reddit.com/r/LocalLLaMA/comments/1sdhvc5/qwen_35_tool_calling_fixes_for_agentic_use_whats/

          1 vote
  8. [5]
    hungariantoast
    (edited )
    Link

    What "agent harness" software do you use? (Claude Code, OpenCode, Codex, etc.)

    1 vote
    1. [4]
      karsaroth
      Link Parent

      Within the context of my job I've only been given access to Github Copilot so far, and I'm only aware of its chat capability built into tools like VS Code. I'd like to try others, and if you've got suggestions that match what I'm looking for, I'd be keen to know!

      1. teaearlgraycold
        Link Parent

        only been given access to Github Copilot so far

        From what I've heard that seems to be the worst one.

        9 votes
      2. first-must-burn
        Link Parent

        You should be able to install Cline and configure your Copilot account as the back-end model. When we had Copilot at our last job, the admins were able to turn on access to Claude Sonnet as well. I really like Cline as a development tool.

        To speak more to your original post, you might try an agent prompt like, "Use git to find the uncommitted changes / changes on this branch vs main and review those changes." You can be more specific about the files that get reviewed and the nature of the review (algorithm complexity, etc.).

        Though to get it to interact reliably with git, I had to make a .clinerules file with an entry like "When you run a git command, run it with `git --no-pager command` so you get the full output."
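
        For reference, `.clinerules` is just a plain-text/markdown file in the project root, so a minimal version covering both points might look something like this (the review-scope line is my own sketch, not a required format):

        ```
        # .clinerules
        - When you run a git command, run it with `git --no-pager <command>` so you get the full output.
        - When asked to review changes, comment on correctness, security, and algorithmic complexity (e.g. accidental O(n^2) loops).
        ```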

        3 votes
      3. shrike
        Link Parent

        Going to the F1 analogy above:

        Github Copilot with Opus 4.6 is like taking a finely tuned F1 engine, shoving it into a Pontiac Aztek using only duct tape and hot glue, and letting a teenager who just got their license (on the 6th attempt) drive it.

        It's 2026; since Opus 4.5 and GPT-5.2-ish, the models (engines) have been pretty equal in quality, with slight variations in what they're good at. It's the harness around the model that makes it good or bad: the harness decides what tools are available and what data to give to the model for processing.

        Copilot (and all of its variants) is objectively the worst, but it's damn near everywhere - and cheap.

        1 vote