11 votes

Updating Eagleson's Law in the age of agentic AI

Eagleson's Law states:

"Any code of your own that you haven't looked at for six or more months might as well have been written by someone else."

I keep reading that fewer and fewer of the brightest developers are writing code themselves, instead letting their AI agents do it all. How do they know what's really happening? Does it matter anymore?

Curious to hear this community's thoughts.

9 comments

  1. post_below

    My philosophy with agentic coding is that I need to be in the loop. I still write code myself and when the agent writes code I sign off on a detailed plan, with code, in advance, and then review the results.

    The times where I have let that slip, I've regretted it.

    Right now I believe what's happening is that a lot of developers are letting it slip; the regret is pending. Though in large orgs where there's less personal stake, and potentially everyone is vibecoding, I expect there won't be much personal regret.

    I think there are essentially two possible outcomes: One, agents get so much better that true vibecoding becomes viable and the agents can fix the industry wide mess that's currently silently happening in the background. Or two, practices will need to change to keep humans in the loop.

    I suspect many teams have already learned to do the latter; vibecoding at scale simply can't work for production if you care about quality and reliability.

    I suppose there is a third option: The industry lowers its standards of quality and reliability. I think this strategy is currently being beta tested at scale. The upside is that it should increase the perceived value of high quality software and the people who can deliver it.

    Side note: I've revisited 6 month old code plenty of times and almost immediately understood what I was thinking. Other times not so much, but we remember complex details about things that happened 6 months ago all the time. It's possible that Eagleson's law primarily applies to Eagleson.

    9 votes
    1. vord

      Before it was 99.999% uptime. Now it's been 'move fast and break things' for over a decade.

      The 5 9's are still in high demand, but the fact that 3 companies have the ability to knock out half the internet for an afternoon, and do so with some regularity, tells me that quality hasn't been a priority for years.

      9 votes
    2. devalexwhite

      I suppose there is a third option: The industry lowers its standards of quality and reliability. I think this strategy is currently being beta tested at scale. The upside is that it should increase the perceived value of high quality software and the people who can deliver it.

      This is 100000% the direction the industry is going in. Deadlines are becoming tighter and justifying hires is becoming harder. It's always been a fight to spend time testing or tackling tech debt, but now the expectation is that devs are 10x more productive, and the business shifts timelines with that in mind, making testing and reducing tech debt impossible. The result is devs who have to full-time vibe-code multiple tickets at once, pushing PRs with thousands of lines of code that get rubber-stamped in order to have any expectation of meeting these new timelines.

      6 votes
      1. vord

        From Doctorow.

        If my Kaiser hospital bought some AI radiology tools and told its radiologists: "Hey folks, here's the deal. Today, you're processing about 100 x-rays per day. From now on, we're going to get an instantaneous second opinion from the AI, and if the AI thinks you've missed a tumor, we want you to go back and have another look, even if that means you're only processing 98 x-rays per day. That's fine, we just care about finding all those tumors."

        If that's what they said, I'd be delighted. But no one is investing hundreds of billions in AI companies because they think AI will make radiology more expensive, not even if that also makes radiology more accurate. The market's bet on AI is that an AI salesman will visit the CEO of Kaiser and make this pitch: "Look, you fire 9/10s of your radiologists, saving $20m/year, you give us $10m/year, and you net $10m/year, and the remaining radiologists' job will be to oversee the diagnoses the AI makes at superhuman speed, and somehow remain vigilant as they do so, despite the fact that the AI is usually right, except when it's catastrophically wrong.

        "And if the AI misses a tumor, this will be the human radiologist's fault, because they are the 'human in the loop.' It's their signature on the diagnosis."

        This is a reverse centaur, and it's a specific kind of reverse-centaur: it's what Dan Davies calls an "accountability sink." The radiologist's job isn't really to oversee the AI's work, it's to take the blame for the AI's mistakes.

        This is another key to understanding – and thus deflating – the AI bubble. The AI can't do your job, but an AI salesman can convince your boss to fire you and replace you with an AI that can't do your job. This is key because it helps us build the kinds of coalitions that will be successful in the fight against the AI bubble.

        We will all be but sacrificial fall guys for bad AI implementations.

        2 votes
    3. HelpfulOption

      The law seems to apply to me, but not because I can't understand it. With 6-month-old code, it's easy to spot the pitfalls and see what I should have written differently.

      I'm mostly self-taught on the job, so I might be a specific case. There's always more I haven't learned, especially on the architecture side of things.

      3 votes
  2. Omnicrola

    We evolved a saying at my first dev job:

    Code is always twice as hard to debug as it was to write. If you write the most clever code you can, you are inherently unable to debug it.
    Don't be clever, be clear.

    7 votes
  3. skybrian

    Diving into other people's code is totally normal when working at large companies. Most of the code you work on was written by someone else. Sometimes it's fairly decent and sometimes... less so.

    Now you can have the same experience on your own projects. So, get used to it I guess?

    So the question is, how do you set policy? What automatic safeguards should you put in place? How do you write guidelines and make sure that the coding agent actually reads them? What sort of cleanup processes do you have to find and fix problems when the agent cut a few corners?

    One way is to read every commit. Since it's just a personal project, I tend to let things go a bit, but then I get into a cleanup mood and spend some time getting the coding agent to clean up existing functionality and organize things properly.
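    One lightweight way to make guidelines stick is a project-level instructions file that agent tools read at the start of a session (Claude Code looks for a CLAUDE.md; several other tools read an AGENTS.md). The contents below are a hypothetical sketch, not a standard:

    ```markdown
    # Guidelines for coding agents working in this repo

    - Never commit directly to main; open a PR for every change.
    - Run the test suite and linter before proposing a commit.
    - Keep changes scoped to the module named in the task; do not
      refactor unrelated code in passing.
    - Flag any new dependency for human review instead of adding it.
    ```

    Whether the agent actually follows such a file still has to be checked in review, which is one argument for pairing it with automatic safeguards like CI checks.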

    5 votes
  4. Durinthal

    It's now not your code from the start (outside of having your name on the commit) so I don't think the adage changes much in that regard.

    I don't think there's necessarily anything inherently wrong with that, but your role changes to be more like a maintainer of an open source project: other people submit pull requests, and it's up to you to approve and merge, give feedback, or reject outright. As the face of the project, you're responsible for the outcome if you let a bad commit in, even if you didn't write it. The difference is that there aren't other people directly involved, for better or worse, and it's a game of prompts for the feature request and feedback instead.

    1 vote
  5. googs

    I've been thinking about this a little bit in context of my recent software work. I've been working on building a react app for a small non-profit that essentially is just table/form views for tracking clients/referrals/supporting information. The thing about an app like this is that it's one of many and is full of little, already solved problems.

    Apps that have a business audience and only need to support a small number of users (around 10) all sort of share a bunch of components that have been solved, re-solved, abstracted, etc. The AI is a way of not only quickly bringing together all these little solutions, but tying them together and adding on top the special cases that this particular business wants.

    A small-business react app could be thought of like a family sedan. Every sedan has its own particular things, but they all share components: they all have engines, alternators, doors, mirrors, etc. Just like most web apps will have: a router for navigating between pages, a table layout, a form layout made up of reusable fields, a staff management page, a login system, etc.

    I'm putting this in the context of react since that is my particular language of choice for web apps, but the point I'm trying to make is that I don't necessarily need to see every line of code that an AI writes to have some understanding of what is happening under the hood. Sure, I might not know exactly how the routing code for this app was written, but if I needed to make a change to it, I probably could. I've worked with the react-router library before so I could pretty quickly figure out what it's doing.
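    For what it's worth, the routing logic in question mostly boils down to matching a URL path against a pattern and extracting parameters. A toy sketch of that idea (not react-router's actual API, just the underlying mechanic, with hypothetical names):

    ```typescript
    type Params = Record<string, string>;

    // Match a concrete path like "/clients/42" against a route pattern
    // like "/clients/:clientId". Returns the extracted params, or null
    // if the path doesn't fit the pattern.
    function matchRoute(pattern: string, path: string): Params | null {
      const patternParts = pattern.split("/").filter(Boolean);
      const pathParts = path.split("/").filter(Boolean);
      if (patternParts.length !== pathParts.length) return null;

      const params: Params = {};
      for (let i = 0; i < patternParts.length; i++) {
        if (patternParts[i].startsWith(":")) {
          // Dynamic segment: ":clientId" captures whatever is in the path.
          params[patternParts[i].slice(1)] = pathParts[i];
        } else if (patternParts[i] !== pathParts[i]) {
          return null; // Static segment mismatch.
        }
      }
      return params;
    }
    ```

    So `matchRoute("/clients/:clientId", "/clients/42")` returns `{ clientId: "42" }`, while a non-matching path returns `null`. Understanding that mechanic is usually enough to debug AI-written routing code without having read every line of it.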

    Obviously for more complex logic, this isn't always going to hold true. But the reality is, at least for the systems I personally work on, the complex logic accounts for maybe 10% of the code. Most things are problems that have been solved in the past 30 years one way or another, and I see no issue in having an AI write code for easy-to-solve problems.

    I think the people that are using these tools most effectively are the ones that have an idea of how to architect a system and understand the sorts of tradeoffs they're making when they decide things like: modular vs one-off, self-service vs IT ticket, SQL vs NoSQL, etc. When you have a strong understanding of your system's high-level architecture, you can very quickly narrow in on a component that is failing or needs improvement without having to understand the entire codebase. This has been true forever and is still true with AI. Arguably it's more true now, since the AI tools of today have very limited "context" and are going to work better on a specific module you tell them to work on, if they don't need to pull in the code for all of the related modules.

    It really comes back to "write simple code," because nobody can remember everything, and if you have to come back to something later, at least it will be simple to re-learn. And luckily the current AI tools are pretty good at writing simple code.

    1 vote