Updating Eagleson's Law in the age of agentic AI
Eagleson's Law states
"Any code of your own that you haven't looked at for six or more months might as well have been written by someone else."
I keep reading that fewer and fewer of the brightest developers are writing code themselves, instead letting their AI agents do it all. How do they know what's really happening in the code? Does it matter anymore?
Curious to hear this community's thoughts.
Thanks for posting this. I’m glad the Google TIG is working on this and publishing their findings.
And from reading it, I’m glad, but honestly surprised, that there haven’t been larger-scale agentic attacks yet.
As open-source models improve, or are “jailbroken” via distillation from Gemini or Opus as mentioned in the article, it won’t be many cycles before powerful reasoning models run on consumer-grade compute. When that happens, imagine agentic adversaries deployed across a botnet like the ones we’ve seen in large-scale DDoS attacks.