I think the take from the KeePassXC maintainers is fairly reasonable. It is difficult to find any project these days where there is zero use of any LLM-based tooling. As you said, these tools leave no signature.
But that is also why I take a bit of an issue with the statement:
"It’s easier to sabotage an open‑source project as a human than with the help of an AI."
Willfully, yes. But that discounts the influx of completely bogus merge requests and "vulnerability" reports that open-source projects now have to deal with. Tying up maintainers in the process of reviewing merge requests is also a form of sabotage.
Something you do acknowledge just a few lines down, which is why it strikes me as an odd statement.
Not only that, if maintainers are overwhelmed by AI slop submissions, it makes it much easier for a human actor to slip in actual malicious changes.
So while I fully agree that a complete ban on code that has been anywhere near AI is not feasible, I don't believe for a second that we are at a point in time where we can say AI-aided submissions aren't causing tangible issues. That some maintainers take the realistic outlook that they can't avoid such submissions entirely doesn't mean the issue is no longer there.
Great insight! Indeed, AI can be weaponized by a malicious human to create a distracting/confusing environment that lets rogue code slip into the project.
This is maybe where I risk a controversial take: I don't think there's any meaningful difference between an experienced programmer and an agent writing code guided by an experienced programmer. I believe this because at the end of the day, no matter what the Salesforces and Cursors of the world want you to believe, there's no such thing as a truly autonomous AI. All of the choices - regardless of whether an LLM wrote the code or not - still trace back to a human.
The human chooses to use the LLM to write code. The LLM doesn't do anything on its own. And thus it's also on the human to make sure the LLM doesn't do bad things. This is why I don't like the blanket term "vibe coding", because what many people - myself included - do isn't having code generated on a general "vibe". If used correctly, the tooling absolutely can make you a more productive programmer. I'm a graphic designer, but I know my way around modern JS tooling and PHP, along with their most important libraries and frameworks. Claude absolutely takes work that I could do myself and does it orders of magnitude faster.
That doesn't mean that it's perfect, or even good. If my mother sat down with ChatGPT or whatever and typed in "make me a todo list app" (or heaven forbid a password manager), and then something failed, she'd have no recourse other than to tell the system that something doesn't work. If you vibe code like this, if you have no background in at the very least the tools the LLM uses on your behalf, I don't want your pull requests. Chances are I'll have to fix your stuff, with or without AI.
But that also applies to non-AI programmers with little experience. I think the KeePassXC team is absolutely correct in their stance. All, and yes, all code changes need to pass some form of verification or at least testing. If you don't know what your LLM is doing and can't read and comprehend the code it outputs, you shouldn't really use those tools.
That's to say nothing of the, in my opinion, very obvious point that most of the time you can't tell whether code is AI-generated. If you can, you're the skilled programmer these tools are meant for. If you can't... well, let's just say the Network tab in Chrome will only get you so far.
(And for what it's worth, I think companies like Anthropic knew this when they made Claude Code only work in the terminal initially. You want a barrier to entry. Not to make it less accessible, but to make sure only people who know enough to understand what's happening use your tool.)
The problem KeePassXC is facing is that there might be an influx of new pull requests from people who don't have the background or experience but use these tools to "improve the app". And that... well, I don't maintain a big open-source app with outside contributions, but I can absolutely see this getting annoying for the maintainers. Still, I applaud their surprisingly nuanced take here - and I really love that they say this while also saying "we won't ever integrate AI tools into the app".