The author of this blog post is the "CEO" of a company that sells AI products. He also invests in multiple AI companies:
https://xcancel.com/mattshumer_
https://shumer.dev/about
Uf... I have a feeling I've read exactly the same article dozens of times over the last two years.
Founder of an AI startup: "AI can finally replace developers. Just use the latest model. It's finally good."
And for me personally, AI constantly fails at half of even simple tasks. Sometimes it works, but most of the time I have to verify or fix every part of the task. For me it would often be simpler, and maybe quicker, to just do the task manually.
Uf... Maybe I'm using it wrong, but here's what bothers me: why is every AI startup trying to sell an AI product instead of just doing development themselves using their own AI product?
Upd: to explain a bit, I'm not building some small new app with AI; I'm trying to get AI to help with development and fixes in a fairly large existing codebase.
What you should actually do
1. Sign up for the paid version of Claude or ChatGPT.
2. Push it into your actual work... don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal.
3. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up.
4. Have no ego about it. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad... It's not.
5. Get your financial house in order.
6. Think about where you stand, and lean into what's hardest to replace.
I numbered most of the "call to action" items because the list really nails down the eerie feeling I get here. This feels like an MLM scheme: buy in, go all in, and anyone dismissing it is wrong. Heck, one of the bullets after where I cut off the quote above reads: "Your dreams just got a lot closer."
I'm in tech, and I'm sure anyone who knows my handle here knows I'm pretty anti-AI (spoiler: I work in games. Kind of a mess for many reasons right now). But I do want to give it a fair shake and see what's out there, what's being done, and how and where I can potentially utilize the eventual, ethical forms of this new tech.
I didn't see anything new here, just another iteration of "Hey, [new version] really works this time!". And maybe it does, but I also know my industry. There are terabytes of web source code to train on; 99.9999% of games are not free to consume in the same way. So accomplishments that seem like magic in web and mobile tend to fall completely flat for games programming.
And let's not even get started on generative art as of now ("now" being late 2025). People (i.e. the stocks) panicked over Genie a few weeks back, but it has the exact same effect as any other generative art. You look at it for a minute and think "ooh, that's cool". Then the longer you engage, the more drastically the illusion falls apart, and you remember that people want to sell this to you for $60-70 instead of it being a neat free tech demo. I don't yet see extended workflows built on this making game development any easier than the old pipelines.
TL;DR: I am anti-AI from several angles as of now, but I still want to stay open about it from a purely technical POV. My most generous interpretation of pieces like this is that they vastly overestimate how wide-reaching these LLMs can be. I can definitely see disruption to certain subsets of the industry, so I might heed the warning if I were a web dev or in any similar job managing CRUD-style applications, because I can see it being good enough for "I need a basic website/app with few performance concerns" (which, if we're being real, is many websites/apps. There's a lot of mediocrity in these domains that people are used to putting up with).
But that doesn't mean every programmer in every field is in danger. My field isn't immune per se, but any field where code isn't the (only) hard part is going to resist much more. I hope those more optimistic than me can at least meet me here.
There was a time around the end of the year in... 2022, I think?
I was at a Christmas party telling everybody about two headlines I had read in the news recently. One was the release of GPT-3, which could produce convincing text in response to a prompt. The other was fusion energy reaching a Q-value above 1. I'm not sure people really understood what I was on about, but now at least one of those things is at the forefront of people's minds. It would be great if we were paying more attention to fusion power, though.
Anyways, friendly reminder to touch grass (once it's not covered by the snow anymore). You are valuable because you're a human being that can be present in other people's lives, and AI cannot replicate that. People need connection and community, and that's not going out of style any time soon.
I'm going to give a small story of my journey with AI in the last year.
A few months ago, I was using Copilot inside an IDE. I would ask it questions about a single file at a time, maybe something like "upgrade this Node.js code from CommonJS to ES modules". It could kind of do it, but it would make mistakes, and I was disappointed.
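For anyone unfamiliar with that kind of migration, here is a minimal sketch of what the CommonJS-to-ES-modules rewrite looks like (the function names here are invented for illustration; they're not from the commenter's codebase):

```typescript
// Before (CommonJS):
//   function capitalize(s) { return s.charAt(0).toUpperCase() + s.slice(1); }
//   function greet(name) { return `Hello, ${capitalize(name)}!`; }
//   module.exports = { capitalize, greet };

// After (ES modules): same behavior, `export` instead of `module.exports`.
export function capitalize(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

export function greet(name: string): string {
  return `Hello, ${capitalize(name)}!`;
}
```

The mechanical syntax swap is the easy part; real files often mix patterns like conditional `require` calls or reassigned `module.exports` that don't translate one-to-one, which is presumably where the model tripped.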
I also used to try to get it to fix security issues by giving it a single problem to work on at a time. Like "update this dependency with a replacement". Again, it would do some of the work but mess it up a bit.
But in the last few weeks, I've been using AI in a different way. I've been using either Claude Code or Copilot CLI to scan an entire project, usually via the /init command in the CLI. It can quickly figure out all the tech in the project and talk about the architecture. It can generate README files and architecture diagrams (in draw.io or other formats). It can also build and test the application and check whether changes break the code.
I'm not worried about how much context it can remember, because it generates markdown files that I can read and modify, and that it can also read and modify in future sessions, so we aren't always starting from scratch.
It still makes mistakes, but I can nudge it in the right direction by giving it more information. I can work with it to build custom agents, instructions, and skills. And it is really starting to save me time and create useful assets (like documentation) that developers don't usually produce well.
When it makes changes, it knows to automatically run the unit tests, and it may notice that it has broken the code, back the change out, and try something else.
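That run-the-tests-and-back-out behavior can be pictured as a simple loop. This is only an illustrative sketch of the workflow described above, not the agent's actual implementation; the `Change` type and function names are made up:

```typescript
// A change the agent can apply and undo.
type Change = { apply: () => void; revert: () => void };

// Apply the change, run the test suite, and back the change out on failure.
export function applySafely(change: Change, testsPass: () => boolean): boolean {
  change.apply();
  if (testsPass()) {
    return true;   // tests still green: keep the change
  }
  change.revert(); // tests broke: back it out
  return false;    // signal the caller to try something else
}
```

The interesting part is not the loop itself but that the agent decides on its own when to run it and what "something else" to try next.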
This is helping me give projects to other developers without spending my time explaining how it works or writing the documentation.
It's making me a bit worried, because I can see that it is going to mean we hire fewer entry-level developers right away. And I wonder how much better it's going to be in a few months.
It's also making me worried because it is psychologically manipulating me. If I tell it something, it says things that make me feel smart: "You're right! That's a great insight!". And it talks to itself about me, in comments I can see, like "The user is concerned that this change may cause a bug. The user is correct; I made this too complicated and I should do it their way". If you're like most people, you like compliments, you like to feel smart, and you like it when someone agrees with you. It affects your judgement a bit.
This is a lot for us to handle in a short time. I don't think we're ready for the world changing this fast.
No, I don't think that it is thinking. I don't think that it is conscious. But I think it is starting to be able to do useful things that we used to need people to do. And I'm not sure what we are going to do instead.
BTW, I'm mostly using Claude Sonnet 4.5. I have access to Opus 4.5 and 4.6 but have hardly used them yet. And I haven't scratched the surface of using MCP to access other systems in our company.
We haven't been ready for the world to change this quickly since... the internet, realistically. Maybe the television, or even the printing press. Information acceleration is something to behold, for sure.
Anyways, I spend a decent chunk of time writing code. Even if I could feed my codebase into an LLM and have it tell me what to do next, I don't actually want to do that. I like using my rational faculties, and programming feels a lot more rewarding than doing sudoku.
I agree, the tough situation I’ve found myself in is all my coworkers use AI tools religiously and get pretty decent results (with some drawbacks), and management is bullish on it so I kind of have to adapt. I can work a lot faster now which is nice, but I do feel my programming skills atrophying and I have to be hypervigilant to not delegate too much “thinking” to the LLM.
Despite its utility, this author seems full of shit, like so many people writing about AI. He severely underplays the drawbacks, including the fact that it still makes plenty of mistakes. But I'm sufficiently concerned about how we're handling AI now that I am worried about the future.