I use Microsoft Copilot at work and I can see after a pretty short time that it becomes a crutch and weakens my skills. It's a lot easier to let it write a whole page of code and then see if that works rather than figuring out how to do it myself. I have to force myself to carefully review what it is doing.
Besides how it is weakening skills in general, it also makes a lot of mistakes. I see the following pattern a lot; this is just a rough example:
Me: I need to sort an array of objects descending by description field
Copilot: Proceeds to write code to sort an array, but does an ascending sort
Me: I said I need to sort descending
Copilot: Oh yeah, sorry. Here it is with descending sort
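For the record, the correct version is a one-liner once you spot the problem. Something like this (a minimal sketch with made-up names, not Copilot's actual output):

    interface Item {
      description: string;
    }

    const items: Item[] = [
      { description: "banana" },
      { description: "apple" },
      { description: "cherry" },
    ];

    // Descending sort by the description field: compare b against a.
    const sorted = [...items].sort((a, b) =>
      b.description.localeCompare(a.description)
    );

    console.log(sorted.map((i) => i.description)); // [ "cherry", "banana", "apple" ]

The ascending version it kept producing just has a and b the other way around, which is exactly the kind of detail that slips past a quick review.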
This example was easy to catch but it is very odd how sometimes it misses a key thing in the prompt, and then corrects it in a second or third prompt. And it needs me to be highly skilled to catch the errors. There is no telling how many obvious bugs are being created by AI every day. Sure, there are always a lot of bugs created by humans too but I have a feeling that the AI ones are going to be worse and harder to spot.
I also think that the AI is getting worse, not better, as it feeds on its own AI slop.
I have LLMs write code for me all the time, but I always read every line they give me and I rewrite or modify about 75% of those lines. But it’s still nice to have them write out something to modify instead of having to type out my own boilerplate.
They’re especially useful for textual transformation tasks. OpenAPI specs to TypeScript. Python to JavaScript. And the newer ones can be pretty helpful with debugging.
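To make the OpenAPI-to-TypeScript case concrete, here is roughly the kind of transformation I mean (a made-up schema and the interface an LLM would typically produce, not output from any specific model):

    // Given an OpenAPI schema along the lines of:
    //   User:
    //     type: object
    //     required: [id, email]
    //     properties:
    //       id:    { type: integer }
    //       email: { type: string }
    //       name:  { type: string }
    // the model writes out the corresponding TypeScript type:
    interface User {
      id: number;
      email: string;
      name?: string; // optional, since it is not listed under "required"
    }

It is mechanical work, which is exactly why it is nice to hand off, and also easy to check.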
I recently had an issue where a function was returning false and not executing as expected. I gave OpenAI’s o3-mini 3 different decompiled .class files as context and described my problem. It traced through the execution path for my given inputs and correctly identified what I was doing wrong. In this case it was simple, I was calling the wrong function. But it was able to describe exactly why the function I was using was wrong, leaving me with a deeper understanding of the code. The support group for this code on Discord wasn’t any help so I’m glad I had the LLM.
I've found it sucking really badly at those simple tasks. Just today I was trying to get it to take a nested list of data and translate it to HTML lists. It continually screwed up, and asking it to fix things would regularly lead to it making things worse. This was with ChatGPT. The list was only about 40 items long and it couldn't manage that. I've found it completely unreliable for translation work, which is sad, because that's an incredibly valuable use case: replacing busywork.
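For context, the task was basically this transformation, done directly on the literal data rather than via code (the data shape here is made up, but it shows what I was asking for):

    // A nested item: a label plus optional children.
    interface Node {
      label: string;
      children?: Node[];
    }

    // Render the nested data as nested <ul>/<li> markup.
    function toHtmlList(nodes: Node[]): string {
      const items = nodes
        .map((n) => {
          const child = n.children && n.children.length ? toHtmlList(n.children) : "";
          return `<li>${n.label}${child}</li>`;
        })
        .join("");
      return `<ul>${items}</ul>`;
    }

    const data: Node[] = [
      { label: "Fruit", children: [{ label: "Apple" }, { label: "Banana" }] },
      { label: "Vegetables", children: [{ label: "Carrot" }] },
    ];

    console.log(toHtmlList(data));
    // <ul><li>Fruit<ul><li>Apple</li><li>Banana</li></ul></li><li>Vegetables<ul><li>Carrot</li></ul></li></ul>

A recursive helper like this is trivial, which is what made it so frustrating that the model couldn't keep about 40 items straight.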
It's funny. GPT-4 used to be pretty good at that. But I had o3-mini fail hard at that kind of work recently. It's a pain keeping track of which models are good at which tasks.
I use Cursor as an IDE. It lets you conveniently switch between different LLM providers so you can use Claude/o3-mini/4o where appropriate.
For me at least, the tricky part about software development is not really the actual coding, but making sure you have covered all edge cases, that the implemented business logic does what everyone else expects it to, that the model design doesn't write itself into a corner making future changes harder, and most importantly, that the additions or changes don't break existing business logic. And more often than not, those things only materialize when I am in the middle of implementing what I thought was a trivial change. I think letting an LLM do the grunt work will make me miss many of the "wait a minute" moments that naturally occur when you are forced to think about how to implement stuff. I am not seeing LLMs anytime soon being able to fully comprehend decades worth of changing business logic and legacy requirements on top of getting new features right.
Yes. Actually I was talking to some coworkers about this the other day.
In my whole career, I almost never got complete requirements for anything. Maybe in my early career when we were doing "waterfall" development, there would be product managers/product owners who would try to think of everything and document it. Since the takeover of agile, development goes like this:
Product owner gives some vague guidelines, and maybe some bullet points
Developers go and try to come up with a solution. While they are doing that, they actually come up with most of the use cases and think of all kinds of edge cases
Developers report all the new use cases to the product owner, who asks the relative cost of them and picks and chooses
AI is not going to help much with this. It's actually kind of hilarious that people think that developers will be replaced within a few years. I mean, they might be replaced because a lot of managers and business owners don't understand software development, and products will be even worse than they are now.
What’s funny is many of those people who predict the end of software engineers are salivating either at the business savings or their ability to finally build “their app” without engineers getting in their way. I don’t think they realize that AI that is sufficient to replace programmers is going to replace whatever they do as well.
It'll replace them before it replaces engineers. The grand majority of project management is checking in and dissemination of information. An AI could do that work more easily than it can write complex software.
I was part of the San Francisco dot com boom as a creative director. Coming from Hollywood, I pitched hundreds of digital games and animated series for the new internet over that brief period.
Looking ahead, I worried about our imagination and creativity the way people are currently worrying about our reasoning skills. When you have only world-class people generating ideas, everyone else stops dreaming.
It didn’t become quite as widespread as I feared, but I still meet many people who don’t allow themselves to create or develop ideas because they can’t compare to what they consume online.
GenAI is transforming the workplace by reshaping not only how tasks are performed but also how individuals think and solve problems. While it offers efficiency, there is a growing concern about the erosion of critical cognitive skills as workers increasingly rely on AI for decision-making and problem-solving. This shift can lead to a workforce that is adept at consuming AI-generated outputs but lacks the ability to generate original insights.
The blog post outlines three waves of cognitive transformation driven by technology, culminating in the current phase where GenAI automates entire cognitive workflows.
Funny enough I was talking about this to someone.
Essentially that we are losing our ability to think critically, and that with tools like AI, we need to look at them as that - tools.
They shouldn’t be used to replace the work we do, but rather to enhance it and make it easier.
Knowing how to research, reading journals, understanding caveats and exceptions to different studies, versus asking ChatGPT to confidently regurgitate a potential piece of misinformation and believing it.
Starting with teaching safety - just like we do with children about being online.
The same arguments have been said about Web search and Wikis. To a degree, they were right. We did lose the old ways of doing things, but we learned new ways. AI is a new level for sure and there's legitimate fear in putting ourselves into the hands of corporations that can easily control the narrative with their AI. I don't think there's any putting the genie back, so instead we need to look for ways to guard ourselves from the new misinformation highway.