33 votes

Using AI generated code will make you a bad programmer

54 comments

  1. [18]
    PendingKetchup
    Link

    This is an interesting mix of great insights:

    • More powerful tools fostering dependence
    • The ultimate class goal being the removal of software engineers from the process of producing software
    • Using code generators functionally replacing code writing with code review, which is often worse

    And ideas that seem wrong to me:

    • The idea that the key part of software engineering is not figuring out what to call all the pieces of the problem and what those pieces have to do and what the implications of those choices are, but actually is the mostly-rote process of producing the syntax called for by each comment.
    • The idea that writing boring code is good for you like some kind of digital vegetable. People have been writing code generators and trying to express programmer intent in more-natural languages forever. Hardly anybody programs by flipping the front panel switches to encode each instruction anymore, and indeed many have forgotten how, but that's not necessarily a bad thing.
    30 votes
    1. [10]
      archevel
      Link Parent

      The idea that writing boring code is good for you like some kind of digital vegetable.

      I think the key thing to realize about this is that what is boring code for a senior dev is something a junior has little to no experience with. Writing "boring" code is writing something you've done a bunch of times before. If the problem is new, if the tools are new, or if it's been a sufficient amount of time since you last did it, then it likely won't be "boring". I agree with you that you shouldn't write boring code (to some extent it's unavoidable, but avoiding it should be the aspiration). But I don't think "boring" code can be objectively defined. It can only be defined in a subjective context.

      On another note I've been thinking recently that it might be helpful to flip the table on AIs. Maybe they are more useful for reviewing code I write rather than writing code I have to review... Now, I don't want Clippy AI(tm) to critique my code every step of the way, but maybe when I commit it?

      21 votes
      1. [8]
        em-dash
        Link Parent

        Maybe they are more useful for reviewing code I write rather than writing code I have to review

        That's a thing. It is not good.

        Effective code review requires domain knowledge and codebase knowledge, in addition to "best practices" knowledge. That's why code reviewers tend to be more senior employees; you can't just hire an intern and give them a How To Do Software Architecture Real Good book and tell them to review all the code everyone else writes.

        13 votes
        1. [5]
          mattinm
          Link Parent

          I’m the senior programmer on a project right now filled with entry-level developers, so either I or the one other experienced developer has to sign off on PRs. What I’m struggling with right now is that the entry-level developers have essentially two consistent deficiencies:

          1. They take any review notes as criticisms, despite being told multiple times not to sweat it, and that the PRs are basically there to act as a mentoring step as well as a quality check.

          2. They are almost universally unwilling to reach out when there are issues in the code, despite me being available for paired programming essentially any time. In standups, I’ve basically learned that “everything’s going well” is tantamount to “I’m not sure what I’m doing”.

          If you’ve got any experience, do you know any strategies to make entry level developers feel more confident? To trust that reaching out to me for help doesn’t mean I’m going to be upset or that it’ll reflect in their performance reviews? Etc?

          Thanks!

          10 votes
          1. ButteredToast
            Link Parent

            I don’t have any experience managing junior devs, but I can say, as an IC with almost a decade under my belt, that I’ve never enjoyed pair programming (especially early on), and if I knew that it was the likely outcome of asking one of my seniors a question, I’d probably try to figure the problem out on my own instead. The type of help that worked well for me was more conceptual in nature and maybe took place in front of a whiteboard instead of at a desk. Just don’t make me write code and hold a conversation at the same time.

            I might just be weird though, so YMMV.

            10 votes
          2. BroiledBraniac
            Link Parent

            I think a lot of this depends on tone. If you come off as harsh and pedantic, no matter how good your advice is, they may not want to listen to you. I don't know if that's the case in your situation. I'm an EM and the tone I try to give off is "I respect your ability as an adult to make decisions, but I'm here to help if you need it." Generally people don't feel afraid to ask questions in a 'no stupid questions' environment.

            4 votes
          3. [2]
            archevel
            Link Parent

            Have you tried asking them for help? If they don't see seniors (which is what they presumably aspire to be) asking for aid/advice/help, then they'll likely emulate that behavior. Asking for help can make some feel vulnerable (e.g. what will they think of me if I can't solve this myself?) so showing them that it is safe and normal, by actually showing that vulnerability, is a way of establishing trust.

            This can also lead into teaching moments. Maybe you have a .net app with some Entity Framework query that is looped over multiple times. Great! You can then discuss why this happens, why it is a problem, and what an appropriate approach is in general. Maybe you have a fairly simple feature that you could get done in a few hours. You can then ask someone else to pair with you on that. In short pull people in to help you rather than pushing yourself onto others :)
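
            The Entity Framework example above is .NET, but the same N+1 anti-pattern (one extra query per row of a loop) can be sketched in Python with sqlite3; the schema and names here are made up for illustration:

```python
import sqlite3

# Toy schema standing in for the EF example above; the anti-pattern is
# issuing one query per row inside a loop (N+1 round trips) instead of
# a single joined query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Ben');
    INSERT INTO books VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

def titles_n_plus_one(conn):
    # 1 query for the authors, then 1 more per author: N+1 in total.
    out = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM books WHERE author_id = ? ORDER BY id",
            (author_id,),
        ).fetchall()
        out[name] = [title for (title,) in rows]
    return out

def titles_single_query(conn):
    # The same result in a single round trip via a join.
    out = {}
    query = """
        SELECT a.name, b.title FROM authors a
        JOIN books b ON b.author_id = a.id
        ORDER BY b.id
    """
    for name, title in conn.execute(query):
        out.setdefault(name, []).append(title)
    return out

assert titles_n_plus_one(conn) == titles_single_query(conn)
print(titles_single_query(conn))  # {'Ann': ['A', 'B'], 'Ben': ['C']}
```

            The teaching moment is exactly the one described: show why the loop version issues N+1 queries and how the join collapses them into one.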

            Note that some people don't like to do pair programming, so a short whiteboard session where you go over the design together might be better in some cases (like what @ButteredToast suggests). There are lots of other considerations too: level of difficulty, describing the problem rather than the solution, being open to changing your approach (even if you know your way is better) as a way of establishing trust, etc.

            Also, this stuff is really hard! It's easy to fall back on just cranking out code as a senior dev, since that's what you're probably comfortable with. But the key to being a really good senior/lead dev is to enable others to become better developers, and doing that requires a different skill set than regular dev work.

            4 votes
            1. Maxi
              Link Parent

              Have you tried asking them for help? If they don't see seniors (which is what they presumably aspire to be) asking for aid/advice/help, then they'll likely emulate that behavior.

              I have similar problems to mattinm on a project, and this is what I suspect it is related to. The problem is that I am the only senior on the project, so there's no one for me to ask for help when I get stuck.

              I am sure this is the main reason our juniors do this: say in standups that everything's OK, and then get angry when their code is reviewed.

              3 votes
        2. [2]
          archevel
          Link Parent

          Yeah, I figured it must've been done already. I'm still hopeful it could be done well and be helpful for catching e.g. security issues, or maybe suggesting performance optimizations. But as you say, good code reviews are hard... and often come too late (it's better to discuss the problem with someone first, come up with an initial design, code it up, and iterate).

          1 vote
          1. em-dash
            Link Parent

            The problem is code exists in a context. Is this tiny helper function a security problem? That depends on how it's called. Is this database query going to be slow? That depends on the indexes and size of the data. If I went around reviewing pull requests at work leaving comments like "make sure there are indexes!" without actually looking to see if there are indexes, or if the table being queried has a two-digit number of rows in practice and doesn't need indexes, I'd expect to be quickly told to knock it off.
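
            Whether that index matters really is checkable from context; a minimal sketch with Python's sqlite3 (table and column names invented), where EXPLAIN QUERY PLAN reports deterministically how the query will run:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 100) for i in range(1000)])

def plan(conn, sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the strategy.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

sql = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(conn, sql)  # typically a full scan, e.g. "SCAN orders"
conn.execute("CREATE INDEX idx_customer ON orders(customer_id)")
after = plan(conn, sql)   # typically "SEARCH orders USING INDEX idx_customer ..."
print(before)
print(after)
```

            Of course, whether the index is worth it still depends on the real table size, which a context-free reviewer can't know.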

            Code review requires reasoning, something LLMs can only emulate. They are good at one thing, generating text. Generating text is not the hard part of code review.

            Personally, I think there's some mostly-unexplored territory around automated code quality, but LLMs ain't it. I'd like to see a small fraction of this level of effort go into better deterministic static analysis tools. If I'm touching a tiny helper function deep inside a codebase, go look at all its callers, and their callers, etc., and see what behavior actually changes, and tell me - deterministically, with certainty - if that will cause issues.
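
            A toy sketch of that kind of deterministic analysis, using Python's ast module to find the direct callers of a helper (the snippet is invented; a real tool would also chase transitive callers and actual behavior changes):

```python
import ast

SOURCE = '''
def helper(x):
    return x + 1

def a(): return helper(1)
def b(): return a() + helper(2)
def unrelated(): return 0
'''

def direct_callers(tree, target):
    # Walk every function definition and record, deterministically,
    # which ones contain a direct call to `target`.
    callers = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Name)
                        and call.func.id == target):
                    callers.add(node.name)
    return callers

tree = ast.parse(SOURCE)
print(sorted(direct_callers(tree, "helper")))  # ['a', 'b']
```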

            4 votes
      2. GOTO10
        Link Parent

        I agree with you that you shouldn't write boring code

        Biggest (work) compliment someone can give me is "that's a boring PR".

        8 votes
    2. [4]
      Eji1700
      Link Parent

      The idea that writing boring code is good for you like some kind of digital vegetable. People have been writing code generators and trying to express programmer intent in more-natural languages forever. Hardly anybody programs by flipping the front panel switches to encode each instruction anymore, and indeed many have forgotten how, but that's not necessarily a bad thing.

      I feel like this is so core to some of the more elitist coders out there. Nothing is ever good enough except for THEIR first sacred cow. There are legit reasons to be annoyed (people using the wrong tool for the job because it's the only tool they know), but the massive overreaction strikes me as humorous, given that my dad built his first computer back when you had to wire-wrap and only had assembly. He sure didn't bitch about better languages as they came out.

      Edit-
      I think the right tool for the job analogy really sticks. You do not want someone coding military hardware controllers in javascript. Likewise you probably don't need to do your basic CRUD api in assembly. The problem is that a lot of people will easily agree with the first, but then "tut tut" you for picking an appropriate tool for the second.

      6 votes
      1. [3]
        jackson
        Link Parent

        He sure didn't bitch about better languages as they came out.

        I think the big distinction here is that AI prompting is non-deterministic, produces artifacts in a human-readable programming language, and prompts are not saved alongside generated code (at least in the environments I've seen).

        This means prompters must be skilled at reading code (and probably writing code – even the boring parts!) to correctly evaluate their generated code. Code reviewers (other than the prompter) will be missing the context of the prompt, which may have included a fundamental misconception of the task's goal, making the code harder to review. Even if the prompt is included, it will produce different output if re-executed using today's models.

        If I write a Python script, it will compile to the same bytecode (at least on the same machine) and run deterministically unless explicitly written to do otherwise. Sure, different environments may produce different results (OS differences, environment variables), but we have tools to work around this (containers, standardized CI environments).

        5 votes
        1. creesch
          Link Parent

          will be missing the context of the prompt, which may have included a fundamental misconception of the task's goal, making the code harder to review.

          How is that different from a person having written the code based on a different understanding of the task at hand? In my experience that makes the code equally difficult to review: in both cases you are talking to someone who has some misconception in their head and is not sharing their complete thoughts with you.

          The result will be equally frustrating and equally wrong.

          Other than that, I honestly don't have much to add to the entire discussion in the comments, as the framing is weirdly narrow or overly broad depending on how you look at it. Using one line of generated code, using a generated function, generating your entire codebase, generating several examples and using those as input: all of these fall somewhere on the spectrum of using AI-generated code.

          They also are a variation on how people use the internet and sites like stack overflow.

          They are not all equally damning, and somehow AI as a more general tool has been excluded from the article, even though that seems to me the most valid and interesting use case to discuss. Honestly, feeding some concepts to different LLMs and asking them for general and specific feedback has actually made me a better programmer in some ways. Not because they are always right, but because they make me double-check a lot of stuff I might not have otherwise.

          Granted it only has helped me become better from a position of already being capable and being able to critically look at their output. But that is the same for many other tools we use.

          Sorry, bit of a tangent in the context of replying to you specifically. Still on topic for the entire conversation though.

          6 votes
        2. Eji1700
          Link Parent

          Oh sure. I broadly agree that AI is NOT the be-all end-all for coding, and am probably one of the biggest skeptics you'll meet on its benefits.

          As before, right tool for the right job. I find the AI decent on those simple things. I dip in and out of having to write SQL, so sometimes, for something more complex like a windowing function across several nested queries, I'll throw parts of it at the AI for help, and it does "ok" there.
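
          For a concrete (made-up) example of that kind of query, here's a window function over a nested query run through Python's sqlite3 (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 30), ("west", 20), ("west", 5)])

# Rank rows within each region by amount, using a window function inside
# a derived (nested) query, then filter on the computed rank.
rows = conn.execute("""
    SELECT region, amount, rnk FROM (
        SELECT region, amount,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
        FROM sales
    )
    WHERE rnk = 1
    ORDER BY region
""").fetchall()
print(rows)  # [('east', 30, 1), ('west', 20, 1)]
```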

          I'm more just ranting about the general stuck-in-the-mud mindset of a lot of coders who seem to think that anything that isn't their tool is somehow inferior. If even I can see a use for AI, then I feel that "AI will make you worse" is just doomsaying. The bad coders will mostly still be bad coders, no matter the toolset. But there's some subsection of coders who think that the learning curve somehow keeps the bad coders from getting through (or that that's a good thing).

          2 votes
    3. [3]
      Rudism
      Link Parent

      The idea that writing boring code is good for you like some kind of digital vegetable.

      Fair point. In my head (and not clearly expressed in the article), being forced to write boring code can be good for you in the scenario where that drives you to investigate and figure out clever ways to reduce or eliminate that workload yourself--something that I feel is a primary driver of self-improvement as a programmer, so long as it's not taken too far.

      4 votes
      1. [2]
        TangibleLight
        Link Parent

        There's also something to be said about the level of control and presence when using traditional-style code generators or static analysis that I think is missing in AI prompts. I admit I'm not well-practiced in using AI for these things so maybe there are ways to be similarly present while code is produced, but from what I've seen proponents of AI tools don't use it that way.

        More algorithmic code generators force you to engage with the structure a bit, find patterns, and synthesize some way to generate those patterns. It also forces you to contend with the mere fact that the repetition is tedious, and likely indicates some greater architectural problem. You might do a rough cost-benefit analysis and determine it's better to leave the architectural problem in order to ship faster - but at least you're engaged with that decision.

        AI feels more like handing the rote task off to an intern and waiting for them to return after they do all the tedium. You aren't even there while they do the work, so how can you engage with the problem? If you have this AI-shaped hammer, everything looks like a tedious nail, and I worry about the larger architectural issues that will fester if people don't continue to engage with the systems they maintain. All the studies I've seen about increased code churn and bugs seem to line up.

        I do have to concede that AI does seem to do a fine job at the tedious tasks, so if you do engage with the problem and decide that reviewing the output of a hyper-flexible code generator is the best approach, then sure, let the AI do it faster.

        7 votes
        1. Jakobeha
          Link Parent

          I like algorithmic refactoring tools much more than LLMs because I trust that the code is refactored properly. Even when the algorithm fails, it fails in predictable, reasonable ways.

          e.g. a good "rename method" tool in a statically-typed language will:

          • Only rename calls on values of the correct type. If I rename Foo#bar, it won't touch calls to Bar#bar, even though both look like some_value.bar(...) without context.
          • Skip (or ask to confirm) renaming in strings and comments, because the tool isn't smart enough to guarantee a literal occurrence of the method's name actually refers to the method.

          Sure, I have to search for and rename true negatives in the strings and comments manually, but it's easy (I can "find and replace in project" the old name). Importantly, I can rely on references outside of strings and comments being renamed properly.

          An LLM doing "rename method" may be even smarter, because when it finds the method name in a string or comment, it can use English comprehension to determine whether it's an actual reference. But (AFAIK) there is no LLM-based tool that can guarantee it won't rename literal occurrences that are not references to the method, or that it will rename every reference (it may miss some). So when I ask an LLM to refactor, I have to check true negatives and false positives, and look over every line of code the LLM changed. At this point it's faster to skip the LLM and go straight to "find and replace in project" and looking over every occurrence manually.
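
          The strings-and-comments distinction can be sketched in Python (which can't do the type-aware part, since it's dynamically typed; ast.unparse needs Python 3.9+). A textual find-and-replace corrupts the string literal, while an AST-based rename touches only attribute accesses:

```python
import ast

SOURCE = 'obj.bar()\nprint("call bar here")\n'

# Naive textual rename: also rewrites the string literal.
textual = SOURCE.replace("bar", "baz")
assert '"call baz here"' in textual

class RenameMethod(ast.NodeTransformer):
    # Structural rename: only attribute accesses named `bar` are touched,
    # never string literals or comments. (A real IDE tool would also use
    # type information to rename only Foo#bar, which ast alone cannot do.)
    def visit_Attribute(self, node):
        self.generic_visit(node)
        if node.attr == "bar":
            node.attr = "baz"
        return node

structural = ast.unparse(RenameMethod().visit(ast.parse(SOURCE)))
print(structural)  # obj.baz() is renamed; "call bar here" is left alone
```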

          10 votes
  2. [16]
    brogeroni
    Link

    I disagree with this post a lot.

    It reads like someone complaining how automatic transmission is gonna make people bad drivers, and how real drivers all drive stick.

    My stance is that while AI will get better and likely become standard kit with your IDE, it won't get as good as a human (at least not within the next 20 years). And humans, being the smart little cookies that we are, will learn through many, many examples, even if it's just accepting and rejecting AI code suggestions. This means more people coding, and more stuff being created. As someone who likes coding and wants more people to join, it's a win-win if you ask me.

    Point by point:

    • You Rob Yourself of Learning Opportunities: I believe that eventually, after writing enough AI code and accepting and rejecting suggestions, people will be able to differentiate between bad and good code. They're smart enough for that. And in the meantime, you can make cool things! The alternative is losing lots of people to boredom and attrition, reducing the number of programmers in the future.

      • as an aside, think about how many developers and infosec professionals of today were yesterday's script kiddies. I'd imagine the vast majority of them were at some point.
    • Skills You Already Have May Atrophy: going back to the car analogy, sure. I can no longer assemble a 2-stroke engine using only parts I made in my shop, and that's OK. Using a few commercial off-the-shelf parts (even without AI you're probably importing libraries for sorting, let's be honest), I can make a go-kart multiple times cheaper, more efficient, and more performant than the people who invented automobiles in the first place could. We should stand on the shoulders of giants. And if you want, you're not locked out of implementing your own sorting algo whenever you feel like it.

    • You May Become Dependent on Your Own Eventual Replacement: amazing! If this does happen, we enter a new post-work era and yada yada yada Star Trek utopia. But I don't really buy into that. In the case where this doesn't happen, programmers will still be programmers, and we'll just be doing stuff at a higher level than before. It'll be a different job, not no job.

    • Do You Even Own AI Generated Code?: pragmatically speaking, there's so much money and potential riding on this already. I doubt the politicians would be willing to risk losing geopolitical standing in order to preserve... licensed code? Even if it does become illegal one day, the ramifications for the entire software industry would be so large that I can't see any extremely painful penalties being applied. But I'm not a lawyer.

    • Your Code Will Not Be Respected: I'd argue that if you can distinguish between good and bad software, and you led the AI to the solution that it wrote down for you, you were just as much of an artist. Again, it's just programming at a higher level of abstraction compared to before. I look upon the guy who programmed RollerCoaster Tycoon in assembly with great respect. But that doesn't mean I don't also respect people making games in Scratch or Roblox.

    • You Are a Masochist Who Prefers Code Reviews to Coding: valid

    20 votes
    1. [9]
      heraplem
      (edited )
      Link Parent

      amazing! If this does happen, we enter a new post work era and Yada Yada Yada star trek utopia.

      Does anyone really think this?

      Society has never been kind to laborers who have been made obsolete in the past. Why would anything be different this time?

      11 votes
      1. [5]
        brogeroni
        Link Parent

        If all software is made completely automatically, imagine the efficiency gains you'd get from... Everything. Even if we don't have the same relative status as before, I'm sure you could live comfortably, and probably have decent quality of life improvements.

        But IMO it's much more likely that this whole AI bubble's sigmoid flattens out, and then a good 10-20 years pass before we have the real killer use case for it (similar to many of America's previous bubbles, like dark fiber).

        8 votes
        1. [4]
          jackson
          Link Parent

          If all software is made completely automatically, imagine the efficiency gains you'd get from... Everything. Even if we don't have the same relative status as before, I'm sure you could live comfortably, and probably have decent quality of life improvements.

          What makes you think the owners of this efficiency will pass down these benefits to the rest of us? Untold efficiency sounds like a great excuse to fire your workforce and distribute the earnings to a smaller pool of people.

          8 votes
          1. X08
            Link Parent

            All shall bow to our shareholder overlords!

            2 votes
          2. [2]
            teaearlgraycold
            Link Parent

            What if perhaps everyone has access to this new technology?

            1 vote
            1. Raymonf
              Link Parent

              In my somewhat pessimistic opinion, if something like that were available to everyone, most developers (myself included) would lose their jobs. But I think this is OK. It’s like how we no longer have telephone operators or human computers. Life goes on, and people find new opportunities in different/adjacent fields, because it’s just not practical to pay someone for a job that can be done by a machine [some number] times more efficiently and accurately.

              At the current trajectory of how things are progressing, it feels like this will happen within the next few decades at the latest. If our torch goes out, I can foresee a different country like China continuing where we left off, and getting us to that point.

              I’d love to see how right or wrong I was in 10 years’ time.

              2 votes
      2. [3]
        teaearlgraycold
        Link Parent

        An AI that can replace software engineers is AGI. That’s an unprecedented new technology and we can’t necessarily predict how society will change in response.

        6 votes
        1. heraplem
          (edited )
          Link Parent

          An AI that can replace software engineers is AGI.

          I'm not so sure. Lots of people once thought that it would take AGI to generate humanlike art.

          I think we're going to increasingly find that there isn't a hard barrier between "mere" AI and AGI. Actually, I find it pretty likely that we never end up with "true" AGI, but instead with various local optima that act "enough" like AGI for whatever problem domain. Just because we can imagine AGI doesn't mean that it's inevitable, or that it will look anything like how we've imagined it. (It almost certainly will not.)

          Also, this idea is rather flattering to software engineers, and unrealistic besides. It won't happen all at once. Instead, various bits of functionality will gradually get automated, slowly reducing the overall need for labor, and especially less-skilled labor. To a certain extent, that is already happening, or at least many expect it to happen in the very near future.

          3 votes
        2. Eji1700
          Link Parent

          It's about on the level of Fusion power for both "massive systemic changes" and "we'll have it in 20 years, every 20 years"

    2. TangibleLight
      (edited )
      Link Parent

      Again the it's just programming at a higher level of abstraction compared to before. I look upon the guy who programmed roller coaster tycoon in assembly with great respect. But that doesn't also mean that I don't respect people making games in scratch or roblox.

      First: personally, the reason I respect people who create things in assembly or scratch is because of the constraints inherent in the medium; I respect the technical feat of creating great things despite those constraints. Such things don't really apply to AI-generated output.

      I suppose I would have respect for someone who creates something great out of AI-generated slop, given the poor quality of their building blocks. It's hard to build a strong castle when your bricks are made of sand. If someone manages to do so despite this, I'd be impressed. I'd be especially impressed if they manage to keep it from crumbling over time.

      To be clear, I will be impressed with the feat of defying constraints. I will not be impressed with the castle itself. It would have been stronger if it were made of better materials.

      Second: In software engineering, abstraction is not a virtue in its own right. At the end of the day we are all constrained by the same physics on the same machines. Each layer of abstraction between yourself and the metal is a step away from reality. An abstraction is only good if it brings some value that outweighs its cost. That cost-benefit analysis is what engineering fundamentally is.

      For example, I'd never advocate writing assembly in practice because structured control flow and function calls have exceedingly high benefit to reasoning and communication, and their computational cost is low.

      AI does not offer a benefit proportional to its costs. Assembly and Scratch are red herrings. If someone demonstrates that AI does bring some benefit that outweighs its costs, I'd be on board.

      So I'll interpret the remaining points in this way, as benefits of AI that outweigh costs and might make the abstraction valuable.

      You Rob Yourself of Learning Opportunities: eventually, after writing enough ai code and accepting and rejecting suggestions, I believe eventually people will be able to differentiate between bad and good code.

      Right. There's this period of time where novice programmers need to look at examples, build toy projects, and stretch their muscles a bit before they're really good at identifying good vs. bad code.

      What if we curated some list of topics they could use to stretch those muscles more efficiently? We could weed out AI hallucinations at the same time so the person isn't led astray early on.

      Follow this kind of reasoning about efficiency and you re-invent textbooks and school. AI doesn't offer any "value" here except misdirection and disengagement, two issues I frankly already take with current computer science education. AI will make this problem worse, not better.

      Skills You Already Have May Atrophy: going back to the car analogy

      Your argument is predicated on the idea that the off-the-shelf components are of higher quality than what you could produce on your own. This is a good argument for using well-written libraries, and the popularity of open-source demonstrates that it's a compelling argument.

      Drawing conclusions here about AI is a non-sequitur. In what way do AI tools improve the fundamental building blocks that one uses to build software? How is it beneficial to use an AI-generated gasket rather than manufacture one myself or design around an off-the-shelf component? Does the availability of an AI-generated part positively impact the design process for the rest of the engine? How will the quality of the typical engine change over time if such parts become widespread?

      You May Become Dependent on Your Own Eventual Replacement

      I tend to agree, although I suspect we have different ideas on what "different job" is likely to mean. Just as we have entire industries dedicated to maintaining or rewriting "legacy" code, I expect a similar attitude and industry around "AI" code.

      Do You Even Own AI Generated Code

      Large (private) organizations tend to be very risk-averse and very protective of IP. I see litigation around this as an inevitability, not something that can be ignored because "politicians" will favor national security over code licensing. If anything, I would expect a situation where the private sector litigates and litigates over this, while conglomerates and state actors break their own rules. This already happens. For the rest of the private sector, any such liability is still going to be a big problem.

      7 votes
    3. [5]
      archevel
      (edited )
      Link Parent
      I think your reasoning is flawed. Going with this analogy: The writer (as I interpreted them) isn't arguing that you'll become a bad driver from using an automatic transmission. They are arguing...

      I think your reasoning is flawed. Going with this analogy:

      It reads like someone complaining how automatic transmission is gonna make people bad drivers, and how real drivers all drive stick.

      The writer (as I interpreted them) isn't arguing that you'll become a bad driver from using an automatic transmission. They are arguing you'll be bad at driving with a manual transmission if you only ever practice driving an automatic. This seems like a wholly valid stance; people who've never done a particular activity will, in general, be worse at it than those who have. An even more apt analogy might be that of a "driver" of a self-driving car. If they've only ever used self-driving vehicles, should they be allowed to drive a regular car? Arguably they wouldn't be allowed to drive the self-driving car if they don't have a license, but I hope it illustrates the point.

      If you build better code with AI, that is great! In most contexts it is more important to deliver value than to deliver it using artisanally crafted, handmade assembly code (even if the latter is superior in X and Y ways). Speed of delivery usually trumps other concerns. But using AI to write your code most certainly will leave you lacking in ability when/if you need to work without it. As with everything, it's a trade-off.

      5 votes
      1. [3]
        vord
        Link Parent
        That and I'll dig my heels in on this bit and write the rant GP was referring to. And I will 100% stand by that if you take two otherwise-equivalent drivers (not using phone, being attentive,etc),...

        That and I'll dig my heels in on this bit and write the rant GP was referring to.

        It reads like someone complaining how automatic transmission is gonna make people bad drivers

        And I will 100% stand by that: if you take two otherwise-equivalent drivers (not using their phone, being attentive, etc.), every single driver that has competently driven a manual transmission is better at driving automatic cars than drivers that have not. Driving a manual transmission requires a degree of attentiveness that an automatic transmission does not, much like how the author discusses algorithmic autocomplete vs AI autocomplete. It forces you to get a 'feel' for the vehicle and how it interacts with the terrain, and results in more intuitively understanding 'how' a transmission works. And that learned 'feel' will translate to driving any other vehicle later....it's half of why I dislike not being able to easily transition in and out of neutral on automatics and EV/hybrids. There are a huge number of scenarios where the 'correct' course of action is to pop the vehicle into neutral and let it coast, rather than killing all of your momentum. And people who have only ever driven automatics never really learn that, because automatics mostly discourage that behavior.

        I can also say with some degree of certainty that people who never learned to drive without backup cameras are worse at parking than people who had to do it the old fashioned way. It has ceased to be just an increased-safety tool, and has turned into a virtually-mandatory feature.

        It's kind of like learning LISP in 2024. It's a mental exercise that might never translate to regular usage, but it's often useful as a learning tool.

        5 votes
        1. [2]
          Gaywallet
          Link Parent
          Just want to briefly point out that this is your experience in the matter and isn't necessarily true for all other folks. Simply listening to the engine and paying attention to gauges can also...

          It forces you to get a 'feel' for the vehicle and how it interacts with the terrain, and results in more intuitively understanding 'how' a transmission works. And that learned 'feel' will translate to driving any other vehicle later

          Just want to briefly point out that this is your experience in the matter and isn't necessarily true for all other folks. Simply listening to the engine and paying attention to gauges can also calibrate one's "feel" for the transmission. But perhaps more importantly, whether one has the capacity to do this while driving, and what their attention looks like when driving automatic or manual, matter even more. It turns out who is behind the wheel is way more important when it comes to assessing efficiency, safety, and other factors.

          For example, in elderly drivers, driving an automatic is objectively much safer. Crash rates and adverse outcomes are significantly lower for this group when driving an automatic instead of a manual. The same is not observed in other age groups. Conversely, for young adults with ADHD, driving a manual seems to result in increased attentiveness and thus less driver error. I believe that both of these paint a broader picture of how attention will vary from person to person and broad claims like "every single driver" do not hold up to the diversity that humanity has to offer. Perhaps more drivers who know how to drive manual are better by some metric than those who only have driven automatic, but we simply do not have evidence to support that outside of anecdotal and theoretical means.

          1 vote
          1. vord
            (edited )
            Link Parent
            Maybe, but I'm not talking just myself. While I acknowledge this is still ancedata, I'm talking at least 10 stick-shift drivers in my periphery, with 5 accidents between them over the course of 25...

            your experience in the matter

            Maybe, but I'm not talking about just myself. While I acknowledge this is still anecdata, I'm talking about at least 10 stick-shift drivers in my periphery, with 5 accidents between them over the course of 25 years. Admittedly, none of us are elderly. That rate is much lower than among the much-larger number of people I know who have only driven an automatic.

            And the links you've provided kind of reinforce the point. Young drivers benefit from having more to do, because it reduces distractibility, and also ADHD is way more prevalent than previously thought....a lot of undiagnosed ADHD adults out there.

            And as I said, I'm pretty sure these benefits are perpetual, much like riding a bike. You don't ever really 'unlearn' it, you just fall out of practice.....it's been 20 years since I drove a stick shift and I still slam the non-existent clutch when hard braking.

            And it would make sense that elderly folks do not do well with added tasks....especially since odds are they shouldn't be behind the wheel at all, unless they're passing regular driving tests. But I'd also bear in mind that the elderly most likely drove manuals in their youth. It's really only around my generation (young X, elder millennial) and younger where manuals had the steep drop-off in adoption by the time we could drive. So I would be curious to see if the accident rate of the elderly goes even higher in about 30-40 years (presuming that self-driving cars are not ubiquitous).

            2 votes
      2. TangibleLight
        (edited )
        Link Parent
        Something about your comment reminded me of this talk: Up to Code - David Sankel - CppCon 2021 I think that generative AI as-currently-exists would not have gotten so much attention if we, as a...

        Something about your comment reminded me of this talk:

        Up to Code - David Sankel - CppCon 2021

        I think that generative AI as-currently-exists would not have gotten so much attention if we, as a society, were not so conditioned to tolerate software failures.

        If they've only ever used self driving vehicles; should they be allowed to drive a regular car? Arguably they wouldn't be allowed to drive the self driving car if they don't have a license, but I hope it illustrates the point.

        As I recall, Sankel doesn't suggest that all programmers should have a license. He makes a distinction between programmers and software engineers, although I don't think the exact terminology here matters too much. His equivalent example is "fixer/handyman" vs "electrician".

        The main point is on thinking about reliability, safety, and modes of failure. In other domains, when liability and reliability are a concern, you bring in an engineer and worry about regulation. There is a distinction between new work and old work. The software industry has no real equivalents.

        That talk is pre-ChatGPT. I'm curious how it would be different if he'd given it a couple years later.

        3 votes
  3. ButteredToast
    Link
    For my usage, I've found the best place for LLMs is as talking documentation and occaisionally as a pseudo-intelligent rubber duck. So instead of telling the LLM to write full code to do a...

    For my usage, I've found the best place for LLMs is as talking documentation and occasionally as a pseudo-intelligent rubber duck.

    So instead of telling the LLM to write full code to do a particular thing, I'll instead ask it for an example of a particular API in usage. This has a few benefits: risk of hallucination is reduced since the scope of the request is smaller, any hallucination that does occur is more easily spotted, and since I'm still doing significant mental processing in adapting the example to my needs, there's a better chance I'll retain that information and not need to consult the LLM about it again in the future.

    The "rubber duck" use case is there because it's occasionally helpful for getting out of my own head when working out a problem. LLMs are also trained on material that I might never manage to find on my own, so sometimes they'll be able to point out fixes that I wouldn't have been able to google.

    I'm not really sold on LLM-powered autocomplete in IDEs and such, though. The extent of smartness I need there can be accomplished without LLMs and falls in the same bucket as being able to fill in missing cases in a switch statement.
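    That last point can be made concrete: filling in missing switch cases is something a type checker does deterministically, no LLM involved. A minimal TypeScript sketch (the `Shape` type here is made up purely for illustration):

    ```typescript
    // A discriminated union: the compiler knows every possible "kind".
    type Shape =
      | { kind: "circle"; radius: number }
      | { kind: "square"; side: number }
      | { kind: "triangle"; base: number; height: number };

    function area(s: Shape): number {
      switch (s.kind) {
        case "circle":
          return Math.PI * s.radius ** 2;
        case "square":
          return s.side ** 2;
        case "triangle":
          return 0.5 * s.base * s.height;
        default: {
          // If a new Shape variant is ever added, `s` is no longer `never`
          // here, so this assignment fails to compile. That compile error
          // is exactly the deterministic signal an editor uses to offer
          // the missing cases.
          const _exhaustive: never = s;
          return _exhaustive;
        }
      }
    }

    console.log(area({ kind: "circle", radius: 1 }));
    ```

    Add a fourth variant to `Shape` and the `never` assignment stops compiling, which is all an IDE needs to list the missing cases, with no statistical model in sight.
    
    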

    9 votes
  4. DawnPaladin
    Link
    I agree with some of this. I am picky about code formatting, readability, and variable names, which is why on the rare occasion I do generate code, I'll typically rewrite it before committing...

    I agree with some of this. I am picky about code formatting, readability, and variable names, which is why on the rare occasion I do generate code, I'll typically rewrite it before committing (same as I do with StackOverflow). Here it's primarily used for helping me get the syntax right, as a replacement for usually-inadequate documentation.

    My primary use for AI is troubleshooting. Being able to hand it an error message and often figure out what's wrong puts it leagues ahead of Google or StackOverflow.

    7 votes
  5. [3]
    skybrian
    Link
    I don't think being dependent on your text editor or IDE is something programmers should worry about. The days of writing code on paper are long gone. Getting immediate feedback from your editor...

    I don't think being dependent on your text editor or IDE is something programmers should worry about. The days of writing code on paper are long gone. Getting immediate feedback from your editor is great. I don't buy the argument that it prevents learning because immediate feedback is so helpful when learning things.

    Now we have glorified autocomplete and it's a nice improvement, too. Some have argued that typing code out is useful for beginners to improve their typing skills and it's a plausible argument, but that doesn't mean you can't get good at using autocomplete as well. Sometimes we can learn things from working without tools, but that doesn't mean giving up tools - we can also learn how to use tools well.

    I have learned things from autocomplete, where it suggests a coding idiom that makes sense, so I pick it up. It doesn't seem that different from learning things from reading code. You do want to be careful about what you learn from unreliable sources, but that's true of StackOverflow as well. Learning from unreliable sources is a skill we all need to navigate the Internet, and AI certainly gives us a lot of opportunities to practice it.

    7 votes
    1. [2]
      tauon
      Link Parent
      At least as long as there’s instances of vi without syntax highlighting, let alone type definition/docs based autocomplete (not even speaking of LLM autocomplete…) running on remote servers, I...

      I don't think being dependent on your text editor or IDE is something programmers should worry about.

      Sometimes we can learn things from working without tools, but that doesn't mean giving up tools - we can also learn how to use tools well.

      At least as long as there are instances of vi running on remote servers without syntax highlighting, let alone type definition/docs based autocomplete (not even speaking of LLM autocomplete…), I think there’s a benefit, at least for a subset of developers, to being able to read code or config files without those amenities. And if you can still “write without tab”, I’m positive you’ll be able to write with it later on.

      I have learned things from autocomplete, where it suggests an coding idiom that makes sense, so I pick it up. It doesn't seem that different from learning things from reading code. You do want to be careful about what you learn from unreliable sources, but that's true of StackOverflow as well. Learning from unreliable sources is skill we all need to navigate the Internet, and AI certainly gives us a lot of opportunities to practice it.

      But I do heavily agree with this. Even asking “what is this construct in $newLanguage called?” might help beginners immensely, since you can then plug that newly learned terminology into an old style SO/web search.

      2 votes
      1. Weldawadyathink
        Link Parent
        VS Code runs just fine on ssh connections to a remote. So there is always an option of having a full IDE with syntax hi-lighting and code completion on any host you can imagine. I guess you could...

        VS Code runs just fine over SSH connections to a remote. So there is always the option of having a full IDE with syntax highlighting and code completion on any host you can imagine. I guess you could envision a scenario where you are running on an airgapped network and VS Code isn't allowed anywhere on the network. But that is such a far-fetched scenario that I don't think it's worth the effort to plan for.

  6. [7]
    TheMediumJon
    Link
    On some level I might agree with some of the general premise, but I think this goes far beyond what is reasonable. Here's my thoughts from one read through. I consider that to be a very...

    On some level I might agree with some of the general premise, but I think this goes far beyond what is reasonable.

    Here's my thoughts from one read through.

    I don't think I could write a single line in Pascal today that would be syntactically valid, let alone accomplish anything meaningful. Another exercise: Try programming for a day without syntax highlighting or auto-completion, and experience how pathetic you feel without them.

    I consider that to be a very likely/true statement. (Not Pascal in my case, but the premise still holds.) But unless your conclusion is that we should forgo any sort of IDEs beyond basic text editors so that we all may eternally remember all syntax perfectly, I don't see what the point of this is.

    I do use an IDE which offers completions of vars and funcs as relevant, because getting them in near-zero time is worth not necessarily remembering by heart each and every standard function for some generic purpose (does this language do strVar.concat(strVar2) or StringClass.concatenate(strVar, strVar2)? etc.). I remember most of those anyway, and just look up the edge case or alternate language or whatever.
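    For what it's worth, the two API shapes in that parenthetical can be spelled out. The `StringClass` below is a hypothetical stand-in (TypeScript's `String` has no static `concatenate`), mirroring the shape some other languages use:

    ```typescript
    const a = "foo";
    const b = "bar";

    // Instance-method shape, as in strVar.concat(strVar2):
    const viaMethod = a.concat(b); // "foobar"

    // Static/class-method shape, as in StringClass.concatenate(strVar, strVar2).
    // Hypothetical helper, defined here only to illustrate the other shape:
    class StringClass {
      static concatenate(x: string, y: string): string {
        return x + y;
      }
    }
    const viaStatic = StringClass.concatenate(a, b); // "foobar"
    ```

    Both produce the same result; the point is that which spelling a given language uses is exactly the kind of detail worth offloading to completion rather than memorizing.
    
    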

    The same could be argued for things larger in their scale quite easily, imo.

    Imagine an exercise equipment manufacturer releases a new "Artificial Strength" product tomorrow, promising that you can "level up" your lifting and "crush your next workout" with their amazing new Exercise Assistant—a robot that lifts the weights for you. If you started relying on that product, what do you think would happen to your max bench over time?

    I find this example also rather silly.
    If I'm bench pressing it's with the purpose of building muscle, not lifting the weights.
    If my goal was the raising/transporting of weights (the actual fair equivalent of writing code for a given non-educational purpose) then an exo-skeleton or crane or some other tool would absolutely help me achieve that aim in a much more efficient fashion.

    If my aim is to be the best coder in a theoretical environment, purely training for skill, then yes obviously an AI assistant undermines that - just like an IDE does, except at a different scale.

    If I want to be effective at using that code in any non-theoretical environment then it does have its place because it serves as a tool, not as the target.

    There is of course the response:

    The former are tools with the ultimate goal of helping you to be more efficient and write better code; the latter is a tool with the ultimate goal of completely replacing you.

    Maybe
    But the issue there would not be the AI deprecating my job, it would be with the system that deprecates my job (and then kicks me out the door with no hesitation).

    If you want to be anti(-corporate)-capitalism, that's absolutely a fair case. I might agree with you.

    But AI's just a tool there. You might as well argue against the existence of healthcare, as a whole, because in the US it is commonly tied to employer and thus gives them major leverage.

    I honestly don't know the answers to these questions, but I also don't want to be the one footing the legal bills when it's time for the courts to answer them. Do you?

    This is the first argument here that I considered even moderately compelling.
    And the conclusion might very well be to only use it in a corporate context where I'm covered (at the very least until sufficient precedent exists ruling one way or the other). But even this isn't really a principled argument; it is a legal one which would need a legal solution (case law or legislation establishing it definitively either way).

    nobody outside of other AI code kiddies are likely to be impressed with you as a programmer

    Maybe I am tilting at windmills and the authorial intent is entirely different, but I've had modules and systems that included some degree of AI boilerplate and/or framing, both from myself and from teammates. Might be a thing entirely unique to our company, but I have yet to hear anything regarding that matter from anybody.

    But then I am explicitly not talking about code that is entirely AI generated, but rather only includes such sections. If the intent is for the former then I'd agree, but it doesn't seem so to me...

    You Believe We have Entered a New Post-Work Era, and Trust the Corporations to Shepherd Us Into It

    I'd like to believe that, but it isn't so, indeed because of those corporations. But that goes beyond this small slice of the market, and their general rise or fall will not be determined by this.

    6 votes
    1. [6]
      Rudism
      Link Parent
      My intent there was to demonstrate a concrete example where the atrophy has happened for probably the majority of us already, not to imply that these tools themselves are a bad thing (I don't...

      But unless your conclusion is that we should forego any sort of IDEs beyond basic text editors so that we all may eternally remember all syntax perfectly I don't see what the point of this is.

      My intent there was to demonstrate a concrete example where the atrophy has happened for probably the majority of us already, not to imply that these tools themselves are a bad thing (I don't think they are).

      If my aim is to be the best coder in a theoretical environment, purely training for skill, then yes obviously an AI assistant undermines that - just like an IDE does, except at a different scale.

      I suppose this is the crux of my opinion (and the reason I wrote the article). I don't disagree in the least that one day it may be more efficient to let an AI assume the role of software engineer to a degree where coding skills become moot. My point is, essentially, that sucks, and I want to wave my fist and complain about it. I'm the horse-and-buggy mechanic crying about the inevitable demise of the craft I've dedicated my career to in response to those darn-tootin' automobiles.

      4 votes
      1. [5]
        TheMediumJon
        Link Parent
        I mean, either the atrophy and tools are both bad or they both aren't really bad, since we both seem to agree on linking the one to the other. Why does it suck, though? Again, my current usage is...

        My intent there was to demonstrate a concrete example where the atrophy has happened for probably the majority of us already, not to imply that these tools themselves are a bad thing (I don't think they are).

        I mean, either the atrophy and tools are both bad or they both aren't really bad, since we both seem to agree on linking the one to the other.

        I suppose this is the crux of my opinion (and the reason I wrote the article). I don't disagree in the least that one day it may be more efficient to let an AI assume the role of software engineer to a degree where coding skills become moot. My point is, essentially, that sucks, and I want to wave my fist and complain about it. I'm the horse-and-buggy mechanic crying about the inevitable demise of the craft I've dedicated my career to in response to those darn-tootin' automobiles.

        Why does it suck, though?

        Again, my current usage is occasional, partial, and in a corporate environment.

        I don't mind the current state.
        I won't particularly mind the entire profession being replaced (as far as business non-hobby purposes go), since it will happen or not happen in a greater economic context (and all office jobs dying off will give me a bigger headache than merely myself being unemployed, unless all of them are taken care of in which case I won't have much of an issue at all).

        So what's left? The lost art of coding? Continue to code, nobody's going to stop you from doing so in general, just like there's blacksmiths and charioteers and whatever still around, just not necessarily professionally.

        3 votes
        1. [3]
          Rudism
          Link Parent
          The same reason it would suck if human authors were replaced by AI-generated novel factories, or the way it would suck if nobody ever painted or took another photograph or made a movie again...

          Why does it suck, though?

          The same reason it would suck if human authors were replaced by AI-generated novel factories, or the way it would suck if nobody ever painted or took another photograph or made a movie again because coming up with a 20-word prompt and having an AI generate it for you is more efficient. Sure, maybe at some point in the future those works will become indistinguishable from (or even surpass) the work of talented humans, but I'm not looking forward to that future where we just consume artificially regurgitated commercial slop where there's no human intent behind anything anymore. That idea just seems cold and empty and sad to me.

          3 votes
          1. [2]
            stu2b50
            Link Parent
            Eh, I understand it more for novels or art. Code is often very pragmatic. The point is not the form itself, but the result. The same could be said of mechanical engineering today. Factory produced...

            Eh, I understand it more for novels or art. Code is often very pragmatic. The point is not the form itself, but the result.

            The same could be said of mechanical engineering today. Factory-produced works of engineering are cold and soulless compared to artisanally crafted mechanical wonders… but I’m more than fine with my car being made by a factory.

            There wasn’t particularly a lot of soul in enterprise software engineering to begin with. I’m sure there’ll still be hobbyists making neat, handcrafted code for the sake of it.

            2 votes
            1. vord
              Link Parent
              On the other hand, houses built after 1960 or so are much more generic and boring than homes built before. There was a lot more variety of architecture that often better-suited the specific areas...

              On the other hand, houses built after 1960 or so are much more generic and boring than homes built before. There was a lot more variety of architecture that often better suited the specific areas in which the homes were built, in the days before somewhat-universal air conditioning.

              And I'd say there's a fair parallel there. I also appreciate cars being customized though, and am saddened by how much doing so kills their resale value.

              I'd love to see an era where most cosmetic features of cars are no longer offered by the factory. The factory only offers plain cloth seats and a white vehicle. If you want it customized, you gotta find a third-party artist or DIY. It'd certainly make the roads and used car markets more interesting.

              1 vote
        2. Rudism
          Link Parent
          I was ruminating on this part of your comment last night, because it stuck with me. Why am I comfortable with algorithmic-based code assistants like autocomplete but feel I must draw the line at...

          I mean, either the atrophy and tools are both bad or they both aren't really bad, since we both seem to agree on linking the one to the other.

          I was ruminating on this part of your comment last night, because it stuck with me. Why am I comfortable with algorithmic-based code assistants like autocomplete but feel I must draw the line at generative autocomplete? I think it's a simple matter of trade-offs vs benefits. I disagree that it's a simple dichotomy where either all code assistants are good or they're all bad. Algorithmic assistants make us slightly worse programmers, but the benefits they bring in terms of efficiency and guarding against bad syntax and basic errors far outweigh the negative aspects. Generative AI also makes us worse programmers and has the potential to do so to such an extent that the negatives crush the positives (in my opinion).

          Returning to the example of weightlifting, I'd compare it to wearing some kind of brace that slightly reduces the efficiency of your weight lifting but ensures you have good form and won't injure yourself, versus having the robot that can just lift the weights for you.

          I realize that analogy may not be particularly powerful to someone who doesn't see inherent value in the existence of software engineering as a profession in general, but figured I'd get my thoughts out anyway.

  7. [5]
    Jakobeha
    (edited )
    Link
    I use LLM code generation a lot, but I always check (and fairly often change) what it generates afterward. I'm pretty sure when not using AI, I end up writing the same code by hand, slower....

    I use LLM code generation a lot, but I always check (and fairly often change) what it generates afterward. I'm pretty sure when not using AI, I end up writing the same code by hand, slower. Actually when not using AI, I frequently copy-paste large chunks of code and then heavily edit them, so it's not so different.

    Is AI making me forget how to write code? Probably not, because I still write a lot of the code by hand, and I read over all of it. Code is often buggy and/or needs to be modified (e.g. to handle new features), especially LLM-generated code, so even if at first I don't really understand the LLM-generated code, I often end up learning it later (to debug or modify it).

    Will AI replace me eventually? Maybe, but I don't see how my using AI makes that happen meaningfully faster. Current models train on written code, presumably without factoring in who wrote it. I could ensure AI companies don't train on my code or writing process by using a local model (although I don't, but that would be a separate argument).

    Will AI-generated code retroactively become illegal? If so, that means a lot of recently written code retroactively becomes illegal, so it seems very unlikely.

    There are problems associated with LLM-generated code, such as fewer developer positions and bad software. But these were problems before LLMs, exacerbated by IDEs and frameworks like Electron respectively. I don't think getting rid of IDEs and frameworks is the solution, and likewise, I think the root cause (letting more people write software more easily, albeit much of it low-quality) is a net positive.

    6 votes
    1. [4]
      Rudism
      Link Parent
      You and I may have the ability to read and rewrite AI generated code into something that we understand and fits our style or overarching vision for the project we're working on, but what about the...

      You and I may have the ability to read and rewrite AI generated code into something that we understand and fits our style or overarching vision for the project we're working on, but what about the students just starting to learn who are getting bombarded by free code generation tool subscriptions? How will those guys develop the foundational skills to understand what the AI is generating to the same degree that we can? What happens when we're retired or dead and the AI coders take our place as the new crop of senior-level devs?

      I'm not trying to say that we can avoid this seeming inevitability, but I think it's still important (and a bit cathartic) to point out that it's happening.

      3 votes
      1. [3]
        Jakobeha
        (edited )
        Link Parent
        I agree with the first part: LLMs hurt education. With LLMs, a student can accomplish a lot without really understanding what their code is doing, but they can't accomplish as much or with as good...

        I agree with the first part: LLMs hurt education. With LLMs, a student can accomplish a lot without really understanding what their code is doing, but they can't accomplish as much, or at as high a quality, as if they did understand. Students can pass entry-level classes and maybe even graduate and get jobs developing software, barely learning anything, until eventually they reach a point where the LLMs aren't good enough. At this point they're like students who skipped the first half of a course because there are no graded assignments until the midterm. Maybe these students are still able to learn the fundamental skills they missed, but at the very least they wasted a lot of time not learning those skills earlier.

        But I disagree this is inevitable. Students still can learn the fundamentals to write good code. At minimum, schools can give assignments and exams in an IDE that doesn't support LLMs, and I think this is necessary for the entry-level classes. But I also think it's possible to design assignments that LLMs aren't good enough to solve for higher-level classes, so that students still truly learn how to write code even when they have access to LLMs.

        I think in this way an LLM is a lot like a calculator or parent/friend/tutor who you could convince to do your work for you. In theory, it's easy for someone to "complete" assignments outside of class without truly learning, and this has been the case before LLMs. But (most?) students still learned the fundamentals, because they still had to pass in-class assignments to get a good overall grade, and because the honor system and potential of being caught was enough to deter them (at least me) outside of class. I believe most schools nowadays give every student a cheap locked-down laptop, and colleges should have enough money to do the same. In this case, a teacher can ban LLMs for an assignment by requiring students do the assignment on their locked-down computer, in a restricted IDE that has syntax highlighting and autocomplete but no LLMs.

        4 votes
        1. vord
          (edited )
          Link Parent
          Life is learning, if not from somebody else, but for yourself. Every day is a chance to learn something new or improve at something you already know. Therefore LLMs hurt our ability to genuinely...

          LLMs hurt education.

          Life is learning: if not from somebody else, then on your own. Every day is a chance to learn something new or improve at something you already know. Therefore LLMs hurt our ability to genuinely live.

          Every time you do something wrong, you've learned something.
          Every time you do something right, you've learned something.
          Even if you're doing boring things that you've done 100 times, doing them ensures that you're still good at them. I can certainly still use an arc welder after 25 years away, but the next 200 welds I attempt are gonna be pretty messy and will probably ruin whatever I was trying to weld more than fix it.

          C++ was my second programming language, and I used it exclusively for 5 years. I probably couldn't write a properly compiling "Hello World" without looking it up. I first learned on QBASIC, using it for 3 years. I couldn't even begin to tell you the syntax now.

          2 votes
        2. DawnPaladin
          Link Parent
          Agreed. When I was learning to code, the most frustrating part was hitting an unhelpful error message, not understanding why, and not being able to find an answer on Google or StackOverflow. In...

          Agreed. When I was learning to code, the most frustrating part was hitting an unhelpful error message, not understanding why, and not being able to find an answer on Google or StackOverflow. In situations like that, the best fix is an experienced tutor who can recognize the real problem and get you unstuck.

          But good tutors are hard to find. Theoretically StackOverflow could help, but in practice it usually doesn't. Coding bootcamps have paid tutors, but bootcamps have their own problems. LLMs are maybe 80% as good as an experienced tutor, always on call, and much more affordable.

          1 vote
  8. ajwish
    Link
    This is not the main point of the piece, but I think the sentiment is kind of required scaffolding for the main point: I see this view a lot and don't really understand it - the idea that being a...

    This is not the main point of the piece, but I think the sentiment is kind of required scaffolding for the main point:

    Serial AI coders may want to be big boy developers, but by letting predictive text engines write code for them they're hurting the chances of that ever happening. At least for now, the real big boy developers are the ones writing code that those predictive text engines are training on.

    I see this view a lot and don't really understand it - the idea that being a "real" programmer means meeting a standard of difficulty or doing things from scratch (does it really matter whether I have the syntax memorized for multiple languages? or whether I can implement a specific, routine task off the top of my head? Is that really what makes a skilled programmer?). I think this is maybe what @Eji1700 was referring to? I'm not sure this is even all that specific to using AI for code generation (as others have pointed out, it seems pretty applicable to many of the other tools that are regularly used).

    I will be the first to admit that I am not a developer, or a programmer, or a skilled coder or anything of the sort. So being a "real big boy developer" is probably not within my reach.1 I do bioinformatics work now, most of which is totally self-taught and objectively terrible from a code quality standpoint. I joke that I will become obsolete once people realize all I do is google variations on [insert concept here] in [language]. So it's definitely possible that I'm missing some context or experience to understand this idea. Why is this such a common perspective?

    1 Additionally, as I will never be a big boy, the most I could possibly aspire to would be a grown up developer. I'm not a huge fan of "big boy" as a descriptor. But I digress.

    1 vote
  9. [2]
    rubaboo
    (edited )
    Link
    Am interested to hear what their opinions are on it as a learning tool. I reach for it sometimes for this reason because it can suggest the correct keywords for me to look up (e.g., in docs or...

    I'm not addressing the use of AI as a learning tool to gain better insight into programming languages and libraries in order to improve the code that you write yourself. (I have opinions on that too.)

    I'm interested to hear what their opinions are on it as a learning tool.

    I reach for it sometimes for this reason, because it can suggest the correct keywords for me to look up (e.g., in docs or what have you) without my having to uncover them manually. But I'm wondering if there are unforeseen drawbacks to using it in this way too.

    1. Rudism
      Link Parent
      I think the overall gist of my opinion on that is that it can be a fantastic, possibly game-changing tool for this purpose, so long as it's approached with a healthy dose of skepticism (at least...

      I think the overall gist of my opinion on that is that it can be a fantastic, possibly game-changing tool for this purpose, so long as it's approached with a healthy dose of skepticism (at least given the current state of LLMs and their tendency to hallucinate legitimate-sounding nonsense).