12 votes

The Assist - Thoughts on AI coding assistants

22 comments

  1. [12]
    rkcr
    Link

    The main thing I dislike about AI coding assistance is that I have to very carefully review the AI's code to make sure that it does the right thing and none of the wrong things. And I, personally, find that reviewing code is more difficult than writing equivalent code myself.

    17 votes
    1. [7]
      winther
      Link Parent

      Add to that: code you haven't written yourself, you don't understand as well. You haven't been through the mental problem-solving process that led you to this specific solution. Sure, it might be good enough short term to get code running that you only halfway understand, but what about months and years down the line? If things go wrong, will the AI be good enough to find problems with the code it has generated? I see that as a pretty big risk in terms of lack of knowledge and control over your own codebase.

      9 votes
      1. [3]
        unkz
        Link Parent

        I have heard these words being spoken about Python by C developers and about C by assembly developers.

        7 votes
        1. [2]
          winther
          Link Parent

          Heh, that is a fair point. I might be getting to that age myself now where I think "the new way" of doing things is wrong, damn kids these days and so on. Every generation adds a new layer of abstractions and maybe AI will turn into another layer on top of things, just like I don't have to deal with memory management in my daily work - which I am sure some C++ developers frown upon.

          1. em-dash
            Link Parent

            The difference, I think, is determinism*.

            Code usually does the same thing when run with the same inputs. When it doesn't, it's considered a bug, and often a particularly annoying one to debug. LLMs generally do not have that property, which means you can't reliably share LLM input to let others get the same result you did. That makes them a fundamentally different sort of thing than source code.

            Sure, you can share the generated code, but when people don't understand the generated code, that's equivalent to sharing a compiled binary. There are reasons we store the source code in source control and not just the resulting binaries.

            * In the sense of observable behavior. One could argue that some types of garbage collectors are nondeterministic, for example, but that doesn't meaningfully affect what the program does in the way normal application code would.
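The contrast can be sketched in a few lines (hypothetical names throughout; `sampleToken` merely stands in for temperature-based decoding, not any particular library's API):

```typescript
// Deterministic: the same input always produces the same output,
// so sharing the source is enough for anyone to reproduce the result.
function slugify(title: string): string {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

// Nondeterministic: sampling from a distribution, as LLM decoding does
// at nonzero temperature, can return a different token on every call,
// even for an identical prompt.
function sampleToken(logits: number[], temperature: number): number {
  const weights = logits.map((l) => Math.exp(l / temperature));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total; // the nondeterministic draw
  for (let i = 0; i < weights.length; i++) {
    r -= weights[i];
    if (r <= 0) return i;
  }
  return weights.length - 1;
}
```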

            6 votes
      2. [3]
        teaearlgraycold
        Link Parent

        Most recently I used GitHub's Copilot to tell me how to adapt a NextJS stream into a NodeJS stream. In the Node ecosystem, streams are nice, consistent, and pretty universal, so you can pretty much pipe anything into anything. But whenever you get into web programming (like the official AWS S3 client, which supports Node and web platforms, or NextJS, which is both server-side and client-side) there's no longer a standard. So you always need to do some song and dance to adapt the two together.

        I was proxying a stream from my back-end through a NextJS API route and Copilot gave me the magic lines of code to make it work. It wasn't obvious at all.

        import type { NextApiRequest, NextApiResponse } from "next";
        import { Readable } from "stream";
        import { pipeline } from "stream/promises";
        
        export default async function handler(
          req: NextApiRequest,
          res: NextApiResponse
        ) {
          // ...
          const backEndResponse = await fetch(...);
          // ...
          const reader = backEndResponse.body!.getReader();
          const stream = new Readable({
            async read() {
              const { done, value } = await reader.read();
              if (done) {
                this.push(null);
              } else {
                this.push(Buffer.from(value));
              }
            }
          });
        
          await pipeline(stream, res);
        }
        

        It's a small amount of code so I'm not worried. Basically, the AI was an in-situ adapted Stack Overflow.
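Worth noting: on Node 17 and later there's a built-in adapter that can replace the hand-rolled read()/push() loop, assuming the runtime ships it. A sketch:

```typescript
import { Readable } from "node:stream";
import type { ReadableStream as WebReadableStream } from "node:stream/web";

// Readable.fromWeb() wraps a web ReadableStream (such as a fetch()
// response body) in a Node Readable directly, so there's no need to
// drive the reader by hand.
function toNodeStream(webStream: WebReadableStream<Uint8Array>): Readable {
  return Readable.fromWeb(webStream);
}
```

Whether the manual adapter or `fromWeb` is preferable depends on the Node version the deployment targets; the API was marked experimental for a while after it landed.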

        I do not think you can have it write much more than boilerplate and snippets like the bit above without the whole app falling apart rapidly. I'm not worried about anyone trying to do that because they won't make it very far and should learn their lesson soon enough.

        2 votes
        1. [2]
          creesch
          Link Parent

          I'm not worried about anyone trying to do that because they won't make it very far and should learn their lesson soon enough.

          Heh, you clearly haven't worked in some of the corporate environments I have, with "developers" who are little more than seat warmers at the best of times, and horrible spaghetti messes months down the line. I can fully see these types of folks just happily hacking things together until it resembles the functionality they want.

          To be fair, overall it wouldn't really make a difference in the quality of their code so I guess it doesn't really matter there :D

          4 votes
          1. teaearlgraycold
            Link Parent

            If anything AI may actually eliminate those people's jobs (and save the rest of us from working with them). I've interviewed people who were clearly just "requirement to diff" machines, operating in simplified environments and making small code changes with limited understanding. Soon, if not already, LLMs will make that last-mile English to code translation fully automated.

            As for what I do, I think we'll need AGI to replace me. Going from a vague business opportunity or customer request to a full working app, tens of thousands of lines and 8 new skills later, isn't something you can do with an LLM.

            2 votes
    2. devilized
      Link Parent

      This is something that comes with practice and experience. When GitHub Copilot was first being piloted in our company, I was pulled into many meetings with our engineering leadership to see if it was something that would be worth leveraging for us. One of the concerns I had was that it could be particularly problematic for junior-level developers. They would be caught between not understanding the code being generated and blindly accepting it without actually solving the problem for themselves. It would be like learning math on a calculator that sometimes makes horrible mistakes. We did decide to adopt it, and I haven't seen any increase in problems during reviews, but time will tell what impact code auto-completion will have on an engineer's ability to think and problem-solve for themselves.

      Reviewing code that you didn't write is a critical part of software engineering as you move up in your career. As my team's senior engineering lead, I spend more time reading other people's code than writing my own nowadays. That's just how the role works as you move up. But over the years of both writing and reviewing code, I've become much more comfortable reading and understanding someone else's code to the point that it's not really difficult for me anymore. The same goes with AI-generated code. I can figure out pretty quickly whether I want to adopt and modify what the prompt is offering, or trash it and just do it myself.

      8 votes
    3. unkz
      Link Parent

      This is something that probably comes with time, and I can see how it is going to be difficult for junior developers, but experienced developers are the ones who will benefit most from LLM-based coding assistants.

      1 vote
    4. tanglisha
      Link Parent

      I feel like this is an argument that code reviews should be given separate time of their own and not ignored in the planning process. I haven't used Copilot, but I imagine it's not all that different from what a code review is supposed to be when reviews aren't handwaved.

      1 vote
    5. ButteredToast
      Link Parent

      This is why I think it may be best for LLMs to stay firmly outside of IDEs and text editors. I find that they can be useful for very pointed sorts of questions (e.g. give me an example of using X library to do Y thing with Z condition), where I can then evaluate the answer and pluck any useful bits from it without fear of problematic bits slipping by.

  2. creesch
    Link

    As I said in the other thread currently up about AI, I don't trust LLMs to write code for me either, with a few exceptions for one-off, single-use scripts. They are, however, pretty good as a rubber ducky and general tool.

    LLM-driven code assistants are popping up everywhere, though they come with a lot of risks. I feel that's because they are so easy to sell: their output at first, second, and even third glance can look very impressive.

    At the end of the article, the author seems to reach a similar conclusion:

    So I feel that the better place for the AI assistant is in a supporting role like: "Hey, AI. I have written this code. Do you see anything wrong with it? Could it be improved? Have I missed an obvious simplification? Do my unit tests seem to be exhaustive?"

    Though even that is slightly on the naive side to me, or at least incomplete, because you still need the knowledge and experience to validate answers about "improvements" and such.

    So yeah, LLMs can be very useful tools, as long as you have the knowledge and experience to ask the right questions and validate the answers. Any tool that pretends that this is not the case raises a red flag for me.

    6 votes
  3. [3]
    unkz
    Link

    Which is bad, for all the reasons I mentioned, but also, and mainly, because I love coding!

    Every time I read one of these almost identical articles I get the feeling that this is the root of it all, and everything else is justification. For me, I am more interested in solving problems than the minutiae of the actual programming, and coding assistants are like having an infinite number of junior developers available to get problems solved faster.

    5 votes
    1. [2]
      creesch
      Link Parent

      It might be part of it, though boiling all of it down to that does feel a bit dismissive to me.

      As far as you not being interested in solving programming problems, I get that. At the same time, if you are using the output in a production-like environment, you will need to do a proper review of the solution. Well, you can also not do that as long as it "just works", but then you are opening yourself up to horrible tech debt later on, not to mention security risks.

      It's entirely possible you are employing LLMs in a situation where that is not the case. But for the majority of projects, that review process will still be essential, which does require the level of detail that I feel you just said you are not interested in. Although it is possible that I am reading too much into your statement.

      3 votes
      1. unkz
        Link Parent

        But for the majority of projects, that review process will still be essential. Which does require the amount of detail that I feel like you just said you are not interested in.

        The review process is essential, but I don't find that it requires anywhere near the amount of work that this article and others seem to suggest. The only thing I can think of is that these articles are being written by what I would consider junior developers, who have not yet read enough code to be able to quickly assess whether a solution is adequate. And that's a fair point -- coding assistants have the greatest productivity value for senior developers, and they pose a moderate risk to junior developers.

        3 votes
  4. [2]
    drannex
    Link

    Code is actually really efficient, relative to human language. It's extremely dense, expressive and specific. Compare the amount of time taken to write out:

    For each element in the field array, we need to emit a debug log entry explaining what element we're working on. Then, examine the type property on the element. If it's "bool", then we want to add the element to the array of Boolean fields, otherwise, add it to the array of non-Boolean fields. In either case, emit a debug log entry saying what we did.

    versus:

    for (const field of fieldArray) {
      logger.debug('processing field', field)
      if (field.type === 'bool') {
        booleanFields.push(field)
        logger.debug('field is Boolean')
      } else {
        nonBooleanFields.push(field)
        logger.debug('field is not Boolean')
      }
    }
    

    Not directly related to the post, but I have always said that code is just the English language with some new grammar rules. When I first realized that, around age 10, it drastically changed how I looked at and understood code, and it has helped many other people when I am teaching them to code at any age. Being able to understand it unlocks something, and this section of the post will really help me exemplify that to others in the future.

    2 votes
    1. drannex
      Link Parent

      When a human writes code, we have the original programmer to vouch for their code, plus another person to review and double-check their work.

      This is one of my biggest issues with AI code, and why I don't particularly allow it at my company. It's one thing for someone to comment "# Snagged this from StackOverflow: [[LINK]]" for some basic niceties of understanding, but with generated code you have no one to fall back on to debug or understand why something was written that way. Sure, it's on the primary programmer to audit the code, but it's not the same level of trust. Perhaps the generated code is safer, or more practical, but I like to see the thought process of the human in the machine.

      4 votes
  5. BroiledBraniac
    Link

    Never use it to write "real" code, but Copilot writes some nice test-suite boilerplate. Of course, you have to go and edit everything it writes, but it does save me a good chunk of time there.

    2 votes
  6. Rudism
    Link

    The sentiment that using an LLM essentially converts your work from writing code into doing code reviews feels pretty spot-on to me. It's the main reason why I'll probably never use AI assist to generate code--it removes the part of my job that I actually find enjoyable and replaces it with one of the things that I enjoy doing the least.

    1 vote
  7. Staross
    Link

    I think another aspect is short term vs long term. I feel like LLMs can help you go quickly early on at the price of long-term learning. E.g. compare starting from an LLM code snippet and toying with it vs reading the docs.

  8. SteeeveTheSteve
    Link

    AI's place right now is best for snippets of code, or for feeding it a code outline so it has set parameters to work within. Having it just make a complicated program from a description sounds like a terrible idea: too much room for creativity. That's how you end up with a game that does the equivalent of sudo rm -rf /* when you lose.

    I'd love to have a text editor where you write code, pseudocode, or a description and have it suggest code. It could just aid you or write an entire program as you see fit. I'd also want it to use AI to review code, checking as you write to spot mistakes, dangerous code, and other potential issues before you run it. It wouldn't replace a coder or a reviewer; it'd just help both be more efficient and allow more people to do either job. At least until AI is good enough to replace you or allow any halfwit to do your job at a fraction of the cost.