35 votes

A coder considers the waning days of the craft

26 comments

  1. [10]
    post_below
    Link

    I enjoyed the article, but there's a bit of clickbaity shock value in the headline. Though the author does use the word eulogy early on, he ultimately concludes (as pretty much everyone else with domain knowledge has) that software engineers aren't going anywhere any time soon. It's the non-engineers, and almost engineers, that are sure ChatGPT will replace coders sometime next month.

    I thought this was interesting:

    At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

    It's hard to be sure what language he was using, but in every language I can think of, this is a trivial problem: basic file I/O and one call to a built-in library function. So it kind of surprised me, given that he's a professional coder.

    Even if for some reason you couldn't solve it yourself, there would likely be a snippet on Stack Exchange that does exactly what you want, so it would be a quick Google. Indeed, that's exactly why ChatGPT was so good at solving it: the code was present in its training data multiple times over.
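
    For what it's worth, here's roughly what the trivial version looks like in Python (a sketch; the dictionary path and function name are illustrative):

    ```python
    import random

    def random_lines(path, n=100):
        """Return n distinct random lines from a newline-delimited file."""
        with open(path) as f:
            lines = f.read().splitlines()
        return random.sample(lines, n)

    # e.g. print 100 random dictionary words:
    # for word in random_lines("/usr/share/dict/words"):
    #     print(word)
    ```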

    Maybe he was taking some artistic license in recounting the situation, or maybe my perception of what professional coder means is inaccurate.

    In any case, ChatGPT definitely changes things, as so many new technologies have before, but it's not time for eulogies, just adaptation.

    53 votes
    1. [5]
      lou
      Link Parent

      I enjoyed the article, but there's a bit of clickbaity shock value in the headline.

      I would suggest that having "a bit of clickbaity shock value" is simply not clickbait. It is just a title performing its rhetorical function of persuading people to engage with the content.

      I am not fond of the broadening of the concept of "clickbait" to include every title that is not a cold and literal description of the content.

      17 votes
      1. [4]
        post_below
        Link Parent

        I would argue that coding is, in no way, a waning craft. The opposite is true. Nor does the author argue that there is any waning going on.

        18 votes
        1. [2]
          lou
          Link Parent

          A title that does not literally reflect the conclusion of an article is not, in my view, clickbait. Titles have a long history of flourish, poeticism, irony, etc. "Clickbait", as a concept, is meant to address a use of rhetoric that is malicious and manipulative. Do you believe that title is malicious and manipulative?

          5 votes
          1. skybrian
            Link Parent

            I don’t think “clickbait” implies malicious content. It implies a headline that “persuades people to engage with the content” (as all headlines try to do) but in a way that’s somehow misleading about the contents of the article. Furthermore it says this is bad, a fraudulent claim on the user’s attention. But no more malicious than that.

            This is a matter of degree: how misleading is the headline? How willing are you to forgive any minor deception after reading the article? Your willingness will probably depend on how much you liked the article.

            I agree that it’s not so bad in this case, so I would go with “slightly misleading” rather than “clickbaity shock value.” I liked the article, so I think it’s easily forgiven.

            Why it’s misleading: the word “waning” is used in a statement that implies there will be an argument claiming that programming is declining as a craft. But the author doesn’t make a clear claim. They go back and forth on whether it might be true. A more accurate headline might leave it a question: “A coder ponders whether the craft is waning.” Maybe I should have used that instead?

            Many of us have become sensitive about language use (not a bad thing) but also impatient with the random assortment of Internet articles that are seemingly clamoring for our attention.

            I think this impatience comes from spending too much time on the Internet and getting bored with our usual reading. We hope to read something fresh and original and are easily disappointed. Yet another article about AI, ho hum. We already know about ChatGPT, were hoping to read something new about it, and a headline briefly got our hopes up.

            18 votes
        2. raze2012
          Link Parent

          Well, it depends on how you use "coding". The author clearly thinks of coding as "the process of thinking about an approach to the solution and how to express it in a programming language". In that sense, coding as the author conceives it may or may not be waning, in that it's transforming into "understanding how an AI prompt works and giving it small tasks until the larger product is built".

          But he definitely argues that even if coding is waning, programmers aren't.

          3 votes
    2. NaraVara
      Link Parent

      It's the non-engineers, and almost engineers, that are sure ChatGPT will replace coders sometime next month.

      He mentions early in the article that it's analogous to the dotcom days, when people with some pretty basic programming skills could pull a very respectable income doing stuff that Squarespace lets anyone do for a pittance. Programming as a craft doesn't go away, but the whole clade of independent businesses and freelancers is gonna get rinsed out.

      7 votes
    3. redwall_hp
      Link Parent

      I'd consider that to be a trivial first-semester freshman homework problem. Certainly not something anyone would even have to think about at a professional level, unless there were some sort of optimization required for a massive file.

      You can even do that with a single bash command (shuf, for example), without even needing to program anything.

      5 votes
    4. [2]
      skybrian
      Link Parent

      I assumed there was some subtlety he left out. If you want to do it in one pass without loading the whole list into memory then it’s a bit tricky. Maybe it doesn’t seem tricky if you already know the answer.
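
      The one-pass version is the classic reservoir-sampling trick (Algorithm R), which keeps only k items in memory no matter how long the stream is. A sketch in Python:

      ```python
      import random

      def reservoir_sample(iterable, k):
          """Pick k uniformly random items from a stream of unknown
          length in one pass, using O(k) memory (Algorithm R)."""
          reservoir = []
          for i, item in enumerate(iterable):
              if i < k:
                  # Fill the reservoir with the first k items.
                  reservoir.append(item)
              else:
                  # Item i survives with probability k / (i + 1);
                  # by induction, every item is equally likely to stay.
                  j = random.randrange(i + 1)
                  if j < k:
                      reservoir[j] = item
          return reservoir
      ```

      With that, "a hundred random lines" becomes reservoir_sample over the file handle with k=100, without ever holding the whole file in memory.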

      2 votes
      1. imperator
        Link Parent

        That's my thought as well. I'm not a programmer, but I know enough to be dangerous. I assumed they chose that problem because it would be easier for non-coders to understand.

        3 votes
  2. [3]
    nocut12
    Link
    • Exemplary

    I guess the thing that bothers me about LLMs is less a fear of losing my job because of them and more a fear that computers are going to get away from what I like about them.

    A big part of what drew me to computer science is the idea of finding some kind of deep understanding in these systems — the idea that we can break something complex down until we understand every little step. The core idea of machine learning is in pretty direct opposition to that — the premise is that you don't need to understand the model that gets built! Recent LLMs are arguably the most complex things anyone has ever built, and they're built on a foundation that rejects the idea that we should understand what we make. I'm worried this is indicative of a change in the ethos of this field, and I think the old one is a big part of what I like about it.

    I think there's an honest possibility that this stuff becomes the "main thing" we do with computers in the future (the same way "using the Internet" is more or less the main thing we do with computers today). If most software people care about becomes AI-based, that's where the field will be for a long time. If that happens, I think we're going to be de-emphasizing the things I think are beautiful about programming and it's going to get a lot harder for me to stay enthusiastic about it.

    28 votes
    1. [2]
      teaearlgraycold
      Link Parent

      We could go from placing pavers to planning overpasses. The atomic unit will go from a token of code to an API interface. From quantum physics to relativity.

      5 votes
      1. KeepCalmAndDream
        Link Parent

        I code as a hobby rather than for a living. I haven't tried using GPT to code yet. This is where I'm coming from.

        Industry keeps piling more and more abstractions on top of each other, to the point where no one really understands what's going on. (Lately I've been watching Jonathan Blow and Casey Muratori talk about this. Here's one video, but it's not short.) Some code bases are millions of lines long and depend on other large code bases. Most of this is architectural glue code rather than actual functionality. Performance and code complexity have suffered as a result. Hardware has grown much faster (despite reaching the limits of Moore's law); for the most part software hasn't, and the gains in hardware performance are being spent on processing middleman code.
        All that architecture has become ossified, and people work with and 'improve' it by piling on more architecture, rather than creating new, leaner systems based on what they've learned from existing code bases.

        GPT is a particularly powerful tool, and it's still new, so there are lots of places where it hasn't been applied before and can do something. But it's ultimately a tool. It's one that's usable without having to understand the foundational details of how things (e.g. hardware, programming languages, dependencies) work, so it's easy and tempting to use it to pile on more code. I hope it can be used instead for, e.g., learning how to code, or writing code quickly where adding to computational cost/code complexity isn't important compared to the human cost of writing it by hand.

        I'm selfish, I like to understand what I do and use (as much as possible/feasible). I've never been comfortable using large, obtuse frameworks. I've built tiny little cottages (or maybe 'huts' is a better metaphor), and I see lots of sprawling megastructures that I mostly don't want to engage with (but can't avoid). I think well-engineered buildings and skyscrapers are possible too, but it requires a big shift in values to get more of them out in the market and for people to use them.

        10 votes
  3. infpossibilityspace
    Link

    I'm not convinced. None of the tasks he describes seem particularly difficult, just time-consuming. There are no architecture/design decisions being made, which is one of the key skills a good programmer needs.

    I have no doubt LLMs will automate tedious work away, but they are a long way from producing software that is scalable and maintainable over the product lifecycle.

    Additionally, the author makes no comment on the security of the produced code. Is it vulnerable to anything in the OWASP Top 10? How confident are you that you could patch it if a vulnerability were disclosed?

    16 votes
  4. [3]
    Minori
    Link

    Frankly, I've mostly used LLMs to help me name things that I can't think of anything succinct for. It's a hit and miss process, but it helps with the brainstorming.

    12 votes
    1. [2]
      Lonan
      Link Parent

      I've done this. Basically, put in the bad name and get it working; don't waste time coming up with good names. Then get ChatGPT to come up with something more suitable later. It doesn't even have to be fancy, just not stupid. In one case, my original name was something like "dontProcess(int ID)", so a particular identifier would be put on a list to not be processed... the replacement was the more elegant "exclude(int ID)", which hadn't occurred to me.

      7 votes
      1. cdb
        Link Parent

        There's the old "I would have written a shorter letter, but I didn't have time," or some such. It's hard to be concise sometimes, but in the case of variable/function names, why not just be verbose? I think something like addToDontProcessIDList is pretty good (assuming dontProcessIDList is what you're calling the list). It's easier to think of because you just write what you're thinking without editing it down, and it describes your thinking, so hopefully when you come back to it in a month you'll remember what you were thinking when you coded this part. Compared to this, I feel that "exclude" is a little generic and could be misinterpreted in the future.

        3 votes
  5. [4]
    teaearlgraycold
    Link

    As someone that's already in the industry and feels pretty good about his skills in coding as well as many adjacent skills (customer interviews, system design, code review) I feel like there will be two stages to AI's effect on my job. We're at the beginning of stage 1.

    1. Programmers will take larger and larger steps in each cycle of work. I feel that AI can speed me up 10-15% when working on something I'm deeply familiar with, and 200-500% on special areas where I know a similar technology to the one I need to use for a project. So I can say "I know how to do X with Y, but I need to use Z. What's the equivalent in Z?" and then I can ask follow up questions to fill in gaps in my understanding. But those situations are not super common. Today it's really more so in the 10-15% range and that's only for programming. But as time goes on that efficiency gain will improve.

    2. We develop AI systems that are functionally a replacement for any human whose job can be performed by viewing and typing into a computer. This will be such a large societal upset that I don't have any plans to get ahead of it. I probably won't be in the first class of knowledge workers to be made obsolete by such a system, so I can start my planning when that happens to others first.

    11 votes
    1. [3]
      TransFemmeWarmachine
      Link Parent

      I appreciate your perspective, but I think that you are jumping from point one to two optimistically quickly.

      AI has an issue: it can't generate new solutions. This makes it useful for solving existing problems, but damn near useless for fixing new ones. It will definitely impact repetitive jobs like data entry, but due to the extreme number of edge cases in any industry, it's just not correct to assume that knowledge workers are going to get replaced by this anytime in the near or even medium-term future.

      At present, AI is really only capable of tasks that are essentially "Take this piece of information, and do this with it." This means organizing existing documents and data, grabbing information the user needs from data, changing the data from one form to another (e.g. programming languages, and eventually human ones), changing the aesthetics of a piece of information, and recalling previous solutions to that piece of information.

      This is because LLMs draw from massive sources of information, essentially averaging and consolidating correct responses to existing questions. At present, AI cannot create a new solution to a problem without drawing from existing sources. That's literally why it's called "large": it needs a fair amount of data to even run in the first place.

      Ex. the meme "Thanks Obama." Essentially, this is a sarcastic meme from the 2010s era in which Obama was blamed for various nonsensical problems, so sarcastically that it quickly becomes clear it has nothing to do with President Obama's actual career or political choices. (I would link it here, but I'm on a work terminal <.<) A model without that knowledge base can't make heads or tails of such a meme. Hilariously, you can literally try this right now: ask ChatGPT about "Dark Brandon." As this meme originated in early 2022, i.e. after the model's knowledge cutoff, ChatGPT can't figure it out. It just attempts to latch onto words involving "dark."

      Prompt: -attempt to create a meme with the punchline "Dark Brandon" based on your existing knowledge

      -Caption: "When someone asks how I take my coffee." Image: A picture of a mysterious, shadowy figure sipping coffee with a caption next to it saying, "Dark Brandon: Like my coffee, my secrets are strong and black."

      AI isn't going to design a building without a human. If it's researching chemical interactions, it's because we've already got a knowledge base of existing chemicals. AI can generate art, because it grabs from existing artists.

      Another thing I've tried to have it do, is to generate "life stories" for people. (I suck at character work and I wanted inspiration.) Asking it to make a brand new thing makes it stutter, since it doesn't have any way of creating new ideas.

      I'll start worrying a hell of a lot more when we get generative AI, but right now, AI is a great tool for figuring out software solutions, pulling info from annoying PDFs, and a hat trick for muggles.

      8 votes
      1. Mendanbar
        Link Parent

        I get your point and agree that we've WAY overblown the term AI to include things that I wouldn't call AI, but I also did chuckle at the Dark Brandon meme. :D

        3 votes
      2. skybrian
        Link Parent

        We should be careful to distinguish present and future AI. We all agree that it's not there yet, but "stage 2" is about an imagined future, so that's not directly relevant. It's hard to put bounds on what smart machine learning researchers will come up with. You can imagine what you like for what will be available in a year, or five years.

        3 votes
  6. EgoEimi
    Link

    I'm very pro-use-of-LLMs. They really shine when you use them critically. One has to approach an LLM with great skepticism, interrogating it thoroughly. But an LLM is great for quickly traversing a knowledge domain and revealing new thought pathways that have been previously obscured.

    But I fear that people will use them uncritically. I already see this at work. My current client will just ask it point blank "how do we do X migration?" and just copy and paste the answer in an email to me with the expectation that the steps laid out by ChatGPT are authoritative and objective and that the task would be accomplished as easily as 1-2-3.

    They are but another tool, albeit an extremely powerful one, in the toolbox.

    9 votes
  7. skybrian
    Link

    A well-written description of how ChatGPT is changing programming. (It's more about the present than the future, which is anyone's guess.)

    Afraid to use GPT-4 myself—and feeling somewhat unclean about the prospect of paying OpenAI twenty dollars a month for it—I nonetheless started probing its capabilities, via Ben. We’d sit down to work on our crossword project, and I’d say, “Why don’t you try prompting it this way?” He’d offer me the keyboard. “No, you drive,” I’d say. Together, we developed a sense of what the A.I. could do. Ben, who had more experience with it than I did, seemed able to get more out of it in a stroke. As he later put it, his own neural network had begun to align with GPT-4’s. I would have said that he had achieved mechanical sympathy. Once, in a feat I found particularly astonishing, he had the A.I. build him a Snake game, like the one on old Nokia phones. But then, after a brief exchange with GPT-4, he got it to modify the game so that when you lost it would show you how far you strayed from the most efficient route. It took the bot about ten seconds to achieve this. It was a task that, frankly, I was not sure I could do myself.

    7 votes
  8. terr
    Link

    I'm not a programmer really, but I've dipped a toe or two in enough to understand this section and it gave me a hearty chuckle:

    Imagine explaining to a simpleton how to assemble furniture over the phone, with no pictures, in a language you barely speak. Imagine, too, that the only response you ever get is that you’ve suggested an absurdity and the whole thing has gone awry.

    To be honest, I'm interested in what ChatGPT can teach me about coding. I've played with it a little bit, asking it to create programs to print 'Hello World' in a few languages and to explain the components of the code. It was actually really nice to have an interactive reference that I could talk to and ask the simple, dumb questions that only a true beginner can ask, the likes of which would most likely prompt a human response of 'RTFM'. Of course, I know that GPT isn't 100% reliable and that one needs to check its work, but it gives me some hope that I might be able to learn a little bit about coding without pestering a human with my stupid questions.

    7 votes
  9. gingerbeardman
    Link

    For the multiple mentions and comparisons to Lee Sedol, it's worth noting that AlphaGo (the Go "ai" that caused him to retire) has since been discovered to have a flaw that can cause it to be beaten every time by playing in a certain naive way. https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/amp/

    4 votes
  10. hao
    Link

    Another excellent article by the New Yorker's de-facto resident coder-writer. Hello, Somers, if you see this :^)

    Having used ChatGPT extensively over the past two weeks to build a simple wedding website — a photo gallery with automatic thumbnail generation (face detection with OpenCV), and lightly dynamically generated pages (an N-days-til-wedding counter in Microsoft's Razor templating language) — I think we are poised to bring more aspiring coders into the craft than ever. At the boundary, there are all these people who would be happy to code but need a little activation-energy push to get over a hump. Maybe they have kids and don't have many free evenings; maybe they don't have a good text editor or shell setup. If ChatGPT 5 or 6 can be that push, that seems like a net positive for the craft. Who, really, cares what happens to the elite echelon of coders who have gone through Ivy League CS curricula? If they get bored and move on to other industries, if their salaries are marginally garnished by the same forces of automation that have come for every other industry, that does not seem so bad.

    More concerning are the worries, the same ones Ted Chiang raised, that a few monopolies will capture all the profits. AI, conditional on its success, should be the rising tide that lifts all boats. It doesn't seem poised to. Are we to bequeath our generation's set of corporate behemoths to our children? Probably. Let's hope they do a better job of trust-busting than we did.

    3 votes