42 votes

Study shock! AI hinders productivity and makes working worse.

26 comments

  1. [4]
    arqalite
    Link

    This is why I actually appreciate my company implementing a mandatory 2-hour long AI course for all employees. They made it clear - AI can be an immensely helpful tool, but it's important to remember it can be biased, it can hallucinate, and never ever should its output ever be used without a human reviewing it.

    42 votes
    1. [3]
      Kenny
      Link Parent

      What was the course outline?

      3 votes
      1. [2]
        arqalite
        Link Parent

        There are two introductory chapters: a definition of AI; the types of AI (ANI, AGI, ASI); what an LLM is and why it's not close to achieving AGI (but is a step in the right direction); how LLMs are trained (and the inherent biases of the training data); and why they might not be suited to domain-specific applications without additional training in that domain.

        Then they introduce a proprietary platform with our own LLMs (and licensed copies of major AI players' models) and how to use it for text, documents, images and video, both as input and as output.

        A brief guide to prompt engineering and a prompting framework our AI team has built, then a short mention of our internal prompt library.

        And at the end, an entire chapter on LLM drawbacks and pitfalls, followed by our internal AI procedures (here is the "never ever should its output ever be used without a human reviewing it" part) and a note on never using client/IP data in our models without permission, and on anonymizing personal data first.

        14 votes
        1. Kenny
          Link Parent

          This is fantastic, thanks for sharing.

  2. [9]
    killertofu
    Link

    Just as an anecdote, we've had AI-based autocomplete for coding for a while now, even before the latest hype cycle. It's just more now, and the things it's trying to autofill are slightly more complex. On one hand, it's useful for filling in some boilerplate a little faster than I would otherwise. But it's also kind of a huge risk even for that, because it's easy to become complacent and forget that you have to double-check every single thing it does.

    The other day I had it auto fill a template file following a prompt. And it looked all good at first glance. However, when I looked closer, one of the property names was wrong. It had assumed a certain naming pattern that was not accurate. This error would not have been caught by a compiler. There's a good chance it would have been caught in testing, which still would have wasted way more of my time than it saved overall. But if it hadn't, it would have likely been detected in experimentation, which really would have wasted huge amounts of time debugging, rolling out a fix, waiting for production release, etc.
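    A minimal sketch of the kind of silent mismatch described above (the names here are invented for illustration): the generated code assumes a plausible-looking key that the actual data doesn't use, so nothing fails until the value is consumed.

```python
# Hypothetical payload; the real key is "username", but imagine an AI
# autocomplete guessed the plausible-looking "user_name" instead.
payload = {"username": "arqalite", "id": 7}

# Generated accessor: syntactically valid, passes every static check,
# and because .get() returns None for a missing key, nothing even raises.
def display_name(data):
    return data.get("user_name")  # wrong key; no compiler will catch this

print(display_name(payload))  # prints None instead of the name
```

    The error only surfaces when something downstream consumes the `None`, which is exactly the "caught in testing at best" scenario from the anecdote.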

    On average, you probably shave some time off your baseline performance. That is, it helps you put characters in the file faster. But the time it takes to carefully check the work is an additional burden. And the new opportunities for and types of errors it can potentially introduce really make me question whether it's a value add at all.

    27 votes
    1. [4]
      winther
      Link Parent

      I haven't used AI much for coding, but my main concern would be the long-term issues you need to fix or refactor. I would say I already spend most of my time reading and understanding existing code, and that is hard enough even when you wrote it yourself. But you still have some understanding of it, and you learn more about a project's codebase by writing code in it yourself. Adding AI output to the mix might speed things up short term, but over time you end up with even more code no one on your team has written, and the team's general knowledge of the codebase will lessen. Good luck to whoever has to untangle years' worth of AI-generated code when some important business logic breaks at some point.

      15 votes
      1. blivet
        (edited)
        Link Parent

        I think this is a great point. I don't think it's a coincidence that the places I've worked at that had the lowest developer morale were where no one currently there had a hand in writing the original codebase. Throwing AI into the mix would make it a thousand times worse.

        5 votes
      2. [2]
        RobotOverlord525
        Link Parent

        I only do light coding in my job, much of it not very complex, but would there not be value in the fact that, when LLMs write code, they can always be told to comment on all of it? As I understand it, a lack of comments is a huge problem in a lot of coding.

        1. winther
          Link Parent

          Comments are only good if they explain the why, not just give a verbose restatement of what the code does, which is what these LLMs output. Often very verbosely.
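          The distinction can be shown with a toy example (the function and the policy behind it are invented here for illustration): the first comment merely restates the code, while the second records a reason the code alone can't convey.

```python
def apply_discount(price: float) -> float:
    # "What" comment (the verbose LLM default): multiply the price by 0.9.
    # It restates the next line and helps no one.
    discounted = price * 0.9

    # "Why" comment: finance wants rounding to always favor the customer,
    # so we truncate to the cent instead of using round().
    # (The policy is hypothetical; the point is that this comment records
    # a decision the code alone cannot justify.)
    return int(discounted * 100) / 100
```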

          3 votes
    2. [4]
      drannex
      Link Parent

      I think of AI coding as this:

      You have one or two junior developers under you. They are new; they've studied the basic concepts and understand how to build somewhat complex things. But they are still junior developers: they've never compiled a complete program, they don't know about data or architectures, they are new to the industry.

      When you're stumped and want some fresh ideas, ask them how it would work, as an exercise, then grade their performance and work out how you could take what they suggested and make it better. Your edits are just redlines on a printed diagram for them, and you are the teacher grading them.

      They're junior developers, you need to ensure that what they write works, and that includes ensuring there aren't any mistakes.

      And sometimes, they are just absolute idiots who need to start from scratch.

      9 votes
      1. [3]
        killertofu
        Link Parent

        Yeah, I don't really want that. I'm almost never searching for ideas on how to code something in a way that would make sense to throw to an AI. Maybe for bootstrapping or prototyping a toy personal project. Virtually everything I've worked on in a professional capacity would be too complex for this to be of much use at all.

        Filling in boilerplate is fine, I guess. As mentioned, I doubt its overall effect on grand-scale productivity, but at least it's optional.

        For the rest, how about hiring an actual junior programmer? Then when I'm spending my time checking work and correcting mistakes, both the company and the programmer are receiving actual value from this in the form of them being a slightly more experienced programmer. Trying to sidestep this process with AI seems short-sighted at best.

        6 votes
        1. chocobean
          Link Parent

          Not to mention, if we don't mentor junior programmers, we're soon going to have no programmers overseeing these bluffing machines at all.

          8 votes
        2. parsley
          Link Parent

          For the rest, how about hiring an actual junior programmer? Then when I'm spending my time checking work and correcting mistakes, both the company and the programmer are receiving actual value from this in the form of them being a slightly more experienced programmer. Trying to sidestep this process with AI seems short-sighted at best.

          I fear an AI programmer is "good enough" compared to junior developers whom you need to replace every year or two because they get better and leave, at least from the 10,000-foot view of a decision maker.

          I see AI as a form of outsourcing. Some companies will embrace it no matter what because their economic performance is detached from their engineering output, but for the most part companies won't be able to absorb the costs of lower-quality code and the lack of senior engineers who understand the code and can fix bugs or evolve the system.

          3 votes
  3. [5]
    creesch
    Link

    That's not surprising in the least, considering how I also see C-suites around me jump on it, seemingly afraid they are otherwise going to miss out on the next big thing. Reality is of course a bit different, but that doesn't matter; as the article also mentions, it is mostly magical thinking being applied here.

    For me personally, use of LLMs does have an advantage with the sort of work I do. But as I have mentioned in previous discussions, only as just another tool in my tool belt and only because I know what to expect and look for.

    For a large part of the workforce, I don't think there are that many tangible benefits. At least not to the degree LLMs are currently being pushed.

    22 votes
    1. [4]
      nacho
      Link Parent

      LLMs as writing tools and aids in summarizing/going through large amounts of text are incredible, extremely time-saving and close to error-free.

      It's the prompting that people get wrong. If you ask an LLM to output parts of a text, tell you the page numbers where to find things you're looking for, or provide references for its assertions and the like, you have extremely powerful tools. You can easily verify that the output is sound, or evaluate whether or not to spend time on something yourself.
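      As a sketch of that kind of verification-friendly prompting (the wording and names below are invented, not from any particular tool or API):

```python
# Build a prompt that forces the model to cite page numbers and quote
# spans, so each claim can be checked against the source document.
def build_prompt(question: str, doc_name: str) -> str:
    return (
        f"Using only the document '{doc_name}', answer: {question}\n"
        "For every claim in your answer, include:\n"
        "- the page number where it appears\n"
        "- a short verbatim quote supporting it\n"
        "If the document does not cover it, say 'not found'."
    )

prompt = build_prompt("What were Q3 revenues?", "annual_report.pdf")
```

      The point is not the exact wording but the shape: every assertion comes back with a locator you can check in seconds, instead of a bare claim you have to trust.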

      LLMs are hugely useful to a ton of different groups, but for language-related things, not for all sorts of other tasks where the hallucination issues etc. make themselves prominent.

      8 votes
      1. [3]
        creesch
        Link Parent

        I have to be honest, I am not exactly sure what bit of my comment you are responding to?

        6 votes
        1. [2]
          nacho
          Link Parent

          This part:

          For a large part of the workforce, I don't think there are that many tangible benefits. At least not to the degree LLMs are currently being pushed.

          Also this part:

          it is mostly magical thinking that is being applied here.

          3 votes
          1. creesch
            Link Parent

            Sure, that is another area where LLMs can be useful under specific circumstances. I'd wager, though, that a majority of the workforce with a desk job don't deal with that specific task all that often. Certainly not to the degree that it hugely improves their productivity.

            You also touch on one of the many issues the article talks about: people need to be trained in order to get useful results, something a lot of C-suites do not recognize.

            5 votes
  4. [2]
    infpossibilityspace
    Link

    And if freelancers get burned out, there are always fresh freelancers on the shelf.

    Gross. This idea that AI will magically make people more productive needs to die. These execs don't have a clue how it actually works, they just believe the hype sold to them by AI salespeople.

    14 votes
    1. g33kphr33k
      Link Parent
      "So we made this awesome prediction engine using math!" "What does it do?" "It makes very educated guesses about how things work and gives you code and advice" "Awesome. Sounds like the L1 people...

      "So we made this awesome prediction engine using math!"

      "What does it do?"

      "It makes very educated guesses about how things work and gives you code and advice"

      "Awesome. Sounds like the L1 people and the scripted telecoms people, let's replace them all."

      "It makes loads of mistakes though."

      "So do they."

      "It makes serious mistakes, can be racist, out right lies..."

      "Just like the staff I mentioned."

      "Look, it's not really read...."

      "Marketing has already put it out and sales have sold access to 1.2 billion people. Make it work."

      "It's still learning and we need time to..."

      "MAKE IT WORK, now get out!"

      Engineering and the CEO: a typical conversation at many companies.

      20 votes
  5. [3]
    Fiachra
    Link

    Many new technologies have gone through this same hype cycle where they peak in acclaim before crashing down in disappointment, and then regaining a more modest peak when they've found their realistic place in the economy.

    LLMs are engineered to statistically emulate the appearance of someone who knows what they're talking about. In a way, you can classify them as "bluffing machines". I think this has contributed to LLMs having an exaggerated hype peak, which means they're being optimistically jammed into roles where they don't belong, getting in the way of productivity instead of helping. The crash of disillusionment this causes could be as drastic as the rising hype was.

    10 votes
    1. [2]
      raze2012
      Link Parent

      LLMs are engineered to statistically emulate the appearance of someone who knows what they're talking about. In a way, you can classify them as "bluffing machines".

      When you put it this way, it's no wonder upper management is jumping on it like blood in the water. Many companies these days basically care more about selling ideas than quality products, after all.

      11 votes
      1. Fiachra
        Link Parent

        Best way to scam someone is to convince them you're helping them scam someone else.

        4 votes
  6. donn
    Link

    I will say AI is fantastic at doing rote stuff that I would procrastinate doing, like swapping out one library for another. You know, something where the output is verifiable and there is no creative input.

    5 votes
  7. [2]
    tanglisha
    Link

    Bosses are urging employees to increase their output with the help of AI tools (37 percent), to expand their skill sets (35 percent), take on a wide range of responsibilities (30 percent), return to the office (27 percent), work more efficiently (26 percent), and work more hours (20 percent).

    We just have to throw return to office into everything, don't we?

    There are folks out there dedicating time to figuring out if and how AI can make them more productive. Maybe it'll help, maybe it'll make things worse. The thing is, this is going to depend on what they even mean. "Use AI" is so vague, it could mean anything from having it write code to summarizing a paper for you. I really feel like any studies on this need to be more targeted.

    2 votes
    1. cdb
      Link Parent

      The sentence you quoted really stuck out to me while I was reading the article, because the numbers support only a fraction of what it's trying to suggest. The percentages are at most 37% and as low as 20%. The sentence would be more accurate if it started: "Only a minority of bosses are urging employees to..."

      I think the usefulness of a lot of these surveys is pretty low without some comparison to historic numbers. I would imagine some significant percent of bosses would always say that they would like their employees to take advantage of emerging tech, take on a range of responsibilities, or any of the other points. Is 30% of bosses saying something high or low? I have no idea.

      2 votes