24 votes

Behind the curtain: A white-collar bloodbath

28 comments

  1. [19]
    teaearlgraycold
    Link

    The top comment on Hacker News says this is all a game of misdirection from the reality that we aren’t in a zero-interest-rate world anymore. Personally I’m pretty disappointed with the performance of LLMs when coding. At first I was extremely impressed with Claude 4 Opus, but increasingly I’m finding I’m out $10 from using it for 15 minutes and the code it writes either works but is extremely messy or doesn’t work at all. To be fair, this has been in the uncommon coding domain of a just-in-time compiler for an esoteric programming language. But Opus can’t manage to refactor a few lines of assembly meant to add two 8-bit numbers together… In fact none of the major LLM providers can manage this task. I’ll have to learn a bit of aarch64 and step through the assembly in lldb to make it work.

    As always, they are absolute time-savers. But they’re not very smart.

    20 votes
    1. [11]
      hobbes64
      Link Parent

      There are a few issues that you may not be considering. I'm going to concentrate on software development because that's what I know, but I think the broad concept of "work is going to be totally different soon in a way that people can't counter" can be applied to other white-collar work.

      First: Whether or not AI could do your job, if your boss thinks it can, then your job is in danger. And your boss may replace you only to find out that AI isn't able to do your job. That's expensive and stressful for both you and your boss.

      Second: The other thing is that AI won't be able to do your job today, or tomorrow. But someday it will. There are a lot of problems in the world that can be solved by brute force. It's infinite monkeys creating Shakespeare. Once people start setting up test driven development with AI, it can start trying solutions until one works. There may be a few years of people getting jobs creating proper test cases but not doing any coding, but that will take fewer employees than are used now for development plus testing.
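
      A minimal sketch of that loop in TypeScript (askModelForPatch is a hypothetical stand-in for whatever model API you'd actually call, not any real SDK):

        import { execSync } from "node:child_process";
        import { writeFileSync } from "node:fs";

        // Hypothetical stand-in for a real model call; wire up your provider here.
        async function askModelForPatch(spec: string, lastFailure: string | null): Promise<string> {
          throw new Error("not implemented: call your LLM provider of choice");
        }

        // Humans write the test suite (the spec); the model brute-forces
        // implementations until the suite goes green or we give up.
        async function bruteForceUntilGreen(spec: string, maxAttempts = 50): Promise<boolean> {
          let lastFailure: string | null = null;
          for (let attempt = 1; attempt <= maxAttempts; attempt++) {
            writeFileSync("src/impl.ts", await askModelForPatch(spec, lastFailure));
            try {
              execSync("npm test", { stdio: "pipe" }); // run the human-written tests
              return true; // green: accepted without a person reading the code
            } catch (err: any) {
              lastFailure = String(err.stdout ?? err); // feed the failure back in
            }
          }
          return false;
        }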

      In addition to those things, I think that AI will completely disrupt how software works in general, and this will make it much harder for people to participate in software development. For example, a very large part of development now is interaction between systems. This is normally done using APIs. APIs currently need to be designed (by people), and setting them up requires coordination between two or more parties (of people).

      A lot of the design of APIs is deciding what kind of data goes into the API. Let's say you want to make an API to run a petstore. You need to predefine what a pet is, what a user is, what an order is, and all of the verbs that let you retrieve and alter those things. You have to spend some time modeling a subset of the world, then you have to document all of that, publish it, write the code, etc.
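
      Concretely, today that modeling step looks something like this (names are just illustrative, loosely in the spirit of the classic petstore example):

        // The nouns you have to pin down up front...
        interface Pet   { id: number; name: string; species: string; forSale: boolean; }
        interface User  { id: number; email: string; }
        interface Order { id: number; userId: number; petId: number; placedAt: string; }

        // ...and the verbs, each one designed, documented, and coordinated by people:
        //   GET    /pets         list pets for sale
        //   GET    /users/{id}   fetch a user
        //   POST   /orders       place an order
        //   DELETE /orders/{id}  cancel an order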

      I think in a few years (maybe 5-10), work by people on APIs will mostly be dead. I think this because of the push for agent-to-agent communication with AI. This will start with some careful API design, but will transition quickly to mass sharing of raw data, with the object relationships built in the AI based on context. If an AI has access to a bunch of raw data, it will only need a relatively small amount of metadata to separate pets from users from orders. In the same way, a person who went to someone's desk and looked at the electric bills, store receipts, and family pictures could tell you a lot about the desk owner. So these agents will be communicating and people will only be involved a little bit. Both the designers and implementers won't be needed. Just AI. Sure, there will need to be consideration of how all this data is shared. Like somebody needs to set up data sensitivity levels and things like that. But you don't really think that a typical manager or business owner understands that, do you? So the chaos around this will cause more disruption.
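
      If that plays out, the "contract" might shrink to something like this (pure speculation on my part about the exact shape):

        // No per-endpoint schema: agents swap raw records plus just enough
        // metadata for the receiving model to sort pets from users from orders.
        interface AgentPayload {
          sensitivity: "public" | "internal" | "restricted"; // somebody still has to set this
          hints: string[];    // e.g. ["point-of-sale receipts", "customer accounts"]
          records: unknown[]; // raw, unmodeled data; the AI infers the relationships
        }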

      15 votes
      1. raze2012
        Link Parent

        But someday it will

        K, I'll burn that bridge when we get to it. The theme of the 2020s seems to be to kick the can down the road and inflate the present with speculation, so I'll treat this as such.

        11 votes
      2. ogre
        Link Parent

        The other thing is that AI won't be able to do your job today, or tomorrow. But someday it will.

        Honestly, I don’t think it will. Maybe a year or two ago I operated at a level that could eventually be replaced by LLMs. I’m not a sitting duck though. My personal growth outpaces the incremental progress LLMs have been making for the last year. AI cannot do what I do today, and I don’t think it will be able to do what I’ll be able to do in a few years.

        Fundamentally I don’t believe LLMs can do All the Things of Tomorrow. The proselytizing around what they could do in the future all sounds like “what if the world was made of pudding?” to me.

        11 votes
      3. [5]
        teaearlgraycold
        (edited)
        Link Parent

        I’ve been trying to make my work more AI-proof in the last year. I’m focusing on more design and UX work under the assumption that what makes an app pleasing to use is further out of reach for AI than translating a set of specs into a CRUD backend. I also just transitioned to self-employment, contracting for startups in San Francisco. Constantly needing to justify getting paid for each project is going to keep me far fitter than the average salaried employee. And the skillset has a ton of overlap with being a startup founder. So far all of my clients are pushing me to found a business. I’m just a bit too lazy to do the 60+ hour weeks necessary for that. But of course I could, if forced to for survival.

        That all said I don’t think that’s much solace to the average software engineer out there. Most people are trained to work in environments that are much more easily automated away.

        Edit: I would like to throw in that even the “doomsday” scenario where software engineering is fully automated is something of a tech utopia. Imagine if every person could have bespoke software for every purpose at an affordable cost. Cheap bespoke operating systems, games, etc. Corporate strangleholds over consumers will disappear, as effectively as if all software had been open-sourced overnight.

        9 votes
        1. [2]
          tauon
          Link Parent

          Corporate strangleholds over consumers will disappear

          Going to go on a bit of a tangent here, but it’s not hard to imagine an alternative to that utopia. Personally I’m a fan of the Manna story and would highly recommend giving it a read if you have the time, and contemplating the two perspectives it brings to the table. I can definitely say it had a lasting impact on me, and I think about it relatively frequently these days with all the discussion around jobs disappearing due to machine learning in all aspects of life. (It’s a bit older, so it’s not about AI specifically, but the point isn’t how exactly the jobs disappeared; it’s what we do after that as a society.)

          8 votes
          1. BroiledBraniac
            Link Parent

            That one was a ride, very interesting concept on both ends.

            1 vote
        2. [2]
          hobbes64
          Link Parent

          Remaining fit is a good thing. I notice that using AI quickly erodes skills, and I'm sure that most developers are falling into that trap. Hopefully you will also have an advantage when interviewing, and the people hiring will be able to tell your responses from those of someone who is cheating during the interview. That's one of the first things we look for now when doing remote interviews.

          Good luck to you!

          4 votes
          1. teaearlgraycold
            (edited)
            Link Parent

            Absolutely. There are junior engineers I sit next to when working in person and I can see them regressing as they rely entirely on AI. The AI gives them bad code and then they try to have the AI fix it instead of doing simple transformations themselves.

            For example (and this is a common anti-pattern from AIs), this engineer got code from an LLM that was something like:

            if (list.any(condition)) { list.each(x => condition(x) && do_thing(x)); }

            In reality the code was far more obtuse. Point being, the enclosing if is redundant.
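
            In that shape, the whole fix is just the loop on its own:

            list.each(x => condition(x) && do_thing(x));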

            I looked over his shoulder and mentioned (more politely) that the code was borderline incomprehensible. It took him 60 seconds to even understand what I’d pointed out, as he hadn’t written it. Then, to finally remove the if, he asked the AI to perform the change instead of deleting two lines himself. I asked why he wouldn’t do the change himself and he was confused. The guy is otherwise pretty smart but is completely reliant on the LLM to work.

            9 votes
      4. [3]
        tauon
        Link Parent

        This would be an interesting direction for future software to go in, and one I hadn’t considered much previously.

        Two immediate reactions around this style of inter- and, possibly at first, intra-application communication:

        1. It doesn’t sound very justifiable to the legal department. For example, if this becomes practice, who in the company can guarantee my GDPR rights to access, correction, and erasure of my data?
        2. It sounds extremely inefficient to toss around nearly all of the data at hand about one or more users/people for every single interaction, to say nothing of the processing cost… Or am I misunderstanding something here?
        7 votes
        1. davek804
          Link Parent

          Not addressing your two questions directly, but I watched a pretty informative (< 10 minute) video explaining some of the potential sea-changes that could come along if MCP (the Model Context Protocol) ends up becoming something important: https://www.youtube.com/watch?v=cPdVbVx5Z3Q.

          2 votes
        2. hobbes64
          Link Parent

          Regarding the legal issues: yes, that’s a problem, but historically powerful companies write a lot of the laws.

          Regarding data size, that’s a technical issue that is solvable. Maybe there will be shared storage. Maybe most of the data involved is compressed text that is small compared to (for example) streaming video which is sent all over now.

    2. [3]
      Omnicrola
      Link Parent

      While I appreciate that Amodei is saying things like "you should tax me, to help offset the disruption this will cause", it does still read like hype.

      To your point, I also have repeatedly run into the limits of Claude's ability to code. It's great for getting started and brainstorming, but it makes mistakes all the time. And the further you get from popular languages and common use cases, the worse it gets. A human still needs to check and review all of it, and debugging someone else's code is always harder than writing your own.

      8 votes
      1. [2]
        slade
        (edited )
        Link Parent

        A human still needs to check and review all of it, and debugging someone else's code is always harder than writing your own.

        I'm struggling with this at work. I have a non-engineer coworker who is fully bought into the AI hype. He will regularly come to the team of engineers with something he's "built" in AI and ask if we can use it to "save time". It's worth noting that his time-saver proposals never (ever) begin with asking how long it'll take the engineers. This kind of thinking is what will get me replaced with AI. With nothing more than hype, and despite being wrong every time, he goes straight to assuming that his 15-minute AI output is better. If he were my boss, I would be fired.

        I know that day will come, but so far when I ask to see the code he produced and not just the end result, I'm disappointed. Monolithic, overly complex, redundant state, bugs. The last time I forced myself to use his work as a shortcut, it took me far longer to reverse engineer and fix it than if I'd just built it.

        I predict that soon these things may not matter, but today, any code built by AI will need to be maintained by an engineer. Someday I expect AI to be good enough that a non-engineer can build and maintain their own product. But I haven't seen that yet.

        The same problems of technical debt and bad architecture exist, and these are where I find AI least useful. I can use AI to refactor code more efficiently, but I have never seen AI do a good job of a refactor without a lot of guidance from a knowledgeable driver.

        16 votes
        1. Scheigs
          Link Parent

          My coworkers and I have already started speculating on when engineering positions will start opening up to fix these vibe-coded projects down the line. Maybe they already have.

          “Did an actual engineer create/maintain this project?” might be a legitimate thing to think about in an interview a few years from now. I imagine a lot of management feels the same as your coworker.

          7 votes
    3. [4]
      TMarkos
      Link Parent

      When using them for coding tasks I get the impression that we're missing a layer of complexity. The problem is that nobody has put together an AI suite that can test the output for functionality in the target environment. It's well within the abilities of current AI to put out an initial sample of code, evaluate the results, and continually revise it until it works (for some things, at least), but right now the human has to do all the run/evaluate QA stuff and provide feedback to the AI. It's easy to see how you could plumb AI into a full requirements-driven coding process and have it autogenerate tests and fixes.

      That's the real threat, I think, and the missing piece is not AI performance but instead the infrastructure to allow the AI to act in a multi-step iterative process in a sandbox to create the desired product.
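
      A rough sketch of just the evaluate step, to show how mundane the plumbing is (the docker invocation and result shape here are my assumptions, not any existing tool):

        import { execFileSync } from "node:child_process";

        interface RunResult { passed: boolean; log: string; }

        // Run the candidate's test suite in a throwaway container so the model
        // can iterate freely without touching the host environment.
        function evaluateInSandbox(projectDir: string): RunResult {
          try {
            const log = execFileSync(
              "docker",
              ["run", "--rm", "-v", `${projectDir}:/app`, "-w", "/app", "node:20", "npm", "test"],
              { encoding: "utf8" },
            );
            return { passed: true, log };
          } catch (err: any) {
            // execFileSync throws on a nonzero exit; keep the output as feedback.
            return { passed: false, log: String(err.stdout ?? "") + String(err.stderr ?? "") };
          }
        }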

      I do think the AI CEOs are selling a product that won't be on the market for five, maybe ten years, because I'm realistic about the amount of work and config that will have to go into making a toolkit like that, to say nothing of getting businesses to actually install and use it. I don't think they're being hyperbolic about the eventual impact of AI on jobs, though. Again, five to ten years plus time for adoption. As the sector grows more robust and has more mature product offerings, it will absolutely decimate a lot of entry-level positions.

      Given the speed of public policy, it isn't absurd to start the conversation now.

      4 votes
      1. [3]
        teaearlgraycold
        Link Parent

        My recent tests were in such an environment. Cursor allows LLMs to make tool calls into your local machine, things like reading files and running commands. I even specifically instructed Opus to run lldb (non-interactively through lldb scripts). All of the LLMs I tested got stuck in a loop.

        2 votes
        1. [2]
          TMarkos
          Link Parent

          I think that's close to what I'm describing, but I'm specifically talking about something that has several differently-instructed agents working in parallel to self-monitor and bias subsequent attempts against the output of past tests, in order to break looping behavior and improve future outputs. It's all about the amount of context you can jam into the machine.
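
          Roughly this shape, hypothetically (generate and critique stand in for two separately-prompted model calls):

            type ModelCall = (prompt: string) => Promise<string>;

            // Two differently-instructed roles: a generator and a critic. Past
            // failures stay in context to bias later attempts away from repeats.
            async function attemptWithMemory(task: string, generate: ModelCall, critique: ModelCall, tries = 10): Promise<string | null> {
              const failures: string[] = [];
              for (let i = 0; i < tries; i++) {
                const history = failures.map((f, n) => `Attempt ${n + 1} failed: ${f}`).join("\n");
                const candidate = await generate(`${task}\n\nAvoid these past mistakes:\n${history}`);
                const verdict = await critique(`Does this solve the task? Reply OK or explain the failure:\n${candidate}`);
                if (verdict.trim().startsWith("OK")) return candidate;
                failures.push(verdict); // remember why it failed, to break the loop
              }
              return null;
            }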

          Also very possible that AI performance just isn't where it needs to be yet. My own tests with OpenAI's products have been pretty disappointing in terms of complex problem-solving, but there was a big leap in performance when 4.1 came out, and before that the transition from 3 to 4 was similarly large. It's possible that they won't be able to improve performance at all from here, but I think that's less likely than the case where it continues to get marginally better every iteration, at least for the near future.

          1 vote
          1. teaearlgraycold
            Link Parent

            I have to disagree a bit. GPT-3 to GPT-4 was a step-function improvement, and it represents what I see as the beginning of the plateau given the available data. From there we’ve seen improvements to context size, structured outputs, tool calling, etc. But base models aren’t meaningfully more intelligent. It’s been incremental for the last few years.

            2 votes
  2. albino_yak
    Link

    While I think there are plenty of reasons to be concerned about AI, this particular article feels disingenuous. As I read it, the authors are just elaborating on statements from the CEO of Anthropic. He obviously has a financial stake in continued AI investment. So when he "warns" that AI will replace 20% of entry level jobs, it feels like a sales pitch disguised as concern. There's a big difference between "AI is so powerful it should be illegal" and "my AI is so powerful it should be illegal".

    14 votes
  3. [2]
    aetherious
    Link

    I would not want to be entering the job market right now. Anecdotally, I've seen work that would've been done by an entry-level copywriter or business development executive be replaced with AI. LLMs still don't seem equipped to replace higher-level thinking and the particulars that come from experience, but working with them has been equivalent to working with interns, who are very skilled in some ways but dumb in others. You give detailed instructions, get the results immediately, and then you revise and check the work.

    I can't speak to the particulars of coding or curing cancer, but the current capabilities are enough for most marketing functions, sales, putting together basic reports (especially work like transforming data from one form to another), and initial research.

    There also seems to be data to back this up.

    The latest that I was able to find is from the workforce data company Revelio Labs, dated May 13:

    Unlike rising blue-collar pay, salaries offered in new job postings for white-collar roles have remained flat since mid-2024. Early-career roles have experienced the most significant stagnation in listed pay, while salaries in top executive job postings have increased.

    There's definitely a lot of hype mixed in with what's achievable with current AI capabilities, which makes it difficult to ascertain what future work trends might be, but it's certainly going to become more challenging for those whose roles consisted entirely of 'low'-level work that AI can do.

    6 votes
    1. imperator
      Link Parent

      Yeah, stagnant pay is more likely a result of a hot economy cooling.

      Companies are saying they have an AI plan and are looking into it, but this isn't translating into revenue or even material savings.

      I think those that have been hit the hardest are gig workers, where AI is good enough for smaller companies' marketing.

      3 votes
  4. [6]
    BeanBurrito
    Link

    My apologies if this is posted in the wrong category.

    The article warns of a 20% unemployment surge coming in the next 2 years due to AI.

    Most entry-level white collar jobs will be eliminated.

    This will be especially devastating for young people. Some of the jobs most at risk of being eliminated are entry- and mid-level: coders/engineers, researchers, marketing, proofreading, brainstorming, administrative assistants, bookkeepers, software developers, radiologists, paralegals, customer service, etc.

    The article outlined how governments need to quickly warn the public and help retrain people, and how companies need to be charged a token AI tax to pay for the devastation that will happen in the very near future.

    4 votes
    1. [4]
      mimic
      Link Parent

      As others have said I feel like this is largely just going off what the Anthropic CEO has said and it's being used as misdirection, fear mongering, and effectively a sales pitch.

      Maybe the full quote would help people understand how ridiculous of a statement it is.

      “Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs,” said Amodei, describing one potential scenario.

      Not only is it stated as "one potential scenario", he's also calling out cancer being cured and a balanced budget.

      9 votes
      1. [3]
        PuddleOfKittens
        Link Parent

        Suggesting AI can balance the budget is a strong indicator he hasn't thought it through. Balancing the budget isn't (just) a financial problem, it's a political problem: the US has probably the highest revenue in the world, and there are far poorer governments who still balance their budgets. If AI finds a way to save tons of money, that will just permit politicians to cut taxes.

        10 votes
        1. boxer_dogs_dance
          Link Parent

          Just noting that the current US administration had an opportunity to bring in forensic accountants and interview their employees with regard to waste but they put on a performative act instead. My conclusion is that they are in favor of some of the waste and fraud that they claim to be against.

          8 votes
        2. mimic
          Link Parent

          Imo it's just a CEO throwing out click-baity garbage. He definitely hasn't thought through anything he said beyond "will this drive attention to my product?". He has a vested interest in keeping the money flowing in, and also in keeping people scared and worried about LLMs so that they invest their time into learning the tools, bringing them into the ecosystem, which drives investment and keeps the money flowing. It's just another set of meaningless words thrown out by an out-of-touch CEO. And these news outlets are running with it because it gets clicks.

          4 votes
    2. Sodliddesu
      Link Parent

      Luckily, part of my job has already been adjusted to add the duties of detecting fraudulent submissions made with AI-generated content. Doesn't really mean I'm 'safe' from getting fired, but it would be hilarious if I were.

      5 votes