35 votes

Is the AI bubble about to burst?

42 comments

  1. [22]
    an_angry_tiger
    Link

    Boy I hope so!

    No I don't think so!

    Not yet at least, I feel like these things just shamble on for longer than you think. "The market can remain irrational longer than you can remain solvent", and so forth. I remember being convinced in the mid-2010s that the whole software industry was going to have another dot-com bubble pop, still waiting for that to materialize (aside from the blip around covid).

    47 votes
    1. [2]
      kacey
      Link Parent

      I think the article’s title is a bit clickbait-y, but their thesis is sound: the amount of investment in LLMs/diffusion models needs to turn into dramatic improvements in per-work-hour GDP output in order to make sense economically. But so far, we’re not seeing it.

      The result of that will be some sort of gasp, or at least, a concerning cough.

      21 votes
      1. raze2012
        Link Parent

        Yeah, turns out humans, even in places like the US, are relatively cheap compared to maintaining a perfect, potentially more efficient, more obedient robot to do it all for you. Threatening the creative landscape was easier than trying to replace plumbers (who are already expensive themselves, typically making six figures once established).

        Despite falling costs for robot hardware — often cited as evidence that automation would accelerate — the true expense lay not in acquiring robots but in integrating them into production systems. Programming, optimising, and maintaining industrial robots typically costs three times more than the machines themselves, meaning that only large firms producing highly standardised goods could justify their widespread use. Small and medium-sized enterprises, which tend to specialise in customised, small-batch production, saw little incentive to automate.

        There's a lot of good explanations and breakdowns of culture here as well, but this cold economic reality should make it clear why things aren't moving as fast as billionaires want it to.

        11 votes
    2. elight
      Link Parent

      This post grew to be a lot longer than I expected...

      Logically, the AI field will either be over- or under-developed. Capital and power interests won't allow under-development, so some sort of over-development makes some degree of AI investment meltdown inevitable.

      Despite this, LLM-based AI is already changing society, and will continue to, just as the advent of the internet to the general public in the late 90s dramatically altered contemporary civilization.

      We're not due for a 1999-2000 AI industry meltdown yet. Governments and private investors are currently too invested in staking claims in this gold rush environment.

      Let's talk history.

      1999-2000 saw damn near any moron who had an idea for "X except on the web" acquiring absurd levels of investment with little more than that idea. It was a gold rush to stake claims to territory on this new thing called "the internet".

      If there is to be a meltdown, it's likely a few years off. Compared to '99, we would need more eToys.com-level failures before investors pull back.

      Disclaimer: I certainly have some bitterness over this meltdown, as I was in the industry at the time. I saw the obvious profligacy every time I drove down the Dulles Toll Road (I lived just off of it in Reston), the "Silicon Valley of the East" as it was called in the 90s and 2000s, passing dozens of buildings with the dumbest of early web company names affixed to their top-floor facades.

      We're not there yet.

      It remains to be seen if the AI heavy hitters can continue to advance the technology. When the technology ceases its Moore's Law level of improvement, the field will fill with applications of the existing technologies until big interests start to fail and startups, unable to claim that territory, fail left and right. If and when that starts to happen, that will likely be the lit fuse of a meltdown.

      I'm not convinced yet that this is a guaranteed outcome in the near future. However, given how greed works, it seems inevitable.

      Just as Google and its contemporaries grew out of the beginning of the internet age, many other large interests (AOL, et al.) had to fail for those successful companies to establish themselves.

      AI will continue to establish a new population of bourgeoisie who own this new means of production. It will centralize more wealth. Since Trickle Down Economics is at best laughable, there will be less for everyone else, even as economies still grow.

      Will it change the world? It already has. Will it create a utopia? Unlikely.

      13 votes
    3. hobbes64
      Link Parent

      I agree that it's best if the AI bubble bursts and also that there will continue to be irrational investment and basically chaos around it. There always has to be some kind of bubble, and whether or not AI is very useful it's the current target for investment.

      5 votes
    4. [17]
      okiyama
      Link Parent

      What makes you say shamble? The progress is still, at least to me, stunning. In the near future, ARC-AGI-2 will ensure we have computers that can reason. In the distant future, the energy efficiency will get better and better.

      I simply don't understand why everyone hates these programs so much. The problems are entirely political and social. Yes, they will make Google the most valuable company by an order of magnitude, but I don't understand cheering for a technology that could solve large swaths of the service economy for us to fail. I don't like having a service job; I'd much rather a computer did as much of it as possible.

      I'll reiterate that I know the economic realities of that mean I become unemployed. Look into the Forward Party; we need UBI and universal SNAP.

      4 votes
      1. [2]
        Omnicrola
        Link Parent

        I simply don't understand why everyone hates these programs so much.

        Not the person you're replying to, so speaking just for myself:
        I don't hate the program/tech specifically, I hate that people are trying to shove square pegs into round holes... with a power hammer. GenAI is a genuinely useful technology, and it's clear that we're going to find more and more uses for it. But the people hyping it up are often blind to, or willfully ignoring, the downsides and limitations of the technology. So having to deal with those people is what I find tiresome.

        24 votes
        1. okiyama
          Link Parent

          That I can give a +2. I personally find the "AI Bad" crowd a lot more annoying simply because I'm around them much more often. I can see how if you followed whatever drivel Sam Altman was spewing today you'd have the opposite experience.

          6 votes
      2. [2]
        an_angry_tiger
        Link Parent

        Well, I just like the word shamble, but I'm referring to the companies as the ones shambling, the ones pouring billions of dollars into initiatives that are so far wildly unprofitable and are forecasted to remain wildly unprofitable. I'll defer the reasoning behind that to the article because, well, I'm an idiot who doesn't know much, and the article seems to be written by someone less dumb who knows more.

        The progress is still, at least personally, stunning. In the near future, ARC-AGI-2 will ensure we have computers that can reason.
        ...
        I simply don't understand why everyone hates these programs so much. The problems are entirely political and social. Yes, they will make Google the most valuable company by an order of magnitude, but I don't understand cheering for a technology that can solve large swaths of the service economy for us to fail.

        As for these parts, I don't have as much faith as you. Nothing gives me confidence that AGI will be reached with current LLMs or their extensions; a lot of studies coming out about the efficacy of these models in real work seem to point to it being a wash; there have been indications that the major players have been cheating on their benchmarks to achieve their performance; yadda yadda.

        For me personally, someone working in the field of software engineering and reading endless news articles and discussions about AI being great or AI being terrible, it feels like a wash. Every discussion has one side being wildly enthusiastic, talking about spinning up 10 agents, having them do the work, throwing out the results that aren't good, and getting 10x productivity, and wow, it's going to eat up the world and everyone's going to be left behind; the other half have experiences where it doesn't work at all and is a complete waste of time. I don't know how to reconcile those.

        I give them a fair shot. I've tried various models and IDEs at various times on various projects and don't feel any productivity boost from them. I still get hallucinations of API calls that don't exist. I get sources/citations that don't exist or don't corroborate what the AI is saying at all. I still get code that is not at all what I'm prompting for, even after repeated direction. I still see "no no, not that model, that one is garbage, you have to use this model, and this IDE, these are the ones that are good", applied across various models and IDEs, with contradictory examples of which ones are the good ones and which are the garbage ones. And I still find myself replacing actually Just Doing the work (investigation, or coding, or debugging, or writing tests) with the same amount of time, if not more, spent writing prompts that trend ever closer to "actually doing the thing that needs to be done".

        I forget where I was going with this post, to be honest. I guess I'm on the side of "I tried them and they're disappointing and I have low confidence in their results". Maybe they'll continue to improve and eventually I'll join the side of "wow, they're so useful, they're doing all my work for me", but that's not guaranteed and that's not what I'm seeing. As for the AI industry in general, it has an insane amount of investment plowed into it, the results still seem dubious relative to the expected returns, and there are indicators that it's not going to come close to the hype justifying the investment (as the article details).

        In the end, who knows. I'm not confident enough either way to actually bet money on it, so I'll just continue being halfway in for the ride and seeing how it goes, but without much confidence in it replacing all of our jobs and thrusting us into a beautiful utopia of no work needing to be done -- or a dystopia of no work needing to be done where we all starve to death.

        8 votes
        1. okiyama
          Link Parent

          Yeah, I land similarly, with the huge caveat that I simply enjoy following the state of the art in what computation is capable of, and believe that to be a large portion of my competitive advantage against other engineers.

          That's why I'm confused at how nonplussed folks like you, who have the capability to understand these things on a deep level, are. These programs are amazing, and so deeply fascinating to learn about. Like, LLMs aren't even it anymore; large reasoning models and text diffusion are the two current state-of-the-art techniques. Apple put out that FUD article about how LRMs have paradoxical reasoning capabilities and slapped a "computers can't reason" title on it. My point about ARC-AGI-2 is that it's a comprehensive test of "can the test-taker reason?". Since that exists, computers will reason (on no specific timeline).

          What you said about struggling to work productively with state-of-the-art LRMs is all simply a result of inexperience. I struggled with all the things you discussed. Hallucinations are inherent to search-incapable AI systems like bare LLMs, but they aren't inherent to AI as a whole: RAG (and the new hotness, GraphRAG) can guarantee no hallucinations.
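
          To make that concrete, here's a minimal sketch of the grounding idea behind RAG. The retriever is a toy word-overlap ranker, and call_llm is a hypothetical stand-in for whatever model API you actually use -- the point is just that the model is told to answer only from retrieved sources:

          # Minimal sketch of retrieval-augmented generation (RAG).
          def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
              # Toy retriever: rank documents by word overlap with the query.
              query_words = set(query.lower().split())
              return sorted(docs,
                            key=lambda d: len(query_words & set(d.lower().split())),
                            reverse=True)[:k]

          def call_llm(prompt: str) -> str:
              # Hypothetical placeholder: swap in a real model call here.
              return "[model answer, grounded in the sources above]"

          def answer_with_rag(query: str, docs: list[str]) -> str:
              # Number the retrieved sources so the model can cite them.
              sources = "\n".join(f"[{i}] {d}" for i, d in enumerate(retrieve(query, docs)))
              prompt = ("Answer using ONLY the sources below and cite them like [0].\n"
                        "If the sources don't contain the answer, say so.\n"
                        f"Sources:\n{sources}\n\nQuestion: {query}")
              return call_llm(prompt)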

          I haven't touched the agents stuff, besides the fact that Claude is just a single agent you work with nowadays (the way it has a mix of diffusion and token prediction is a delight to watch debug), just because I can't use it on the job yet. Definitely looking forward to what Google cooks up in that regard.

          Thank you for taking the time to write such a thorough comment. I am not disagreeing with you; I'm merely inviting curiosity if you want to be working near the top of the market. COBOL pays like crazy; nothing wrong with angling that way.

          1 vote
      3. [10]
        Apex
        Link Parent

        I work in a software role where AI is pushed more and more, and for me it’s really all about those economic realities you mention. I’m trying to grow my career within the tech industry, and with the implications of AI, I could easily be replaced if my company went that way. With our current political administration and the trending direction, we have little to no chance of UBI or anything like that. They’d sooner replace us all with AI and automation and sit on their pile of gold than give up a single coin.

        2 votes
        1. [9]
          okiyama
          Link Parent

          You think you could be replaced today? I take it you're a junior of some sort?

          Sorry, I have a lot of questions and thoughts. I'd love to hear more, but broadly: what's stopping you from becoming the master of the AIs? They can't replace you if you're the one doing the replacing; that's a story as old as automation.

          1. [8]
            Apex
            Link Parent

            Not a junior, but a base-level Systems Engineer turning Software Engineer. We've been in a process of skilling up towards software engineering as we've been moved from an Ops team to an SRE team to an Infrastructure Services team. Unfortunately, I found out yesterday that I'm very likely to be laid off in a few weeks due to AI, so my fears aren't unfounded!

            3 votes
            1. [4]
              teaearlgraycold
              Link Parent

              Can you provide more information about how you might get replaced?

              1. [3]
                Apex
                (edited )
                Link Parent

                My company is pivoting towards implementing AI in our actual product, as well as more AI tooling, and seems to believe that operational work will just disappear if they shout "AI!" enough. They want to eliminate all junior/entry-level roles and only have senior/staff/principal engineers. They also want to use vibe-coding design tools, which eliminates the need for design associates and juniors (and PMs seem excited to just whip things up themselves without engaging the design team at all).

                I will say that I don't dislike AI as a tool - I use it as a pair-coding partner instead of as a replacement, rather than disengaging entirely from what I'm coding. I have it explain things, break down concepts, etc., and it's great for generating boilerplate code. I just don't think it should replace people; it should act as a force multiplier. Intelligence isn't the same thing as wisdom.

                1 vote
                1. [2]
                  teaearlgraycold
                  Link Parent

                  So… no plan? Sorry to hear you’re being replaced by hopes and dreams. In some ways that’s even worse than being replaced by AI.

                  1. Apex
                    Link Parent

                    Edited my comment to add some more context.

                    They seem a bit...lost in the sauce, chasing the AI bubble but constantly pivoting to the new hot thing, methodology, etc. It's left a lot of people feeling overwhelmed, exhausted, and lost.

                    2 votes
            2. [3]
              okiyama
              Link Parent

              Jesus, I'm so sorry to hear that. I'm betting that company will regret that choice soon enough.

              My offer stands, I'm happy to tutor or coach if you'd like. I'd highly recommend looking into becoming a Salesforce developer. It's in perpetually high demand, and you just need to earn a few certificates to get your foot in the door. When I was in your spot, fired from my SWE job and certain I'd never get another, that was my backup plan. I lucked out in the job search though so didn't have to go through with it.

              1. [2]
                Apex
                Link Parent

                Thanks for the offer! I'll definitely look into Salesforce developer roles and see what they entail and how far I can get in qualifying myself. What would I have to read/learn to prepare for the certification(s)? Do you consider it an easy job?

                I feel like my skills are fairly shallow, but they're very wide as I've done lots of cross-functional collaboration within my company and originally came on as Support before moving to Engineering, so I've dealt with all of our product verticals in some way or another.

                I find myself a bit disillusioned with tech at the moment and fearful about putting myself out there on the job market, and I think I'm entirely disinterested in any FAANG-related companies as they just seem to lack any morals. Do you have any recommendations on places to look for jobs? I started 7 years ago, and used Built In at the time, and LinkedIn (though I didn't see any return from it).

                1 vote
                1. okiyama
                  Link Parent

                  Luckily, Salesforce has really high-quality learning materials for free through their "Trailhead" thingamabob: https://trailhead.salesforce.com/ It has different "Trailmixes" that are basically just bundles of courses with a theme. The learnings are all free; you'll only ever pay to take an exam to get a certificate.

                  I'd recommend starting out by targeting the Salesforce Administrator certification (https://trailhead.salesforce.com/en/credentials/administrator). This is useful for giving context on what the software is and what it can do if all you're doing is configuring it. I'll level with you that this one is pretty boring, but I found it pretty necessary for knowing all the wild stuff Salesforce can do.

                  Next would be Platform App Builder, which goes beyond basic configuration to building widgets and whatnot to make custom solutions, but without (much?) programming: https://trailhead.salesforce.com/en/credentials/platformappbuilder

                  Then you'd go for Platform Developer I (and II if you care, but usually you'd do that while on the job; most places will support your studies at that point, and you can use it for promos/raises): https://trailhead.salesforce.com/en/credentials/platformdeveloperi

                  The most fun I had in my studying was https://www.apexsandbox.io/ which teaches the Apex language (basically a Java variant, but with some really cool object-relational mapping stuff built right into the language). Apex would be relevant for a developer job.

                  Along the way, you can and should apply for positions after each exam you pass. Getting a foot in the door as an Administrator will let you work full-time on learning and growing.

                  One major benefit of going this route is that you definitely do not need to target FAANG or even a "tech company". Salesforce is Customer Relationship Management and Human Resources software that tons and tons and tons of companies use. When I was on my job search for Salesforce stuff, it was entirely small-to-medium-sized local businesses that did non-tech stuff (well, the ISP is kinda tech-y, but not a "tech company" if you feel me).

                  Being wide and generally good at handling business will serve you really well. Salesforce jobs are inherently cross-functional, since what you're building is targeted at non-technical people so they can get their job done quickly and efficiently.

                  For the job search, LinkedIn is probably your best bet, though Indeed/Monster/Dice are surprisingly fruitful for these jobs. When you're ready to start looking, grab LinkedIn Premium, turn on all the "please alert recruiters and the whole world when I make updates" settings, then get your LinkedIn cleaned up, of course prominently featuring any and all certificates you have.

                  Additionally, Trailhead has its own career market (which I have not used) - it's worth a look: https://trailhead.salesforce.com/careers

                  Lastly, as for working with AI: I'd say that for this process, Deep Research (through either Gemini or ChatGPT) will do you wonders. If you have any questions along the way, you can absolutely run Deep Research queries to get really in-depth answers to stuff like "what's the best path to take to get certified?", and it'll produce something better, albeit more verbose, than what I'm offering here.

                  While doing the learnings, do not use AI. Obviously, you need to learn the core functionality and how to do everything on your own, so that when you get the job and you do use AI to make yourself way more efficient than your coworkers, you know what's available and how to implement what the AI suggests.

                  I hope that helps. It really can be a good path to look into for a technically minded person who leans more on the side of soft skills (i.e. what AI ain't never gonna do) than raw technical knowledge (i.e. what the AI is eating, real fast).

      4. [2]
        an_angry_tiger
        Link Parent

        I don't like having a service job, I'd much rather a computer did as much of it as possible.

        What are you going to be doing instead?

        1 vote
        1. okiyama
          Link Parent

          Something with my hands where my primary purpose in my career is to improve the physical, analog world around me rather than digital iteration loops that in very convoluted ways improve the human experience.

          In short, hopefully pottery.

          7 votes
  2. [7]
    Greg
    Link

    As such tools become more widespread, there is a risk of a digital de-skilling of fields such as computer programming, graphic design, and legal research, where algorithmically generated outputs could substitute for outputs produced by workers with average levels of competence.

    […]

    The lessons of the past decade should temper both our hopes and our fears. The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process.

    […]

    The current trajectory of generative AI reflects the priorities of firms seeking to lower costs, discipline workers, and consolidate profits — not any drive to enhance human flourishing. If we allow this trajectory to go unchallenged, we should not be surprised when the gains from technological innovation accrue to the few, while the burdens fall upon the many.

    That part is the crux of it, for me; “de-skilling” sounds like a euphemistic way of saying “a whole lot of jobs are going to get even more pointless than they already were”.

    As the author rightly says, technological change is intertwined with the backdrop of society and politics - and right now, saying that we need to look seriously at reshaping work (as we realistically have needed to for at least 40 years) is politically untenable. So AI won’t displace jobs, because there’s a political requirement for “enough” jobs to exist, and that means it’ll just make the work crappier and the wages less liveable instead.

    Eliminating jobs is a good thing, and an inevitable thing - not just from AI, but from a century or more of exponential technological growth in almost every field. The problem is managing resource distribution, and right now we seem to be defaulting to “job = some share of resources” not because that specific job needs doing, but because we’ve got no other politically viable infrastructure to give people money.

    17 votes
    1. [4]
      koopa
      Link Parent

      That part is the crux of it, for me; “de-skilling” sounds like a euphemistic way of saying “a whole lot of jobs are going to get even more pointless than they already were”.

      Yeah, congrats we made it even easier for FAANG to print money with AI generated and AI targeted ads without having to share it with labor.

      That’s not a good outcome for society.

      I think a system similar to Alaska’s Permanent Fund is badly needed to properly redistribute AI wealth. But when this idea comes up against the American “work ethic”, I have little hope we’ll see anything like it without a Great Depression-level economic event.

      11 votes
      1. [3]
        skybrian
        Link Parent

        They aren't profitable yet, so it seems a bit early to be thinking about this?

        2 votes
        1. [2]
          koopa
          Link Parent

          Labor market disruption can arrive before profits do. I’m optimistic about the technology and pessimistic about the politics.

          In my mind, the political side is probably a heavier lift than the technical one at this point. So I think it is vital to start building support for these kinds of ideas, especially in a culture like America’s that will be hostile to them.

          14 votes
          1. okiyama
            Link Parent

            This sums up my feelings. I use Gemini on a daily basis; it lets me express skills I don't have, like frontend web design. In areas where I'm expert, it's a useful toddler that needs a constant eye, but it unambiguously improves my efficiency.

            The part that bothers me is, I fully believe this will solve large swaths of the service economy, and every damn cent of that is going to go to one of 3 or 4 companies.

            5 votes
    2. [2]
      feanne
      Link Parent

      AI won’t displace jobs because there’s a political requirement for “enough” jobs to exist, and that means it’ll just make the work crappier and the wages less liveable instead.

      I agree, this is the crux of it! In response to AI proponents who claim that AI will just "produce more jobs"-- Ok but what kind of jobs? The whole point of generative AI is to "lower the barrier" (required skill level) to produce work. Not high quality work, just "good enough", minimum viable quality work produced fast at a low cost.

      As with fast food and fast fashion, mass automation will result in work roles becoming more de-skilled, standardized, and replaceable. Perhaps this will mean more jobs, but more menial rather than creative ones (kind of ironic, since automation was supposed to take over menial labor). Think fast-food worker vs. chef. Perhaps most roles will be some type of "assist the AI" work.

      5 votes
      1. okiyama
        Link Parent

        "Assist the AI" jobs will become very prevalent; I'm of the personal belief that every software engineer should expect this eventuality.

        We definitely need to accept that some people will produce enough to support more than just themselves, and support those left behind. I'm not optimistic.

        3 votes
  3. [6]
    Deely
    Link

    Not strictly related to the discussion, but Miguel Grinberg perfectly explained why (in his opinion) AI does not make you more productive.
    And I completely agree with him.

    It would be easy to use GenAI coding tools to have code written for me. A coding agent would be the most convenient, as it would edit my files while I do something else. This all sounds great, in principle.

    The problem is that I'm going to be responsible for that code, so I cannot blindly add it to my project and hope for the best. I could only incorporate AI generated code into a project of mine after I thoroughly review it and make sure I understand it well. I have to feel confident that I can modify or extend this piece of code in the future, or else I cannot use it.

    Unfortunately reviewing code is actually harder than most people think. It takes me at least the same amount of time to review code not written by me than it would take me to write the code myself, if not more. There is actually a well known saying in our industry that goes something like "it’s harder to read code than to write it." I believe it was Joel Spolsky (creator of Stack Overflow and Trello) who formalized it first in his Things You Should Never Do, Part I article.

    Src: https://blog.miguelgrinberg.com/post/why-generative-ai-coding-tools-and-agents-do-not-work-for-me

    Upd: the author makes some good (again, in my opinion) claims, so if you have time, please read the article.

    15 votes
    1. [2]
      raze2012
      Link Parent

      I could only incorporate AI generated code into a project of mine after I thoroughly review it and make sure I understand it well. I have to feel confident that I can modify or extend this piece of code in the future, or else I cannot use it.

      This does succinctly explain one of the two divides in AI: those who know what the sausage is made of vs. those treating AI like a black box. If you lack the ability to understand and reason about the problem, AI will feel like magic if it can get anywhere in the ballpark. If you work that craft yourself, that's when you find the odd fingers in the art, the incomprehensible, unmaintainable code in the logic, the inhumanity in the writing.

      To be blunt: AI culture rewards mediocrity and "results-oriented" mindsets. Something that is sadly common in a society that is filled with distrust and anxiety.

      15 votes
      1. Greg
        Link Parent

        To be blunt: AI culture rewards mediocrity and "results-oriented" mindsets. Something that is sadly common in a society that is filled with distrust and anxiety.

        I'm glad to see someone pointing this out. AI's impact on work and culture is quite separate to its capacity (or lack thereof) for great art or creativity.

        It's one of the reasons I'm always spamming these kinds of threads with thoughts about so many jobs being unnecessary in the first place: because most people, even those with incredible creative skill, aren't being paid for that creativity and skill. The vast majority of creatives are either stuck making fairly generic work to spec (often for the marketing industry) because that's what pays the bills, or they're working in entirely unrelated fields because they figured that would pay the bills even better.

        Discussing whether or not AI has a theoretical ability for great art or superhuman reasoning is fascinating, but our current society and economy rarely rewards either of those things.

        6 votes
    2. [3]
      hobbes64
      Link Parent

      I agree with the blog post but I fear it won’t matter. AI is coming and a lot of programmers are going to lose their jobs. They may eventually get another job at lower pay, but the upheaval will be painful.

      The reason I think this is Test-Driven Development and results-based thinking at most companies. (This is mentioned in the post, but then it's dismissed.)

      If management sees that the AI code passes tests, then that's good enough. Also, people are creating prompts that include testing. As for quality and maintainability: in my experience, most managers don't think much about code maintenance except when they have to assign people to fix security scans. And future maintenance is a problem for the following fiscal year.
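
      To illustrate that failure mode with a toy, hypothetical example (invented names, not from the post or the article): code that satisfies the only test anyone wrote sails through a results-based gate, even though it's wrong for every other input.

      # Toy example: "passes the tests" is not the same as "good code".
      def parse_price(text: str) -> float:
          # Overfit "solution": handles exactly the tested case.
          if text == "$19.99":
              return 19.99
          return 0.0  # silently wrong for every other input

      def test_parse_price():
          assert parse_price("$19.99") == 19.99  # green check, ship it

      test_parse_price()  # passes, so a results-based gate approves the change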

      5 votes
      1. teaearlgraycold
        Link Parent

        I’ve interviewed people that called themselves programmers but really just copy-pasted modified snippets within a small proprietary environment. Given a blank Python file they could not make anything happen. These people are screwed right now. I’m not sure yet if more capable people are at risk of losing their jobs to transformers.

        4 votes
      2. EgoEimi
        Link Parent

        I think it does create opportunity for businesses that understand that in order to go fast, you have to go slow. Good architecture and paying down tech debt are investments: costly at first, but they compound in dividends.

        There are many companies whose businesses are, behind the scenes, hobbled by mediocre tech stacks.

        I think we'll see some engineer-led companies emerge that'll run circles around such companies.

        2 votes
  4. [5]
    skybrian
    Link

    Predictions are difficult, especially about the future. However, sometimes it just takes longer than you'd expect for the future to arrive.

    Driverless cars seem like a good example of that? They've been in the "coming soon" phase for so long that a lot of people have written them off. And despite steady progress, it doesn't seem like they've gotten enough adoption to show up in productivity statistics. But ride-share drivers in San Francisco are starting to complain about the competition.

    I expect that a lot of breathless predictions will be wrong. But there's enough progress every year in the AI field that I think it will continue, despite us getting tired of following the latest developments.

    For software developers, there is a Cambrian explosion of tools with no clear winners yet. Early adopters have to try a lot of different approaches to find something that sort of works.

    10 votes
    1. [4]
      ButteredToast
      Link Parent

      As a dev, the murkiness of where it’s all headed is starting to get to me. I know it’s a fantasy but sometimes I wish it’d just disappear and things would go back to how they had been, because I really don’t need an extra thing to think about.

      There are the moral quandaries, the concerns about skill atrophy, the worries about my usage data helping develop my own replacement, and the question of what it all means for me as a hobbyist (as opposed to as a professional). It can’t be ignored though, because if there’s a surefire way to sideline yourself in IT, it’s to stop paying attention to tech trends. It’s all so messy.

      13 votes
      1. [3]
        okiyama
        Link Parent

        I'm a professional developer who will use any tool to accomplish my job as effectively as I can. Broadly speaking, I'll just say: this shit is gonna get insane really, really fast. I understand golden-age fallacying, but you're right that it's here to stay, and we should broadly be learning to work with it.

        AGI is vaporware, but Artificial Super Intelligence already exists in chess and whatnot. More and more domains will get chessified, where you can use a computer to help you learn, but you will never beat it.

        I find it freeing. I know the computers are going to be better programmers than me, in short order. I need to focus on being as good as possible at review and quality assurance.

        2 votes
        1. [2]
          heraplem
          Link Parent

          More and more and more domains will get chessified where you can use a computer to help learn, but you will never beat it.

          The chess analogy is imperfect because only human chess is economically viable. You can't sustain sponsorship for an engine chess league. People want to watch humans play chess, and it turns out that they don't much care if computers are better.

          In any domain where the product is the point, "you can use a computer to learn but you can't beat it" means that the domain ceases to be an economically viable human activity.

          I need to focus on being as good as possible at review and quality assurance.

          Going to be honest, that sounds like hell. I'm legitimately unsure I'll be able to live in that world.

          14 votes
          1. okiyama
            Link Parent

            An important caveat is that I do not expect every domain of my job to be taken over by automation. That's exactly why I ended by saying I expect to be a reviewer in the near future.

            The next few years will be about refining agents, so rather than telling juniors to go do Jira tasks, I'll be responsible for managing and tuning a fleet of agents to do the same.

            Capitalism is hell. I've made my peace with having to do a lot of things I don't want to do in order to thrive in an inherently barbaric system (while of course doing what I can to oppose it).

            2 votes
  5. [2]
    patience_limited
    Link

    This is an essay from Aaron Benanav, the author of the 2020 Verso Books release Automation and the Future of Work. For the upcoming Brazilian edition, he updates and expands on the book's conclusions in the face of accelerating AI deployment and totalizing narratives about its impacts.

    From the essay:

    At the centre of today’s AI discourse lies a set of dramatic claims about labour market disruption and technological unemployment. In 2023, researchers affiliated with OpenAI and the University of Pennsylvania released a study claiming that 49 percent of workers’ tasks were exposed to large language models, suggesting an impending transformation of work across sectors ranging from education to legal services. This forecast directly updates a 2013 paper by Carl Benedikt Frey and Michael Osborne, which had sparked an earlier wave of automation anxiety by predicting that 47 percent of US jobs were vulnerable to machine learning technologies. Then as now, automation theorists imagined a tipping point at which machines would become capable of performing enough human tasks to render millions of occupations redundant, triggering an unprecedented collapse of the labour market.

    It is worth recalling what became of the last round of predictions. Following the publication of Frey and Osborne’s paper in 2013, a wave of journalistic and policy commentary warned of mass technological unemployment. Yet between 2013 and the time I completed Automation and the Future of Work in 2020, no such labour market catastrophe materialised. Faced with mounting doubts, the OECD re-analysed Frey and Osborne’s methods in 2017, concluding that only around 14 percent of jobs faced a high risk of automation — a far cry from the original 47 percent figure that had captured public attention.

    But even this lowered estimate proved too extreme. By 2020, it had become clear that many of the occupations thought most vulnerable to automation — such as food preparation, machine operation, driving, and other forms of manual or repetitive labour — had not seen significant employment declines. In most cases, employment in these sectors actually grew. Far from ushering in a wave of technological unemployment, the years following the financial crisis were marked by tepid labour market expansion and deepening economic stagnation. Productivity growth, particularly in US manufacturing, flatlined, reaching its lowest sustained rate since records began in the 1960s. The automation revolution, it seemed, had failed to arrive.

    The failure of these predictions was not accidental. It reflected fundamental flaws in the methods used to forecast the future of work. Neither the 2013 study nor its 2023 successor based their projections on empirical investigations of real workplaces, workers, or production processes. Instead, both relied on the subjective judgments of computer scientists and economists, who were asked to guess whether certain tasks could, in principle, be performed by machines. If enough tasks associated with a job were deemed automatable — typically more than 50 percent — the entire occupation was classified as at risk of disappearance. No consideration was given to how jobs are structured in practice, how tasks are bundled together, or how economic and social factors mediate the adoption of new technologies. The result was a deeply mechanistic model of technological change, in which machines would displace workers whenever technically feasible, regardless of cost, institutional barriers, or political resistance. It was a model blind to the complex ways in which work is organised, contested, and transformed — and thus singularly ill-equipped to predict the actual course of economic development.

    The whole thing is worth reading. It's annoyingly broken up with blurbs for other Verso titles, but they are relevant to the topic.

    8 votes
    1. kacey
      Link Parent

      Thank you for the link! It puts into — terribly eloquent — words why, despite being pessimistic about a pending AI takeover (or The Singularity, or AGI, or …), I’m still extremely concerned about the development of these tools.

      7 votes