115 votes

I will fucking piledrive you if you mention AI again

31 comments

  1. [6]
    Macil
    (edited )
    Link

    The aggressive tone makes it easy to mistake this article for a worse unnuanced one that has nothing but hot air. There's actually some decent nuance and advice in this article.

    I liked this part and consider this point the real meat of the article. Modern AI is very useful for some specific things right now, but business leaders need to cool it on overhyped plans to revamp their businesses around AI and focus on regular improvements.

    In the case that the technology continues to make incremental gains like this, your company does not need generative AI for the sake of it. You will know exactly why you need it if you do, indeed, need it. An example of something that has actually benefited me is that I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I'll use once every five years. I was actually happy about this, and it's a real edge over other applications. But if you don't have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving.
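    The feature praised here is essentially a thin translation layer from natural language into a small, verifiable query syntax. A minimal rule-based sketch of the idea (the phrase-to-filter mappings below are invented for illustration, not Todoist's actual grammar; a real implementation would put an LLM where the lookup table is):

```python
# Toy natural-language -> filter-language translator.
# The target syntax loosely imitates Todoist-style filters
# ("today & p1"), but the mapping table is purely illustrative.

PHRASE_TO_FILTER = {
    "due today": "today",
    "overdue": "overdue",
    "high priority": "p1",
    "at work": "#Work",
}

def to_filter(query: str) -> str:
    """Translate a natural-language request into a filter expression,
    AND-ing together every phrase we recognize."""
    q = query.lower()
    parts = [f for phrase, f in PHRASE_TO_FILTER.items() if phrase in q]
    return " & ".join(parts) if parts else "all"

print(to_filter("show my high priority tasks due today"))  # today & p1
```

    The nice property is the one the quote hints at: the output space is a tiny filter language, so a wrong translation is cheap to spot and correct, which is exactly why this is a good fit for generative AI.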

    I'm obsessed with modern AI developments, but right now AI hype should be for nerds experimenting with it, not business leaders who don't know anything about its strengths and weaknesses.

    51 votes
    1. [4]
      krellor
      Link Parent

      My gripe with AI isn't typical businesses getting hyped about it, where their job is to build a widget or service that people want to pay for. If they want to throw spaghetti at the wall and see what sticks, Godspeed.

      However, in sectors with legal and ethical obligations I still see people wanting to rush ahead and try things without addressing the important issues. For example we have people wanting to use ChatGPT for medical research who have never heard of ClinicalBERT and don't understand the basic preconditions to satisfy before applying different AI tools to a problem domain. Just like machine learning and regressions have preconditions, so does deep learning. In my case it is often people not understanding the differences in appropriateness of sequence to sequence models, vs encoder only, etc. with respect to their problem domain.

      Can ChatGPT predict if a patient has a rare disease, or will be readmitted in 30 days? Maybe. But can you verify and defend the output, or tune false positive rates for alarm fatigue, or pass the quantitative result to a downstream process, or extract embeddings that can be validated by the clinician?
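      "Tune false positive rates for alarm fatigue" is a concrete, checkable requirement: given a model that emits risk scores, you pick the decision threshold on held-out data so the false-positive rate stays under an alarm budget. A toy sketch with made-up scores (not clinical data, and not the output of any specific model):

```python
# Choose a decision threshold so the false-positive rate (FPR) on
# validation data stays at or under a budget, e.g. to limit alarm fatigue.

def threshold_for_fpr(scores, labels, max_fpr):
    """Return the lowest threshold whose FPR on (scores, labels) <= max_fpr.
    labels: 1 = positive (disease/readmission), 0 = negative.
    Assumes at least one negative example is present."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    best = None
    for t in sorted(set(scores), reverse=True):
        fp = sum(1 for s in negatives if s >= t)
        if fp / len(negatives) <= max_fpr:
            best = t  # keep lowering the threshold while the budget holds
        else:
            break  # FPR only grows as the threshold drops
    return best

# Toy validation set
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    0,    1,    0]
print(threshold_for_fpr(scores, labels, max_fpr=0.25))  # -> 0.7
```

      Anything scoring above the threshold alarms; whether the recall you retain at that budget is clinically worth deploying is exactly the kind of question the comment says people skip.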

      I'd love to help solve someone's AI problem if they actually came with the right tool for once, like DeCode, ClinicalBERT, COATI, or some boutique model for protein folding, etc. But it's always the people being wowed by NLP.

      21 votes
      1. [3]
        first-must-burn
        Link Parent
        We saw this with autonomous cars (the N–1 hype cycle). It's a solution looking for a problem. And in both cases it eliminates those pesky humans who are so messy and expensive and hard to monetize...

        We saw this with autonomous cars (the N–1 hype cycle). It's a solution looking for a problem. And in both cases it eliminates those pesky humans who are so messy and expensive and hard to monetize because they keep wanting healthcare and paternity leave and basic human dignity.

        7 votes
        1. Moonchild
          Link Parent

          the premise of autonomous cars was to increase safety and automate drudge work. i don't find that objectionable. and i think the primary target market is people who would otherwise be driving themselves (although taxi drivers and truckers would indeed be displaced)

          9 votes
        2. raze2012
          Link Parent

          because they keep wanting healthcare and paternity leave and basic human dignity.

          Those businessmen need to look to their bosses back in the day and remember why they tied healthcare to jobs to begin with. It's almost as cynical a reason, but when you actually care about retaining talent, you will pay for stuff that matters to them, as well as offering long-term benefits like pensions.

          Ironically, many of these businesses also keep shooting down universal healthcare, so at this point it's all on them.

          4 votes
    2. ali
      Link Parent

      The worst thing as someone with expertise in the field is how many people are self proclaimed AI experts that just parrot stuff they see in some random YouTube videos and press announcements.

      9 votes
  2. [4]
    Notcoffeetable
    Link

    An excellent article that perfectly sums up my position on AI in business.

    I lead a global analytics team. I am asked monthly about using AI. Fortunately the people I work with pose the question more like "do you think AI can help here?", where I can easily tell them "maybe, but your data isn't good enough, you have fundamentals to focus on, and if it did work, here are all the legal implications you would be responsible for." That usually dampens their interest.

    But in our IT department, there have been repeated attempts to form a "tech innovation group" with the focus on bringing AI tools into the business. I've been here 3 years and I've seen 2 iterations of this team dismantled and rebuilt. Last week I met with the leader of the third iteration of this team. He really wanted to talk about some kind of RAG chatbot. So I posed the following questions to him:

    • If 95% of our employees spend their day disassembling animals, what the hell is a chatbot going to do for ROI when less than 5% of our business would actually have access or a use for it?
    • If someone is spending so much of their time making agendas and drafting emails within the abilities of a chatbot, then should we be more concerned with job scoping?

    I'm a required approval on any access to external AI tools. Our security team and I have agreed that these tools are completely blocked on network without a written exception. The stuff I see coming through is lawyers wanting AI to write contract templates which is bad enough, but there are frequent requests for nebulous "things." I think the only one I approved recently was marketing wanting to use it to help with slogans and branding. I figure that's vacuous enough that sure, if they want to generate 50 versions of "pork, the other white meat" knock yourselves out. My skin still crawls with the lack of controls we have with what people put in.

    Anyway, I hate it. I spin up one AI (it's usually more of an ML type thing) project a year for an ad hoc analysis just to allow our leadership to check the box. But as the author says, there are so many fundamentals we need to put effort into before we could even take advantage of a functional AI model.

    45 votes
    1. [3]
      skybrian
      Link Parent

      disassembling animals

      You work in the meatpacking industry?

      16 votes
      1. [2]
        Notcoffeetable
        Link Parent

        Yes, not an industry I like. Trying to move out of it, my plight has been recorded elsewhere on Tildes. But happy to answer questions about it.

        22 votes
        1. jredd23
          Link Parent

          Not for nothing, but AI, and meatpacking industry sounds so darn interesting. Out of curiosity, would you be able to say what are the requirements that an AI tool would get approved? I am going through that myself, different industry, and we are struggling with the approval process, and explaining to others what use cases (business or technical) the AI tool would be approved.

          5 votes
  3. [8]
    nacho
    Link
    • Exemplary

    There are a lot of important lessons of truth in this piece.

    However, due to the author being limited in his understanding by being a data scientist, the top level conclusions are entirely, entirely wrong. They're the direct opposite of how large businesses should think about AI today.

    Just because many businesses have bad and wrong perspectives on AI, doesn't mean we should implement the author's equally wrong perspectives about AI.

    Let me explain in much greater detail:


    Your large company will get fucking piledriven if you don't mention AI again and again and again

    The AI bubble today is about all the crazy, insane, unreasonable expectations of what AI can accomplish in the future, projects that follow this formula:

    1. Company has problem/design point
    2. We should AI this!
    3. ?????
    4. Insane future profit and efficiency gains.

    LLMs are hugely, hugely good at doing what they're designed to do: time-consuming language-related tasks. These aren't the shiny tasks that will revolutionize your company with single implementations; these aren't life-changing, unheard-of specs solving novel problems. They're different ways of sorting or automating work that today takes a lot of human time/analysis of language.

    This article gets it wrong because the author can't envision just how incredibly much time, money, effort and boring busy-work a large company will save by proper, good implementation of LLM-tools that exist today, that only need to be slightly adapted/trained to suit exactly Company ABC's specific needs.


    We will realize untold efficiencies with machine learning

    That's exactly what we're doing. Yes, the AI field is largely fraudulent, especially the parts that are opinion leaders and get attention in the media.

    Here are some examples of efficiencies every single large company can get from implementations of LLM and AI technology that exists today. In my own company, we're realizing untold efficiency gains. We're talking tens of millions of dollars a year. And everyone is happy about it because it makes our days at work more rewarding.

    Sadly I can't share images/videos of how the interfaces work and look, because they're proprietary and strictly confidential. These are LLM AI-tools I interact with every single day.

    • In every internal meeting, a microphone is recording. That was the case for documentation/legal reasons previously too. A bot goes through what's said and with detailed instructions creates the first draft for meeting notes. It also creates a searchable transcript from the meeting, if you ever need to go back to look into something in detail. It's trivial for me to go through the AI-draft and make much better meeting notes and implementations than before in much less time.
    • I get a tremendous amount of email. I have an AI assistant that sorts mail for me. I've made my own categories to sort into. Email is presented in an interface I customize. All email is sorted in order of importance. I get AI-generated alerts for email I should know about immediately upon receipt. I no longer have to pay attention to every single new email in case it's important. I give a thumbs up or a "more important/less important"-rating or a "this should be in Y category instead"-feedback with single clicks so my personal email sorter is as efficient as possible. Huge time-save. Just think about how this works for all email sent to post@[company].com, and how email is now mostly automatically forwarded to the person the AI system deems the likely correct recipient.
    • For all written product, I have an AI assistant that suggests improved language/clarity, it'll suggest whole sentences from previous, related work, links me to previous materials etc. etc. etc. I train my own personal variant. I can now write work-related documents faster than I can type, with professional level quality. I don't reinvent the wheel every time. I can write short-hand if I like to have phrases autocomplete etc. etc. The time spent writing out things is so much shorter. The end-product is more concise and useful to the human recipient at the other end.
    • I have to go through huge, huge volumes of documents in my role. AI does that for me and provides a 30 word summary of content. All documents I'm responsible for are automatically sorted by suspected importance and category, just like email. The AI system intelligently sorts documents I "archive" based on how I usually want similar documents sorted. In long documents, AI leads me through and highlights the portions of documents I often stop to read in detail scrolling through, or searching for keywords to find etc. etc. These tasks are performed automatically. They're personalized and become better and better.
    • I have lists of ideas, tasks to pursue etc. These are suggested to me by AI, in "suggestions" fields, when I work on documents that seem related. In case I, a forgetful human, forget these things, AI helps me remember. Based on previous work, suggestions become better and better.
    • Rough translations of things that aren't do-or-die: we get what we mostly need. We know enough to send documents (like contracts, specs etc.) to high-quality human expert translators. We don't overload them with busy-work that's unimportant. This also means we have much shorter response times on things that happen in languages we don't speak, or at a time of day when that office is sleeping.
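    The email-sorting loop described above (classify, then fold "this should be in Y category instead" feedback back into the model) can be caricatured in a few lines as a word-overlap classifier. A deliberately naive sketch, not a description of the proprietary tooling:

```python
# Toy feedback-trained email sorter: each category keeps a running
# word-frequency profile; "this should be in Y instead" feedback just
# adds the email's words to Y's profile.
from collections import Counter, defaultdict

class FeedbackSorter:
    def __init__(self):
        self.profiles = defaultdict(Counter)  # category -> word counts

    def correct(self, email_text, category):
        """User feedback: this email belongs in `category`."""
        self.profiles[category].update(email_text.lower().split())

    def classify(self, email_text):
        """Pick the category whose profile overlaps the email most."""
        words = email_text.lower().split()
        def overlap(cat):
            return sum(self.profiles[cat][w] for w in words)
        return max(self.profiles, key=overlap, default="inbox")

sorter = FeedbackSorter()
sorter.correct("quarterly invoice payment due", "finance")
sorter.correct("standup meeting notes attached", "meetings")
print(sorter.classify("invoice overdue please pay"))  # finance
```

    The real systems presumably use LLM embeddings and ranking models rather than word counts, but the feedback shape, single-click corrections updating a per-user profile, is the same.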

    What's the result of all this? I spend my working time to actually use my expertise more: The value created for the company is much, much greater. These efficiencies aren't about replacing me, or downsizing me. They're about getting more of what they pay my obscene salary for: things AI or other workers can't. I spend less time on "office-worker"-tasks and much more time on developmental tasks for the company, strategies and outsmarting our competitors.

    The end result is that I sit much more at my desk simply thinking, or walking around talking to my colleagues about ideas, or following up smart, small things others have mentioned that we should be thinking creatively about.


    We need this AI to remain competitive

    Every large business can use these AI tools to become more competitive. Sure, some will use AI to get rid of folks and replace them with bots.

    The author is completely right: You need to know exactly what you're going to use AI for and what tasks AI can make simpler/better for you. However, those tasks exist for every single large company.

    For us the competitive advantage is that the days at work become much, much more meaningful and fun. We recruit folks based on what they hear our days at work are like. Who wants to be an IT-related person doing menial office-worker tasks instead of having fun problem-solving tasks that are much more engaging?

    AI systems go through all the product our competitors publish, what potential clients publish etc. We follow every single paper trail. We beat our competitors every single day.

    We have AI systems that can go through everything we do in the company. All the analysis is internal and stays internal. Our competitors could do all the things we do. They may be doing other smart things we haven't thought of yet.

    Competition isn't quite a zero-sum game, but mostly: since we use AI smartly, if our competitors don't, they lose so much competitive advantage that they lose lots of business we can gain and grow with. It becomes self-reinforcing.

    Beyond that, we need to be using LLM and AI technology to understand what we could be using AI to do for us.

    Like having an interactive map putting together huge sets of public information, so that by knowing an address, or where a building is located, I get all the relevant information put together in a single interface, literally at my fingertips, instead of having to spend lots of time putting it together before I can perform the task for the consumer that my client actually needs.

    Sorting, summarizing, writing, data synthesis, visualizations, integration of systems: this is what our competitive edge is. AI is perfect for augmenting expertise.


    We HAVE already seen extensive gains from AI. You have to be stupid to have failed AI projects before harvesting the low-hanging fruits

    Do only 8 percent of companies fail at having positive outcomes from AI? I think the author is right that this number is way too low, but it should be even lower than 8 percent.

    Things like writing assistance, summary-making, sorting information, securing company documents, making email and meetings more effective, suggesting tasks that may be smart now due to previous patterns: these low-hanging fruits are so low-hanging and easy to implement, with such large gains, that you have to have really bad management of any large company not to succeed or benefit. This is stuff you can buy off the shelf and then train internally, without sending any data externally, to have huge gains.

    But yes, many companies have stupid management and priorities, so there is surely more failure than this.

    Unlike the author with his limited data scientist perspective, I totally believe that 34 percent of companies, probably a much higher percentage, have had AI specifically assist with strategic decision-making. See the huge list above for some ways.

    But even if it's just plugging some keywords into ChatGPT just for brainstorming ideas, or well, using AI so you have the time to spend on strategic decision-making instead of administrative busy-work, the gains are huge. This number should also be way, way higher.


    Yes, we must prepare for the future of AI-assisted companies

    LLMs will only ever handle language-related tasks. Yes, the hype is hugely over-hyped. But within their own domains, the opportunities are insane. Today's AI, technology that's right there, when implemented throughout companies in every appropriate place, will revolutionize how efficiently companies structure their work, what types of employees they have and how they're organized.

    We DO need to roll out generative AI in the organization now, because companies like mine are, and the longer other companies wait, the more steps ahead we've thought and implemented. The risks are incredibly low. The gains are there right now, ready to be harvested.

    • It doesn't matter if we're headed for an intelligence explosion. The gains are here, today with the technology we have now.
    • We don't need more data to scale. We just need to use the huge volume of data that isn't being used because it's organization-internal and company specific. Off-the-shelf LLMs you can tune to your company's needs are more than enough.
    • Yes, companies that realize they need to fix their shit to have data to use, historical data archived etc. need to do so now. Those of us who do are getting farther ahead every single day. You need to have a company that manages to implement computer-related changes. You need a work-force where every employee knows how and what to use AI for, and what not to trust AI answers and solutions to.

    The worst take in the entire article: Everyone says they're using AI because leaders actually do

    I cannot emphasize this enough. You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now

    This is the worst take the author has in the entire article:

    • Every industry is being totally disrupted right now. If you aren't aware, your company is lagging behind. Hard.
    • Every company should be using their data to train shelf-bought LLMs to suit their specific needs. This takes time and data. You need to get that working or you're SOL.
    • The sum of all these "minor concessions" compound to a revolution in how work and companies should be organized.
    • To understand this whole field, the opportunities and limits, every organization needs to get their toes properly in so we don't have folks use ChatGPT to misdiagnose, or lead you to buy the wrong products for your hair salon.

    The crux: Copilot and other hype-things are junk and spam generators, that's what the author thinks AI is all about, because all the good stuff is hush-hush and company internal

    Yes, Copilot produces boilerplate nonsense. That's because it's a GENERAL implementation of an LLM. The more specific your task and model, and the more you train it for that specific task, the better it becomes.

    Yes, people have latched onto and are parasites building off hype. That's mostly because the decision makers in companies don't know enough about AI or computers in general to know what's smart and what's not. Business leaders are too old, or know too little about modern technology to get things right with AI.

    The companies that are doing big things™ with AI are doing so quietly, because that's giving huge competitive advantage. OR they're loud about it because their implementation is so obvious that everyone else will do this too, or the products they make are so obviously AI-driven that everyone will offer these capabilities very soon.

    Chatbot-hell is only a hellish experience if the bot isn't continually trained with a goal of being good and tailored to answering the questions the chatbot itself is supposed to answer. Half-assed AI-implementation is leading people to believe they can tell when AI is being used. (And they'll even believe we can use AI-systems to identify when AI is being used, which is just a massively stupid take. That can only ever identify BAD AI-use where the human isn't sufficiently in the loop to leave a product they're entirely responsible for).


    The author is the perfect example of why we need executives who have training both within technical fields, but also in leadership and how to run companies, because he's clearly not qualified to say anything about how AI is impacting business right now.

    It's super easy to buy this piece because it's a counter to what we all see as "AI-savior nonsense". Most of the observations are great. Most of the analysis of those observations is junk.

    26 votes
    1. [2]
      skybrian
      Link Parent

      Yes, as you say, one of the things that makes it hard to see what's going on (besides the huge amounts of hype) is that a lot of the use cases are in specific businesses, and you'd need to understand the business to see why it's useful. Often they don't talk in public about what they do, and when they do, sometimes they don't tell the story well.

      Meanwhile, consumer-facing use of chatbots is often entertainment and often done badly, plus finding AI weirdness is pretty entertaining and gets a lot of attention. And of course everyone's got their own opinion, often not well informed.

      When everybody starts talking about something vague, you get a lot of noise.

      There's also AI marketing that seems pretty clueless in part because it is clueless and also because it's not really directed at consumers. Companies like Microsoft and Google serve both businesses and consumers.

      It might be nice to have more concrete stories (specific examples) of what businesses do.

      9 votes
      1. nacho
        Link Parent

        You're totally right.

        Again, because of competitive advantages, few companies are being loud and proud about this.

        Companies like IBM have lots and lots of agreements where they support these types of pragmatic, practical initiatives with individual companies. Just as one example.


        All the hype and attention is going to the bells and whistles, not the huge, boring changes that are affecting millions of workers.

        7 votes
    2. raze2012
      Link Parent

      We will realize untold efficiencies with machine learning

      Given how the last 30 years of efficiency increases have rewarded the worker, I don't really see this as much of an upside. When we get more efficient they use that as an excuse to put more burden on less people, not scale up properly and benefit society.

      All the tools you mentioned are great and how I envision AI being in non-cutting edge tech in about a decade. I think that still plays to the author's point that all these tools aren't things that will make or break a company. Companies as is have a lot of dysfunction and accept inefficiencies if it means either more profit or keeping people they like around them.

      totally believe that 34 percent of companies, probably a much higher percentage, have had AI specifically assist with strategic decision-making. See the huge list above for some ways

      I suppose it comes down to definition. Technically, if AI helped sort your emails it did assist in strategic decision making. But not necessarily in a way that businesses selling AI want you to think.

      The worst take in the entire article: Everyone says they're using AI because leaders actually do

      I think this is the core misalignment between you and the author. I believe the author's lens on "using AI" refers more to companies that are developing on top of LLMs or relying on LLMs to outperform a swath of humans. This makes sense because this is what AI companies are marketing.

      Your definition seems to lie more on "we can leverage current and near-future tools to increase efficiency". Which yes, is happening as we speak. Sometimes to great effect, sometimes to great failure. But I think the author delineates between "using a program powered by AI to assist in boring stuff" and "making a program using AI to hype investors". Which is definitely what many loud executives are doing.

      Chatbot-hell is only a hellish experience if the bot isn't continually trained with a goal of being good and tailored to answering the questions the chatbot itself is supposed to answer.

      In my cynical lens, many aren't. It's there to say "we are AI powered!" to investors. And it works. So why fix what isn't broken?

      ...oh, the chatbot is broken? No, that's not what we care about working. It's about the cash flow. The perception makes more money than any actual inefficiencies cost, so things won't change.

      9 votes
    3. [3]
      underdog
      Link Parent

      Did you use AI to help write your exemplary analysis? If so, I'd be very interested in understanding your process.

      Off-topic: I initially took the article at face value and felt it resonated deeply with what I've been seeing and thinking, even though I couldn't express it as eloquently. Then, as is customary on Tildes, someone with a lot of expertise, like yourself, breaks down the original, offers more information, points out inconsistencies, and blows my mind because I thought the original was already an unbeatable masterpiece.

      6 votes
      1. [2]
        AntsInside
        Link Parent

        Agreed this is a valuable perspective. More generally, I am also interested to know whether experience with the value of AI at work has led nacho to use it more widely, or if the writing and personal-assistant AIs generally available just do not measure up to their company-tuned examples.

        5 votes
        1. nacho
          Link Parent

          I make a point of not using AI tooling to write in my personal life. From a "use it or lose it" perspective, I think it's important to retain practice with how to write, proofread, organize thoughts and so on. Writing is an important tool for reflecting, learning and fully developing thoughts that meander around inside the mind but may not be caught, or stand up, when you try to spell them out.


          You're also totally right though: If I were to use a general assistant, the experience would be bad.

          I could use company software, but it's not specifically trained for this type of content, so it'd be way worse than it is for work-related stuff. I'm sure it'd still be very useful, particularly linking up sources and documents relating to this topic, news articles and so on, but also to cut down the length of prose substantially.

          6 votes
    4. pete_the_paper_boat
      Link Parent

      That's not a comment, that's a whole counter article lol

  4. arqalite
    Link

    Oh, how I love these unhinged rants that are actually very thoughtful. Some people will definitely misunderstand the post, which is not great, but personally I think it slaps.

    if you continue to try { thisBullshit(); } you are going to catch (theseHands)

    I wish I worked in an edgy startup so I could use this forever and ever. It's the kind of juvenile programmer humor that absolutely gets me.

    23 votes
  5. 0x29A
    (edited )
    Link

    This article gets at multiple important points, even though this author is still more generous to "generative AI / LLMs and similar things" than I am (I abhor the majority of it with the fire of a thousand suns- particularly when used/released/available in capitalist society and when the intention is that it produce entire entities like works of art, providing (mis)information like Google's absolute garbage dump AI overviews, etc- any kind of output that I feel should be more human that we're sapping the humanity out of- but I digress):

    • The overwhelming hype and marketing
    • The lack of nuance or discernment in what we call AI (generative vs. non, LLMs vs. machine learning vs. algorithms that companies "pretend" are AI ... ad nauseam)
    • The quickness with which ignorant executives hop onto hype trains or buy into marketing
    • How the things these systems can do can often be replicated or implemented in very specific, niche technological ways (as conversion tools or whatever) that aren't the "shiny" generative bullshit thrust onto us
    • Many of the people pushing these systems and the kind of techbro culture they are (Web3/blockchain hypers, etc)
    • The fact that businesses expect "AI" to fix problems for them when they can't even solve their own much simpler problems to begin with (like wanting to use an AI for documentation, but having such absolute shit documentation to begin with that the AI isn't going to magically fix it)

    So I think many of the points made here are pretty solid, and they also kinda get at some of the underlying ideas behind a lot of other concerns they don't talk about- like the non-techie public's trust of these systems, or the amount of absolute trash fake "art" that is saturating the world (social media posts from pretend "artists" and "photographers" that are just shitty AI output, and all of the people commenting who can't tell it's AI, even when it's painfully obvious).

    17 votes
  6. post_below
    Link

    The ranting and threats of violence shtick is hilarious. And also you can only get so much mileage out of a gimmick.

    That's my only critique though, all their points are right on, and I'm grateful for the catharsis.

    Fuck hype. At its core it's only ever money grabbing and the people who are the best at it are so often the worst examples of what humanity has to offer.

    We can't gift the masses the tech literacy to see through crypto and AI grifts, but it would be nice at this point if everyone just assumed by default that if it feels at all like there's hype involved, whatever it is, it's mostly bullshit. That formula will almost never fail.

    13 votes
  7. Cycloneblaze
    Link

    It's very difficult to do hyperbolic threats of gratuitous violence well and it's very common to do it poorly. This article actually does it well - if it outstays its welcome a little, it's saved by its lucid points about how useful AI actually is.

    8 votes
  8. Thomas-C
    Link

    I appreciated their perspective as well as their tone. Their tone helped me follow along a bit where my professional experiences came up short. I want to explore better what all is going on. As a layperson it's often difficult to sift through what folks are saying on almost every end of how AI is developing. Because of the potential money in it as well as the internet being how it is, hard to get a straightforward story about what it looks like "on the ground". I remember before I left, my prior workplace had "AI" written into their future meetings and it was just amusing to me. They'd tried to integrate a tool unsuccessfully before - it just couldn't get things "right" in the ways they wanted (the task, imo, was too specialized for that type of model to be a worthy pursuit), but too, it wasn't a feature anyone really asked for. As I've tried to broaden my understanding it only got more confusing why they'd even try - I couldn't come up with really any situation where an LLM/chatbot/etc would serve much of a purpose at all, much less provide something unique/competitive to the customer base. I was also friendly with our IT/Sysadmin folks and they were not positive on the idea in any way. The impression I had was management just being irrational because money, and despite continuing to fail to integrate things apparently it's still an ongoing project. I'm just glad I got away before having to train up a chatbot or something stupid like that.

    Following along in my own weird layperson way, I feel the outcome has been a mixed bag, at best. On the one hand there's some stuff that's kinda interesting, gets a "neat" out of me, but other than being surprised a bit now and then it never really gets further than that. Whenever I take the time to dig a bit deeper the uselessness/mindlessness of it starts to become really apparent. I'm sure the author would lay claim to a couple of teeth for the topic I did a little bit ago but it's part of trying to see what folks think, what is actually happening vs how people talk about it vs what is being presented. The amount of resources being poured into scaling further is why I retain my interest - something is going to come of that, but what exactly is a thing I choose to try to think about in terms of what is presently happening, since the future feels less and less predictable. I understand and largely sympathize with folks who are tired and frustrated, because it's tiresome and frustrating. I appreciate the author being frank because seeing how frank they're willing to be is part of understanding them, at least I think so.

    I choose not to get much into pro- or anti- because by the time I feel confident enough to deliver a solid argument, some other thing has reared its head and much of the arguing is just the same things with a little modification. I appreciate it when folks hone in a bit like the author did and deliver what they have to say with more specifics/context. I think the concern I have is similar to the one I had about the internet - broadly, this deployment is about as reckless of an approach as I could imagine. A free for all among startups produces some cool things from time to time but I don't want that to be the primordial soup if something like an intelligence explosion becomes a real thing. It's hard to shake a feeling like tech is running away from people, but then I read an article like this one and it feels like coming back to earth - no, not yet at least. None of the really crazy stuff has come about just yet, this could all end up being Silly Shit in the long run. Just have to see. The investment broadly is insane so I don't think it makes sense to just ignore things, but I think the author's plea to keep one's head on straight is a very good one. If "AI" doesn't actually solve a problem, don't go looking for a problem for it to solve, spend the time strengthening what actually makes the enterprise work/"fixing your shit" lol. In the world of the everyday person it's not much different, at least far as I've seen. I guess if you're good at grifting it's a fine time, there's that. You can shitpost on a level never before seen. Sometimes the models can be halfway helpful when the degraded state of search rears its head. But we haven't yet hit shit like "terminator", "creativity is over", so on and so forth. Just because those seem like possibilities doesn't mean it makes sense to talk like it's already happened. If where I live is any indication, no, folks are not all abandoning their pursuits and giving up. 
     Plenty are just flat out ignoring all of it because they're sick of hearing the term "AI". They're kinda living out what the author is saying - ain't broke, don't fix. AI doesn't do anything helpful for what folks around here do, so no one cares.

    My biggest concern with it is more in the realm of socializing/psychology. I am very weirded out by the advent of shit like using chatbots as stand-ins for partners/friends/etc. Treating the things like they're people is a stupid move, we're allowing a bias to just rule the day doing that and I don't like to think about what it might mean for the loneliest among us. The idea of things like an "ai girlfriend" or "ai therapist", today, with what's here now, makes me shudder in the same way I used to reading old sci-fi novels.

    5 votes
  9. [6]
    drannex
    (edited )
    Link

    Before this gets changed (to sentence case) I think the title should stay the way it is, because that's exactly how it should be read.

    Edit: I prefer sentence case, especially on tildes, please keep it up! But, this article specifically should be Kept As Is For The Emphasis.

    18 votes
    1. [2]
      redwall_hp
      (edited )
      Link Parent

      I also dislike sentence case being used for titling, anyway, to be honest.

      This post is amazing. I always enjoy this style of writing (Gonzo journalism is the best description I can think of, I guess?) and it perfectly encapsulates how I feel as a software engineer. Before that, it was blockchain hype. It feels like you're surrounded by posers who bought what grifters are selling, or cult victims or something.

      I've never once found an LLM useful for programming. At best, it finds me what vendor docs and Stack Overflow have. At worst, it literally makes things up. It can generate boilerplate, but it can't solve problems.

      1. I don't need help writing boilerplate. It's easy and probably more effort to ask something else to do it.

      2. If you're writing a lot of boilerplate, you're either screwing up at an architectural level and need to refactor things or you're spinning up and throwing away a bunch of prototypes at a concerning rate.

       It's also a violation of Kernighan's Law:

      Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it? — Brian Kernighan, 1974

      If you had to lean on a tool to come up with a solution, you probably don't understand it enough to maintain it. Or to defend why you did it in a code review. And if you rely on peer review to catch something being bad, because you don't understand it, that's dangerous.

      You probably spend 10% of programming time writing code and 90% reading existing code. If you use an LLM, now you're spending 100% of the time reading and trying to understand existing code. Such an improvement. >_>

      12 votes
      1. davek804
        (edited )
        Link Parent

        I found the article fantastic and a wonderful representation of my perspective. That being said, I absolutely find CoPilot useful in my day to day.

         I'm all DevOps and so spend an inordinate amount of time in HCL, Bash, YAML. I don't let AI drive my data models or really even propose novel concepts. But I absolutely find value in it helping me find the right way to flatten, merge, and sort maps in HCL. I have no idea why I have such a hard time in HCL with basic things I can do in OOJ. But I do. And I certainly notice that, while it often takes a few tries to get the right solution out of ChatGPT for a more complex little data-model reshuffle, I get there with its assistance. That being said, I know for sure I'm getting no better at the language without learning the language. Bleh!
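
         For anyone unfamiliar with that kind of HCL map-wrangling, here's a minimal sketch of the flatten/merge/sort dance being described. The variable names are made up for illustration; `merge()`, `flatten()`, `sort()`, and `keys()` are standard Terraform functions:

```hcl
locals {
  # two hypothetical per-team maps of user -> role
  team_a = { alice = "admin", bob = "dev" }
  team_b = { bob = "ops", carol = "dev" }

  # merge() combines maps; later arguments win on duplicate keys,
  # so bob ends up as "ops" here
  everyone = merge(local.team_a, local.team_b)

  # flatten() collapses the nested lists produced by nested for expressions
  pairs = flatten([for k, v in local.everyone : [{ user = k, role = v }]])

  # sort(keys(...)) gives a stable, alphabetical ordering of the map keys
  ordered_users = sort(keys(local.everyone))
}
```

         Trivial in most general-purpose languages, but the `for`-expression syntax and the merge-precedence rules are exactly the sort of thing that takes a few tries to get right in HCL.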

        11 votes
    2. [3]
      cfabbro
      (edited )
      Link Parent

      I didn't see this before I edited the title to sentence case. I changed it back after seeing your comment since it's clearly a popular opinion... although I don't personally think making an exception is really justified, TBH, even for "emphasis". I don't know why, but I find Title Case really hard to read. :(

      8 votes
      1. [2]
        skybrian
        Link Parent

        Even without the title case, it would still be a clickbait title. But I guess some people like clickbait they agree with?

        5 votes
        1. cfabbro
          (edited )
          Link Parent

          Eh, obviously tongue-in-cheek titles aren't necessarily "clickbait". And, in this case, the tone of the blog post was similarly faux-angry and humorous too, despite the author also making some serious points, so the title is actually pretty fitting, IMO. I just personally can't stand Title Case since I genuinely do find it hard to read.

          13 votes
  10. jujubunicorn
    Link

    I absolutely love the author's writing style. It perfectly represents my emotions towards Tech Bros.

    8 votes
  11. Gopher
    Link

     Remember when GPS caused people to drive into lakes and shit? I can't wait till employees start doing stupid shit because the boss implemented AI and it told them to do something stupid.

    3 votes