80 votes

Is AI actually useful for anyone here?

Sometimes I feel like there's something wrong with how I use technology, or I'm just incredibly biased and predisposed to cynicism or something, so I wanted to get a pulse on how everyone else feels about AI, specifically LLMs, and how you use them in your professional and personal lives.

I've been messing with LLMs since GPT-3. I was initially very impressed by the technology, but that view has since evolved into a more nuanced one. I think they're very good at a specific thing and not great at anything else.

I feel like, increasingly, I'm becoming a rarity among tech people, especially executives. I run cybersecurity for a medium-sized agency, and my boss is the CIO. Any time I or any of her other direct reports write a proposal, a policy, a report, or basically anything meant to be distributed to a wide audience, she insists on us "running it through copilot", which to her just means pasting the whole document into copilot chat and taking the output.

It inevitably takes a document I worked hard on to balance tone, information, brevity, professional voice, and technical detail, and turns it into a bland, wordy mess. It's unusable crap that I then have to spend more time on to make it sound normal. My boss almost always comes up with "suggestions" or "ideas" that are very obviously just copy-pasted answers from copilot chat too.

I see people online who talk about how LLMs have made them so much faster at development, but every time I've ever used one in that field, it can toss together a quick prototype for something I likely could have googled, but there are frequently little hidden bugs in the code. If I try to use the LLM to fix those bugs, it inevitably just makes them worse. Every time I've tried to use AI in a coding workflow, I spend less time thinking about the control flow of the software and more time chasing down weird esoteric bugs. Overall it's never saved me any time at all.

I've used them as a quick web search, and while they do save me from having to trawl through a lot of the hellhole that is the modern internet, with blogspam, ads, and nonsense people write online, a lot of times, it will just hallucinate answers. I've noticed it's decent at providing me results when results exist, but if results don't exist, or I'm asking something that doesn't make sense, it falls flat on its face because it will just make things up in order to sound convincing and helpful.

I do see some niches where the stuff has been useful. Summarizing large swathes of documents, where the accuracy of that summary doesn't matter much, is a little useful. Like if I were tasked to look through 300 documents and decide which ones were most relevant to a project, and I only had an hour to do it, I think that would be a task it would do well with. I can't review or even skim 300 documents in an hour, and even though an LLM would very likely be wrong about a lot of it, at least that's something.

The thing is, I don't frequently run into tasks where accuracy doesn't matter. I doubt most people do. Usually when someone asks for an answer to something, or you want to actually do something useful, the hidden assumption is that the output will be correct, and LLMs are just really bad at being correct.

Meanwhile, the internet is full of AI evangelists who talk about their AI stack made up of SaaS products I've never even heard of chained together. They talk about how insanely productive it's made them, how it's like being superhuman, and how without it they'd be left behind.

I'm 99% sure that most of this is influencer clickbait capitalizing on FOMO to keep the shared delusion of LLMs' usefulness going, usually because they have a stake in the game. They either run an AI startup, are involved in a company that profits off of AI being popular, are influencers who make AI content, or they just have Nvidia in their stock portfolios like so many of us do.

Is there anyone out there that feels this technology is actually super useful that doesn't fall into one of those categories?

If so, let me know. Also, let me know what I'm doing wrong. Am I just a Luddite? A crotchety old man? Out of touch? I'm fine if I am, I just want to know once and for all.

149 comments

  1. [9]
    ogre
    • Exemplary

    It pains me to see widespread adoption of LLMs. That doesn't sound like a healthy mindset, and I worried for some time that I'm a luddite, or crotchety, or some other out-group. However, I think I finally understand why I feel this way. When I see my friends, family, and coworkers using these services every day, I'm not upset because they've bought into hype for a product that can't perform the miracles it claims. I'm upset because I see people hurting themselves.

    A lifelong friend who's struggled with social anxiety runs all of his text communication through LLMs; he's completely lost his own voice in writing. Conversations with him feel sterile and inhuman, and I just want my friend to feel comfortable again. My brother talks to an LLM all day about his conspiracy theories, and it tells him he's right after calculating the probability of every possible scenario. My mother gets all of her medical advice from an LLM, despite my begging her to see an actual doctor. My father-in-law will interrupt conversations to announce that he's asking an LLM for input and proudly share the output like it's gold. A lonely friend who's recently had a baby chats with an LLM instead of talking to his wife or friends. Junior engineers at work can't write or read code without an LLM.

    I see people losing their humanity, outsourcing it to a computer program. I see the largest tech corporations successfully pulling off another massive transfer of wealth via deskilling and layoffs. I don't think that makes me a crotchety, out-of-touch luddite. Although, maybe it does? The luddites had some worthwhile concerns.

    88 votes
    1. DynamoSunshirt

      God, the way you write this makes LLM usage sound awfully bleak! I couldn't agree more, and unfortunately I don't think I can offer any counter-examples to give much hope. But I have ironically been finding community with people over a distaste for generic LLM output! So try not to let it get you down too much.

      Keep fighting the good fight, encouraging people to touch grass, to set down ChatGPT, to live their lives instead of filtering them through an LLM. But don't let yourself get too frustrated by it. At the end of the day, you can't control other people. You can lead a horse to water, but you can't make them drink.

      11 votes
    2. ssk

      I think the solution to your pain, though you didn't ask for one, is to connect on your humanity. Recognize that anxiety is damn near universal. That having a conversation with a doctor can be scary. That social anxiety feels tremendously isolating. That the world is super scary out there right now. Focus on those connection points when talking with your friends and loved ones. Your humanity can be a better connection than any AI.

      I don't think that makes AI bad, inherently. I just think that in this day and age connecting vulnerably and personally is a daunting task. There's so much that can go wrong. There's so much convenience in seeking out AI. But the act of connection, of love, is so much more powerful than any of that. Truly.

      6 votes
    3. [3]
      CrypticCuriosity629

      I don't entirely disagree with your concerns, because they are real concerns. However, I do think that in almost every example you mentioned, the real problem is with the people using LLMs, not the LLMs themselves.

      So yes, you bring up valid issues, but in my opinion those issues would be solved by addressing the human element instead of the LLM.

      The issues you describe, like people leaning too hard on AI for communication, falling into echo chambers, or trusting LLMs over experts, aren’t new. LLMs are just a new tool that can highlight or magnify existing tendencies. The core problems, struggles with communication, lack of critical thinking, or poor judgment, would still be there, with or without AI.

      If we want to solve these problems, we have to address the human side, not just the tools people use.

      Unfortunately, we've ignored the human issues for way too long in our society, and it's only recently that we've started to see the major cracks showing, and not just with LLMs, but in general with everything from business to politics.

      We should have been prioritizing teaching critical thinking and emotional intelligence in schools instead of standardized, memorization-based education. It's the whole "give a man a fish, he'll eat for a day; teach a man to fish and he'll eat for a lifetime" thing, but applied to learning and critical thinking.

      6 votes
      1. [2]
        Shevanel

        We should have been prioritizing the teaching critical thinking and emotional intelligence in schools instead of standardized memorization based education.

        I largely agree with your points, but I’m going to gently push back a little bit here, or at least add my own clarification that hopefully aligns with your thoughts. As a former teacher, most of the teachers I worked with and/or know now on a personal level so very desperately want to prioritize these things, but things like No Child Left Behind (at least in the U.S.) have made it all but impossible.

        Don’t get me wrong, we’re coming to the same conclusion, but I always like to add this because I think that, again, at least in the U.S., we’ve done a really fantastic job completely crippling educators at the very foundation of their institution.

        7 votes
        1. CrypticCuriosity629

          Don't get me wrong, that comment is not blaming teachers; it's very much a criticism of the entire educational system and things like No Child Left Behind.

          Specifically with school boards and higher ups in the education system.

          I just wish that we as a society took more of an interest in this kind of thing and prioritized it, voting accordingly and effectively, and in some cases running ourselves if there are no good options.

          2 votes
    4. [3]
      chocobean

      The only possibly legitimate use among those you listed is your friend with social anxiety. If he's using this time to slowly de-personalise communication and de-escalate the fear of having to respond and the guilt of taking time to respond, to outsource the anxiety of choosing the right words to reply to people, to detach from cycles of negative self-talk after replying, then these awful machine lines are possibly helping your friend.

      5 votes
      1. [2]
        ogre

        I actually think a lot of their LLM usage is understandable. My friend with social anxiety probably feels an immense weight lifted off his shoulders, and while that’s great I can’t help but be disappointed that talking to him has changed so much. My mom would previously use google search for medical advice, but that experience has degraded to the point that it’s more convenient to use an LLM instead. I imagine that’s the gateway for most people. I don’t think it’s the right solution but it’s not my place to fight it.

        3 votes
        1. chocobean

          I hope it's temporary. Both the infatuation with AI, and using it to mitigate anxiety.

          3 votes
  2. [30]
    nic
    • Exemplary

    Yes. AI is actually useful.

    First, as you have noted, it gets you up to baseline very quickly. If baseline is commonly on the internet, it is a lot faster to use ChatGPT than it is to use Google. As you have noted, however, you have to be careful. If it is important, baseline is the beginning, not the end of your research.

    Second. AI is fantastic at automating complex, confirmable tasks. Got a user that wants a report, but the user doesn't know how to create reports, and they are probably going to ignore the data and go with their gut anyway? Got a spreadsheet that needs to be loaded into a highly structured database, that takes a thousand clicks through the UI, but takes a minute to confirm it was loaded in correctly?
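
    (To make that "automate the load, then cheaply confirm it" pattern concrete, here is a minimal sketch. The leads.csv file, the crm.db database, and the table name are all made up for illustration, not anything from this comment.)

    ```python
    # Sketch of the "automate, then confirm" pattern: bulk-load a spreadsheet
    # into a database, then run a cheap check instead of a thousand UI clicks.
    import sqlite3

    import pandas as pd

    df = pd.read_csv("leads.csv")  # hypothetical spreadsheet export

    conn = sqlite3.connect("crm.db")  # hypothetical target database
    df.to_sql("leads", conn, if_exists="append", index=False)

    # The one-minute confirmation step: row counts should line up.
    loaded = conn.execute("SELECT COUNT(*) FROM leads").fetchone()[0]
    assert loaded >= len(df), f"expected at least {len(df)} rows, found {loaded}"
    print(f"Loaded {len(df)} rows; table now holds {loaded}.")
    conn.close()
    ```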

    Third. AI can be useful at brainstorming, if you are an expert. I am an expert in a rather esoteric field. I don't have many folks to talk to who know what I know. AI is only helpful 20% of the time, but I know enough to quickly identify that 20%, so I use it all the time when I want to think things through. It is the perfect yellow rubber ducky (if you don't get the reference then you are today's lucky 1000.)

    Lastly AI is useful at taking care of bullshit tasks. Like marketing copy. You just have to delete all the obvious LLM tells.

    Sorry your CIO thinks everything is a bullshit task, or even worse, is deluded enough to think AI is always useful. Good luck with the job search :/

    51 votes
    1. DynamoSunshirt

      Thanks so much for sharing this, I am staunchly anti-LLM at this point but I can't help but agree with your points here!

      6 votes
    2. [28]
      Lobachevsky

      Thank you. I don't really understand why AI hate is so prevalent online. Seems like in the real world everyone is using these tools successfully to some degree with no issues. It's not particularly different from any other tech that "came for jobs". In particular I'm shocked to see digital artists repeat the exact same arguments that were levied at Photoshop back in the day - something that essentially created digital art as a medium. And the same people that have directly benefited from that are rejecting the next iteration.

      4 votes
      1. [27]
        Shevanel

        There’s nuance that you’re bulldozing over here, IMO. Photography:Photoshop and Photoshop:GenAI imagery are absolutely not equivalent ratios from a “get with the times” perspective. Photoshop is a set of tools with a high skill ceiling in and of itself. The use of AI in photography (and the arts in general) at this point is mindless point-and-click drivel that requires little more than a pulse to operate. It is the artist’s equivalent of a “script kiddie” saying they can do the job of a photographer because they wrote a sentence describing the same picture that a photographer could ostensibly take.

        And to be totally clear, I agree entirely with nic’s parent comment here. But I don’t think we ought to take what he’s saying and imply that it should apply to art in the same way.

        10 votes
        1. [26]
          Lobachevsky

          Photography:Photoshop and Photoshop:GenAI imagery are absolutely not equivalent ratios from a “get with the times” perspective.

          Why? It was a new medium, required a new skillset, simplified some aspects of the work (how many digital artists know how to mix paints?), largely moved traditional art into a niche with digital becoming the mainstream, lowered the barrier to entry (no need to buy canvases and paints, a smartphone is sufficient, an iPad is sometimes a tool of choice even for professionals), and had the exact same arguments levied against it (it's not real art, it's a cheap shortcut to real skills, it's not soulful enough).

          The use of AI in photography (and the arts in general) at this point is mindless point-and-click drivel that requires little more than a pulse to operate. It is the artist’s equivalent of a “script kiddie” saying they can do the job of a photographer because they wrote a sentence describing the same picture that a photographer could ostensibly take.

          But digital art was always full of poor quality amateur hour examples. Look at Deviantart since forever. Sorry, I'm not seeing a fundamental difference here. You could argue a difference in scale, which is perhaps true, but otherwise I'm not particularly convinced.

          4 votes
          1. [21]
            CrypticCuriosity629

            Yeah, I'm a designer and I just see AI as the natural progression of things like Photoshop.

            Like if not AI, what's the natural technological progression of design software like Photoshop and other similar software?

            Were we supposed to just quit innovating after Photoshop?

            The problem that I've been screaming from the top of my lungs about since at least 2009 is that we as a society needed to prepare for what at the time I called automation, so that it wouldn't be as economically disruptive as it's proving to be. We also needed to start looking into a Universal Basic Income for those whose careers would be displaced by automation, funded by charging companies an automation/AI tax that would level the playing field between the cost of AI and the cost of hiring humans.

            Unfortunately we live in the reality where we literally did not do a single rational thing to address these issues, mainly because we live in a gerontocracy of politicians who are on dementia meds and barely know how to open PDFs without help, and so now technology is quickly catching up to people's lack of foresight like a freight train plowing into a dynamite factory.

            Once again, it just backs up my opinion that the problems people have with AI aren't actually with the AI itself; it's that AI is starting to highlight the major cracks in society that we've ignored and refused to take seriously for decades at this point.

            4 votes
            1. [20]
              Shevanel

              I feel like you’re putting two arguments forward here. Like, I don’t disagree with the argument about working towards more automation, UBI, etc, but surely you’re not hoping that society is going to start automating the creative arts? And if you’re not, then I don’t see how your comment applies to my concerns in my comment? Because that sounds like actual hell to me; it’d be the worst possible outcome of all of this.

              1 vote
              1. [19]
                CrypticCuriosity629
                • Exemplary

                I wouldn't say I'm hoping for this or even excited about it. Automation and AI are just the next step in how technology evolves, including in the arts. It's not about what anyone wants; it's just the reality that tech moves forward unless something actively stops it. That's true in every industry, not just creative work.

                My original comment was to someone else, so it might not line up perfectly with your concerns. The main thing I was pointing out is that I don't think the problem is entirely with AI itself, it's that we never prepared for it, economically, socially, or culturally, and that's where a lot of the danger of AI comes from.

                As I asked before, was the expectation that we just quit innovating after the creation of Photoshop and digital design software? And if that wasn't the expectation, then what else would the next technological progression after that be?

                If we'd put things in place like an automation tax or UBI, this wouldn't be nearly as devastating for artists' livelihoods. That's the real problem, not the tech itself, but how disruptive it is when nobody's ready for it.

                As for art and creative spaces being under threat, I personally am not concerned about that outside of the economic dangers for artists themselves.

                History shows the same panic every time something new arrives. Painters in the 19th century were terrified of photography, and the arguments sound almost identical to what we're hearing now. Every single time a new technology comes out, people say, "No, this time is different, this is not like last time, this is what will definitely kill art."

                The fear has sometimes been expressed that photography would in time entirely supersede the art of painting. Some people seem to think that when the process of taking photographs in colors has been perfected and made common enough, the painter will have nothing more to do.

                And critics dismissed photography as "thoughtless replication" with no genius or soul, again, just like we're hearing about AI:

                When critics weren't wringing their hands about photography, they were deriding it. They saw photography merely as a thoughtless mechanism for replication, one that lacked, "that refined feeling and sentiment which animate the productions of a man of genius," as one expressed in an 1855 issue of The Crayon.

                That article was written in 2016, and it cites articles written in 1855, well before AI was even on the radar, so it was not written knowing the arguments being made today about AI. The fact that they're the same basic fears and arguments means it's a pattern, one that repeats every time there's a disruptive new technology.

                And here's an article from the New Yorker written in 2005 articulating musicians' old fears about the phonograph:

                Ninety-nine years ago, John Philip Sousa predicted that recordings would lead to the demise of music. The phonograph, he warned, would erode the finer instincts of the ear, end amateur playing and singing, and put professional musicians out of work. "The time is coming when no one will be ready to submit himself to the ennobling discipline of learning music," he wrote. "Everyone will have their ready made or ready pirated music in their cupboards." Something is irretrievably lost when we are no longer in the presence of bodies making music, Sousa said. "The nightingale's song is delightful because the nightingale herself gives it forth."

                And here's an article written for The New York Times in 1985 about how CAD software would cause a generation of engineers to be "overly reliant on CAD software and will tend to make catastrophic mistakes."

                The consultant, Dr. George E. Smith, who is also a professor of philosophy, told the winter meeting of the American Society of Mechanical Engineers in Miami Beach that junior engineers using CAD programs were simply putting in data and collecting the solutions. Little or no thought is thus given to how the program arrives at the answer or whether it is correct, he said.

                Many young engineers, Dr. Smith said, cannot devise simple mathematical models to check the computer's answers.

                Other engineers said that people who made such mistakes could not be licensed to practice engineering and that any mistakes by such nonprofessionals would be caught by licensed engineers before a design reached construction stage.

                That was written in 1985. Is it a coincidence that it's the exact same kind of argument people have around software engineering and AI now?

                And finally when it comes to Photoshop, this NPR article was written in 2015 interviewing the creator of Photoshop:

                "There were previously very sophisticated people in darkrooms who could do very good photo composites that you couldn't tell from reality," Knoll says. "What Photoshop did was sort of democratize that ability."

                But some people would inevitably use these tools irresponsibly.

                "A lot of the uses of Photoshop are wonderful and creative," he says. "There are a few uses where people are being unethical with it and like any tool, it's not the fault of the tool that happens."

                Knoll sees a positive side to the pervasiveness of Photoshop.

                "It certainly raises awareness that you can't trust an image as truth without having other means of verification," he says. "People have a more healthy skepticism when they see photography."

                Again, there's a reason this mirrors the criticism of AI being used to fake photography.

                These are very, very clear patterns. It's not a coincidence that the same exact phrasing and terminology get consistently used across literally hundreds of years; it's because the core fear, of new technology and the uncertainty around it, is always the same.

                And yet each time art doesn't die.

                Again, not saying all this to particularly defend AI as this amazing invention, but just trying to illustrate that the fears and concerns around AI are almost perfect reflections of fears and concerns around other disruptive technologies going back hundreds of years.

                Edit: I thought I should clarify why I have a lot of this information on hand. I grew up in the 90s and started using Photoshop in the early 2000s, and I remember there being a lot of discourse around Photoshop ending the careers of traditional graphic designers. That was a common thread early on, so I recognized a lot of the same arguments around AI and saved these articles a couple of years ago to illustrate it.

                11 votes
                1. [17]
                  Shevanel

                  I appreciate the perspective and references to similar “scares” in the past, and don’t disagree with a lot of what you’re saying on the surface. While I think 0x29A’s response to a separate comment in this thread would serve as a better response than anything I could come up with here, my angle is that this “advancement” truly is different than those, because it’s more of a replacement via automation of pre-existing methodologies, vs. an advancement or creation of anything new. Hell, the minute AI invents a new genre of music, let’s pick this conversation back up. Except, it can’t - it always reduces and homogenizes.

                  At the end of the day, all of these advancements you’re calling out, the phonograph, photography, digital photography, were natural advancements that allowed humans to keep creating using different mediums, or different approaches to the same medium. They are fundamentally different from a comparative perspective to a medium which removes the human element from the equation entirely. And yes, I’ll admit that the human is entirely necessary for exactly one step in the most basic models - a prompt. Except that’s grasping at straws as far as I’m concerned.

                  The quotes you shared from naysayers in the past are laughable at this point in time, of course, no argument there! But can we acknowledge that there is some gray area between those moments and pre- vs. post-AI art?

                  Let’s take a proper analytical look at one naysayer’s concern for a sec. As a musician, I feel most comfortable speaking to Sousa and the phonograph. So in JPS’s defense, the original wax cylinders sounded AWFUL, so his concerns were likely well founded at the time. That said, the quality of recordings got better, and now we have lossless audio. No concern for listeners losing the fineries! Let’s attempt to apply this logic to AI art. The concern is that we remove the humanity from the art and there is no way to utilize it as a toolbelt vs. a whole-cloth replacement for the artist. Okay, so what does an improvement upon this actually look like down the road? What’s the “lossless audio” to this concern’s “wax cylinder?” We put the human back into the artwork instead of relying on AI output? Then you’re not using AI anymore. The AI capabilities get better, so that a good “prompt engineer” can write better paragraphs to get better art? Yeah, that’s likely to happen, but it doesn’t fix the fundamental issue of the “”””art form”””” still being entirely reductive and devoid of humanity, and likely still not up to par with a sweat-and-blood novel creation of humanity.

                  I don’t mean to be nitpicky here, but the point I’m getting at is that this is different than the examples you’re providing. Humanity’s participation is very much required in the recording of music to a wax cylinder / record / digital device. Humanity’s participation is very much required in the creation of (high-quality) digital art using tools like Photoshop. Humanity’s participation is very much required in (good) photography. Humanity’s participation is entirely optional in the creation of AI art, and the end result is meh, and it will remain meh. And even if it starts to present as “better than meh” in the years to come, it’ll still, by its very nature, lack any semblance of humanity.

                  Edit: I wanted to close by plagiarizing an old Tildes comment of mine regarding AI art being the “next natural step,” as if we ought to just lie down and accept this reality:

                  I’m a classically trained musician who makes a living as a software engineer, and I find the arguments for (or against) AI within these two fields to be endlessly, fascinatingly, frustratingly at odds with each other. On one side, engineers are usually pretty good about seeing AI for what it really is (IMO) when it comes to its role in their own job. It’s a tool, a potentially very effective one, but it’s not going to replace them. But then in that same breath, some of these folks turn around and say, “oh, but AI art is just the way of the future, either get with it or get left in the dust.” Which one is it?! Is AI this insurmountable force that will level the playing field between any and all disciplines, or is it just a powerful tool that hones and sharpens that which has already been cultivated through other means?

                  9 votes
                  1. 0x29A

                    Yeah, I want to emphasize a point you made that I never really got into, and that I like, especially this first point:

                    • Some of the naysayers, at their various times, were right given what they knew, and they had every right to feel the way they did. I do not fault them for their view, nor do I think that, because the outcome ended up being different for them decades later than what we expect for us, we can assume critiques of generative AI are identical.
                    • Some critiques of Photoshop, among other things, have validity too. So I want to be exceedingly clear that despite my arguments for the massive differences in these situations, I don't dismiss the critiques leveled at these other technologies either.
                    • There can be parallels without the situations being identical. The parallel of "criticism of something new and worry about its destruction of something that already exists" I think is fine.
                    • What I can't get on board with is a dismissal of criticism on those grounds, which assumes that the outcomes, the scale, or the details are remotely the same. Given all that we know (its immediate and already-observable impact, the political and social situations under which it is happening, and what we can see of how generative AI works), we can criticize it on its own ground as an entirely different paradigm.
                    • While they were critical at various turning points along the spectrum of time/advancement, I find us, instead of simply being at a point in time along the way, reaching an end-point. There is only so much that can be automated before there is nothing left to automate. There's a finiteness to expression and human involvement; past a certain point of automation, it no longer truly exists, and that "finality" is a distinguishing characteristic.
                    • To add a point someone else mentioned to me today: there's an intent behind it too, which furthers the point. Photographers could see photography as another way to express, whereas generative AI owners look at human expression and see "we'll chase down whatever is left that hasn't been automated and automate that too". Instead of a new way to express or capture, it's "aren't you thankful you don't have to do the expressing yourself anymore".

                    3 votes
                  2. [2]
                    CrypticCuriosity629

                    FYI, I've been really wanting to respond to your message in kind, however I've been very busy and haven't had the time or energy to fully respond.

                    However, I did want to say that this is why I love Tildes: being able to have respectful and deep conversations about differing opinions.

                    1 vote
                    1. Shevanel

                      Acknowledged and appreciated! Protect your peace <3

                  3. [13]
                    Lobachevsky

                    You seem to be missing the fact that you can just use it as part of your normal process. You can generate backgrounds, enhance a sketch, proofread your writing, bounce ideas, stuff like that. For example, I can use a textured brush to save myself a lot of work, or even just grab that texture and paste it into my piece. That's what Photoshop allows us to do, among other things. That's automation, right? I'm removing some "humanity" from my work by letting a digital tool do some work for me. Yet no one minds it; it's just automating some part of the process.

                    You might say that well, the texture was made by someone, so it still "keeps" that humanity when I use it. Well, what if I generated that texture with a regular old procedural generation algorithm? You know, the kind they use to create terrain in video games - it's not really random, there are rules built into it to create certain patterns. Does that suddenly remove some humanity from my piece if I used a texture generated with that algorithm? I personally don't believe so. And AI is the same thing, except those rules for generating something are discovered during the training process. Am I giving up humanity by using an AI generated texture then?
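
                    (For the unfamiliar, a minimal sketch of one such rule-based texture algorithm, value noise: random values on a coarse grid, smoothly interpolated up to a full-resolution texture. The function and its parameters are illustrative, not anything specified in this thread.)

                    ```python
                    # Value noise: a deterministic rule plus a seed, no training data.
                    import numpy as np

                    def value_noise(size=256, grid=8, seed=0):
                        """Random values on a coarse lattice, bilinearly
                        interpolated with smoothstep easing into a texture."""
                        rng = np.random.default_rng(seed)
                        lattice = rng.random((grid + 1, grid + 1))
                        xs = np.linspace(0, grid, size, endpoint=False)
                        i0 = xs.astype(int)          # lattice cell per pixel
                        t = xs - i0                  # position within the cell
                        t = t * t * (3 - 2 * t)      # smoothstep for soft edges
                        c00 = lattice[np.ix_(i0, i0)]
                        c10 = lattice[np.ix_(i0 + 1, i0)]
                        c01 = lattice[np.ix_(i0, i0 + 1)]
                        c11 = lattice[np.ix_(i0 + 1, i0 + 1)]
                        tx, ty = t[:, np.newaxis], t[np.newaxis, :]
                        return (c00 * (1 - tx) * (1 - ty) + c10 * tx * (1 - ty)
                                + c01 * (1 - tx) * ty + c11 * tx * ty)

                    texture = value_noise()
                    print(texture.shape)  # (256, 256), smooth values in [0, 1]
                    ```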

                    Or you might say that by using more automation you're crossing some threshold after which it becomes not "human made" anymore. How did you even pick that threshold then? Am I giving up humanity from my photos by using a smartphone or a modern camera? They perform a LOT of automatic work to make your photos look better, including taking multiple pictures and sort of averaging the best aspects of all. Used to be you had to select optimal aperture and shutter speed, nowadays the camera can do that for you. At which point are these photos not human made anymore, at what level of automation and how did you pick that? You admit yourself, AI works do require a human prompting them, curating them, editing them. You hand wave that as grasping at straws, but it's not at all clear to me why that would be the case.

                    Maybe you say it's just commissioning the work, asking someone else to do it, and that doesn't count. Except up to this point, there were no requirements to consider something a piece of art or even assign authorship to it. Certainly you didn't have to create the work yourself; you could simply give instructions to others and they would do all the work. We don't even know if some paintings by old masters were executed by their hand or not, and yet no one ever doubts their credit. I think we've suddenly forgotten that fact in the era of AI hate.

                    I'll be honest with you, as someone whose artistic side hasn't had proper development but who still wants to realize his vision sometimes, I am ecstatic that these tools exist, and it really saddens me when there's such vitriol towards them in certain communities.

                    1 vote
                    1. [12]
                      Shevanel

                      I think this is where the breakdown of the term “AI” starts to get in the way, because it feels to me like we’re talking about two different things here. I’m not familiar enough with the terminology to say the correct things, but it sounds to me like you’re referring to using specific tools within your artist’s toolbox that happen to be powered by machine learning to automate / generate part of the creative process itself (one of your examples being a generated brush texture vs. creating one from scratch), vs. whole-cloth generative AI that creates entire pieces of visual art/music/whatever based on a prompt that you feed it.

                      I’d still personally be wary of it strictly through the lens of AI using a texture or element from an existing work, and your would-be original work feeling a lot more derivative because of it without you realizing it. But at this point, I’m splitting hairs. I think at this level, AI simplifying a step in the process (much like digital photography removes the need for every photographer to understand how to develop film) is considerably more palatable than using a prompt to generate the entire picture in the first place. I think the ethical quandaries remain, but it’s considerably less black and white from the perspective of the discussion in this thread up to this point.

                      1. [11]
                        Lobachevsky

                        The fact that you think that the only way to use generative AI is creating whole pieces with no control over it is the issue. We are talking about the same entity here. In fact, from reading this thread, my impression is that the criticism comes from the belief that that's the only way to use it, when that couldn't be further from reality. That being said, even generating whole pieces still retains the authorship and creativity since, as I mentioned, execution isn't a necessary prerequisite for being credited as the author of an art piece.

                        1 vote
                        1. [10]
                          Shevanel

                          I never said it was the only way to use the medium, nor that I thought it was the only way. What I meant was that using it in this way is using it such that there is virtually no human agency, and thus is inherently less valuable from a creative standpoint. It’s not fair to expect you to do this, but if you were to look through my comments on these topics before, you’d see that, much like my approach to AI in a software engineering space, I believe there’s room for it as a supplementary tool to help expand an artist’s existing, well-honed abilities, not as a whole-cloth replacement for effort.

                          That said, you are 100% correct that I never stated the above explicitly in this thread, and I appreciate you calling it to light for clarity’s sake for folks who might stumble on this in the future.

                          That being said, even generating whole pieces still retains the authorship and creativity since, as I mentioned, execution isn't a necessary prerequisite for being credited as an author of an art piece.

                          We’ll have to agree to disagree here, especially on the creativity aspect. “Execution” is a pretty nebulous sliding scale, no argument there, but on an effort/execution scale from “I threaded the brushes, ground the organic material down to make dyes, wove the canvas, and painted this piece unassisted by hand” to “I did literally nothing,” using gen AI to create an entire piece is far closer to one of those two sides than literally any other medium by an incredibly long shot, digital art, music production, etc. included.

                          To take a page from 0x29A’s book, that approach is a lot more akin to commissioning a work, and I think it’s quite a stretch to suggest that the commissioner of a piece of art carries equal importance as the artist, at least from a creativity standpoint. Not to mention that at least IRL commissioners of IRL artists help put food on an artist’s table instead of just throwing a few more coins on top of the Nvidia gold pile.

                          2 votes
                          1. [9]
                            Lobachevsky

                            The point I'm trying to make is that there is no need to do anything other than express yourself. Commissioning a work can absolutely be of far more "importance" than executing it. Think more broadly. A movie director doesn't need to do anything other than tell others what to do ("commissioning") in order for all of us to consider them the most "important" person at the top of the credits. Making decisions isn't necessary either - you could be throwing paint at the canvas and seeing how it lands, and we could call that art and you an artist. You could set a video camera in one place, record thousands of hours of footage, and then simply select some frames or clips from it. Sol LeWitt gets credited after his death because others simply followed his written instructions on how to execute his Wall Drawings - this one practically maps 1 to 1 to "prompting" someone to do what you intend.

                            What exactly makes AI so fundamentally different that we throw these examples out of the window? It seems to me that the only prerequisite is expression and why shouldn't someone be able to express themselves by using a written prompt and selecting the appropriate results?

                            Mind you I'm not saying that it's as impressive as painting something from scratch, it's just I don't believe that effort is ever required for self-expression (and therefore creating art).

                            1 vote
                            1. DefinitelyNotAFae

                              The "commissioner" of a movie would be the execs, not the director. The director is actively participating in the creative process. In a musical piece the conductor (and depending on the director maybe a bit of the composer) would fulfill a similar role.

                              A patron of the arts might pay Michelangelo to sculpt something or a composer to compose something, but the patron isn't the artist. Even if they micromanage. That's the distinction being made.

                              There's certainly a point where human art probably doesn't feel very artistic to others but there's still intention behind it. "What is art" may be fundamental to this, of course, and plenty of people don't think certain modern or pop art is "art"

                              1 vote
                            2. [7]
                              Shevanel

                              I appreciate the thoughts you’re bringing to the table! I don’t believe that commissioning a work is a fair comparison to directing a film. The work of a director is so much more than simply reacting to what’s in front of them and giving the next set of instructions to the entire gathered group at large. Arguably more importantly, a director is creating something that doesn’t already exist. The nature of generative AI (in the whole-cloth way we’re discussing it at the moment, acknowledging there are other minutiae that we are ignoring to your point) is that it’s only going to chop and screw what already exists. The most credit I’ll give the “author” in this scenario when comparing to the filmmaking process is to compare them to an audience in a test screening. The movie’s already been filmed (aka the training data already exists), all the test audience does is say “I liked that part” or “I wish we saw more of ____;” then the production team can go back to their editors and see if they have the footage (training data) that can solve for the ask without going back and filming more. Most importantly, the test audience can’t invent new parts of the movie that weren’t filmed in the first place, and so their role is far less impactful (and some would argue unnecessary; you don’t need to show your film to a test audience before releasing it, but I would imagine that most do, if only to satisfy producers. You absolutely need a director).

                              Your comments on LeWitt are interesting and made me reframe my thoughts a little—I appreciate the insight! I’ve spent a little time reading up on him this morning, and while I appreciate the perspective on his wall drawings as an AI analogy, I believe it doesn’t fully hit the mark on the point you’re trying to make. First, LeWitt did plenty of these “paintings” while he was alive, and made it very clear that one of the interesting aspects of the work itself is that every time they were installed, they would look a little different because of how each individual interpreted his instructions. This was a feature, not a bug. He intended for this work to be recreated differently based on who read and interpreted the instructions, and the room in which the art is installed. It is intrinsically based in the human experience, and when viewed through that lens is beautiful in its own right. I acknowledge that you can draw a parallel between this concept and an AI prompt interpreting it differently each time as well, but in this case, the AI’s varied interpretation is (arguably) not the intent. Leaning into the concept of it as an analogy for the nature of humanity is certainly interesting, but again, you’re stripping the human out of it, which is a loss in and of itself.

                              Additionally, while the human who installs a given LeWitt piece might not be named or remembered like LeWitt, they are still vital in its creation. In that same vein, I may not remember the name of a pianist who performs a Liszt concerto with a symphony orchestra, and instead only remember the name of the composer Franz Liszt, but the human performing the piece is vital. Conversely, if I sat and listened to a music composition program play through the same piece, even if it did a solid job observing tempo fluctuations, dynamics, musical expression, etc… it simply isn’t going to be as enjoyable of an experience. You lose the humanity, you lose the soul. And again, to draw this back to the director creating a film versus a test audience reacting to it, LeWitt’s idea was also novel. He came up with it, and others iterated on it. If one truly invents a new series of art based on iteration of an original concept or idea that an individual created in the first place, I do find that more palatable than acting like a machine’s response to your prompt is a fully-fledged form of expression. I think there is room for AI in this specific context because it’s almost like creating “incidental art,” though then I’d argue that it’s AI art for AI’s sake, which is a lot more in line with experimental art, and certainly far and away from art as self-expression IMO.

                              Speaking of expression, I thought your closing statement was interesting:

                              it's just I don't believe that [level of] effort is ever required for self-expression (and therefore creating art).

                              I think your stance here is valid, and I would never want to come across as gatekeeping the ability to express oneself. If you truly believe deep down that the resultant piece of art from you feeding words into a machine is an expression of yourself, I have no power in telling you that you’re wrong. To me, it rings pretty hollow, and feels like you’re adding in a middleman that doesn’t need to be there. If your words carry so much value that the resultant visual feedback that a machine synthesizes from its training data represents your inner self, why not just focus that energy into actually writing for writing’s sake? I could almost see it being a cool accompaniment - “here’s an essay / story I wrote that captures what I’m feeling; I then tried to capture this essence within a visual medium using AI and found this resultant work to capture my feelings best.” I’m probably still not gonna love the visual side of what I see, but that you actually did some honest-to-goodness creation in the interim is likely going to feel a heck of a lot more rewarding for you, too.

                              You’ve mentioned in a previous post that you want to express yourself more but that you haven’t had the opportunity for proper development from an artistic perspective. I think that’s an incredible bit of self-reflection, and I love that you want to get more in touch with that side of yourself. If you’re willing to hear some feedback, I really don’t think that using Gen AI in this whole-cloth perspective is going to be as rewarding. Using it as a tool that helps guide your growth with additions to your toolbelt per our earlier comments, yes, sure, I think there’s something there. But I promise you that the cultivation of expression through practice and honest-to-goodness development of your craft is going to (in the long run) feel a lot better than anything an AI prompt could do for you. It’ll take time and it’ll be a lot more frustrating, but I would imagine you’ll thank yourself in the long run.

                              I’ll circle back to one of my last thoughts again: I am not you, and I can’t feel what you’re feeling. If what you’re doing with AI is making you feel fulfilled, then I’m not one to stop you. Others are going to have their own issues with this from an environmental / ethical perspective, but if the cat’s out of the bag already, I suppose I’d rather have someone using it to find joy and emotional closure than to make a quick buck.

                              Without getting too sappy, I’ll leave you with one of my favorite quotes from Kurt Vonnegut’s A Man Without a Country that speaks more to getting our own hands dirty instead of letting others speak for us, even if we don’t do the best job ourselves. Do with this quote what you will.

                              “[The arts] are a very human way of making life more bearable. Practicing an art, no matter how well or badly, is a way to make your soul grow, for heaven's sake. Sing in the shower. Dance to the radio. Tell stories. Write a poem to a friend, even a lousy poem. Do it as well as you possibly can. You will get an enormous reward. You will have created something.”

                              1. [6]
                                Lobachevsky

                                I don’t believe that commissioning a work is a fair comparison to directing a film.

                                Sorry, I wasn't clear. I am not trying to say that commissioning a work is like directing a film. I'm trying to say that you cannot gatekeep creating art behind execution, because clearly, as in the case of a movie director or a composer or Sol LeWitt, providing directions for other people to execute can absolutely in and of itself be considered creating art.

                                The nature of generative AI (in the whole-cloth way we’re discussing it at the moment, acknowledging there are other minutiae that we are ignoring to your point) is that it’s only going to chop and screw what already exists.

                                Where is this coming from exactly? I highly disagree with this take, but I'm not sure how to go about it since you didn't really provide any supporting evidence or anything particularly specific. Anyway, machine learning algorithms don't "chop and screw", they find patterns in the data. There's nothing fundamentally wrong with this approach; it is in fact apparently very effective at understanding both language and visual arts.

                                it simply isn’t going to be as enjoyable of an experience

                                With respect, that's just personal preference and isn't really relevant to the discussion in my opinion. I don't find composing music enjoyable, or indeed traditional drawing methods. I'm not wired to enjoy these, and it's part of the reason I didn't get that far. I enjoy playing around with various diffusion models or LLMs way more, and if that's your criterion, then apparently that's way more "art" to me.

                                why not just focus that energy into actually writing for writing’s sake?

                                Because I don't/won't/can't/don't want to. Just like some people play music and some people draw and some people dance and some people dress up, we all are different and we like different forms of self expression. I think saying "huh, playing piano, well have you tried DRAWING instead", or "watercolors, my my, have you tried Photoshop, that's a real artist's medium" would at best be very rude. I'm not saying you are doing it in the same way, but so far I don't see a fundamental distinction other than it's a new medium that is acceptable to be elitist towards.

                                To me, it rings pretty hollow, and feels like you’re adding in a middleman that doesn’t need to be there.

                                That middleman is called a medium. Until we are able to project our minds, we are stuck with that middleman, its limitations and boundaries. I think I mentioned already the way modern cameras are sophisticated automated machines that pass what you "see" through a multitude of algorithms. Why are you not against that middleman? But even the most simple medium, like a pencil, is a middleman that you have to adapt to. You won't be able to add any color if all you have to work with is graphite, even if the vision in your mind is colorful. AI models are the same: they have their quirks which you have to work around; you have to learn how to prompt them, which models are good for what, which tools you can use to make the vision more accurate. But again, any medium is like that; until we can project our minds, you are necessarily compromising some of your inner vision when translating it into reality.

                                To avoid dragging on, I'm simply looking for a brief answer to a simple question: what is fundamentally different about AI generation that I cannot find in any other medium before this that is considered to be a valid form of self expression and thus art? So far I have never been able to find it, and that's why to me negative views of AI gen are yet another wave of "newest technological development is bad".

                                1 vote
                                1. [5]
                                  Shevanel
                                  (edited )
                                  Link Parent

                                  you cannot gatekeep creating art behind execution

                                  Point taken, and I agree! I maintain that generating art in this way is still, in the grand scheme of things, pretty well devoid of the relative creativity needed to, for example, direct a film, but I touched on that enough in my last comment that I won’t belabor the point.

                                  Where is this coming from exactly? I highly disagree with this take, but I'm not sure how to go about it since you didn't really provide any supporting evidence or anything particularly specific. Anyway, machine learning algorithms don't "chop and screw", they find patterns in the data.

                                  I agree with your last sentence here, and I was wrong to state it the way that I did. I acknowledge that what is happening under the hood is far more impressive than what I made it out to be! That said, yes, they indeed find patterns, and then they go on to generate visuals / text / audio based on the patterns with which they are familiar without any possibility to build on them. It is entirely reductive and cannot be anything but. Technically speaking, yes, they are creating something new, i.e. that specific artifact is ostensibly a new thing, but it is always going to feel like a rehash. I’m not personally sure how else I can state this; please forgive me for not being able to be any more descriptive than that.

                                  It’s lacking the humanity that creates novelty. I’ve never once seen a piece of generative AI art that inspires me. I’ve seen some that make me go, “oh neat,” or “huh, almost couldn’t tell that was AI!” but never anything more than that. When AI can create, from a prompt, a novel work that rivals the Adagietto from Mahler’s 5th Symphony and brings me to literal tears, cool, please ping me and I’ll eat my words. Virtually every other medium that you mention in your examples as comparisons does have examples at this level, which is why they stand apart in my mind, but I’ll get into that more later.

                                  With respect, that's just personal preference and isn't really relevant to the discussion in my opinion (…) I enjoy playing around with various diffusion models or LLMs way more

                                  That’s completely fine. It’s okay that you prefer that. The more I think about it, the more I would agree that doing this while creating pieces meant to be consumed for entertainment purposes is ostensibly, by definition, a process by which art is made, though I’d tie this back to the human in the equation being more of the commissioner and the AI being the “artist.”

                                  It brings “Comedian” to mind, and the eternal conversation about asking “what is art?” At the end of the day, I’m in the camp of, “if you create something with the intent of it being consumed as art, then that thing is art.” But the base issue goes back to the question of asking if this makes the human in the equation the artist or more akin to a commissioner, which, yeah, dead horse at this point and it’s feeling like we’ll have to agree to disagree here.

                                  Because I don't/won't/can't/don't want to. Just like some people play music and some people draw and some people dance and some people dress up, we all are different and we like different forms of self expression.

                                  I think it’s fine that you feel that way. I also don’t think there’s much expression going on here. It feels like maybe we’re conflating “what I’m doing is a form of self-expression” and “what I’m doing is fun to do and I like doing it.” And if it’s the latter thing, awesome! Cat’s out of the bag; I’m not arguing that you shouldn’t do it, putting aside other issues with AI for the sake of example. I like playing video games. That doesn’t automatically make it a form of self-expression. It doesn’t de-legitimize the thing either, mind you! It’s just not automatically self-expression. But I can use video games to express myself, such as using Minecraft to make pixel art, or 3D models. Just like I think we can use AI as a tool to help us create art.

                                  That middleman is called medium. Until we are able to project our minds, we are stuck with that middleman, its limitations and boundaries (…) Why are you not against [the camera as a middleman]?

                                  I believe answers to this have been amply provided in this thread up to this point, and I encourage you to especially check out 0x29A’s responses because he definitely speaks better to this than I can, but to quickly summarize, the “effort -> automation” spectrum of a medium within any art form is indeed just that: a spectrum. I’m of the belief that, when you get to this level of automation, it’s basically a non-starter. There is still some barrier of entry to proper photography, analog or digital. Phone cameras are lowering that barrier every year, sure, but it’s still there. Same with digital music production vs. live instruments, and so on. That level of necessary aptitude no longer exists with Gen AI being used at this level. Back to our very first exchange, being used as supplemental tools in your digital art toolkit (or whatever), yes, that’s a good dividing line, but as soon as the experience is virtually hands-off, you lose something. It’s a sliding scale, and I fully acknowledge some mediums are more push-button than others. This one is simply the most push-button by an extremely wide margin, to the point that I find that the ability to adequately express through the medium is actively hampered.

                                  Let me put this another way: surely you’ve seen videos where people challenge a common assumption about an art form or medium. They’re always kind of clickbaity, but it’s the sort of scenario where someone says “haters say ______ isn’t a real art form” (digital art / cartoons / graffiti / drum machines / whatever) and then the video is a summary debunk of that statement, by way of showing people who have mastered said art form, showing the incredibly high skill ceiling that’s actually present within the medium to a crowd of naysayers that otherwise believed that “new technology (/medium) = bad.” That type of debunk does not and cannot exist for AI art, because the skill floor and the skill ceiling are so incredibly close to each other. Do you know how to type words into a prompt? You can generate AI art. Are you familiar with the nuances of prompt engineering? You can generate marginally better AI art. That’s it. Those are the two “extremes” of the medium when we’re talking about it at this level.

                                  To avoid dragging on, I'm simply looking for a brief answer to a simple question: what is fundamentally different about AI generation that I cannot find in any other medium before this that is considered to be a valid form of self expression and thus art?

                                  Just that, when the point of entry is this low, it doesn’t really feel like you can possibly be expressing that much? I hate to put such a fine point on it, but I don’t know how else to approach it. I can draw a stick figure and say, “I enjoyed doing this, and it’s how I express myself, and it’s art.” And that’s totally fine—all of those things can be true for me, and true to the definition of art (as far as I’m concerned). But I also have to accept the fact that I’ve not really added anything of creative value to the world, and other people can make (virtually) the exact same work of art just as easily, and that it doesn’t intrinsically have value just because I like it (outside of the inherent value of me liking it, which, again, fine). I’m glossing over all of the ethical / environmental quandaries that I’ve continued to ignore with AI up until this point, but I do think it becomes more of an issue the more we do things like this, though I won’t dive any deeper into that here.

                                  I also want to make another thing clear because I’ve neglected to do so up to this point: it’s okay for people to not have an outlet of self-expression. You’re not less of a person if you don’t “create” in some way. I personally think life is more fulfilling when one does, but that’s just one person’s opinion, no more or less valid than anyone else’s. I have plenty of friends that do their day job, take care of their family and/or themselves, enjoy TV / books / games, and repeat the same thing the next day, and they are perfectly content, and I say “more power to them.” I don’t think we need to force something into a category where it doesn’t belong just so we can justify that what we’re doing is worth the time.

                                  At the end of the day, if you’re looking for someone to tell you that it’s okay for you to enjoy this, then consider that taken care of through the way you have defended your points in this thread. I appreciate the context and thoughtful points you have brought to the discussion and it has made me rethink how I mentally frame some of this. But for every other facet deeper than that, we need to acknowledge that we’ve passed a threshold here that is simply not analogous to other mediums.

                                  1. 0x29A
                                    (edited )
                                    Link Parent

                                    I appreciate the multiple mentions of me and my posts in your comments, but I will very explicitly state here that I am not interested in any replies to my comments in particular on this. Feel free to take my arguments as your own and repost them in your comments/replies if you wish; they are freely available to base your ideas, thought processes, and discussions on, without attribution. I would respectfully request to no longer be specifically called out by name, especially in a way that gives me a notification, but preferably not at all, in regards to this conversation/subject. This is a category/tag on Tildes that I have hidden, even, and this particular topic, which I somewhat begrudgingly decided to post at length on, keeps coming back up; I'd rather leave it where I did 11 days ago (begrudging because I know posting my feelings at all is basically an implicit "invitation to debate/discuss" a subject I'm not interested in debating; it is a hill I refuse to cede any ground on).

                                    I'm not interested in being further involved in any "debate" about AI "art", as I think at this point the various opinions out there have been expressed ad nauseam, and it's clear that I and anyone who still thinks AI visuals are "art" and that criticisms of it on the basis of human expression/input/execution are "gatekeeping" are operating from fundamentally different worldviews that will never reach agreement, because our value judgements about various aspects of it and the world differ so permanently that attempting to find common ground is pointless.

                                    In fact, I find the pro-generative-AI crowd, especially the subset that argues in favor of using these systems for visuals in particular, to be rather nauseating to read, exhausting to interact with, and operating from a set of perspectives and ideas about the world that I have zero interest in even giving the time of day. In other words, if someone already "doesn't get it", I don't suspect any amount of conversation will move the needle. They can feel free to consider my perspectives radical anti-AI extremism if it makes them more comfortable to do so (and it's not entirely wrong; I think generative AI systems, especially those generating entire creative works, should be destroyed, and that is not a figure of speech). I hope someday what is thought of as the "extreme" perspective becomes the necessary minimum.

                                    2 votes
                                  2. [3]
                                    Lobachevsky
                                    (edited )
                                    Link Parent

                                    That said, yes, they indeed find patterns, and then they go on to generate visuals / text / audio based on the patterns with which they are familiar without any possibility to build on them. It is entirely reductive and cannot be anything but. <..> it is always going to feel like a rehash

                                    Sorry, it's not clear to me how you go from the first sentence to the second one, let alone the third one. Mind elaborating as to how exactly you arrive at that conclusion? I disagree, of course, so it's not so obvious to me.

                                    It’s lacking the humanity that creates novelty. I’ve never once seen a piece of generative AI art that inspires me.

                                    I have, and I don't think it's because I'm a weirdo that gets inspired by soulless crap, as you would perhaps put it. It's because I look past the surface-level examples of "generative AI art". Practically any statement along the lines of "all of the AI art that I've seen is bad" will be an example of the toupee fallacy. You're right, there's plenty of low-effort garbage slop out there, but there's plenty of low-effort garbage-slop digital art, period. It's not as easy to pump out in large numbers, but that's not really the point of contention here. This leads me to my biggest disagreement:

                                    There is still some barrier of entry to proper photography, analog or digital. <..> the skill floor and the skill ceiling are so incredibly close to each other. Do you know how to type words into a prompt? You can generate AI art.

                                    I cannot see how you can possibly say this when everyone has a smartphone and everyone uses their smartphone camera. Even my grandma uses her smartphone camera. It is quite literally push a button, get good results, thanks to the built-in algorithms that do the processing for you. I don't agree that generating AI art has a lower skill floor, and I don't agree that gen AI has a low skill ceiling. I guess it makes sense that you would think that, considering you think it's impossible to create anything good with gen AI, but the whole point with gen AI, LLMs included, is that you pretty much have to integrate it with other tools, use the right model for the task, or even do some training of your own. That's how you get good results, and there are a lot of very advanced techniques there. It's also unexplored territory: you've got people hiding their metadata because they've figured out some combination of things that works well.

                                    When it comes to visual art, being a trained artist is hugely helpful, because your knowledge of what makes a good art piece, what's good composition, what makes things look appealing, is very useful. Not only that, but hybrid workflows are very powerful. You can use area-specific prompts, or use your sketches, or paint over things to have fine-grained control over what the model does. That's why I used the comparison to using premade textures earlier - because it is in essence the same thing: streamlining some part of your work to achieve better results faster.
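
                                    To make the hybrid workflow concrete, here's a minimal sketch of the "start from your own sketch, let the model refine it" loop, assuming the open-source diffusers library (the model name and settings are illustrative placeholders, not a recommendation):

                                        import torch
                                        from diffusers import StableDiffusionImg2ImgPipeline
                                        from PIL import Image

                                        # Load an img2img-capable checkpoint (placeholder model name).
                                        pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
                                            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
                                        ).to("cuda")

                                        # Start from a human-made sketch instead of pure noise, so the
                                        # model refines your composition rather than inventing one.
                                        sketch = Image.open("my_rough_sketch.png").convert("RGB")

                                        result = pipe(
                                            prompt="watercolor landscape, soft morning light",
                                            image=sketch,
                                            strength=0.55,       # lower = stay closer to the human sketch
                                            guidance_scale=7.5,  # how strongly the prompt steers the output
                                        ).images[0]
                                        result.save("refined.png")

                                    The point is the strength knob: your own drawing keeps controlling the composition, and the model only fills in the rendering.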

                                    I'm curious what you think about this, since, if I understand correctly, that's where your opinion on gen AI is rooted. By the way, I do think your conclusions from those assumptions aren't incorrect; it's just that, frankly, I don't believe those assumptions to be true.

                                    I encourage you to especially check out 0x29A’s responses because he definitely speaks better to this than I can

                                    I've seen you mention it before, and I have read those responses. I don't really think they're of interest to me, since they seem to come from a position of inherent dislike for the thing itself (EDIT: judging by the latest reply, I was right to avoid it), whereas, as I mentioned, I'm looking for an answer as to why this thing would be inherently different from all the other things that were brought about by technological advancements before (and didn't cause the arts to die in the process, not even once). Hence I replied to you, since you seemed to be more interested in the discussion. Thanks for that, by the way; I really appreciate your detailed and thoughtful comments so far!

                                    1 vote
                                    1. [2]
                                      Shevanel
                                      (edited )
                                      Link Parent

                                      I think at this point, we’re hitting spots on which we have fundamental differences in beliefs, so the conversation’s going to circle the drain. But I will address one question that I might be able to elaborate on before I step away, and that’s your question about camera usage and skill floors & ceilings. It feels like you’re getting wrapped up in the ease of use of taking a picture vs. the skill ceiling involved in actual professional-level photography.

                                      First, I’ll take a step back and acknowledge that yes, you are right, taking a digital photo is next to mindless at this point. Arguably easier than writing an AI prompt, in fact! It (and AI art) both have what we would consider to be incredibly low skill floors, as in, the barrier of entry is so low that it is virtually non-existent. The skill ceiling is what differentiates these two.

                                      A good AI art prompter (I’m not going to use the word professional here) probably has a basic understanding of prompt engineering and of using syntax that weights their sentences a little bit more. That’s about all that separates them from a person trying it out for the first time. It likely took them a few hours of effort and reading up on engineering and syntax to get to this point. This skill ceiling, the upper echelon of the skill it takes to get to this point, is pretty dang low, and it shows in the results. The difference between bad AI art and good AI art is not that big. Of course there are totally glitched-out responses that are the photo equivalent of accidentally capturing the inside of your pocket with a phone camera, but even the “good stuff” is lackluster.

                                      You stated that the skill ceiling here is actually higher because folks utilizing these prompts could then turn around and integrate them into their art as a piece of the work, per our earlier comments about creating brush textures and so on. This is back to the conversation that kicked our whole thread off so I won’t dive into it again, but to shed light on it from this perspective, I would add that we’re no longer comparing skill ceilings fairly - we are now looking at the skill ceiling of a digital artist using AI in a separate medium, not just prompt engineering. The borders get nebulous here, but I generally see that we’re not going to come to an agreement on this aspect of the usage and the skill involved with it, which is why I won’t spend any more time discussing it, since it’s a non-starter at this point.

                                      A professional photographer, using analog or digital mediums and staying strictly in the realm of “skills needed for the specific task of photography” and not “digital art + photography,” will also carry with them a basic understanding of compositional theory, light / exposure awareness (especially if using analog), and an incredible sense of patience and stamina (you’re holding a big chunk of metal up for an extended amount of time, waiting to capture ephemeral moments). And then to take a page from your book on integrating this with a separate medium, nearly all digital photographers today also have a grasp of digital art manipulation to be able to edit / touch up as needed. But the skillset to be a photographer still exists in a vacuum and has a definitive competency ramp. There is a reason that event photography is a completely legitimate career choice. Digital / phone photography has helped the layperson get way closer to this level, no doubt about that. Maybe 75-80% of the way there. But that last 20%? That’s where careers are made and that’s where you see truly professional work. It takes years of work and thousands of hours of practice to get to this level. And it’s why we can still recognize the difference between professional photography and layman’s work on a phone. If you cannot see this difference, then I unfortunately can’t relate to you any more at this level.

                                      Appreciate the discourse up to this point, and I wish you well. Feel free to respond, I’ll certainly read it if you do but I won’t be engaging further. Have a good day!

                                      1 vote
                                      1. Lobachevsky
                                        Link Parent

                                        I would add that we’re no longer comparing skill ceilings fairly - we are now looking at the skill ceiling of a digital artist using AI in a separate medium, not just prompt engineering.

                                        No, we aren't. Almost everything I mentioned doesn't have anything to do with being a digital artist; it's entirely specific to the domain of AI generation. That being said, I don't see anything unfair about this. A photographer with a background in art will benefit from knowing the principles of light and shadow, color theory, and composition, among others. An artist with a background in 3D modeling will benefit from knowledge of construction and perspective. Why is that arbitrarily not part of the skill ceiling? In the end, all of these share image creation as a commonality. Though I will reiterate - even with that constraint, even if you limit it to purely generation-specific things, the skill ceiling is still high - look at my previous comment for examples of that.

                                        It feels like you’re getting wrapped up on the ease of use of taking a picture, vs. the skill ceiling involved in actual professional-level photography.

                                        I'm just applying it to both, well, fairly. You mentioned both skill floor and skill ceiling, and I addressed both. The skill floor of photography is much lower than the skill floor of AI generation. The skill ceiling of both is high as well, not extremely low like you were suggesting.

                                        But that last 20%? That’s where careers are made and that’s where you see truly professional work.

                                        If your metric now is "it must be professional work with a possible career", I'll point out that people have been making good money from AI generation for a while now. There are openly AI-generative Patreons with thousands of dollars in estimated monthly income (I won't be able to find an example right now, but feel free to search). I don't think this is a good metric personally, but AI generation passes even that.

                                        In conclusion, I'll just reiterate that the question I am seeking an answer to is "what makes AI generation fundamentally different from any other technological advancement in the past that was accepted as a new medium?", and so far I haven't found a satisfactory answer, unfortunately. It was an interesting insight nonetheless; good day to you as well.

                2. psi
                  Link Parent

                  On the other hand, sometimes those skeptics were right. Consider the industrial revolution. Despite popular depictions of the Luddites as being anti-progress, it is worth remembering that factory conditions in the late 18th/early 19th century were absolutely grueling, with seven day work weeks, the employment of child labor, little concern for safe working conditions, and payment via vouchers only redeemable at company stores. It was only after some hundred years of resistance that the labor movement managed to limit these practices. Now we reap the benefits of the industrial revolution, but those gains came at the cost of generations of human suffering.

                  5 votes
          2. [2]
            Shevanel
            (edited )
            Link Parent

            Edit: I found a comment from 0x29A that I replied to a while back that captures my thoughts so much better than I've ever managed to do, so I'll link you to that instead. Feel free to skip my hastily written thoughts below lol: https://tildes.net/~music/1nvt/peter_gabriel_on_synthesizers_as_a_dream_machine_1983#comment-fjtp

            I’m sorry that I can’t give your comment the response it deserves, but I’ve got a sick kid home with me today. I’ve responded to previous lines of thought on Tildes in the past, and the reason I’ll double down here is the same thing I mentioned in my first response to you - Photoshop has an actual skillset and an actual learning curve, and that’s a feature, not a bug. I assure you I’m all about advancing the tech, but this is the opposite of advancing the tech - it’s dumbing it down. It lowers the barrier for entry, but it also completely obliterates any semblance of a skill ceiling.

            But digital art was always full of poor quality amateur hour examples. Look at Deviantart since forever.

            You're not wrong. But the difference is that that medium can lead to the creation of fantastic, unique art. AI art (in its current state) cannot. The worst digital/Photoshop art vs. the best digital art spans a range from “awful” to “incredible.” The worst AI art vs. the best AI art spans a range from “awful” to “meh.” There is no such thing as “pushing AI art to its extreme” and creating a masterpiece in a blossoming new medium like there is with digital art and Photoshop. It’s a false narrative. AI art by its very nature is reductive; it cannot innovate.

            You’re almost certainly right that arguments of a similar caliber were leveled against Photoshop et al. upon their introduction. The difference is that those folks IMO truly were “old men yelling at clouds” because they didn’t understand, or chose not to understand, the new tech. I’ve spent a lot of time with AI art, and I’m certainly not one to say that AI in general doesn’t have its uses (again, I agree with 100% of nic’s parent comment). I won’t pretend to know all of its ins and outs, but I know it well enough to know that it is not the leap forward in art that folks like to pretend it is.

            Again, I’ve hastily put forward some poorly fleshed out points and I apologize that I can’t put more time into this at the moment. I’ll try to dig up some old responses and link to them if I have time later.

            2 votes
            1. 0x29A
              (edited )
              Link Parent
              • Exemplary

              Appreciate the callout to my earlier comment and I'll add these angles too:

              • AI absolutely does not "democratize" art, as is a common refrain; it commodifies art while pretending to democratize it (an industry lie, and one held up by frauds). The "democratize" lie has gotten far in the public psyche and it doesn't hold water
              • AI barges in, slurps everything up without consent, devalues skills and learning (and real human-made art and expression as a result), and then tries to sell that hollow, humanity-free commodified simulacrum back to us. It is "expression", dehumanized and productized
              • It is a grift/fraud enabler. People can prompt it, and pretend they made the output. Prompting for AI "art" is identical to commissioning an artist to draw something for you, and then you turning around and pretending you did it yourself. You did nothing, you skill-less monster that just wants praise for something you put no effort into.
              • It is not simply a "tool that lowers the barrier for entry". Lowering the barrier to zero, so that little or no skill is required to get completed works, means (as I say when I frame AI as something someone is commissioning) that the AI is attempting to replace/become the artist itself. The user of AI is not what becomes the artist; the AI does, and that is a massive problem.
              • Although I use the commissioning analogy, because I find it apt in terms of process and intent, and so on, I am NOT hereby granting that generative AI art is in any sense "real" art or that AI systems are in any sense a "real" artist. Just soulless imitations.
              • Just because someone has to prompt it, does not mean "prompting" is an artistic skill. The fact that people even remotely think this is an apt comparison is mind-boggling to me. I'm good at googling things, that doesn't mean I get to claim the first result as my own work. When I describe what I want an artist to make for me (commission process), I don't turn around and pretend I did the work!
              • Ultimately, those that already don't value art or the processes involved in making it, in its many forms, are probably very difficult to convince- but I'll say this: an "amateur" making "poor quality" "amateur hour" work on deviantart (the fact that someone talks about human expression this way, especially about forums/platforms made for peoples' expression, learning, and community is a red flag) is still making far more beauty in the world, is at least learning, trying, improving, and putting their own human expression and effort into it- and that's part of what makes it art. No machine will ever come close to that.
              • Generative AI is simply an altogether different paradigm than any of the weak comparisons to advancements that affected art - digital art, photography, Photoshop, etc. - such that I find the comparisons start from a flawed premise and are easily dismissed. It isn't remotely like any of those things. The most I can say, if you absolutely won't let go of this comparison, is that generative AI is the "extremist" "end-stage automation" version of them that goes too far. It is the threshold that we should never cross - there's a finality to the "withdrawal of the human" from the expression that reaches a dead end, a road past which there is nothing else. Unlike any other tool, generative AI is an occupying force in the land of human expression that should be resisted, by any, and I mean any, means necessary.
              • Just because similar arguments and statements were made in the past about new technologies/mediums doesn't make them identical "advancements". It is possible to look at a specific advancement and critique it on its own: the extent to which it goes in terms of scale, human input, its output, what is involved, and the 50 other facets of its deployment and ownership. There's much more to say about generative AI from a thousand other angles than simply this attack on creatives, and it is for many of those other reasons that we should not let it move into the creative world
              • While mimicking another style isn't inherently bad (say digital paint mimicking physical paint), generative AI is this twisted, corrupted form of mimicry that really is mimicking the expression in particular. It's mimicking the skill, the effort, the artist themselves, in addition to the medium. But even when it comes to mediums, it mimics in an underhanded way. It pretends to be a real photo, etc. It inherently misinforms. You probably won't find real artists painting something photorealistic but trying to pretend it's a photo. There's this nature of dishonesty to generative AI that real artists don't have
              • Inevitability isn't an argument in favor of something. It may be a distressing reality, but it is a defeatist "comply in advance" attitude. If we assume all technological advancements, or more abstractly, social and political changes, are inevitable, we have abdicated our agency in advance and are advocating for our own obsolescence, saying we should just lie down and accept things.
              • Once the artist or any sense of skill is fully removed from the process of the work's creation, that work ceases to be comparable to real creative work. AI is not some new kind of paintbrush or tool. Even the most "automated of non-AI tools" for artistic expression still retain that skill input, an expressive crafting hand behind it, a direct guide heavily involved in the style, the form, the nature of what constitutes the final work- saying "computer, make something that looks like a real painting with a tree, a duck, and a purple sky" contains none of that. It is a commission, and one with trade-offs we should forever regret.
              • Call me an extremist Luddite if you wish, but it is a hill I will die on and refuse to budge.

              More ranting and links to others here
              I highly recommend writings by Dan McQuillan, Tante and David Gerard

              9 votes
          3. [2]
            Dr_Amazing
            Link Parent

            It's very strange to me how nobody seems to remember this. It's not even that the arguments are similar. The things people used to say about digital art are almost word for word what people are saying now about AI.

            2 votes
            1. Shevanel
              Link Parent
              Respectfully, there are several comments branching off your comment’s parent from several days back that go decently in depth as to why this 1:1 comparison shouldn’t be made.

              Respectfully, there are several comments branching off your comment’s parent from several days back that go decently in depth as to why this 1:1 comparison shouldn’t be made.

              1 vote
  3. [2]
    somewaffles
    Link

    Speaking from the coding/development side, it definitely helps my workflow; it really just depends on how you use it. The more code I throw at it, the more wary I am of the results. I've (for fun) tried throwing entire files/scripts at it, and the results vary from passable to completely unusable. I find it especially useful, though, when trying to understand more complex patterns / concepts (which regularly extends to stuff outside of coding too). Something I would have spent days trying to understand can be worked through in an hour or two with a little non-human conversation.

    I see people online that talk about how LLMs have made them so much faster at development, but every time I've ever used it that field, it can toss together a quick prototype for something I likely could have googled, but there will frequently be little hidden bugs in the code.

    I am very dubious when I see people claim this too, though, especially the more enthusiastic they are about it. At least as a professional developer, AI has made my job much "easier", but I would not say it has meaningfully sped up my development process. Any code AI generates, I still prefer to pick through and improve on for my use cases. Even as the models have improved, I've caught it making mistakes, even with the most simple coding tasks. People who claim they built entire applications using only AI are either lying or selling you something, 99% of the time, as far as I'm concerned.

    AI/LLMs are just a tool, and I use them alongside all the other development tools I've picked up over my years of experience. Reading into AI "model drift/decay/collapse" has led me to be very skeptical of what AIs will provide me, but I still find it all very useful.

    21 votes
    1. cutmetal
      Link Parent

      Totally with you. Copilot AI code review is 75% useless, but 25% of the time it points out something worthwhile, which makes it a great adjunct to our workflow, where we have mountains of code being spilled from low-skilled contractors every day and only a few overworked senior engineers to review it.

      Copilot tab completion in VScode is just annoying 90% of the time. But it's just useful enough for me to not turn off.

      ChatGPT is useful for grokking dense code or getting a gut check on a weird error. The other day I had some really nasty C++ macros to figure out; ChatGPT made that task many times faster than it would have been otherwise.

      4 votes
  4. Protected
    Link
    • Exemplary

    I'll just say my piece. In the form of a Q&A today!

    Is AI useful? Yes, in all kinds of ways. I wrote a comment before about how it does its primary job of digesting and regurgitating information quite well. Language-related tasks are generally performed very well. An AI that can search the web can glean useful information from an unstructured body of results faster than you can do it as a human, speeding up your workflow. It's cheaper than a full human for rubberducking.

    Does AI generate content that is useful in real world scenarios, and that should be published or included as part of sales, deliverables, official literature, research or anything non-temporary? From what I've seen, almost certainly no. All usage of AI in these scenarios is noticeable and treated with extreme contempt by the consumers who paid for the product. It's tacky, uncanny and feels like a scam. And "running things through AI" introduces the possibility of hallucinations subtly breaking said things. I've had AI generate several scripts in Python and JavaScript, and they were never quite good at first - I had to use my knowledge and talent as an engineer to nudge it into fixing the result iteratively. Trusting the AI over the human is only justifiable if your coworkers/employees are truly clueless or you don't trust them to have any skill at all - in other words, in a scenario in which someone is being scammed or exploited (or plans for that exist). I don't feed my code to AI, and I don't feed AI's output directly to anyone else, and neither should anyone else.

    Is AI sustainable? No. AI is similar to Bitcoin in that it costs far more to run (in resources) than it benefits its users, and most uses of AI aren't even constructive. The current prevalence of AI in literally everything is artificial and only meant to create a dependency in users before severe enshittification and price gouging begins. There will surely be a generation (or at least a narrow band) of users who will have been made helpless by their over-reliance on AI and will suffer when it's gone or worse - not unlike how it became more difficult for me to function once web search engines stopped working properly. It has been clear to me for quite a while now that the end game is to turn the AI - once people rely on it for everything - into the ultimate salesthing for advertisers, making biased (subtle or unsubtle) recommendations based on who is paying Microsoft and their ilk more money. This is pretty horrible, but you can also run your own LLM instead (footing the bill for power and hardware) and then no one is making money, which still makes it not a sustainable business. I caution against long term reliance on it.

    Is AI ethical? I'm not sure / it depends. There are all sorts of unethical practices associated with this "technological revolution". I mentioned biasing the AI. I mentioned using the AI as a search agent - this completely sidesteps any means of monetization the source websites may or may not have, effectively "stealing" their work. There is the unauthorized use of resources for training the model in the first place. Still, I can't in good conscience say I feel strongly against AI companies' circumvention of copyright law. I pride myself on being fairly consistent and rational in my beliefs, and I'm strongly opposed to copyright; it's a scourge on creativity and freedom that concentrates a disproportionate amount of power in the hands of large conglomerates that control insanely large amounts of intellectual property and patents. I was always opposed to laws (or court opinions) that restricted linking, quoting and crawling online, since they break the web in its purest form.

    That said, I understand that we live in a (capitalist) society and artists and creators need to eat too. In a perfect world, people would be able to create their art and benefit from our impressive (collective) food production capacity to subsist with very little friction. Unfortunately, since most of the wealth we generate is captured by a tiny amount of people, I can understand the outrage in this case. But it's still not seeing the forest for the trees - art has been an iterative and progressive process for millennia, and endless copyright is chopping the legs off any small, novice or future artists...

    11 votes
  5. [3]
    ssk
    Link

    To use it well for coding, the best approach is to work from a spec-driven development standpoint. There's an article going around in the AI-tech-sphere that emphasizes this: https://brianchambers.substack.com/p/chamber-of-tech-secrets-54-spec-driven. But basically, you start with an Architecture.md and then break things down into individual tasks. Smaller bites for the AI to handle. Give it context. Give it the style that it should be coding in.
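
    To illustrate (this skeleton is invented for the sake of example, not pulled from the article - the file names and tasks are hypothetical), the starting spec can be as small as:

        # Architecture.md (hypothetical)
        ## Overview
        Small CLI tool that ingests CSV exports and writes summary reports.
        ## Constraints
        - Python 3.12, standard library only; type hints everywhere
        - No global state; pure functions where possible
        ## Tasks
        1. tasks/01-parse.md     - read and validate input rows
        2. tasks/02-aggregate.md - compute per-category totals
        3. tasks/03-report.md    - render a plain-text report

    Each task file then restates just the context that one step needs, so the model never has to hold the whole project in its head at once.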

    The other reinforcement that has tremendously helped me continually produce bug-free, senior-level code has been TDD. By implementing tests first and foremost, you can control and guide the AI along the path to what should be "right".
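
    As a minimal sketch of that tests-first handoff (pytest assumed; slugify here is a made-up local module that the AI is then asked to implement):

        # test_slugify.py - written by the human before any implementation exists.
        # The AI's only job is to make these pass.
        import pytest

        from slugify import slugify  # hypothetical module under test

        def test_lowercases_and_hyphenates():
            assert slugify("Hello World") == "hello-world"

        def test_strips_punctuation():
            assert slugify("What's up?") == "whats-up"

        def test_rejects_empty_input():
            with pytest.raises(ValueError):
                slugify("")

    The tests pin down what "right" means before the model writes a single line, which is exactly the guardrail described above.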

    So yes, it needs a bit of guiding, just like any other senior-level engineer would: having some basic architecture, having some basic tests to expand off of. But Claude Code has completely revolutionized the way I code and has made me language-agnostic, and I feel confident in saying as much. It just takes reaching a mutual understanding with the AI, which sounds ridiculous, but that's what it needs. Communication.

    15 votes
    1. [2]
      shrike
      Link Parent

      The trick is to treat the AI like an outsourced offshore consultant who only works for one day. Having well-defined and scoped tasks is essential.

      You need to get the "consultant" up to speed really fast and only have them do one task. If they can't finish (you're getting warnings about context running out) have them write down where they left off and reset the context. Then go again. If you over-work the poor consultant (context too full), they'll start to hallucinate and create crap(pier) code.

      Like if you do TDD, take one session to create the tests. Reset context and implement according to the tests with a fresh session.
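
      In practice, the "write down where they left off" step can be a scratch file as simple as this (contents invented purely for illustration):

          HANDOFF.md
          Done:        CSV parser + tests in tests/test_parser.py (all passing)
          In progress: aggregation step; date-grouping edge case still failing
          Next:        report renderer, per the task spec
          Gotchas:     input files use ';' as the delimiter, not ','

      The fresh session reads that note first, so the new "consultant" starts with an almost empty context but full knowledge of the job site.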

      8 votes
      1. ssk
        Link Parent

        Love this framework

  6. [19]
    Eric_the_Cerise
    Link

    Here's an offbeat answer.

    I don't know, and I don't care.

    My position is that the entire edifice of modern LLM technology is built upon the biggest copyright infringement in history, fully sanctioned by the courts (because actually enforcing the law in these cases was unimaginable) ... and even if, today, there are legitimate LLMs out there, built/trained "ethically" on Wikipedia or Gutenberg or whatever qualifies for fair-use, they are still--in my mind--"standing on the shoulders of the giants" that stole every copyrighted word they could get their hands on.

    The entire industry is poisoned for me, eternally, and I have a lot of trouble imagining a scenario where, in the future, I'll shrug and say, "all right, I guess it's okay to start using this stuff now". The way the world is going with it, I expect it'll be another 3-5 years before it is so ubiquitous that I will be "using it" just by existing in the world ... but I will have never willingly agreed to it.

    If that makes me a Luddite, well, just point me to the wooden shoe aisle.

    15 votes
    1. raze2012
      Link Parent

      Yeah, I work in games, so for me that aspect is just a huge minefield. The real shame is how human artists may be falsely accused of using AI, so in some respects it's now a stand-in for "bad looking art". I see potential there. But we're in full grift mode. I'll just wait for the courts to figure it out.

      On the programming side... well, I'm trying to get a position as a graphics programmer. I've seen zero buzz around Copilot-esque LLM tools to try and speed up that workflow. Your value in that field is not in coding fast; it's in diagnosing problems in a renderer and fixing, optimizing, or updating it to keep up with modern demands. So much of that useful code is locked up in studios, so there's not much in the open to train on.

      7 votes
    2. [17]
      Greg
      Link Parent

      My position is that the entire edifice of modern LLM technology is built upon the biggest copyright infringement in history

      Yeah, probably accurate and a fair reason to be pissed off at commercial LLMs.

      they are still--in my mind--"standing on the shoulders of the giants" that stole every copyrighted word they could get their hands on

      That seems like a very big stretch, but I can kinda sorta see the logic. I’d have to really, really like the current incarnation of copyright law and think its strict application was a very good thing before I’d want to even consider stretching far enough to apply that line of thinking, though.

      Copyright law is an absolutely abysmal, often harmful mess that occasionally does some good for some creators. Sometimes.

      I despise the double standard in how massive infringement by companies is handled compared to benign infringement by individuals, but I see the overlap between copyright law and ethical correctness as pretty small as well. Certainly far too small to worry about second-tier shadows of infringing knowledge being present in the papers outlining how the technology itself works.

      3 votes
      1. [15]
        Eric_the_Cerise
        Link Parent

        It is more than fair to suggest that I haven't really thought this through enough -- I haven't, and I know it.

        It isn't the copyright violations that bother me (I mean, not primarily), it's the utter unmitigated hypocrisy of it. Headlines about how Facebook somehow didn't break the law because they only leeched 18 million books off some pirate site, but they didn't seed them back.

        Piled on top of all that, the absolute avalanche of hype and sales, about how this stuff is going to replace sliced bread, and on and on ... And I understand how it actually works, just barely well enough to know that, A) it can never hold up to 10% of the claims, and B) it's not even AI ... it's just "A"; a next-gen spellchecker.

        The enormity of the original copyright theft, coupled with the global "well it's okay when corporations do it" response, has just completely poisoned my view of this tech ... for a very, very long time.

        On top of it all, the environmental damage it's doing, and how yeah, 97% sure this hype-cycle is going to end in yet another dot-com style financial crash, followed by yet another "too big to fail" bailout ... cuz the world isn't already screwed up enough, and etc, etc.

        How this stuff is being sold and deployed is really, really bad, to the point that I don't even need to know if it has any actual useful use-case.

        8 votes
        1. [14]
          ssk
          Link Parent

          Piled on top of all that, the absolute avalanche of hype and sales, about how this stuff is going to replace sliced bread, and on and on ... And I understand how it actually works, just barely well enough to know that, A) it can never hold up to 10% of the claims, and B) it's not even AI ... it's just "A"; a next-gen spellchecker.

          Ok, then bet, please do explain how it can't hold up 10% of the claims or how it's "not even AI" and just a "spellchecker"

          1 vote
          1. [13]
            Eric_the_Cerise
            Link Parent

            Right, like I need to have this debate.

            Try this

            4 votes
            1. [12]
              ssk
              Link Parent

              Try this

              Man, I'm trying to have a tildes-style conversation, not a reddit one. Were you not responding with a serious answer there?

              4 votes
              1. [11]
                DynamoSunshirt
                Link Parent

                There is a lot of discussion in this very thread on the subject, but TL;DR: these models merely generate statistically likely output. They are incapable of reasoning or thought.
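
                To make "statistically likely output" concrete, here's a toy sketch of next-token sampling (the vocabulary and probabilities are invented; real models do the same thing over tens of thousands of tokens with learned weights):

                    import random

                    # Toy "language model": for each current token, the possible next
                    # tokens and their probabilities (learned from data in a real model).
                    next_token_probs = {
                        "the": [("cat", 0.4), ("dog", 0.35), ("idea", 0.25)],
                        "cat": [("sat", 0.6), ("ran", 0.4)],
                        "dog": [("sat", 0.5), ("ran", 0.5)],
                    }

                    def sample_next(token: str) -> str:
                        candidates, weights = zip(*next_token_probs[token])
                        return random.choices(candidates, weights=weights)[0]

                    # Output is built by repeatedly sampling a likely continuation; nothing
                    # in this loop ever checks whether the resulting sentence is true.
                    sentence = ["the"]
                    while sentence[-1] in next_token_probs:
                        sentence.append(sample_next(sentence[-1]))
                    print(" ".join(sentence))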

                5 votes
                1. [10]
                  ssk
                  Link Parent

                  Is there conversation over what I was asking though? For reference in case you think I'm blindly just being contrarian, my question was specifically:

                  please do explain how it can't hold up to 10% of the claims or how it's "not even AI" and just a "spellchecker"

                  1 vote
                  1. [2]
                    DistractionRectangle
                    Link Parent

                    Man, I'm trying to have a tildes-style conversation, not a reddit one.

                    Eric has put a bit of effort already in articulating their position and you're just asking them to continue arguing their point in excruciating detail. I think if you're looking for tildes style discussion, it's on you (or others) to shoulder the other end of the discourse.

                    Is there conversation over what I was asking though?

                    There is. Eric linked it.

                    12 votes
                    1. ssk
                      Link Parent

                      Yeah, I'm still not seeing any particular bit from Eric in that conversation, nor anything specifically addressed to my questions. Appreciate you stepping in though!

                  2. [7]
                    DynamoSunshirt
                    Link Parent

                    It is not artificial intelligence because it is not intelligent. Intelligence requires reasoning. LLMs simply (or more accurately, in a very complicated way) generate likely text. AI in science fiction over many decades has always required some level of autonomy, reasoning, logic, and understanding that we cumulatively refer to as sentience. LLMs are not that. Thus calling them AI is false advertising.

                    I'll put it this way. I just invented faster-than-light travel (FTL)! Would you like to pay me billions of dollars for it? It goes really fast. Also FYI: it only goes 30 miles per hour. But don't worry, if you invest enough billions I can spin this project up to FTFTL (faster-than-faster-than-light travel), and down the line we can attain FTFTFTL!

                    Replace FTL, FTFTL, and FTFTFTL with AI, AGI, and ASI and you might understand what is so frustrating about people redefining AI as LLMs.

                    6 votes
                    1. [6]
                      ssk
                      Link Parent

                      LLMs simply (or more accurately, in a very complicated way) generate likely text.

                      That's the basic premise of an LLM, sure. But the current generation of LLM goes very much beyond "just generating text". It isn't just "complete the phrase". A great article that dives into this is from Anthropic itself: https://www.anthropic.com/research/tracing-thoughts-language-model. Quoting from the article specifically, here's a blurb that's most relevant:

                      But our research reveals something more sophisticated happening inside Claude. When we ask Claude a question requiring multi-step reasoning, we can identify intermediate conceptual steps in Claude's thinking process. In the Dallas example, we observe Claude first activating features representing "Dallas is in Texas" and then connecting this to a separate concept indicating that “the capital of Texas is Austin”. In other words, the model is combining independent facts to reach its answer rather than regurgitating a memorized response.

                      This is explicitly not just generating words. There's a chain of thought, no matter how you define "reasoning".

                      But I also do think it's worth defining reasoning. Processing words to form a separate but related junction of words feels to me like a good definition.

                      Ultimately, Claude is absolutely able to demonstrate autonomy, reasoning, logic, and understanding, and I'd love to see where you see the opposite of that fact

                      1 vote
                      1. [5]
                        DynamoSunshirt
                        Link Parent

                        Every time an LLM confidently lies to me, I see the opposite of that fact. Given the right incentives, humans almost never lie intentionally. Most humans at least feel a bit bad when they do. LLMs are a deeply frustrating experience because they will cheerily embed a subtle inaccuracy in a complex chain of reasoning, acknowledge and apologize for it when I call them out, and then confidently spew the exact same lie at me in the next chat bubble.

                        A poorly-incentivized (for instance, underpaid or unhappy) human might make a similar mistake, true. But the difference is, people can improve and evolve and change. You can educate someone to improve their reasoning, and teach them how to fact check, and give them positive reinforcement to not make the same mistakes in the future. Some people might keep screwing up. But most people, if they try, can avoid (most of) those mistakes. An LLM can never change, can never improve, can never be taught to do better. It'll cheerily, sycophantically insist that everything is fine. And it might be fine, right until it drives you off the cliff.

                        7 votes
                        1. [4]
                          ssk
                          Link Parent

                          Why can't you give the LLM the same learning context? Say like "Hey, you may have to search for some additional context but can you do complex analysis on X", just like you would with that same underpaid human. In fact, specifically for fact checking, all of the major LLMs now have a Deep Research mode that does exactly this type of thing, where it searches and searches and is super agentic and improves its reasoning as much as it can. If it's "lying" in Deep Research mode, that would be groundbreaking news and you'd literally shake the industry

                          2 votes
                          1. [3]
                            DynamoSunshirt
                            Link Parent

                            Ok now you have to be fucking with me. LLMs have lied to me repeatedly in deep research mode, even when citing conflicting facts. As far as I am aware, no amount of 'deep research' (or what I would less affectionately refer to as 'multiple LLMs stacked on top of each other under a trenchcoat, or various approaches of a similar nature') has eliminated hallucinations 100%. That is very difficult, if not impossible, for the reasons I have already stated.

                            It really feels like you aren't arguing in good faith here. Or maybe I'm just too dumb to understand your points, but given the collective near-agreement against your argument in this thread, I'm inclined to believe that you're just trolling me. I'm done with this conversation.

                            5 votes
                            1. [2]
                              ssk
                              Link Parent

                              I promise I haven't been trolling. I have done so many deep research prompts and really haven't found any hallucinations. So I'm super curious where you found them, and on what models

                              1 vote
                              1. ibices
                                Link Parent

                                Off the cuff, before-I-go-to-sleep comment, but having read through this chain, I think the context missing is that "AI" and LLMs in general have been assigned a position of authority by their makers. It's true that reasoning is similar enough to pattern recognition, which is what the LLMs are doing at a high level, and that you can "teach" an LLM like you would a person. However, these LLMs have been assigned a position of authority, à la "because they have troves of information, they must be smart". That false authority requires having enough knowledge to recognize when the LLM doesn't actually know what it's talking about. Lying, hallucinating, or whatever you want to call it is the LLM not coming to the correct statistical pattern, but still saying "here is the information you want". The LLM doesn't know it's wrong, and the maker is financially motivated to ignore those scenarios and continue pretending it's the smartest, most helpful tool on the block.

                                Stopping myself here to sleep, but that means I'm not really connecting this point back to the greater thread unfortunately. Sorry!

                                Figured I should note the lens that shapes my comment. I tend to see myself as a creative both professionally and in my hobbies, so I don't have any enthusiasm for LLMs as they currently stand.

                                3 votes
      2. raze2012
        Link Parent

        I’d have to really, really like the current incarnation of copyright law and think its strict application was a very good thing before I’d want to even consider stretching far enough to apply that line of thinking, though.

        I see it as a big shakeup in some way, no matter how the dust settles. If AI wins, then it's essentially the wild west and anything is up for grabs as long as you (say you) process it through an LLM. If they lose then they get stopped in their tracks in one of the largest consumer bases in the world. I don't think both options can be supported in tandem.

  7. [6]
    DynamoSunshirt
    Link

    TL;DR: no, for anything besides 'better OCR' from an image or PDF. But these days it's cathartic to write about AI, so here's my longform response:

    I feel exactly the same as you. Honestly, it's killing the enthusiasm I've had for the Internet since I was a young kid. So many people are pushing generic crap that I wonder if this is how people felt in manufacturing when plastic garbage started to replace wooden, metal, and ceramic tools and toys.

    I gave LLMs a very solid chance at first, and I didn't find them useful. Recently, I took a big chance on a new job and tried embracing LLMs with an open mind since most of my coworkers were drinking the kool-aid pretty hard. After a couple of months, I just felt stressed and bored -- I don't like editing crappy verbose text and talking to a cheerful wall all day. Handholding an LLM through a task is just so boring compared to doing it myself, because I actually like writing and programming, and even some basic art stuff. When I manage a person, I love mentorship and teaching and learning from them. When I manage an LLM, I feel like I'm just wasting my time communicating with a fake person who can't learn instead of sinking my teeth into the task myself. Time talking to the LLM is inevitably time spent NOT working on the task, and reading the LLM's crappy interpretation of my instructions and checking all the logic is just not satisfying.

    I'm increasingly losing hope in the Internet and the tech industry in general because of this. No matter how many times I try LLMs, I find them aggravating and unreliable. Maybe I'm dumb. Maybe my workflow and worldview is too inflexible. But maybe I just have a higher bar for content than some of the people who seem to love LLMs? Regardless, the LLM pushers only seem to grow in number, and I've given up trying to argue with them. At this point, they can have the Internet, they've already filled it with crap. I'll just invest in meatspace relationships and hobbies instead, it's healthier anyway.

    On a cheerier note, I've been reading Anathem recently, and it's nice to see that Neal Stephenson predicted all of this:

    “Early in the Reticulum [Internet]—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

    “Crap, you once called it,” I reminded him.

    “Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum [Internet] deliberately, forcing people to use their products to filter that crap back out. They created syndevs [computers] whose sole purpose was to spew crap into the Reticulum [Internet]. But it had to be good crap.”

    “What is good crap?” Arsibalt asked in a politely incredulous tone.

    “Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”

    “As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”

    “Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”

    “So, are Artificial Inanity systems still active in the Rampant Orphan Botnet Ecologies?” asked Arsibalt, utterly fascinated.

    “The ROBE evolved into something totally different early in the Second Millennium,” Sammann said dismissively.

    “What did it evolve into?” Jesry asked.

    “No one is sure,” Sammann said. “We only get hints when it finds ways to physically instantiate itself, which, fortunately, does not happen that often. But we digress. The functionality of Artificial Inanity still exists. You might say that those Ita who brought the Ret out of the Dark Age could only defeat it by co-opting it. So, to make a long story short, for every legitimate document floating around on the Reticulum, there are hundreds or thousands of bogus versions—bogons, as we call them.”

    “The only way to preserve the integrity of the defenses is to subject them to unceasing assault,” Osa said, and any idiot could guess he was quoting some old Vale aphorism.

    “Yes,” Sammann said, “and it works so well that, most of the time, the users of the Reticulum [Internet] don’t know it’s there. Just as you are not aware of the millions of germs trying and failing to attack your body every moment of every day.”

    Hopefully one day we'll develop our own antibodies against Artificial Inanity. But until then I think we're about to enter an Internet Dark Age.

    Sidenote: for anyone here who doesn't work in tech, do you feel like ethics are actually respected in your industry? Because the tech industry seems chock full of people who truly did not understand the point of the classic Jurassic Park meme: "your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." As an avid sci-fi fan, it's insane to me that nobody in the industry ever seems to think about the ethical implications of the technology they're building. Meanwhile, every time a new piece of tech debuts, all I can think of is the ways it can be used for evil and enshittified (and how a truly smart technologist could intentionally design a system to prevent that kind of abuse!).

    12 votes
    1. [5]
      Paul26
      Link Parent

      Ethics go out the window when profit is the priority.

      Anathem is great. I loved Diamond Age the most of what I’ve read by him. Have you read all his work? I almost picked up his latest book, but didn’t as I have too many on the list already.

      Reading is definitely a great offline way to spend time and I’m finding myself distancing myself from tech more and more. Compared to about 7 years ago, I spend very little time online. So much to enjoy out in the “real” world. Read, write, walk outside, play with my dog, ride my bike, take photos with a regular camera, exercise, talk to people, call my parents… I try to find joy in these things and “turn off” a lot of the noise from the internet, TV, etc.

      8 votes
      1. [4]
        DynamoSunshirt
        Link Parent

        I've read most of his works. I struggled to get into The Baroque Cycle a few years back, but I think I'm going to give it another shot soon. I read Cryptonomicon and Diamond Age back in high school, so I'm due for a reread of those as well. If you haven't read Termination Shock, I highly recommend it, my favorite from him in a while.

        Couldn't agree more about time offline. I've started to pick up more and more 'intentional' hobbies in the last few years, like manual photography, espresso, bicycle maintenance, gardening, woodworking, and curating my music and video libraries (digital and physical). I've really loved thinking deeply about these things, but it sometimes creates a wedge between me and other people -- for instance, I want to talk about my favorite albums by my favorite artists, but many of my friends just play media from the home page of Spotify and never listen to an entire album in one stretch any more. I want to talk about the minutiae of photography and bicycle technology, but most people just click the shutter on their smartphone or bring their bike to a shop. I want to discuss latte art technique and the best way to roast a natural process bean from Ethiopia, but most people would rather order from Starbucks.

        I think these minutiae are where I really miss the Internet forums of old -- you didn't have to colocate with other enthusiasts, you could just chat with them online! But these days, the forums are all dead, the subreddits that replaced them are all full of slop and karma-farming bot reposts of old popular posts, and I need to find enthusiasts around me. But it sometimes feels like everyone is too busy watching short-form videos (which I loathe) to gain a nuanced view of any subject.

        4 votes
        1. [2]
          papasquat
          Link Parent

          You really need to find people that are into that niche stuff for them to be interested. Most people, myself and probably you included, just don't even think about 90% of the stuff they interact with. Some people think really hard about the other 10%, but it's hard to find people that think about the same 10% that you do.

          For instance, I also just click the shutter on my smartphone, and I don't have a bicycle. However, if I need a flashlight, I'm using one of the fancy 18650-powered monsters that fit in the palm of your hand and cost too much. I'd never use some cheapo piece of crap from Walmart. If someone asks me about organizing their garage, I'll start talking about French cleat systems, plywood thicknesses, the ideal spacing of rails, and what tools should be accessible at chest level vs knee level. Most people would just pick up some cheapo tubs from Walmart and throw all their crap in there.

          Finding that specific combination of things you really, really care about in other people is hard. I think you just have to sort of accept that that's the way life is. I'd rather spend time with my IRL friends than spend a ton of time online arguing about whether backing board is required or just a waste of money, or what type of mechanical keyboard is best, or whatever.

          8 votes
          1. DynamoSunshirt
            Link Parent

            Yeah, this is a great point. It is fun to have those online communities so you can improve your understanding of a lot of hobbies and surround yourself with other enthusiasts. But the kind of people I get along with tend to be enthusiasts about a lot of other things, and I've increasingly started to lean into their enthusiasm and pick up compatible hobbies (like woodworking!) instead of relying on the Internet for specialization. Weird, but not unexpected, for the Dead Internet Theory to work out this way, I guess.

            2 votes
        2. Paul26
          Link Parent

          Thanks for the recommendation. I’ll add Termination Shock to my list.

          I share much of what you said. I feel like looking for any sort of depth in any interest is like picking at scraps. I have one friend I can talk to about music on a deeper level. My sister does photography so I sometimes chat with her about that. A colleague at work is into psychology so I chat with them about that now and then. Long story short, it’s hard and inconsistent and fleeting. Moments of depth as opposed to something more prominent and frequent. I have to often just make sure I’m doing these things for myself even if not one soul knows I’m doing it. I have to be ok with that or I won’t do anything and I’ll revert to playing video games with all my free time to get lost in virtual worlds and forget the real one.

          Don’t get me started on short form video. I’ve told my partner to save anything she wants to show me and show me all of them in one sitting. Don’t send me links. I’m not on those platforms. Surprisingly, I don’t really miss out much, though I did lose touch with many people when I left those platforms. The ones who wanted to remain part of my life worked with me to find a way to do so offline. The rest I guess were just friends in the Facebook sense of the word. Would be nice to get updates about them, but does it really impact my daily life? Not to any significant extent.

          2 votes
  8. shrike
    Link

    A quarter century of actual paid professional programming here.

    Using LLMs professionally? They help a bit.

    An LLM agent (company authorised) can debug issues in an unknown codebase a lot faster than I can

    I just had Gemini 2.5 Pro implement a Firebase Messaging web client in 20 seconds. Would've taken me hours to figure out all the nuances. It even did a pretty debug UI and a web worker for me. (My task was on the server side and I just needed a quick client to test it; I'm not a front-end person.)

    I could use LLMs more, but they need to be corp approved and some of the material is so sensitive it can't leave the internal network.

    Local models like Qwen are getting really good though; it's the way most things will be moving in the future. In the long term it's cheaper to get a few Mac Studios for the office for shared LLM use instead of paying $200/month per seat for a team LLM plan for an online model with no SLA or quality guarantee. (They clearly shuffle resources around, making the models stupider, but you can't prove it.)

    For my personal projects? I've gotten SO MUCH use from LLMs.

    I have a ton of ideas for tiny helper projects, but not enough time (or motivation) to do all the legwork to figure out how some API works or what kind of payloads they are or write the boilerplate to map them to Go structs.

    An LLM will do it in well under a minute.
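
    To give a sense of the boilerplate I mean, here's a rough sketch -- in Python rather than Go to keep the examples in one language, and with a completely made-up payload shape:

    ```python
    # Rough sketch: mapping an API's JSON payload onto typed records.
    # The payload shape here is invented purely for illustration.
    import json
    from dataclasses import dataclass

    @dataclass
    class Track:
        id: str
        title: str
        duration_sec: int

    @dataclass
    class Playlist:
        name: str
        tracks: list[Track]

    def parse_playlist(raw: str) -> Playlist:
        data = json.loads(raw)
        return Playlist(
            name=data["name"],
            tracks=[Track(**t) for t in data["tracks"]],
        )

    sample = '{"name": "road trip", "tracks": [{"id": "a1", "title": "Intro", "duration_sec": 93}]}'
    print(parse_playlist(sample))
    ```

    Multiply that by dozens of fields across several endpoints and you can see why having it generated in under a minute is appealing.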

    A few 90-99% vibe coded projects of mine, all mostly functional and purely made just for me and not intended to be "products" for other people:

    None of these are especially complex; rather, they're 90% boring "just get it done" bulk code.

    For non-programming stuff:

    ChatGPT found me a German company selling an HDMI switch with the exact requirements I gave it - instead of me having to buy some alphabet-soup Chinese one from Amazon or eBay.

    I gave it a budget for a computer upgrade for my nephew and it actually gave me direct links to local Finnish shops for the exact stuff to buy.

    11 votes
  9. [12]
    Drewbahr
    Link

    No. There is no use for it in my life, professionally or otherwise.

    I don't work in coding or computers, and I don't do much technical writing. When I do write documents, I write them, with intention, for audiences to read. At no point do I need, or want, a bullshit creation machine to "interpret" what I wrote, because it will just fuck it up.

    When I'm doing something, I want to be the one doing it. Not some garbage-spewing, water-guzzling bullshit-generation box.

    10 votes
    1. EarlyWords
      Link Parent

      I’m also a writer and I hardly ever use any LLM. But I actually like them—as they are now. To me, the problem is that people misuse them so much. An AI program to me is like a Wikipedia page. Mostly right and gets me started on a path once I check it for errors. But anything more than that it is more trouble than it’s worth or actively harmful.

      For me, if AI could stay just as it is, we would be fine with it. The excitement over its supposed efficiency and omniscience would eventually fade and we’d be left with a digital tool that helps but doesn’t dominate. Since I’m not a coder I like how it is now for the things I encounter because its outputs are clearly synthetic. We know how it writes. We can tell AI art at a glance, same with music.

      For the arts, AI isn’t currently capable of replacing humans. But when I need a quick image of an NPC for my D&D table, it serves admirably.

      The problem is that people have bought the hype and try to use AI to replicate actual work. And this flood of laziness is currently destroying global culture. The other main issue is our very real anxiety over what AI will make the world look like in a year or two.

      We won’t keep LLMs the way they are. They’ll get better, and the problems others list here will intensify. But right now I like the inept assistant whose output always shouts that it is AI.

      I know I don’t use these things nearly as much as others, and this is a minority viewpoint about just a single aspect of the phenomenon, but as someone who lived and worked through the dotcom boom/bust in San Francisco, this is just the latest example of capitalists ruining everything they touch. This is just the tool they’re currently using the most.

      4 votes
    2. [2]
      DynamoSunshirt
      Link Parent

      Unfortunately, LLMs subtly lie often enough that I don't think they're very useful for technical writing either!

      1 vote
      1. shrike
        Link Parent

        Public models do, yes.

        You can run local models where you can set the model temperature to 0, essentially setting its "creativity" to zero so that it sticks as closely as possible to the data it's been given.

        Works especially well with RAG systems for example.
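
        A minimal sketch of that setup, assuming a local model served through an OpenAI-compatible endpoint (Ollama and llama.cpp both provide one); the URL, model name, and retrieved snippet are all placeholders:

        ```python
        # Query a local model at temperature 0 (greedy decoding), constrained
        # to retrieved context. Endpoint, model, and context are placeholders.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

        retrieved = "From the internal wiki: the deploy script lives in tools/deploy.sh."

        resp = client.chat.completions.create(
            model="qwen2.5:14b",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Answer only from the provided context. "
                            "If the answer is not there, say you don't know."},
                {"role": "user",
                 "content": f"Context:\n{retrieved}\n\nWhere is the deploy script?"},
            ],
        )
        print(resp.choices[0].message.content)
        ```

        In a real RAG pipeline, the retrieved string would come from a search over your own documents rather than being hard-coded.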

        3 votes
    3. [8]
      ssk
      Link Parent

      The frontier models genuinely aren't garbage spewing though. Yes, in 2023 they were all crap, but if you're giving it context, giving it small bites to chew on, then it can absolutely be a productivity boost even for you

      1 vote
      1. [7]
        Drewbahr
        Link Parent

        Hard disagree.

        I'm not looking for a "productivity boost". I don't want, nor need, to generate more technical writing. I write technical documents for engineering efforts - an LLM will at best be as accurate as someone (e.g., me) who knows what they are talking about. At worst, it will be actively misleading - which, in the scope of the things I deal with, could lead to injuries and death - not to mention environmental damage.

        No, there is no use case for which I see a benefit here.

        8 votes
        1. [6]
          ssk
          Link Parent

          You wouldn't be looked upon more highly if you produced more output? That's a rare profession and one I envy. Nice!

          1. [5]
            WrathOfTheHydra
            Link Parent

            Similar to the research on developer productivity from here, a lot of LLM users think they're taking shortcuts when in reality they're purposefully gimping themselves and then making up for lost time. While using an LLM for creative writing may save time (arguably at the cost of style/quality), technical writing is prone to a lot of the same issues as coding.

            Wording needs to be precise, and statistics/measurements cannot be allowed to hallucinate. If they are the expert in that field, all an LLM might do for them is throw landmines down with the corn that they'll need to go back and fix later. I get that they have a tone of spite towards LLMs that you're not a fan of, but using an LLM that hallucinates an error that isn't caught can actually make them be looked upon less highly. I myself love the concept of being able to LLM some stats around for quicker reports, but I similarly would just be shooting myself in the foot.

            6 votes
            1. [3]
              DynamoSunshirt
              Link Parent

              Fully agreed in my line of work. The risk just isn't worth it, I need to be sure that my output is correct.

              FYI, I wouldn't bother arguing in good faith with this account. As far as I can tell they're just an argumentative troll.

              3 votes
              1. WrathOfTheHydra
                Link Parent

                Totally. I knew you couldn't take the passive-aggressive bait they laid out, so I threw another perspective on the trap to disarm it and move on. Keep on keeping on. 🤸‍♂️

                2 votes
              2. ssk
                Link Parent

                Yikes, I'm genuinely not, nor is that a very tildes-like assumption to make. I'm sorry that you feel like I'm a troll

                1 vote
            2. ssk
              Link Parent

              Your comment really gets at what I'm trying to say. LLMs are very much a nascent tool. They take adjusting to. There absolutely are some prompts and some use cases that it's flat-out not a good tool for. But all of that is being developed, and it doesn't make it on net a "bad" or "useless" thing.

              Like, take your stats example. You can use a known approach like "hey, write a Python script to run this algorithm on this data" and it can do it for you, so that you can run the report super quickly. Similar vibes with technical writing: instead of saying "hey, make this whole report", starting out by having a conversation around the idea of the objective, as if you were talking with the PO but without prejudice, is super valuable. How can it be like talking with the PO? Well, that can mean importing design docs and using RAG, or using Slack conversations as part of that basis. All of that can accelerate the work on its own.

              It's a tool, and just because a hammer can't paint the Mona Lisa doesn't mean a hammer isn't useful

              2 votes
  10. [2]
    patience_limited
    Link

    I've tried very, very hard to keep an open, experimental attitude towards LLM and generative model use. While I haven't had time or resources to roll my own, I've plowed through the Gemini and corporate ChatGPT tutorials. And as of now, I am not even slightly sold on the AI Promised Land.

    I had a personally horrifying LLM encounter today.

    When I want to take the temperature of a new customer I'm working with, I'll research the existing Help Desk tickets on the account.

    Every single ticket I looked at today from the past six months had LLM-processed responses on both sides of the conversation. How did I know they were LLM-processed? Aside from the verbose, cringingly deferential, mostly tangential composition, the correspondence got less and less related to the original complaint or issue throughout the length of the ticket. The missives eventually talked past each other so badly that it became clear no human was actually reading either side. (I wish I could provide examples, but confidentiality...)

    Ultimately, it always took a phone call or several to resolve the issue. The tickets were much longer than they should have been. Considerable time was wasted on each side, days taken to resolve issues that should have been done with in minutes.

    And the LLM phone call transcripts in the tickets were barely comprehensible. We've been poisoning one of our best resources for bug-tracking, measuring customer satisfaction, developing documentation, and evolving new feature requests. I haven't searched out which model backs ZenDesk, but it's truly terrible at summarizing Knowledge Base searches as well.

    I've tried using ChatGPT (Pro subscription at work, usually o4, sometimes Deep Research) for code snippets, proofreading, document summaries, hunting for very specific troubleshooting steps, refactoring, etc. It's come up with deeply buried search results that saved me time on a couple of occasions. Mostly it's felt like teaching a bear to dance - even when the LLM performs the steps, it doesn't do as well as something that evolved dance in the first place.

    From my perspective, you don't have to be an avid Ed Zitron reader to wonder about the vast opportunity cost of sinking half a trillion dollars into datacenters for questionable AI productivity, rather than green technology transitions; basic health, welfare, and housing; public infrastructure; or any number of other more compelling investments.

    10 votes
    1. DynamoSunshirt
      Link Parent

      Opportunity cost and dancing bear fallacy are possibly my two new favorite reasons to loathe AI. Thanks for this well-reasoned take!

      3 votes
  11. [2]
    Abdoanmes
    Link

    I can relate to a lot of what you’ve said about the current state of AI tools. They can be frustrating when used as a blanket “improvement” pass, especially when the original work already balances tone, technical accuracy, and clarity. I’ve seen similar situations where running something through a tool like Copilot strips out nuance and replaces it with generic, overlong phrasing. It's NOT an improvement and loses something human.

    In my view, AI, and LLMs especially, are most effective when guided with intention and clarity. They work best as a thinking partner you direct, rather than a one-click content generator. That means starting with clear context, tone, purpose, direction, and the kind of output you want before it generates anything. The most productive exchanges often come when you prompt the AI to ask clarifying questions, turning the process into an iterative collaboration rather than a mechanical rewrite. The ability to spark critical thinking and diverse ways to approach a topic is helpful. For example, instead of just having it check an email, explain the demographic you intend to send it to and what their top questions or concerns might be about what you wrote.

    In my role leading IT as an executive in higher education, I've found assistants and agentic AI most useful when paired with critical thinking and problem-solving. I break work into smaller, specific tasks and use the AI early on for brainstorming, outlining, or exploring different perspectives, while I retain full ownership of the final voice and accuracy. This way, the tool expands ideas instead of diluting quality with bs hallucinations. Also be prepared to edit those outputs and trust yourself.

    I think the challenge and opportunity here is in how we frame the work we give these tools. I'm curious: when have they worked well for you, and what was different about how you approached using AI?

    9 votes
    1. papasquat
      Link Parent

      I think the times where it was useful were times where the content didn't really matter much. A list of ideas for improving training, or suggestions for birthday party themes. Those kinds of things are just springboards for planning, and will be debated and thought about well before anything comes of them. I find it mildly useful for stuff like that, but only if I'm very careful about how I approach that output, and force myself to actually think about the problem myself before I just go with one of the suggestions outright.

      The other area where I see it as useful is where documentation about a specific thing exists, but is poorly written or disorganized. A domain-specific query language for something that isn't widely used, but which has documentation out there, for instance. In situations where I can tolerate errors on a one-off, maybe I just decide that I don't want to dedicate the time and effort to learn how to construct the query I want, and I can let the LLM do it. Many times, it'll just make something up, but usually I can jam it through and eventually get something that sort of works, probably with less time and definitely with less effort than if I learned it myself. If it's something I'm going to use often though, I'd rather just bite the bullet and learn it myself because it'll save me time in the long run.

      3 votes
  12. krellor
    (edited )
    Link

    I find LLMs helpful in specific ways. I've even had a paper recently accepted on the application of LLMs to a specific problem domain and expect two more to be published next spring. You should be skeptical of extreme positions, bullish or bearish, for any new technology, because the reality is usually in the middle.

    I use LLMs to accelerate small chunks of work in domains where I have expertise.

    • Create a JavaScript function that fires on button click of a canvas and performs an internal fill-detection algorithm.

    • Create a Python function that accesses this URL in a loop, iterating this query string variable from 1-100, and converts the table on each page to a CSV file (sketched after this list).

    • For the campaign background and story so far in the project documents, create specific dice checks and narrative text for each of the encounters given, ensuring that there is at least one task for each player.

    • Please summarize and contrast the statutory authorities between these two federal entities and their annual appropriations to maximize policy leverage for outcome A.

    • Here are pictures of friend A. Please create a space opera themed birthday poster using each of these different likenesses of friend A.
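
    For example, the second bullet comes out as something like this rough sketch; the URL and query-string parameter are placeholders, and pandas needs an HTML parser such as lxml installed:

    ```python
    # Rough sketch of the second bullet: fetch pages 1-100 and write the
    # first table on each page to CSV. URL and parameter are placeholders.
    import io

    import pandas as pd
    import requests

    BASE_URL = "https://example.com/report"  # placeholder endpoint

    for page in range(1, 101):
        resp = requests.get(BASE_URL, params={"page": page}, timeout=30)
        resp.raise_for_status()
        # read_html returns one DataFrame per <table> element on the page
        tables = pd.read_html(io.StringIO(resp.text))
        if tables:
            tables[0].to_csv(f"page_{page}.csv", index=False)
    ```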

    It's a tool. Use the tool for what it is good for and you will see benefits.

    8 votes
  13. SloMoMonday
    Link

    The thing is, I don't frequently run into tasks where accuracy doesn't matter. I doubt most people do. Usually when someone asks for an answer to something, or you want to actually do something useful, the hidden assumption is that the output will be correct, and LLMs are just really bad at being correct.

    This basically sums up my feelings on the whole thing. It comes down to natural language prompting being the most backwards input style I've ever seen. The fact that the same inputs can result in wildly different outcomes kills any confidence I have in general models.

    It just feels like a mismatch between what these tools can do and what they are doing. Because unlike the vapourware that was Blockchain and the Metaverse, I've seen these systems in action first hand: the ability to crunch ludicrous amounts of seemingly unrelated data and churn out incredibly advanced analytics, and even provide a degree of predictive capability. It's just fancy autocomplete, but limiting it to natural language feels like the biggest waste of resources.

    6 votes
  14. mimic
    Link
    • Exemplary

    TL;DR: The way I've been describing LLMs to my engineering friends is that it essentially makes you a team lead or engineering manager of a team of junior devs. If you are good at writing Epics/stories/tasks/bugs, that will transition pretty well as you have to keep the LLMs on really tight strict rails. If you're not good at those things, then you'll learn quickly because the LLMs absolutely love to go off on tangents.

    Longer version...

    I think you have to really specify what models you're using and the tools to get down to the trade-offs on LLMs. I know there's a lot of hate around this site about them, but coming from a perspective of 20 years of development experience, they are very powerful tools that are very easy to misuse. I think it was Angela (acollierastro) who said that they are great tools for people who already have knowledge of the field they are utilizing them in. It's not going to make you an expert at anything, and it's going to send you in the wrong direction frequently if you don't know what to look for and how to design your prompts.

    For programming, I've had reasonably decent success with Cursor using the Planner/Executor prompt patterns. I started with Claude 4 Sonnet, but found it wayyyyy too agreeable. My eye twitches every time I read "You're Absolutely Right!" I switched to o3 and it at least will push back on me if I'm wrong, or if I missed something it previously did. I appreciate it owning its own decisions and generally being more straightforward and less fluffy in its answers, but at the end of the day Cursor is not great for large-scale feature or refactor work imo.

    So mostly what I've settled on is using Claude Code for large-scale feature work or refactors. It does a really amazing job at those things for me, although it can still get stuck in its own spirals, so you have to keep a close eye on it, especially since it's pay-per-usage. Cursor with either o3 or sometimes even Gemini 2.5 Pro for point fixes. I find Cursor's tab completion to be really, really good, especially on things like pipeline jobs or CloudFormation templates. Using a NoCode mdc with Cursor chat is also a great way of exploring ideas or getting a good jumping-off point for my own coding.

    I use ChatGPT as a glorified search engine for the most part. It's also a great jumping-off point for my own research. On the non-technical side of things, I've been getting into gardening lately, and I've solved a couple of nutrient issues just by having a Gardening project with my soil composition, water schedules, plant types with their growth timelines, and locality. I have a recurring thread in that project where I give it a couple pics of each plant and ask it for an assessment once a week. It's far from perfect, but again it's a good jumping-off point for my own research. Just today I was able to narrow down an issue with some Scotch Bonnet pepper plants to a Ca/Mg deficiency since I switched from nitrogen-rich to phosphorus/potassium-rich fertilizers. All from showing a pic of the canopy as the plant transitioned from foliage growth to flowering. Could I have figured all this out through Google? Absolutely, but it definitely sped up the process.

    In the end they are just tools. I hate the ultra-hype around them and can't wait for that to die down, but I think they will be another tool in the toolbelt of software engineering going forward.

    4 votes
  15. [5]
    CrypticCuriosity629
    Link

    For me, yes, LLMs are incredibly useful to me in many aspects.

    I'm finding myself a bit overwhelmed with how much I want to say, so I figured I'd write a list of what I use AI for and if you or anyone else wants me to elaborate on anything here I will.

    • I wrote a piece of software for myself at work (I wrote it in Python, not with an AI) that checks my documents and uses an LLM to parse job data and fuzzy human-input fields to check and suggest better naming or descriptions. I actually went over that project here.
    • I have ADHD so I'll use AI to create a schedule for my weekend based on what I need to do so I can have a strict routine I can follow. This has been an absolute lifesaver, I can't tell you how much this has helped me take care of myself better.
    • I also have autism, so I'll straight up not really know how to respond to an email I get at work that requires meaningless customer service stuff. So an LLM has been helpful for that. I don't use it all the time, but for those fluff intro or outro customer service emails it's a godsend.
    • On the same note as the above two, I sometimes use Goblin Tools.
    • I integrate an LLM into my Obsidian note-taking app to help me summarize or expand my notes. It also lets me instantly generate todo lists in markdown from thought vomit or meeting notes.
    • I am currently developing an AI agent that, once completed, will help organize my emails into client and vendor folders by context rather than just things like senders and whatnot.
    • Same agent will help extract job information from incoming job requests and, if any critical information is missing, will prompt me with a prepared email I can edit and send off to the client.
    • When trying out handguns recently, I used it to generate this printable form to track my notes.
    • I've always just been a hobbyist coder, and LLMs have helped me learn coding. Now keep in mind I said learning. I don't usually ask LLMs to do something for me without them explaining everything. And when I do, I'm usually asking if there's a better or more efficient way of getting a result I need to get.
    6 votes
    1. [4]
      papasquat
      Link Parent

      All interesting! I also have ADHD and have wondered about using LLMs to achieve something somewhat similar.

      I briefly tried using some LLM with a memory feature to maintain a ToDo list/log, but it failed pretty horribly once the list got above a certain length. It seems like the whole trick to using them for things like that is designing around their inherent weakness, which is inaccuracy. If you could find a way to deal with that I could see it being pretty helpful.

      1. [3]
        CrypticCuriosity629
        Link Parent

        As I said, it's been a godsend for helping augment my ADHD.

        I'll post a sample of my todo list below, and here's how I used it for my Obsidian Kanban life tracker.

        Yeah, there's a token limit at play, and once you go past a certain number of tokens it doesn't retain context and the responses become more hallucinatory.

        You can use things like RAG memory to boost recall by essentially acting as long-term memory too.
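
        As a rough sketch of the idea, assuming the sentence-transformers package (the model name is just one common choice), "long-term memory" can be as simple as embedding past notes and pulling the closest matches back into the prompt:

        ```python
        # Rough sketch of RAG-style recall: embed past notes, retrieve the
        # most relevant ones, and prepend them to the next prompt.
        # Assumes sentence-transformers; the notes are stand-ins.
        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")

        notes = [
            "Scoop and bag the cat litter every Saturday.",
            "Souper Cube meal prep: review freezer contents first.",
            "Research Sunshine/Moonlight for remote access to the home PC.",
        ]
        note_vecs = model.encode(notes, normalize_embeddings=True)

        def recall(query: str, k: int = 2) -> list[str]:
            """Return the k notes most similar to the query."""
            q = model.encode([query], normalize_embeddings=True)[0]
            scores = note_vecs @ q  # cosine similarity (vectors are normalized)
            return [notes[i] for i in np.argsort(scores)[::-1][:k]]

        # Whatever comes back gets pasted into the prompt as remembered context.
        print(recall("what's on my weekend cleaning list?"))
        ```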

        The trick I've realized with AI is to use it for specific tasks as opposed to a one stop shop.

        Something I'm looking into now is AI agents, using Python to augment the functionality, as I mentioned in my comment. Each query to the AI is then for a very specific task, as opposed to using the AI for the entire task. I'm looking into this node-like AI agent framework.

        Also a cool project I have on my radar is called Observer AI, and eventually I hope to see if I can integrate it on my work computer to literally act as a desktop assistant.

        Part of how I want to integrate it is to be able to have an email open and hit a keyboard shortcut and say "pull the attachments from this email and add them to a new job folder on my desktop", and it'll read the screen for the email, use Python to query my emails, use commands to create a job folder on the desktop, then download the attachments to said folder.

        Or "Find a good time on my schedule and Ron's schedule to book a meeting with this client within the next week and draft a confirmation email to Ron" and it will have a Python helper that will check my Outlook calendar, Ron's outlook calendar, find some time, then draft an email to Ron asking if he's cool with that time and pop it up.

        Or say something like "Search and bring up the last booklet job we did in the archive from XYZ client" and it'll do that in the background and pop up the window.

        Or even "Review my unread emails for anything urgent and flag them for response" and it'll use python to run the query.

        Sample Todo List:

        Original Prompt:

        Please generate a to-do list for today's chores and errands like cleaning my apartment: picking up all the accumulated trash, the clutter, dishes, laundry, cleaning litter boxes, vacuuming, have some time to organize a shopping/meal prep list for the Souper Cube meal prep, clean out fridge and freezer, etc.

        Also include times for when to do all these things.

        Also, interjected into this list to break it up, I want to add short digressions for gaming and research on other projects. I'm playing a game called Deep Rock Galactic Survivor, so maybe add one full round/dive every so often between tasks, just a full go, no retries. Also add some digression time to research things like Sunshine/Moonlight to connect to my home PC from work, or to continue figuring out how to download music from Spotify.

        Break this list/schedule into organized time slots to the minute based on the tasks.

        Outputted list.

        • [ ] 🕙 10:00 AM – Wake Up & Intentional Start

          • [ ] Drink water
          • [ ] Quick snack or coffee
          • [ ] Stretch + bathroom
          • [ ] Open windows for airflow
        • [ ] 🕥 10:30–11:15 AM – Trash Patrol & Declutter Blitz

          • [ ] Gather all trash from every room, bag it, tie it off
          • [ ] Toss anything broken, expired, or useless
          • [ ] Throw away expired mail, boxes, or food wrappers
          • [ ] Take trash bags to dumpster
        • [ ] 🎮 11:15–11:45 AM – Deep Rock Galactic Survivor ROUND #1

          • [ ] Play 1 full dive, no retries
          • [ ] Let brain reset, hydrate after
        • [ ] 🧽 11:45 AM – 12:30 PM – Dish Duty & Kitchen Reset

          • [ ] Load dishwasher or start hand-wash
          • [ ] Wipe down counters
          • [ ] Throw out spoiled or moldy food
          • [ ] Clean old containers from fridge
          • [ ] Wipe fridge shelves if sticky
        • [ ] 🧺 12:30–1:15 PM – Laundry Load + Litter Box

          • [ ] Start laundry (clothes or bedding)
          • [ ] Bag dirty laundry pile
          • [ ] Scoop & bag cat litter
          • [ ] Wipe litter box area
          • [ ] Replace liner/litter if needed
        • [ ] 🔍 1:15–1:45 PM – Research Session #1

          • [ ] Choose ONE topic:
            • [ ] Sunshine/Moonlight remote access
            • [ ] Spotify/Lidarr integration
          • [ ] Take notes
          • [ ] Set timer for 30 minutes
        • [ ] 🧼 1:45–2:30 PM – Deep Clean Time

          • [ ] Vacuum entire apartment
          • [ ] Focus on high-traffic + under furniture
          • [ ] Wipe doorknobs, handles, switches
          • [ ] Clean bathroom surfaces (counter, toilet)
        • [ ] 🚿 2:30–3:00 PM – Body Hygiene Reset

          • [ ] Long shower (scrub + rinse thoroughly)
          • [ ] Wash hair
          • [ ] Clip nails
          • [ ] Apply deodorant
          • [ ] Change into fresh clothes
        • [ ] 🎮 3:00–3:30 PM – Deep Rock Galactic Survivor ROUND #2

          • [ ] Play one dive, full reset
          • [ ] Don’t rush – enjoy it
        • [ ] 🍱 3:30–4:15 PM – Souper Cube Meal Prep Planning

          • [ ] Review fridge & freezer contents and make a list
          • [ ] Toss expired/freezer-burned items
          • [ ] Research what you can make with freezer contents
          • [ ] Write grocery list
        • [ ] 🧦 4:15–5:00 PM – Finish Laundry + Light Resets

          • [ ] Move laundry to dryer or hang
          • [ ] Put away clean clothes
          • [ ] Wipe mirrors, surfaces
          • [ ] Put away 5 things per room
        • [ ] 🔍 5:00–5:30 PM – Research Session #2

          • [ ] Explore whichever topic you didn’t do earlier
          • [ ] Take notes
          • [ ] Avoid rabbit holes
        • [ ] 🎮 5:30–6:00 PM – Final Deep Rock Galactic Survivor ROUND #3

          • [ ] Play one last game round
          • [ ] Let yourself feel proud of today’s progress
        • [ ] 🌙 Evening Wrap-Up (Anytime Before Bed)

          • [ ] Quick sweep for anything out of place
          • [ ] Review grocery list
          • [ ] Reflect: What worked? What needs tweaking?
        1 vote
        1. [2]
          papasquat
          Link Parent

          Really interesting! I'm an Obsidian user as well, but not a power user. Every time I've tried to delve into Obsidian's advanced plugin ecosystem, I start off very excited and then quickly get overwhelmed with decision fatigue, my eyes glaze over, and my brain becomes foggy (may be a familiar feeling to you as someone with ADHD).

          What plugins are you using to accomplish this, and what does the file structure look like? Is that Kanban board a single markdown file behind the scenes? Does the LLM just constantly modify that one file via a plugin? If so, how do you gain some assurance that it's not just randomly deleting or modifying items without your say-so?
          How do to-do list items get cleaned up?

          Most importantly for me (and this is something I've experienced literally every time I've used a to do list), how do you prevent the little onesy and twosy to do list items that you never complete from just accumulating into a massive pile of broken promises to yourself over months? Do you just bite the bullet and drop items you haven't touched in a while? Is there some backlog they go to?

          1. CrypticCuriosity629
            (edited )
            Link Parent

            Haha, see, I'm the kind of ADHD that loves experimenting, so I have the opposite problem: I get software like Obsidian and get a shot of dopamine every time I install a plugin and play around with it, only to forget about it and never use it again hahaha

            So for what you saw in the screenshot, there are really only two plugins at work, and those are the Kanban Plugin and the Tasks Plugin. The Tasks plugin is nice for linking tasks across different notes and has other features.

            And actually, yes the Kanban board is a single markdown file, and you have the option of viewing it as markdown so it's easy to bulk add tasks if you need to. It works around a simple format, where each column is just a ## Header, and each card is just an item on a checkbox list, with indented checkboxes getting nested within. Outside of that, it uses all the same markdown, so you can link to notes within cards etc.
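
            Since it's all plain markdown, you can even script against the board file; here's a tiny sketch (the file path and column name are made up) that appends a card under a given column:

            ```python
            from pathlib import Path

            BOARD = Path("Life Tracker.md")  # hypothetical Kanban plugin board file
            COLUMN = "## Today"              # each column is just a level-2 header

            lines = BOARD.read_text(encoding="utf-8").splitlines()
            idx = lines.index(COLUMN)  # raises ValueError if the column is missing
            # A card is just a checkbox item under the header; indented boxes nest.
            lines.insert(idx + 1, "- [ ] New card added from a script")
            BOARD.write_text("\n".join(lines) + "\n", encoding="utf-8")
            ```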

            There's also Obsidian Projects, which has Kanban board functionality; however, unlike the Kanban plugin, each card is an individual note. It's probably better for overall project tracking, when each note is a project itself, but that gets a bit too granular for me.

            So the Ollama plugin's actually pretty simple, though you have to have Ollama installed separately with the query server activated. You can either highlight a selection or call the plugin using the Ctrl+P command. You set up prompts to do certain tasks; it comes loaded with sample prompts like "Summarize", "Explain", "Expand", "Rewrite - Formal", "Rewrite - Casual", "Rewrite into Bullet Points", etc., but the real functionality is making your own. The one I made for myself converts things into detailed todo lists and directs the model on a format, or takes a bunch of thought vomit and creates organized notes from it.

            Do you just bite the bullet and drop items you haven't touched in a while? Is there some backlog they go to?

            The Kanban categories you see in the screenshot really helps me track those things. I'll kind of explain the method to my madness.

            So I really wanted to have something that tracks everything from my thoughts, to ideas, to actual todo items, and addressed the problem you mentioned. Usually I make tons of lists and promptly forget about them, so the idea was to combine all the reasons I make lists into one board, which is to track ideas and to track things I need to do.

            Also, the killer categories that address the problem you mentioned are the "On My Radar" and "On the Backburner" categories. They are kind of where things sit when I want to remember them but don't necessarily have interest in doing them right now. On my radar are things I really want to keep visible, while on the backburner are things that I want to track but can be, well, put on the backburner.

            I'll constantly move things to and from these categories as I pick them up and drop them, but outside of that, it's psychological: just giving yourself space and patience and not feeling guilty about doing that. For me it helped a lot knowing I was tracking these things, so it's not like I'd completely forget about a project I start, and it's easier to pick projects back up because of the notes in the cards.

            Inbox

            • Anything that comes to mind goes here as I think about it, and later I will divvy it out to the other lists as needed.

            Need to Research

            • Things that I don't have specific plans on actually doing yet because I don't know enough about the task to start. Usually for ideas or projects that come to mind.

            Ready for Action

            • After researching something and getting an active plan together I queue it up here. I'll move it to another category depending on when I want to do it and what it is.

            In Progress Projects

            • Anything I'm currently working on, but not really specifically scheduled.

            Today

            • This is the meat of my list, I'll move things from other categories here, then I'll select it and use the Ollama Plugin to create a scheduled list if needed.

            Tonight After Work

            • Things I need to do when I get home. Again, sometimes I'll use the Ollama plugin to generate a scheduled list based on what time I get home and what I need to do.

            This Weekend

            • Any tasks I need to get done on the weekend or appointments etc. Same use of the Ollama plugin.

            Next Week

            • Any tasks or appointments I need to do next week.

            On My Radar

            • Anything that doesn't have particular urgency but that I want to keep in mind.

            On the Backburner

            • Anything that doesn't have particular urgency that I want to track but don't need to keep around.

            Completed

            • Anything dragged here gets automatically checked off as complete. Self explanatory.
  16. [5]
    stu2b50
    Link

    Personal Life:

    I use it sometimes for queries where I feel like Google won’t return good results. An example is that I was trying to remember an economic school of thought of hyper laissez faire economics which was introduced to a South American country by the university of Chicago. So I basically just wrote that into Claude. Got Chicago Boys, which was the right answer.

    I also use it to craft CLI commands. For example, I'll just ask Claude to write an ffmpeg command that will take every m4a in the folder foo and convert it to an mp3 in the folder bar.
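
    What it hands back usually boils down to something like this (sketched here as a Python loop around ffmpeg rather than a shell one-liner; folder names from the example):

    ```python
    import subprocess
    from pathlib import Path

    src_dir, dst_dir = Path("foo"), Path("bar")
    dst_dir.mkdir(exist_ok=True)

    for m4a in src_dir.glob("*.m4a"):
        mp3 = dst_dir / (m4a.stem + ".mp3")
        # ffmpeg infers the mp3 encoder from the output extension
        subprocess.run(["ffmpeg", "-i", str(m4a), str(mp3)], check=True)
    ```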

    For work, I mainly use it to craft splunk, promql, and sql queries.

    I also use it as documentation sometimes when the actual documentation just sucks. Mostly spark. Idk what they were smoking.

    Sometimes I use it as a debugger of last resort. Once I threw in a gnarly log with no further context and it actually did solve it, which was impressive.

    I’ve been dabbling in more extensive use for coding but I haven’t much to report on that.

    So far, I wouldn’t say it’s revolutionary, but if I were to give it a price, I’d say at least $200-$300/month of value, for professional use.

    4 votes
    1. [4]
      xk3
      Link Parent

      economic school of thought of hyper laissez faire economics which was introduced to a South American country by the university of Chicago

      I mean... if I type that into Google, I get "Chicago Boys" in the 3rd and 4th results. Is it just that the "rubber ducky" UX of LLM prompt interfaces encourages more lengthy and casual searching? There's no doubt that Google's results have worsened over the past few years due to shifting priorities, but I wonder how much of this is real vs. perceived, or simply due to the different interface.

      4 votes
      1. DynamoSunshirt
        Link Parent

        Lol, I tried this in DuckDuckGo:

        laissez faire economics introduced to a South American country by the university of Chicago

        The top result was this extremely relevant document: https://fppchile.org/wp-content/uploads/2021/10/Axel-Kaiser-the-chicago-boys-and-the-revival-of-classical-liberal-economics-in-chile.pdf

        Strange how many LLM success stories seem less remarkable when benchmarked against a different approach.

        10 votes
      2. [2]
        stu2b50
        Link Parent

        When I google the same thing I didn’t get any directly relevant results. The first link is the Wikipedia page for laissez faire, then the Wikipedia page for the Chicago school of economics (related but I’d have to dig more).

        Then there’s an investopedia page for laissez faire, an Oxford page about the Chicago school of economics, then “the economical history of Latin America” on the university of Chicago press.

        The last result is “Digital Competition Regulations Around the World”, which I'm not even sure how it's relevant to the query.

        1. xk3
          Link Parent

          hmm... it's strange, though not unexpected, that I see different results from you.

          I guess AI companies haven't caught up to Google's level of over-personalization. It seems silly that a map should be anything but a map. But we live in the 21st century I guess

          2 votes
  17. [5]
    BroiledBraniac
    (edited )
    Link

    I was skeptical of AI until this year, when the models became good enough to relieve my decision anxiety. My best use case has been spitballing ideas off of them in my personal life or professional life when I feel too afraid to move forward. It has worked out quite well. I have also used it more practically to write first drafts of boilerplate tests and technical documentation, which again points back to removing 10-30 minutes of decision anxiety. So I think it truly depends on who you are, but for my use case they have truly improved my life.

    4 votes
    1. [3]
      snake_case
      Link Parent

      I kinda feel like the models got worse this year, but it might just be me.

      When they first came out in 2022 it all felt like magic, but after working with them for three years (I work with these models for my job at an analytics company), I'm so used to the way they word things and how wrong they can be that now I've got this bias whenever I see anything that sounds like how the AI talks.

      Also, I read the AI articles on here whenever they're posted, and a few of them have said, basically, that developers are asking the AI to do something they could have just read the documentation and figured out, and what the AI gives them is generally wrong enough that it would have been faster to read the documentation.

      I found myself doing exactly that, and having that exact problem, whereas in 2022 it seemed like the answers I got were more correct.

      5 votes
      1. [2]
        mordae
        Link Parent

        They did. Same topics, and Sonnet is now toggling between lecture mode and being a brain-dead sycophant, instead of actually engaging.

        1 vote
        1. snake_case
          Link Parent

          We've had alright luck with 3.0, but they're getting rid of it in the fall… I've never been a fan of 3.5. We use a really strict prompt with 3.5, and we didn't need one with 3.0.

    2. papasquat
      Link Parent

      I like the idea of using them this way: giving you a starting template to work off of. When I've actually used them this way, though, I end up with something that looks like an okay starting point at first glance, but once I start working with it, I end up having to tweak the template, move things around, and fix formatting issues, and before I know it I realize it would have been faster to either start from scratch and build the framework of what I'm trying to write as I'm writing it, or to use an existing document as a template and manually edit the parts that need to be new. I'd say it makes those types of tasks take an extra half hour or so for me to rework everything it spat out, unfortunately.

      3 votes
  18. Sodliddesu
    Link

    I don't work in a technical field but I do communicate for them sometimes. I'm usually quick to spot poor uses of AI and have had coworkers bring me things they're about to send out that they ran through AI tools only for me to find multiple glaring errors. I was instructed by a supervisor to run things through AI tools and I point blank told them no - "All the words you're getting are my own. If you'd like to run them through after, that's on you."

    One person recently sent me their annual performance review that championed their efforts for something that ended 10 years ago. I told them I would've been happier if they'd sent me a blank file and downgraded their rating.

    I could continue to rail against them, but you get the point: AI has made my job harder. Try explaining to the executive level the proper amount of time for video and photo editing when "[their] kid does it in thirty minutes with AI." I guess, technically, I've used 'AI' tools for quick photo editing of family photos on my phone, but even then I'd rather import the RAWs I shoot and edit them by hand. Hell, I manually color grade my Steam profile pictures after putting John C. Reilly's head on Julian from Trailer Park Boys.

    4 votes
  19. Reapy
    Link

    For me it is very useful. I've been a developer for a bit over 20 years, but I've worked for smaller companies most of the time, so a lot of my tech knowledge is self-taught beyond what I learned in school around 2000. Most of that is outdated short of the algorithms, and I'd argue the school was behind the current tech stack at the time anyway.

    At my current company we can't put our code into an AI, so I haven't run it with any context on what I'm doing or had it fill out code for me. Instead, I use it as a question engine for everything from basics to general verification of simple questions. I'm currently using it a lot for Python, and I generally trust the results for the types of questions I'm asking; I've learned and discovered a lot in using it.

    My reason is that I've had to basically teach myself most everything in my career. I was taught C++ in school and that is it. Leaving college, I had to write in C, which was a very hard step back from the object-oriented style I was taught, and I was also outside webdev short of hobby projects I built. After a few years (and other stuff not worth mentioning), I had settled on a core of batch/bash, Perl, and Java, depending on the scale of the task. I've since replaced the Perl part with Python because, well, Python, so good.

    What I've noticed over the years is that, learning and building on the job, I discover a large number of holes in my knowledge. I also don't read programmer blogs or anything like that, because generally I dislike programmer discourse (another topic in itself), so I really only learn about things when I'm doing them and have time to research approaches. There was always that initial burst of learning with books and tutorials when picking up a language, but it's easy to forget or not mark particular things as important when you don't know the language. In addition, languages do change and update; Python keeps adding new things in its versions, and if you aren't digesting all of that, it is easy to miss things.

    Another thing is that as I get older (almost 46), the syntax doesn't stick as well, and it takes me a while to get into the flow of whatever language I'm working with. In that in-between time, I am googling a lot for common things, as I will forget control structures or just simple things like how you open a file in this language. While Google is mostly fine, ChatGPT is great because you can add your specifics to the examples and just ask it about new things; it also provides topics to google for further research.

    By running my google questions through ChatGPT instead, I've learned about a bunch of different libraries and techniques. I also like to ask it things like what's the most Pythonic way to do X, because I tend to still think in Java, and then I can see if I like what it's showing. These kinds of questions are best answered by 'what's the most frequent thing people do in their code', and I feel like the AI is great at showing you this.

    That all said, I'm also hugely scared of AI and people's over-reliance on it, as has been mentioned numerous times in the thread. I have been developing for 20+ years and sat at a PC my whole life; I know there is a lot I don't understand, but I have a reasonably good feel for when I need to double-check or pull in more sources on the output. I also know that I don't double-check EVERYTHING all the time when I ask it things that aren't important, but let's be real, I've taken many an internet comment/youtube video as truth as well, so it's not a huge overreach.

    But yes, I don't like normalizing all of humanity through ChatGPT; after reading some of it, you can really feel the patterns of its tone. If you think about it, ChatGPT is like those AAA blockbuster movies that are just all the same. They are okay and expected and never BAD, but they will never be good or even close to great. I feel like we should aim for more than lukewarm in the things we produce. It's going to get ugly for a bit, honestly, but I hope there is a giant backlash down the line.

    In summary: AI is incredibly useful, incredibly useless, and potentially a very dangerous trap to fall into.

    4 votes
  20. [3]
    ShroudedScribe
    Link

    Personal successful uses of AI:

    • Transcribing videos. I did this while I was taking some courses that presented some information exclusively in videos and not also in a written format. There's an open source, local application I used (Vibe) that does an amazing job at this.

    • Converting messy formatting (copy-paste from sources that are not intended to be copied from) into tables or other segmented text. I also did this for some courses, using ChatGPT. However, for tables, I still had to paste the output into Excel before being able to throw it into a Word document.

    • Solving logic problems in specific programming languages or platforms. Questions like "what are my options for looping through data when <specific scenario>?" result in outputs like "Create a queuing mechanism," and other ideas (there's a sketch of that idea right after this list). Sometimes the provided code samples work, other times they do not, but it does help me (as a low-knowledge programmer) discover different ways to tackle problems. (But I primarily try to learn from actual sources when possible.)
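
    For what it's worth, the "queuing mechanism" suggestion usually amounts to something like this sketch (the record names are placeholders): each item can enqueue follow-up work instead of recursing or nesting loops.

    ```python
    from collections import deque

    queue = deque(["record-1", "record-2"])  # placeholder work items
    seen = set()

    while queue:
        item = queue.popleft()
        if item in seen:
            continue  # skip anything already handled
        seen.add(item)
        print("processing", item)
        if item == "record-1":
            queue.append("record-1a")  # processing may surface more work
    ```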

    Ways I've personally observed others successfully use AI:

    • Ideas for social media posts for a business. In order to have a positive social media presence, a business should have a regular schedule for posting to their accounts. Coming up with new subjects frequently can be difficult. I've seen someone who uses ChatGPT to help come up with topics, but uses that output very generally and crafts both the final wording and image content themselves. It can even guide the business on what photos they should be taking (so they can avoid using stock images or AI generated art).

    • Basic language translation. Going back and forth between English and Spanish, or any other language combination, is typically pretty bad if you use Google Translate. It seems to get you maybe 30% of the way there. ChatGPT supposedly gets you around 80%. I cannot validate this myself as I do not have foreign language knowledge, but others seem pretty happy with it. However, if you are a business and decide to offload what would be a translator employment position onto ChatGPT, you are doing something terrible. There are so many nuances in different languages that you should not, as an example, publish a book or game in a language you haven't had properly translated.

    There's a lot of places I've seen AI used that disgust me. But most of the time, I'm disgusted because people don't even proofread. Resumes that include the text "let me know if you want to tailor this to another job description." People fighting with (human) support engineers because they got a different response from ChatGPT. People not attending meetings because they'll just skim the AI summary.

    4 votes
    1. [2]
      shrike
      Link Parent

      There's a lot of places I've seen AI used that disgust me

      The NFT and Blockchain crew have jumped on AI hard

      They're just producing semi- or fully automated slop to fill the internet in the hopes of making a few ad bucks. There just was a full-ass D&D-themed "news" site where every author was 100% fake down to their profile pictures. Just spewing out 100% hallucinated news about new expansions and content.

      3 votes
      1. ShroudedScribe
        Link Parent

        There just was a full-ass D&D-themed "news" site where every author was 100% fake down to their profile pictures. Just spewing out 100% hallucinated news about new expansions and content.

        There have been cases of this in more mainstream publications as well. Sports Illustrated is the biggest one I can immediately think of.

  21. OBLIVIATER
    (edited )
    Link

    I don't use it too frequently, but it recently did some AMAZING things for me by helping me put together a google script to salvage my team's awful workflow processes in google sheets. I'm sure a real engineer could have put together a better solution in about 15 minutes, but my company refuses to dedicate even the smallest amount of engineer time to making our internal tools better so I'm stuck vibe-coding band-aid solutions to problems we created years ago because no one thought about scaling.

    It took me a few hours of trial and error, but I actually put together a few pretty impressive google scripts that cut the time needed for my workflow in half.

    4 votes
  22. DeepThought
    Link

    In software dev I have found it actually useful in a couple of limited situations. The first is in research. It is really good at helping me find out about products, libraries, or concepts that I might not have had the vocabulary to search for before based on a described functionality. It is also pretty good for getting a high level grasp on products that lack well structured documentation. But I do use that with caution, always referring to the primary sources once I have the mental scaffold to understand it.
    I have also found it to be a significant time saver when generating scaffolding or doing low-complexity tasks. Anything that I can check for correctness at a glance.
    And that's pretty much it. I have disabled all AI autocomplete in my IDEs, since its nondeterminism kills my productivity. It turns my tools into something I have to constantly wrangle to do what I want, whereas traditional tools, after enough practice, become an instinctive extension of my body. And with prompting for complex tasks, I find it to be a hindrance. It is often confidently wrong, and even though it produces something that initially looks correct, it doesn't quite do what I specified. And debugging its errors takes me much longer than debugging my own, since I have years of experience dealing with the sort of problems I cause.

    4 votes
  23. Paul26
    Link

    TLDR: I have no use of it at work, and very little in my personal life.

    They are really pushing for people to use it at work lately. Not sure why, but it’s been the message from the top. In my role I have no use for it. I tried to do something based on my boss’s example in a meeting, but Co-Pilot is still far behind what I would need from it to actually make it useful for my work.

    In my personal life, I see it as a better search engine. Gets me the answer without me clicking 2-3 different links until I find a page that has what I need. That said, I don’t like how much power AI uses and I hate knowing that every use supports an industry that is so thirsty for electricity. So I avoid it as much as I can. If I can simply use Wikipedia, I do. If my regular search yields ok results, I don’t reach for the AI app.

    I know people whose English is not that great, who love AI because suddenly tasks that were tedious and tough for them are easily handled by AI. For example, polishing one's resume, writing an email in the right tone, getting a first draft of a piece of writing. I mean, it opens another can of worms… who's actually writing if AI took the first stab at it? Is it your own thought anymore, or a mix of what the AI generated and one's own thought? But that's another story.

    3 votes
  24. [2]
    BradleyNull
    Link

    I've found it to be quite useful for my purposes. I think I use it differently than others, but I'm not sure in what way. When speaking with others, it seems like they expected real answers or expected it to do their work for them? I've always treated it like a better search engine.

    AI does a great job of searching things where you need to do a lot of "AND" stuff. For example, if you search Google for a list of companies in Europe that have x quality and y quality and z constraints, whatever Google spits out may or may not match your search, because it doesn't deal well with all the search constraints (even with the advanced search options). Moreover, you still need to pore over the individual websites of all these companies to further confirm they are what you're searching for.

    With an LLM, you can combine these search terms and ask it to parse the "About Us" section of each company's website, and you're more likely to get the information you're looking for. I think the difference between myself and other people I talk to about this is that they would take the list generated by the LLM and present it like "Work done!" For me, this is the start of the work, and we still need to comb over the websites ourselves and confirm the information. But the initial search is quite a bit faster.

    3 votes
    1. DynamoSunshirt
      Link Parent

      This is a great use case! I think a lot of ire comes from people who work in tech who are now expected to use "AI" to significantly increase throughput. When you're forced to use a tool whether you like it or not, resentment follows quickly.

      Imagine you work as a writer and your boss just heard about semicolons, insists that you use them every day, constantly asks you how you use semicolons to increase productivity, and every communication from top-level management brags about semicolon deployment and how semicolons are going to change the world; who wouldn't get pretty sick of semicolons pretty fast?

      3 votes
  25. [3]
    WrathOfTheHydra
    (edited )
    Link

    I do not find typical generative AI to be helpful either in my job or my hobbies. I could theoretically force the use of it, but as stated elsewhere in this thread, it would end up introducing more issues than solving them.

    Putting the ethics of AI model sourcing aside, something I have appreciated is companies creating models and functions that can run on your machine alone to help with trivial stuff. DaVinci Resolve, for example, has added some awesome features to its editor, with a clear distinction between client-side and server-side processing (where the server-side stuff clearly asks for your consent). Things like multi-take editing based off pre-written scripts and auto green-screening. It's the type of stuff AI should be used for: getting rid of the tedium to help creators do their thing.

    edit: typos from mobile

    3 votes
    1. [2]
      shrike
      Link Parent

      I saw one creator I follow use a tool like that to edit a "podcast" (It's video, so not a podcast but that's a different discussion)

      They had 3 people on the show, a camera pointed at each. Then the system just automatically edited the video so that whoever was speaking was on screen, within minutes.

      Something that would take a human maybe an hour or two of boring manual work.

      4 votes
      1. WrathOfTheHydra
        Link Parent

        Yep! Same program as well. They've really been overhauling it, and once again I think that is part of the client-side tools. Awesome regardless.

  26. Eji1700
    Link

    It's objectively helpful.

    I'm not sure it's going to be sincerely helpful enough to justify its cost once we exit the "distribute free heroin" phase of pricing.

    3 votes
  27. gary
    Link

    I just got finished using ChatGPT to write a script for me that hits an API and returns the data I need. The script filtered down to the data I'm looking for, broadly, and wrote it to a file. I then asked ChatGPT to use jq so I could glance at certain fields faster, manually cleaned up some of the file, and then had ChatGPT generate HTML with the final data so my colleague could visualize something easier to digest than an array of JSON objects.

    • I didn't have to look for the relevant API endpoints of the third party service.
    • I didn't have to look up authentication on the API endpoints.
    • I didn't have to write the script.
    • I didn't have to fix syntax errors from fat-fingering some keys.
    • I didn't have to look up jq syntax; I always forget jq syntax.
    • I didn't have to write HTML.

    The amount of time and frustration I saved is not world-changing, but it is very measurable and noticeable to me.
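
    The generated script was roughly this shape (the endpoint, token, and field names here are stand-ins, not the real service):

    ```python
    import json
    import requests

    API = "https://api.example.com/v1/items"       # stand-in endpoint
    HEADERS = {"Authorization": "Bearer <token>"}  # auth scheme varies by service

    rows = requests.get(API, headers=HEADERS, timeout=30).json()
    # Broad first pass: keep only the records of interest.
    keep = [r for r in rows if r.get("status") == "active"]

    with open("data.json", "w") as f:
        json.dump(keep, f, indent=2)
    # Then eyeball specific fields with, e.g.:  jq '.[].name' data.json
    ```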

    3 votes
  28. [5]
    wundumguy
    Link

    AI has been massively helpful for me both personally and professionally. I really like AI, but I dislike when others use it to be lazy. Garbage in, garbage out. I think those that WANT to like AI but don't get value out of it could use some advice on how to use it better. Here are a couple of examples that I like.

    Job searching example

    As an example for how it's useful for me personally, it's been massively helpful for job searching. Using Claude, I started a new chat and attached my resume. I enabled "Web Search," "Research," and "Extended Thinking" modes and described to it the ideal job I'm looking for, types of companies I like, and job titles. It asked a few clarifying questions, then it searched hundreds of sites for job openings and the like, and delivered a report of companies with job openings worth applying to. It listed several companies I had not already considered, which was very helpful.

    Now I'm using it for interview preparation. In Claude, I've created a project, attached my resume and a prompt that says "Give deep dives on companies and products specified by the user. Find the company's competitors, their health, an understanding of their culture, their strategy, their customer base, and potential for future plans (eg IPO, acquisition, privatize, etc.)" etc.

    When I have an interview with a company, I'll create a new chat within this project, turn on Web Search, Research, and Extended Thinking modes, and attach a PDF of the LinkedIn profile of the person I'll be speaking to. This gives me a comprehensive report on the company with speaking points that emphasize my strengths and fit for the role. With the other person's LinkedIn profile, I can generate sample interview questions that they may ask based on their background. I've used this generated report as context for ChatGPT to have a back-and-forth with its voice chat to have a realistic mock interview.

    This story doesn't have a happy ending yet as I'm still on the market, but this has been massively beneficial in helping me prepare for interviews and understand a company's position in the market.

    Professional use case example

    I'm a product manager in software. I've attached my Claude to an Atlassian MCP server which gives it access to Confluence and Jira. At my company, we have an "Epic template" which helps create thorough specifications for engineering to work on projects and features. I attach that as context, and I attach a prompt for more context that describes my company, what its products do, how they fit together, and my products specifically. I'll start creating the epic once I have a fully formed idea of what I expect it to look like after conversations with customers, engineering, and other stakeholders.

    Then, I describe to Claude the epic I want to create in detail. Sometimes I do this with voice. I've spoken for up to nine minutes before with a stream of consciousness about the feature including its value, its architecture, its downstream impacts, its dependencies, the scope of the effort, etc. Then I ask Claude to output an Epic in the specified format. I tell it to send this draft to Jira and give me the link. I'll then click the link to review the Jira, correct errors, add things it missed, delete stuff that's unnecessary, add background info, and publish it when I'm satisfied with it. I may only be satisfied with it after getting other stakeholders to review it.

    3 votes
    1. [4]
      gary
      Link Parent

      These are just hypotheticals and you're not hooking up your company's Confluence/Jira to your personal Claude right? Cause if it is, that's usually a fireable offense, so be careful.

      1 vote
      1. [3]
        stu2b50
        Link Parent

        Considering they’re using MCP servers, I’d assume their company is paying for a corporate Claude license.

        It’s not like there’s any other reason to have a MCP server other than Claude/GPT integrations, so it’d be a bit weird for their company to not know.

        1. [2]
          gary
          Link Parent

          Company is likely paying for a Claude license, but it wouldn't be the first time I've seen people use their personal accounts at work, negating the work license's data protections. The example of job hunting threw me for a loop. Either there are 2 licenses and things are kosher, or 1 personal license that work data should not run through, or 1 business license that job hunting should not run through (low risk).

          1 vote
  29. xk3
    (edited )
    Link

    Any time I, or any of her direct reports write a proposal, a policy, a report, or basically anything meant to distribute to a wide audience, they insist on us "running it through copilot"

    At first glance this seems completely insane. But if we ignore the very real possibility of corporate espionage à la LLM, using an LLM to help prepare something for a wider audience seems somewhat useful. That being said, her suggesting that you wholesale replace your writing with the LLM output seems absurd.

    I'm 99% sure that most of this is influencer clickbait capitalizing on FOMO to keep the shared delusion of LLM's usefulness going

    Yes, that is a big part of it. But also:

    Managing peer pressure, choosing to think independently, and actively pulling at threads is a learned skill. A lot of people don't know how to even get started. Humans are great at imitating and rote learning--but there is something that separates the person who invented kiteboarding from the Olympic kiteboarders. It's hard to pinpoint--and it doesn't matter much in everyday life (or at least it didn't matter as much until now, when everyone has a near-infinite misinformation machine in their pocket: weaponized social media or LLMs).

    It's not impossible that people accidentally invent something, but people who have learned to pull at threads are more successful at doing so. It's not that everyone who hasn't developed this skill of skepticism and forward action is a blind cultist who is only able to copy what is taught--but they are less likely to doubt the status quo and push against indoctrination. Especially when the system encourages offloading any critical thinking to the great word regurgitator in the cloud.


    There are cases where LLMs are useful, like you've mentioned. Translation of abstract concepts, summarization, searching for things that are difficult to describe (although I think that's largely an artifact of the TOMT class of subreddits--with the success of LLMs, this source of contextual vectors will naturally erode), getting over procrastination, bridging the left-brain/right-brain divide (i.e. creating vs. editing thinking; some people are great at starting with a blank page, but most people are not)... But I feel like we need more writing criticizing LLMs so that they go away faster. They are a great tool, but they are already severely misused by the general public.

    2 votes
  30. [2]
    knocklessmonster
    (edited )
    Link

    I use Copilot (M365 Enterprise at work, personal M365 sub at home) or search-integrated AI (generally DuckDuckGo) at home. I use it to kick off searches these days because it does a decent job of combing search results to help me solve problems at work (cloud/DB/Linux admin at an MSP). I generally check any offered scripts first, and I tend to use it more for links to information than for the output it provides itself, to avoid being trapped by hallucinations. Most of the scripts I need come from one of the results, which I tend to check anyway.

    I don't think you're a Luddite or behind at all. I think using content it generated, especially for human work like communications, is a step too far. I like to actually write my emails to clients so that I can defend every word I send out.

    2 votes
    1. shrike
      Link Parent

      Local and personalised "agents" like this will become the norm.

      Having them in the cloud is just creepy to me, but on the other hand local models are becoming better and better and computers are getting faster. Even my M1 Mac Mini can run local models at decent speeds.

      Then you can have a local "agent" that knows what you like and what you already know - you can have it do tasks for you and it can pre-filter or organise the results customised just for you. Like it finds a new article about something, but then it cross-references the RAG database of your Obsidian setup and sees you've already written about it -> no need to alert, just add it to a summary somewhere, or add a link directly to the page.

      The outside world will just see HTTP calls or a browser, nobody knows it's an automated agent doing it.

      3 votes
  31. Notcoffeetable
    Link

    No; my interaction with LLMs is occasionally looking at the LLM output at the top of a Google page. Often that's when I'm looking for a quick python/pandas design pattern or some random science question. In the case of code, what I'm looking for is usually something I know how to do but don't remember the specifics of the library's methods. In the case of science, I check the references; usually it's some site that is reasonably trustworthy or at least points me in a helpful direction.

    That said, I would not miss LLMs if they disappeared right now. I'd prefer they did. I'm currently working to rebuild our data stack. One thing AI has done is inspire this cottage industry of data vocabulary and systems to support it that really are quite good (dbt, Coalesce, Fivetran, etc.). The annoyance I have is that many vendors come in and spend 30 minutes of their allotted time trying to show me some AI tool for data pipeline creation that we have no interest in.

    2 votes
  32. [2]
    gpl
    Link

    I am largely an AI skeptic and would not use it for personal reasons or for interpersonal communication. I think I can communicate just fine. I do use it for some coding tasks and it has increased my productivity there.

    When I'm done writing code, I sometimes ask it to identify things like possible memory leaks. If I can't decide how to structure a project, I can ask it to give me some options and choose the one that makes the most sense. If I am having trouble understanding a new codebase, I can ask it for a high-level overview to orient myself. I use it to write documentation for my code when done, something that I would otherwise not do, or not do as well, because usually I or a handful of others are the only ones interacting with what I write.

    These are all things I could do on my own, but it can be helpful to offload some of that onto the AI, and they are all things where I can check the result pretty trivially. I am a scientist, so ultimately for me the code is secondary and I am more interested in the result than the actual coding output.

    2 votes
    1. shrike
      Link Parent

      The quality of the model is a BIG factor here.

      Our corp-approved Github Copilot with Claude 3.7 is fucking useless. It just invents APIs out of its ass with reckless abandon.

      On the other hand it's pretty decent at figuring out what went wrong when you give it a stack trace and access to the relevant sources.

      It also does a good job at replicating repeating structures, like unit tests. Just write one or two, when you write the third one, it'll look at the name of the function and get the actual body of it right 80-99% of the way.
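
      To make the "repeating structures" point concrete: given a couple of tests shaped like the ones below, naming a third test is usually enough for it to fill in a plausible body (the function under test is invented for the example):

      ```python
      import unittest

      def slugify(s: str) -> str:  # invented function under test
          return s.strip().lower().replace(" ", "-")

      class TestSlugify(unittest.TestCase):
          def test_lowercases(self):
              self.assertEqual(slugify("Hello"), "hello")

          def test_replaces_spaces(self):
              self.assertEqual(slugify("a b c"), "a-b-c")

          # Typing "def test_strips_whitespace" here is usually enough for
          # the completion to supply the assertEqual line on its own.

      if __name__ == "__main__":
          unittest.main()
      ```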

      3 votes
  33. [2]
    DistractionRectangle
    Link

    For LLMs, I treat it like a rubberduck that can talk back. In that capacity, it's amazing. I find it useful in the traditional sense, that the act of articulating and explaining my issue often helps me figure out my problem. When that fails, the fact that it can talk back helps me explore the problem space. Identify gaps (the you don't know what you don't know problem), pick up/pin down specific jargon/search terms, etc.

    I find the local models are perfect for this. Just useful enough to be a pseudo-sentient rubberduck, but no more. It doesn't think for me, and the results are far from good enough for me to rely on it like a crutch.

    Beyond LLMs, I've also encountered some other cases where AI has been helpful and perfection isn't necessary (for me), like audio transcription, OCR, and image segmentation/background removal. Perfection would be nice, but short of that, it gets me 95% of the way there, and the time savings are still substantial.

    2 votes
    1. shrike
      Link Parent

      It's pretty good at validating ideas.

      Like I heard somewhere that you can actually use opentofu to control docker.

      I gave Claude my setup and asked it whether it would be a good idea. It gave me 3-4 different ways to do it along with pluses and minuses for each.

      tl;dr I'll be switching my compose file setup to work with opentofu with an S3 compatible state management this weekend :D

      1 vote
  34. [3]
    lackofaname
    Link

    I write a lot of microcopy. Where I work has adopted an internal chat tool that can specifically reference our style guides.

    You'd think that would be where AI takes over my job, but honestly, the tone is never right, it constantly introduces poor phrasing choices that go against the style guides it's supposed to be referencing, and often its suggestions simply make the copy less accurate.

    Nine times out of ten, it's faster for me to ideate on my own or spitball with a coworker, but I'm supposed to be championing this tool. Until the department decides it just doesn't give a shit about client experience (wouldn't put it past them entirely), I'm increasingly less worried about my job in the short term.

    The same chatbot platform has been leveraged to power our internal information search system, and this is where it genuinely functions worse than the basic search engine that preceded it. I can no longer search our internal documentation the way I used to be able to, and it's infuriating. (I think some of the loss of function is also due to certain guardrails put in place, but it's hard to know exactly.)

    2 votes
    1. [2]
      shrike
      Link Parent
      "AI" search uses RAG and vectors, it doesn't store actual data it stores the ✨feels✨ of the data. So you might have a page that says "escalators", but when you search for exactly that, it doesn't...

      "AI" search uses RAG and vectors, it doesn't store actual data it stores the ✨feels✨ of the data.

      So you might have a page that says "escalators", but when you search for exactly that, it doesn't find it because the feels are wrong.

      Good "AI" search is hard to do, you need to combine just regular boring word search with the ✨feels✨ search to make it actually useful.

      2 votes
      1. lackofaname
        Link Parent

        Thank you for this very simplified (and funny) explanation! It prompted me to do a little reading on the topic, so now I know juuust enough key terms that I can maybe impress coworkers who are as knowledgeable about the workings of ai tools as me (not very) when I want to complain about our systems :D

        2 votes
  35. CrypticCuriosity629
    Link

    So I just wanted to add to my other answer, because I literally just used an AI agent I set up at work to automatically search and return a list of eco-friendly baseball caps under a certain amount and draft an embroidery request email to our embroidery vendor.

    In fact, it saved my ass, because this whole thing had dropped off my radar and I had forgotten to look into it. My boss asked me how it was going, and I told him I was in the process of getting everything together and would have something ready to show in 10 minutes (I hadn't started yet).

    10 minutes later I have everything I need for this project all queued up and ready to send.

    I figured I'd post this to show people the workflow I'm designing, using AI agents to save me time on a lot of "busy work" tasks in my job as a graphic designer/project manager at a small print and promotional shop.

    Keep in mind that I've removed identifiable information but kept this as close as possible to the output I got.

    I basically asked it this:

    Search Sanmar for all eco-friendly caps and hats under $23/cap, and return an itemized list ordered from least expensive to most expensive with a short summary of its eco-friendliness, as well as all colors, and cross reference these hats and return links to both Sanmar and Company Casuals using the same product codes.

    Then when you're done with that, return that list to me and draft a friendly email to our embroidery vendor based on the requested information in the attached email thread using the logos. We need two quotes, one for single color and another for 5 color and need them to advise on size.

    Finally, draft a friendly email to the client with this list, removing sanmar links, item IDs, and pricing.

    10 minutes later, it returned a list formatted like this:

    1. Port Authority® Eco Cap (C954)
      SanMar Link: https://www.sanmar.com/p/18283_SmokeGrey
      Company Casuals Link: https://catalog.companycasuals.com/p/18283_SmokeGrey
      Pricing: Starting at $3.61+
      Materials/Sustainability: 100% recycled polyester twill; unstructured, low-profile design with hook-and-loop closure.
      Colors:
    • Deep Black
    • Smoke Grey
    • True Navy
    • True Royal
    2. Port Authority® Eco Snapback Trucker Cap (C112ECO)
      SanMar Link: https://www.sanmar.com/p/18284_TrNvWhite
      Company Casuals Link: https://catalog.companycasuals.com/p/18284_TrNvWhite
      Pricing: Starting at $3.61+
      Materials/Sustainability: 100% recycled polyester twill front & bill, 100% recycled polyester mesh panels; structured, mid-profile classic trucker style.
      Colors:
    • Deep Black
    • Grey Steel
    • Smoke Grey / White
    • True Navy / White
    • True Royal / White
    3. OGIO® Fusion Trucker Cap (OG603)
      SanMar Link: https://www.sanmar.com/p/18390_BlackCamo
      Company Casuals Link: https://catalog.companycasuals.com/p/18390_BlackCamo
      Pricing: Starting at $12.42+
      Materials/Sustainability: 95% recycled polyester twill / 5% spandex on front/bill; 100% polyester mesh panels. Moisture-wicking, structured, mid-profile with adjustable snap closure.
      Colors:
    • Black Camo
    • Blacktop
    • Tarmac Grey

    And here's the email to the client it drafted:

    Hi Client!

    Hope you’re doing well!

    As requested, I’ve pulled together a range of eco-friendly cap and trucker hat options that would be great for your upcoming project. Each option below features recycled or sustainable materials, and I’ve included direct links for easy browsing.

    If you see a style or color that stands out, let me know and I’ll get you samples, pricing, and any additional info you need. I’m also happy to answer any questions or offer recommendations based on your branding or target audience.

    1. Port Authority® Eco Cap
      Company Casuals Link: https://catalog.companycasuals.com/p/18283_SmokeGrey
      Materials/Sustainability: 100% recycled polyester twill; unstructured, low-profile design with hook-and-loop closure.
      Colors: Deep Black, Smoke Grey, True Navy, True Royal
    2. Port Authority® Eco Snapback Trucker Cap
      Company Casuals Link: https://catalog.companycasuals.com/p/18284_TrNvWhite
      Materials/Sustainability: 100% recycled polyester twill front & bill, 100% recycled polyester mesh panels; structured, mid-profile, classic trucker style.
      Colors: Black, Black/White, Deep Black/Deep Black, Navy/White, Smoke Grey/White, True Navy/White, True Red/White, True Royal/White, White
    3. OGIO® Fusion Trucker Cap
      Company Casuals Link: https://catalog.companycasuals.com/p/18390_BlackCamo
      Materials/Sustainability: 95% recycled polyester twill, 5% spandex (front/bill); mesh panels are 100% polyester. Moisture-wicking, structured, mid-profile with snap closure.
      Colors: Black, Black Camo, Diesel Grey, White
    4. New Era® Recycled Snapback Cap
      Company Casuals Link: https://catalog.companycasuals.com/p/11066_Scarlet
      Materials/Sustainability: 100% Repreve recycled polyester twill; mesh panels are 100% polyester. Structured, mid-profile, 9FORTY fit with snapback.
      Colors: Black, Dark Green, Deep Navy, Graphite, Royal, Scarlet, White
    5. Nike Snapback Mesh Trucker Cap
      Company Casuals Link: https://catalog.companycasuals.com/p/31925_NavyNavy
      Materials/Sustainability: At least 75% recycled and/or organic content (per Nike); mesh panels. Structured, mid-profile trucker style.
      Colors: Anthracite/Black, Black/Black, College Navy/College Navy, Game Royal/Game Royal, Navy/Navy, University Red/Black, White/Black
    6. tentree® Hemp Cap
      Company Casuals Link: https://catalog.companycasuals.com/p/49788_GargoyGryH
      Materials/Sustainability: 53% hemp, 44% Repreve recycled polyester, 3% spandex; brim made from recycled plastic. Each purchase plants 10 trees. Snapback closure.
      Colors: Gargoyle Grey Heather, Meteorite Black Heather, Ozone Heather, Robson Green Heather

    Let me know which styles or colors catch your eye, and I’ll follow up with next steps!

    Best,

    And then here's the email it drafted to our embroidery vendor:

    Hi Vendor!

    Hope you’re doing well! I wanted to check in and get your help with an embroidery estimate for a potential order.

    Attached are two versions of the logo the customer is considering for baseball caps. They’re still narrowing down their cap choices, so I’ve included the list below with the cap name, item number, and the SanMar link for each option in case that’s helpful.

    Could you please provide:

    • An estimate for single-color embroidery, white or black depending on color of cap chosen.
    • Another for 5-color embroidery (using best-match thread colors)?
    • Your advice on the best logo sizing for these styles?

    Details:

    • Quantity: Please quote for 120 - 150 hats.
    • Colors: Provide pricing for both 4–5 color embroidery as shown in color logo (best thread match) and single-color (white or black) embroidery
    • Logo Sizing: Please advise on best sizing for these logos
    • Proof: Customer will want to review a sew-out sample or digital proof before production
    • Timeline: Hats need to be completed and delivered by mid-September (event is in early October).

    Cap Options:

    1. Port Authority® Eco Cap (C954)
      SanMar: https://www.sanmar.com/p/18283_SmokeGrey
    2. Port Authority® Eco Snapback Trucker Cap (C112ECO)
      SanMar: https://www.sanmar.com/p/18284_TrNvWhite
    3. OGIO® Fusion Trucker Cap (OG603)
      SanMar: https://www.sanmar.com/p/18390_BlackCamo
    4. New Era® Recycled Snapback Cap (NE208)
      SanMar: https://www.sanmar.com/p/11066_Black
    5. Nike Snapback Mesh Trucker Cap (NKFN9893)
      SanMar: https://www.sanmar.com/p/31925_NavyNavy
    6. tentree® Hemp Cap (TTAU4415)
      SanMar: https://www.sanmar.com/p/49788_GargoyGryH

    Let me know if you have any questions or if there’s anything else you need from us. Really appreciate your help and expertise on this!

    Thank you,
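
    And for the curious, here's the rough shape of the workflow in code. This is a simplified sketch, not my actual setup - catalog_search and draft_email are hypothetical stand-ins for the real tools the agent calls:

    ```python
    # Sketch of the agent's task, with made-up tool names and dummy data.
    def catalog_search(vendor: str, query: str, max_price: float) -> list[dict]:
        # Stand-in: the real agent browses the vendor catalog and extracts products.
        return [{"name": "Example Eco Cap", "code": "C000", "price": 3.61,
                 "colors": ["Deep Black", "Smoke Grey"]}]

    def draft_email(recipient: str, instructions: str, items: list[dict]) -> str:
        # Stand-in: the real version is an LLM call grounded in the item list
        # and the attached email thread.
        lines = [f"Hi {recipient}!", "", instructions, ""]
        lines += [f"{i}. {item['name']} ({item['code']})" for i, item in enumerate(items, 1)]
        return "\n".join(lines)

    caps = sorted(catalog_search("SanMar", "eco-friendly caps and hats", max_price=23.0),
                  key=lambda item: item["price"])  # least to most expensive
    vendor_email = draft_email("Vendor", "Could we get two quotes, 1-color and 5-color?", caps)
    client_email = draft_email("Client", "Here are some eco-friendly cap options!", caps)
    ```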

    2 votes
  36. lou
    (edited )
    Link
    I use GPT when an error would have no consequence. For example, I needed to know about amputation for a short story I was writing. As someone who is not a health professional, it was difficult for me to synthesize the step-by-step process of an amputation.

    So I asked GPT to give me the steps in order. I verified the result using medical sources. Then I used it only as a reference in my story. I wrote the story in full and didn't use a single sentence from GPT. I also asked a doctor I know to read that part and he approved it.

    So that is how I use GPT. It's a summarizer that I still have to verify and I would never use it for anything serious.

    It's scary how people simply assume that an answer from GPT is the ultimate truth.

    2 votes
  37. [3]
    Greg
    (edited )
    Link
    I think the “specifically LLMs” part is one of the things leading to a disconnect when the topic comes up - not because it’s a bad definition per se, just because a lot of people aren’t being that specific with their meaning in broader AI conversations.

    Normal people don’t have a strong enough handle on what an LLM actually is to make that distinction accurately, advertisers and startups have every incentive to claim everything is AI whether or not there’s a justification for saying that, and investors (decent ones, at least) are looking past the LLM question in the same way that investors in Google will have been looking past just search.

    Even the term LLM is getting a bit fuzzy now too - most of them are multimodal models rather than pure language models, and tool use that calls out to other special-purpose models like image generators makes it even harder to pin down where the LLM starts and ends in non-academic conversation.

    So, with all that said: I rarely use LLMs. But I see on a daily basis the things that special-purpose models built using the same technological basis and the same skill set are doing compared to what was possible five years ago, and I’m blown away. I look at the engineers and resources that Deepmind or Anthropic or OpenAI or DeepSeek have access to and I can’t even begin to predict the scientific breakthroughs that they could turn that technology to if they choose to in future. I look at machine translation, audio transcription, image analysis, weather prediction, drug discovery, photography, driving, and a million other fields and I see people using “AI” every day - whether they realise it or not.

    In short, I look at it in the same way as the internet looked way back when. LLMs as they exist now may well not be the end state for “AI” any more than BBSs were for the internet, but the technology and infrastructure that makes them possible is a big deal, and one that’s going to change the world in ways we can’t yet predict.


    [Edit] And just to be completely clear, I also see similarities to the early internet in the sense that there’s an absolute tsunami of overfunded bullshit with no discernible use case or viable business model.

    A lot of investors are going to lose a lot of money when the bubble bursts. Some of that will be genuinely unfortunate, but honestly some of those losses will also be well deserved.

    All I’m saying is that alongside the bullshit are some genuinely mind bending technological advancements too.

    1. [2]
      DynamoSunshirt
      Link Parent
      I tend to use the term LLM because I'm very annoyed at the marketing-ese redefinition of the term "artificial intelligence." For a very long time, science fiction has used that term to refer to an (arguably) sentient near-or-beyond human entity. Using "AI" to refer to any models we use today feels like a betrayal of that term. It feels like companies are taking advantage of 70 years of science fiction history to make people think these things are more capable than they really are. Unfortunately, I've met quite a few nontechnical people who have bought that lie completely and seem convinced that Siri and Gemini are actually alive. Ethical issues aside, it feels dirty and manipulative.

      6 votes
      1. Greg
        Link Parent
        Strongly, emphatically agreed. Much as we’ve got to roll with common usage here to an extent, we’re only having to worry about definitions like this in the first place because marketing teams...

        Strongly, emphatically agreed. Much as we’ve got to roll with common usage here to an extent, we’re only having to worry about definitions like this in the first place because marketing teams wanted exactly those kinds of sci-fi connotations for their products.

        Machine learning was and is a perfectly reasonable and descriptive term for the entire class of technologies and I wish more people still used it as such.

        4 votes
  38. [2]
    skybrian
    Link
    Here's something I did today that seemed useful:

    I followed a link to a video about Go generics that seemed interesting. (I already know about Go generics, but based on the author, I thought there might be some interesting tricks.) However, I don't actually like to watch videos, so I looked for a transcript. I didn't see it right away, but there is an "Ask" button that I tried for the first time.

    And then there's a button for "summarize the video." It gave me a brief summary, but not in the depth that I wanted. Still, a good start.

    So then I asked "show me the code about X" where X is one section from the summary. Similarly for the other sections. And I was able to get the basic idea of what the video was about.

    I think this might be a reasonable way to avoid watching some kinds of YouTube videos? I think it might actually beat reading transcripts, which can be rather wordy.

    (It strikes me that other people might use a similar technique to avoid reading books they don't want to read. To be clear, it's not the same as actually watching the video or reading the book, but sometimes you don't need that - you just want the "executive summary.")
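
    In rough pseudo-Python, the pattern I stumbled into looks like this - ask() is a stand-in for whatever chat endpoint sits behind that "Ask" button, so this is purely illustrative:

    ```python
    # Illustrative sketch: summarize first, then drill into each section.
    def skim_video(transcript: str, ask) -> None:
        # First pass: the same brief summary the "summarize the video" button gives.
        summary = ask("Summarize this video as a short list of sections.", transcript)
        print(summary)
        # Then drill into each section for the depth the summary lacked.
        for section in summary.splitlines():
            if section.strip():
                print(ask(f"Show me the code about {section.strip()}", transcript))
    ```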

    1. shrike
      Link Parent
      There's at least one bigger tool that does this (can't remember the name though - it spun up very, very early in the AI hype); most likely there are more now.

      The idea is that you subscribe to YouTube channels, then feed the new videos (or their subtitles/transcripts, to be exact) through an LLM to get a summary.

      Then you can decide whether watching the whole thing is worth your time.
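
      Something like this sketch - feedparser and YouTube's per-channel RSS feeds are real, but get_transcript() and summarize() are placeholders for the actual transcript fetch and LLM call:

      ```python
      # Sketch of the subscribe-and-summarize pipeline.
      import feedparser

      def triage_channel(channel_id: str, get_transcript, summarize) -> None:
          feed = feedparser.parse(
              f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}")
          for entry in feed.entries:
              text = get_transcript(entry.link)           # the subtitles/transcription
              print(entry.title, "->", summarize(text))   # decide if it's worth watching
      ```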

  39. [3]
    Pistos
    Link
    I know I could use it for more, but, for now, I am just being cautiously reserved and reined in with my usage. My two primary use cases are foreign language learning and assistance with software development. With both, I try to take the time to verify and double-check things that it spews back at me. For language, that means checking reputable dictionaries, and maybe doing a quick web search to verify AI answers about grammar. For coding, that means actually executing any AI-provided code that I incorporate and verifying with my own eyes that it produces the results I want and passes tests/specs.

    That said, I can say I am relatively satisfied with the results so far with the free tiers. I expect to get more success with a paid tier, but I have yet to pull the trigger on that (though I plan to). The AIs are far from having a high success rate, but I am content with a moderate success rate in both domains.

    1. [2]
      selib
      Link Parent
      It really is quite useful for language learning when the alternative is just using Google Translate or having to look up every word in a dictionary.

      1. Pistos
        Link Parent
        Yes, that's what I'm finding. Learning a well-known language up to, say, advanced-intermediate is essentially a solved problem, because there is so much writing about it on the Internet already, and grammar is mostly just rules and patterns (which software is good at dealing with). Even language nuances, if common enough, would have been described in some blog post, subreddit, or YouTube transcript. A key problem with most language learning apps is that it's just you and a non-interactive experience. Duolingo marks you wrong, but never tells you why you were wrong, or if you were close but just missed something. There's nobody (on the app) to ask, no teacher to explain anything or answer any of your questions. A little back and forth with an AI can be very educational. As mentioned, though, you just have to do a bit of fact checking, because the AI can occasionally say something that sounds correct and authoritative but is actually wrong.

        1 vote
  40. TurtleCracker
    Link
    I use it as a better search engine, but I have to validate the results. It is faster than using Google, though. That's less a benefit of LLMs and more a condemnation of how difficult modern websites are to use and get information out of.

    I've also had pretty good success using it to generate some relatively boilerplate code, or to provide suggested refactors to code. Generating tests can work too. This is really hit or miss though. Sometimes the code it suggests is very wrong.

  41. [2]
    heh
    Link
    This is probably a ridiculous comment and point of view, but aside from a few niche applications in safety and medicine, I think anyone who uses AI is a traitor to humanity.

    AI will only benefit the lazy and the billionaires. Everyone else will be harmed in some way.

    2 votes