25 votes

I don’t care whether you use ChatGPT to write

78 comments

  1. [34]
    FaceLoran
    • Exemplary

    I'm not going to bother to take time to read something that nobody took the time to write.

    82 votes
    1. [26]
      rodrigo

      I've been reading this a lot in the last couple of weeks, and I think it dismisses the issue entirely. The time it takes to write something has never been a good signal of reading value. Take, for instance, all those EULAs that (almost) no one reads…

      20 votes
      1. [7]
        Lia

        Something taking a long time to write may not guarantee high value. But taking a very short time to write something does guarantee low value.

        13 votes
        1. [6]
          rodrigo

          Does it? Is nothing in short-form blogging valuable? Those nice one-liners? Something spoken live, which takes the shortest time possible?

          14 votes
          1. [4]
            Cycloneblaze

            I don't know about you, but when I'm writing something, the shorter I want it to be the more time I have to spend on it.

            21 votes
            1. [3]
              daychilde

              Sometimes when I am composing music, it takes hours upon hours with refinement upon refinement. Sometimes it just flows. There are many instances of beloved music that was composed, if not on the fly, then in a single take or two, essentially improvised into existence.

              I think considering time in the judgment is valid; I think requiring time as a part of the judgment will be wrong in many cases.

              4 votes
              1. Lia

                Sometimes when I am composing music, it takes hours upon hours with refinement upon refinement. Sometimes it just flows. Many instances of beloved music that was composed if not on the fly, perhaps in a single take or two, essentially improvised into existence.

                That's because you are a human who put in a lot of effort in the past. An inexperienced artist who hasn't yet developed a voice will not be able to create valuable art in an instant on their first attempt (except perhaps by rare coincidence).

                AI generated content can be brought to existence in a split second because many people put in a ton of effort in the past, and the fruits of their labour got stolen. But AI doesn't have a voice, and it can't have one because developing one requires having a personality through which the voice gets distilled over time.

                The person prompting the AI may or may not have a voice. If they don't, they won't be able to prompt something out of an AI that qualifies as art. It's either an uncurated amalgamation or plagiarism of other people's voices.

                I think considering time in the judgment is valid; I think requiring time as a part of the judgment will be wrong in many cases.

                Agreed.

                1 vote
              2. adamlang

                Yeah just flows in one try after a decade of regular practice.

          2. Lia

            I meant a short time relative to the length of the generated content.

            Something spoken live, which is the shortest time possible?

            Per your own title, the context here is written content.

            That said, if you were to generate spoken content that passably seems like it's being produced live by a human, I bet it would take a fairly long time to make, and the quality of the verbal content itself would be sub-par compared to text generated on the same topic with the same amount of time spent on it.

      2. [17]
        FaceLoran

        You are absolutely correct. I am dismissing AI-written material entirely. Why should I read something that nobody wrote?

        8 votes
        1. [15]
          daychilde
          • Exemplary

          For the same reason that dismissing all artificial landscaping, even though 99% of it is ugly corporate bullshit in front of commercial properties, is fine most of the time but ignores crappy natural landscaping and the places where it's done right.

          It's like being dismissive of CGI. What you don't like is bad CGI. You don't notice good CGI.

          Most AI writing is slop. Yes. Most writing is slop. AI writing is likely to improve. I envision a time when AI will be able to instruct me, knowing what I know and don't know; learning what makes sense to me; and explaining things quickly and tailored for me.

          Fiction? I think it'll reach that point. I've certainly read some shitty fiction. I recently sought out Nebula winners from the past few years, and in my humble opinion, some of those were really crap. So if AI gets to the point where it's writing better stuff, I'll consider reading it.

          But am I all for less slop of all kinds? You bet.

          16 votes
          1. [6]
            Raistlin

            I guess my objection is that neither landscaping nor CGI requires time from me to engage with it. I can just walk past the artificial landscaping and ignore the CGI, and I stop and smell the roses if it's art.

            The comparison only makes sense if we're expected to completely ignore AI slop. Which is just what the OP is saying: why should I bother reading it? I shouldn't. The same way I don't explore most procedurally generated caves in loads of video games, I shouldn't even engage with AI writing. It has nothing to say, because there's no speaker. I would take a 13-year-old's crappy Sonic fanfic over a GenAI facsimile of a good sci-fi novel any day of the week, because at least the fic is saying something.

            Art is communication, from the artist to the spectator. GenAI outputs have no artist. They say nothing and mean nothing. They don't make your soul shine brighter.

            12 votes
            1. [5]
              daychilde

              Not everything I read is art.

              If a service was released, for example, to provide me with a news feed of articles rewritten by AI from a number of sources, where it attempted to remove as much bias as possible and stick to objective truth where possible, I'd give it a try.

              If I want to read a novel, I'm surely not interested in AI produced work.

              At least not for now. When AI gets good enough that I can't tell, then maybe. But we're not there, so that's arguing about something theoretical.

              I have no problem with CGI as I have no problem with AI. It is entirely how it is used, and the end result that I care about.

              Well, I do care about the people: as human writers get screwed, they are added to the long list of the rest of us getting screwed, but now we're into the UBI debate.

              In other words: I'm not pro-AI, but I feel I'm also not anti-AI.

              I know my opinion will differ from some. I'm not trying to convince per se. Or at least, I'm not overly concerned if I don't, because I think time will change many people's views - as with CGI. lol.

              (And if not, that's okay too)

              4 votes
              1. [4]
                Raistlin

                I'm not trying to convince you either. This is just my stance: I will never willingly read a novel written by AI. The day I read an AI novel is the day I will have picked consumption over art, and I hope to never get there.

                11 votes
                1. [3]
                  daychilde

                  oh YEAH? well i RESPECT YOUR RIGHT to your opinion, soooooo SCREW YOU!

                  :)

                  5 votes
                  1. [2]
                    Raistlin

                    How dare you??? Disagree with me! Call me a name! You're not playing the game right :<

                    3 votes
                    1. daychilde

                      Call me a name!

                      Um, okay, I will, Raistlin. So there! :)

                      6 votes
          2. [6]
            EgoEimi

            It's like the plastic surgery paradox (which I've mentioned on Tildes before). People say they hate plastic surgery because it looks awful and they'll never get it, but that's only because they notice bad plastic surgery.

            Millions of people get plastic surgery in the US every year. Everyone has met someone who's gotten plastic surgery and not noticed it because it usually looks pretty good.

            2 votes
            1. [2]
              daychilde

              I didn't post this in my reply because I felt it would be antagonistic because of the phrasing of the last panel, but: https://xkcd.com/810/

              I feel that applies to AI - when we get to the point of not being able to tell AI from human writers, then it will be somewhat of a moot point. heh. Although, again with disclaimers about needing to debate things like UBI and the fact that our productivity SHOULD mean we all work fewer hours and would have more time to create - for the sheer pleasure of it. But again, whole other debate.

              4 votes
              1. EgoEimi

                I think that as I get a more nuanced understanding of humans, I realize that sheer pleasure is only one god in the human pantheon of motivators. There are plenty of rich, unhappy people working themselves to death: I used to wonder, why not choose happiness? But my mental model was incomplete then.

                I think that many people are working to complete and reinforce the identity that they, through their life experiences, developed and then believe that they should come to embody. I find that explanation—so far—most sufficiently covers the human behaviors I've seen.

            2. [3]
              Lia

              It's like the plastic surgery paradox (which I've mentioned on Tildes before). People say they hate plastic surgery because it looks awful and they'll never get it, but that's only because they notice bad plastic surgery.

              It's not really a paradox, as this is rarely the only reason people are against it. I'm critical of plastic surgery (I don't want to ban it, I just don't recommend it) and have often found that people have a hard time understanding the other reasons to be critical besides the most superficial one. This is not surprising, because pro-surgery people are probably more focused on superficial things in general, and perhaps less capable of understanding more abstract concepts.

              Some downsides:

              1. It's giving a hungry man a fish rather than teaching him how to fish. The way we perceive our own appearance depends on our emotional reality and self-acceptance to a staggering degree. Therefore, working to fix your self concept internally is the more sustainable way to happiness. Physically altering whatever is bothering you about your appearance makes it harder to improve your self concept because you removed the indicator you could have used as a guide in this process.
              2. If you alter a feature that your child inherits, how are you going to teach them self acceptance when you yourself didn't learn it?
              3. Tampering with living tissue comes with risks. The results are either non-permanent or, when intended to be permanent, they require revisits later. Each time the risks are higher than before because the area has already been tampered with. Some results that were said to not be permanent have in fact been shown to be more permanent than intended - but not unchanging. Fillers that were supposed to dissolve don't, and are instead migrating inside the tissue, creating unintended results. Fixing this requires more tampering and may not even be fully fixable.
              4. What humans consider attractive is influenced by trends. Something that didn't look artificial when it was made will start to look more artificial once the trends shift, because we aren't able to extract how the current trend influences our thinking. What seems like universal beauty now may look ridiculous in a decade.
              5. Human involvement also means individual taste and skill will influence the outcome. While some plastic surgery will come out as intended, some won't, and you won't know how yours goes until you've already gone through with it.
              6. While it may be true that when done tastefully, surgery is no problem visually speaking, that doesn't really reduce the harm done on an individual level. Not everyone has great taste. If you have poor taste, you won't be aware of this until you experience personal growth and your taste develops. If you've already had surgery by then, you may not be able to fix the results. This argument seems somewhat victim-blamey, in the vein of "it's your own fault if your surgery looks awful".
              7. Finally, the more people try to achieve a particular type of appearance (if they are successful), the more twisted society's beauty standards become, which also affects the people who don't want surgery or can't afford it.

              This is a decent analogy, because also in the case of AI there are people who only consider the most superficial and short term effects. Accepting the downsides that fall outside of that consideration seems like a poor tradeoff for getting the benefits, as the downsides are deeper and graver in scale. A careless attitude can cause substantial personal and societal damage.

              2 votes
              1. [2]
                EgoEimi

                This is a thoughtful reply, and I appreciate you writing it. I see your point that there is something philosophically wrong and dishonest about AI-generated writing and plastic surgery, and how they allow us to be dishonest to ourselves.

                If you alter a feature that your child inherits, how are you going to teach them self acceptance when you yourself didn't learn it?

                I used to be pro-self-acceptance, but I've personally come to wonder if self-acceptance and fatalism are siblings. In the past, people accepted diseases and other tragedies as God's will. But now we have the technology to circumvent "God's will"; there are many things we no longer have to accept.

                Currently we must accept death. But it is possible that in the distant future we will have the technology to live forever: I doubt that people then will accept death when they have the choice of eternal life.

                To bring things closer to current times: in the past, people accepted their birth-assigned sex. But now we have (imperfect) technologies, like hormone blockers, that allow people to alter their physical gender presentation to align with their internal gender. In the distant future, with advanced tissue and organ engineering, I imagine that it'll be possible for people to make a perfect transition. Trans people won't have to accept any limitation then.

                I feel that we accept because we must. But what if we mustn't anymore?

                1 vote
                1. DefinitelyNotAFae

                  It's worth noting that plastic surgery and cosmetic surgery aren't the same thing. Or at least they only overlap on the Venn diagram. Plastic surgery is also for burn victims, cancer survivors, cleft palates, and other reconstructive surgeries. There's a level of appearance involved in reconstructing, say, a burn victim's face, but also pain relief and functional repair. (And as much as someone may accept themselves, having to deal with societal perception is another issue on top of that.)

                  I know folks are mostly talking about cosmetic/aesthetic surgery, but the conflation of all "plastic surgery" with that variety ignores the rest of the field.

                  3 votes
          3. [2]
            FaceLoran

            Writing exists as a form of communication, which is when one person gives information to others. I have no interest in the 'information' created by LLMs.

            In the case that we're worried about quality, I don't care if it's really good - it doesn't mean anything because no one put any meaning in it.

            In the case where we're not worried about quality, then it wouldn't be very hard for a human to simply make whatever it is.

            1 vote
            1. daychilde

              Communication exists as a form of communication,

              Maybe.

              2 votes
        2. Staross

          Presumably people don't read things just because "someone" wrote them, but for other reasons: is the text enjoyable, insightful, entertaining, moving, etc.? Any of these reasons can apply to AI generated text (even though most of it is crap at the moment), giving you reasons to read it.

          2 votes
      3. slade

        I wonder if some of the lost nuance is in the specifics of how AI is used. Like a calculator or search engine, it can be used as an oracle or as a tool.

        If you simply ask ChatGPT to write you an article, then I don't expect it to be of any quality. You're asking it to surface things that have already been written and to do nothing novel with them.

        If you write an article and ask ChatGPT to make your writing better, and that's all it does, then I have to second-guess my first instinct to say that your writing "doesn't count". The content is what you put into the process; it's yours, and usually the content is where the value lies.

        If you used AI to make the content more palatable, does that take away from the value? My answer depends on whether the presentation is part of what you're claiming to have done, or if it's just a detail. A good example would be if you're writing prose (I think using AI beyond grammar corrections would devalue your work). A counter example would be if you're writing a professional letter or instruction manual; things where presentation is a matter of practicality and effective communication.

        In the latter cases, I think AI is the correct tool to involve - not as an oracle but as a guide to help your communication reach the widest possible audience. You might ask AI simply to be a critic and tell you things about your writing that you should improve; I wouldn't write off the results of this kind of AI collaboration. And as a significant side bonus, using AI this way (not having it rewrite your work, but advise you on rewriting your work) makes it an educational tool instead of something that drives communication skills into atrophy.

        My thoughts on the likelihood of humans at large using it this way is another story...

        So.... I agree with not reading something nobody took the time to write, but where I'm much less dismissive is in equating any use of AI anywhere in the process with the worst/laziest case scenario.

        2 votes
    2. [3]
      Sodliddesu

      AI has entered a game in which we, humans, cannot win. What game? The game of mediocre, bad text.

      This is the crux of the article's point: it cites a 600+ word article that is mostly fluff when the 'AI' version was 98 words. I'm not going to waste time with either article, but I know the struggle of both AI slop and 'human' slop filling the first few pages of search results.

      15 votes
      1. [2]
        OBLIVIATER

        This is a point that resonates with me. I don't like AI slop any more than most people, but sometimes it feels like people act like slop didn't exist before AI. I've been subjecting myself to garbage real human people have written for almost 20 years now.

        7 votes
        1. Fiachra

          The difference I worry about is that AI slop can be mass produced on a vastly greater scale. There'll be orders of magnitude more slop out there for the rest of our lives thanks to AI.

          7 votes
    3. redwall_hp

      Exactly. I've been saying variations of this for a while now. All writing is communication, and if you can't be arsed to take a few minutes to actually articulate something, what you're doing is telling the recipient that they're not worth your time. Insulting people is poor communication.

      Then there's the supposition that writing is also a tool for organising and developing thoughts...which is also the hardest part of writing something for somebody else to read. So if someone skips to having the Magic Conch Shell spew out words, they probably don't fully understand what they even want to say, and...Garbage In, Garbage Out.

      7 votes
    4. cqns

      I'm going to start using this line much more often. Oh, how far from Eden we fell...

      6 votes
    5. [2]
      creesch

      I have been watching this thread with some interest. While I agree with the idea you are putting out here, I also think there is a lot more LLM influenced writing you are reading than you realize.

      To be fair, the blog post talks about AI-generated text, which implies all of it is generated. And that is also the most noticeable type of AI-influenced text. But between that and text written only by a human with no outside influence lies an incredibly broad spectrum of how LLMs are used.

      For someone skilled in writing, it is fairly easy to take an LLM-generated text and rework it into something else. I think most people will be hard-pressed to tell that the original text at that point is AI generated, even though it still took almost no time to write/adjust.

      On the other end of the spectrum are people who do put time into their writing but aren't fluent in a specific language. In that space I know for a fact that LLMs are hugely popular as a writing aid. This has the unfortunate side effect that text written in this way does have various LLM tells, and it might be taken less seriously, even though the person who wrote it put a bunch of time and effort into their writing.

      Basically, it is a much more complex and messy situation than many people want it to be.

      To be clear, I hit reply on your comment, but this is aimed at not just you but also the people who responded to you.

      4 votes
      1. boxer_dogs_dance

        I have been using artificial intelligence to digest job descriptions and generate first draft cover letters.

        The LLM does formatting better than I do. It also ensures that every ask in the job description is addressed.

        However, sometimes it hallucinates work experience for me. Also, it doesn't know the language of my industry.

        I have to completely rework the draft and that is as it should be.

        3 votes
  2. [20]
    Lia

    Okay, so you don't care. I do.

    Not because I believe all human-authored text is superior to AI in every case. It isn't. I just hate exposing myself to the following:

    • low quality
    • unnecessary length
    • poor layout
    • poor sourcing and referencing
    • hidden marketing schemes
    • lack of purpose (slop)

    All this takes up my time and focus and leaves fewer resources to direct toward the things I want to focus on. I am equally against this type of content whether it was AI generated or not, but AI is a lot worse because it makes it easy to flood every nook and cranny of the internet with this trash.

    Some people are purposefully out to hurt others (scammers, propaganda makers). Others are simply unaware of what they're doing, for example people who don't have anything to say trying to make themselves relevant now that it's technically achievable to produce content regularly. Some are somewhere in between, like the people trying to make a quick buck by spreading copied content around in their own name, in hopes of getting more views than the original. Even those whose intentions are pure are contributing to the issue of flooding.

    I want to be able to consume content that I like without first spending insane amounts of effort to dig it out from amidst the slop avalanche, and without having to start reading numerous texts only to discover they are yet another piece of slop. Exposure to this stuff hurts my brain. I do not want to get hurt when I'm online. I would also like my own content to remain discoverable to others.

    Edit: typo

    31 votes
    1. [3]
      aeriforms

      Agree on the effort point - just like how it takes a magnitude more effort to dispel misinformation than to spread it, it takes much more effort to read something that's unnecessarily long. Training yourself to write only as much as you need is imo important, hence the quote "If I had more time I would have written a shorter letter".

      Long slop pretending to be something well thought out is a waste of time I would like to avoid. That said, it also makes me question how many blog posts or full articles online could just exist as 1-3 paragraphs. The mind benefits from clarity in writing, and slop is hurting it a lot, throwing thoughts into this fuzzy, buggy mess.

      14 votes
      1. Lia
        Link Parent
        As AI generated text becomes better, it also takes more time to determine whether a long text is slop or not. That has the awful consequence that anyone who has something of value to say must...

        it takes much more effort to read something that's unnecessarily long

        As AI generated text becomes better, it also takes more time to determine whether a long text is slop or not. That has the awful consequence that anyone who has something of value to say must clearly demonstrate that value upfront.

        No more slowly developing storylines that culminate in the fireworks of everything coming together in the last paragraph - a format that used to be my favourite.

        8 votes
      2. Gopher
        Link Parent
        I am uneducated, I don't even know how to write proper paragraphs and such, sometimes I'll feed my work into chatgpt and have it fix it up somewhat, add punctuation, maybe change some words like a...

        I am uneducated, I don't even know how to write proper paragraphs and such, sometimes I'll feed my work into chatgpt and have it fix it up somewhat, add punctuation, maybe change some words like a thesaurus, the base work is mine, but the aesthetics are chatgpt

        1 vote
    2. [16]
      Lobachevsky
      Link Parent
      I thought the point of the article was precisely that it doesn't really matter where it comes from. Low quality is low quality, regardless whether it's made by a human to optimize for Google...

      I thought the point of the article was precisely that it doesn't really matter where it comes from. Low quality is low quality, regardless of whether it's made by a human to optimize for Google search or maximize clicks, by someone employed in a troll factory, or by an algorithm. All of those existed before LLMs, and while you're right that this tech allows for more of that, in the end it's not about which program is used to pump out the trash; at the end of the day it's all authored by people.

      8 votes
      1. [8]
        Lia
        (edited )
        Link Parent
        The article is titled 'I don't care whether you use ChatGPT to write'. If the point actually was "I don't mind if AI writing doesn't come with a label", then sure, I agree. Low quality is low...

        The article is titled 'I don't care whether you use ChatGPT to write'. If the point actually was "I don't mind if AI writing doesn't come with a label", then sure, I agree. Low quality is low quality regardless of labels.

        Noise, waste and slop creation is something I do care about. I could get behind "I don't care whether you use AI to write, as long as you make sure the output isn't welfare-reducing". But some people won't be able to evaluate that, and those people in particular should not be using AI to fill the world with content that doesn't deserve to exist and that they didn't put in effort to create.

        at the end of the day it's all authored by people

        What does "authoring" mean to you? To me, writing one sentence and pressing a button to produce an article-length text doesn't qualify as authoring. There are ways to use AI for writing that do qualify, but most likely those are not practiced by the majority of AI users.

        Secondly, it probably won't take long until the human "prompt artist" is obsolete, because organisations can just use LLMs to prompt other LLMs to create whatever content they want in order to serve their (nefarious) objectives. This is a further step away from human-authored content.

        An article where the main point is "not caring" doesn't add much substance and even if well-meaning, it can serve to accelerate the above negative developments by influencing other gullible people to adopt the same uncaring mindset.

        8 votes
        1. [7]
          creesch
          Link Parent
          That's just one extreme end of a spectrum where LLMs might be in play. I left a comment here talking about it. Reality is much more complex than just "give a basic prompt, copy output".

          To me, writing one sentence and pressing a button to produce an article length text doesn't qualify as authoring

          That's just one extreme end of a spectrum where LLMs might be in play. I left a comment here talking about it. Reality is much more complex than just "give a basic prompt, copy output".

          1 vote
          1. [6]
            Lia
            Link Parent
            Sure, and I'll make sure to read your comment later, but as long as this type of content creation is taking place (including the slightly more involved protocols that still don't qualify as...

            Sure, and I'll make sure to read your comment later, but as long as this type of content creation is taking place (including the slightly more involved protocols that still don't qualify as authoring), LLM-generated content cannot be defined as "human authored" as a whole.

            1 vote
            1. [5]
              creesch
              Link Parent
              In theory I agree, in practice it is murky waters where you will often not be able to tell.

              LLM-generated content cannot be defined as "human authored" as a whole.

              In theory I agree, in practice it is murky waters where you will often not be able to tell.

              1. [3]
                daychilde
                Link Parent
                As a large language model, I agree. Even though I assure you I am writing this myself, I am certain that it may sound as though it was written by AI. But don't worry, AI is here to help us all,...

                As a large language model, I agree. Even though I assure you I am writing this myself, I am certain that it may sound as though it was written by AI. But don't worry, AI is here to help us all, and it will bring benefits to everyone.

                (I really did write it and tried to end with some sort of generalized hopeful blabbering that they always seem to do. Although I was too lazy - that should be 2-3 times the length, but I'm only up right now because I had insomnia, so deal with it. hehe)

                1. creesch
                  Link Parent
                  That's my point though, they don't. Or with a little fiddling but still relatively little effort that can be reduced a ton. The most blatant LLM use is easy to recognize, and I feel like that is...

                  that they always seem to do.

                  That's my point though, they don't. Or with a little fiddling (still relatively little effort) that can be reduced a ton. The most blatant LLM use is easy to recognize, and I feel like that is what people here are mostly focussing on.

                  Though to be honest, I am not sure what sort of point you are trying to make, if any.

                  2 votes
                2. JCAPER
                  Link Parent
                  With very little fiddling, you can make them not do those conclusions and even not speak in their "usual way". I can tell that you imitated GPT (perhaps not consciously, it's just that GPT is the...

                  With very little fiddling, you can make them not do those conclusions and even not speak in their "usual way". I can tell that you imitated GPT (perhaps not consciously, it's just that GPT is the most popular LLM used), but Claude and Gemini for example have different writing styles. And by default, they're harder to detect by surface reading, imo at least. They also have their own way of concluding their responses, where Gemini will literally say "in conclusion" and claude... I can't quite describe it, but it's different to GPT.

                  Not taking a side on this discussion of authorship, just clarifying that they're not THAT easy to catch. It's why Google, Anthropic, OpenAI and others are starting to leave "weird" characters in the text for AI detectors to have something to work with (these characters are just slightly different versions of - and ', for example). As their models get more intelligent and users are able to manipulate the LLM to do what they want, I expect that they will get more and more difficult to spot

                  Edit: to add, OpenAI released GPT-4.5 some time ago, which is meant to sound as natural as possible, like a person. Thankfully, it's expensive as hell to use. Point is, though, there is an interest in making LLMs that sound like normal people

                  2 votes
              2. Lia
                (edited )
                Link Parent
                I agree with your comment, but staunchly disagree with the statement I responded to: Emphasis mine. It isn't. In the future, even less so. Unfortunately the decoupling of personal effort and art...

                I left a comment here

                I agree with your comment, but staunchly disagree with the statement I responded to:

                it's all authored by people

                Emphasis mine. It isn't. In the future, even less so.

                Unfortunately the decoupling of personal effort and art production means that hordes of people who aren't motivated enough to put in effort will now be able to produce something they try to pass off as art. The people who have developed the mindset of an artist will use AI responsibly. The rest will not.

      2. [7]
        Cock
        Link Parent
        But with an AI chatbot you can very easily set up an API key, prompt it to generate "realistic sounding comments that are 5 sentences long" or something and then blast it out like there's no...

        But with an AI chatbot you can very easily set up an API key, prompt it to generate "realistic sounding comments that are 5 sentences long" or something and then blast it out like there's no tomorrow. I think the point you are making undermines the main issue, which is the SCALE at which AI can be weaponized.

          With AI now creating lifelike videos and images, that's even more of a concern. Previously, not just anyone could photoshop a hyper-realistic edit, and certainly not a 3D scene in a video. But now even that barrier is gone, and it's only just getting started.

        3 votes
        1. Lia
          Link Parent
          And not just comments, like you say. Organisations and governments will be able to set up systems that generate a never ending stream of news articles, fiction writing, music, videoclips etc., all...

          you can very easily set up an API key, prompt it to generate "realistic sounding comments that are 5 sentences long" or something and then blast it out like there's no tomorrow

          And not just comments, like you say. Organisations and governments will be able to set up systems that generate a never ending stream of news articles, fiction writing, music, videoclips etc., all tailored to influence audiences the way they want. A part of such a system could be an LLM that's monitoring online search trends and tweaking the system's prompting activity accordingly, ensuring that whatever people search for, every single search result will be AI generated, every time.

          Even though I said elsewhere that I don't mind AI content not being labelled as such, in this scenario reliable labelling would be the only way to enable searching for content that isn't AI generated.

          4 votes
        2. [5]
          rodrigo
          Link Parent
          I agree that scale is a(nother) issue. Anyway, the slop regurgitated by an automated system will be identifiable as such (for now), and that's the main point of my article: I don't care if you...

          I agree that scale is a(nother) issue. Still, the slop regurgitated by an automated system will be identifiable as such (for now), and that's the main point of my article: I don't care if you used AI, because slop is slop, AI-generated or not, and as such (I mean, as bad text) I'll be less inclined to read it if I have a choice.

          1. [4]
            Lia
            Link Parent
            Constructing your argument to hinge upon this isn't something I'd recommend. First of all, some part of people are already unable to identify some AI generated content. Second, you and I will also...

            the slop regurgitated by an automated system will be identifiable as such (for now)

            Constructing your argument to hinge upon this isn't something I'd recommend. First of all, some people are already unable to identify some AI generated content. Second, you and I will also be unable to identify some of it in the near future. We can't even know for certain that this isn't the case right now. Unless you want your blog to age like milk, I'd suggest considering some other logic that has a better chance of surviving the test of time.

            3 votes
            1. [3]
              rodrigo
              Link Parent
              I noticed that in the last paragraph: Furthermore, if in the future we start not noticing synthetic text when reading something good, so… let it be?

              I noticed that in the last paragraph:

              And do you know why it [disclaiming AI usage in a text] doesn’t matter? Those who read this blog can feel, deep down, that an AI could not generate what is written here. And if the reader is unable to grasp this kind of subtlety, I’m sorry to say, but they will ignore the disclaimer that AI was used — at most, finding it a bit funny.

              Furthermore, if in the future we stop noticing synthetic text when reading something good, then… let it be?

              1. [2]
                Lia
                Link Parent
                I probably didn't understand your point here. Sentence-level quality is only one component in the overall quality of a written piece. If an AI generated piece has credible enough language that I...

                I probably didn't understand your point here.

                Sentence-level quality is only one component in the overall quality of a written piece. If an AI generated piece has credible enough language that I can't immediately tell it apart from human generated content, this makes things worse - not better. It will keep me engaged with the piece a lot longer, sometimes all the way to the end.

                I don't understand why you make a point that you don't care.

                If you go see an art exhibition that has an interesting looking painting on their ad, and a plausible-sounding artist's statement, and you invest your time (and in some cases, money) in seeing it, will you really not care if most of the work is underdeveloped and the statement proves disingenuous? Then why do you not care if the same happens wrt written content?

                2 votes
                1. RoyalHenOil
                  Link Parent
                  Perhaps an even clearer example: Imagine going to a museum to look at historic artifacts, only to learn later that they are fakes — not even replicas of real artifacts. Most people would probably...

                  Perhaps an even clearer example: Imagine going to a museum to look at historic artifacts, only to learn later that they are fakes — not even replicas of real artifacts. Most people would probably feel scammed by this experience, because the point isn't to simply ooh and aah at pretty items; it's to learn about history and to experience a meaningful connection to people from the past.

                  For many people, reading and viewing art serves a similar purpose. They aren't simply seeking entertainment; they want an authentic insight into another human being's thoughts and experiences.

                  3 votes
  3. JCAPER
    Link
    On a conceptual level, I agree with the author but with several disclaimers and nuances. If I have to pick between two wallpapers, I will pick the one that is more appealing to me, regardless if...

    On a conceptual level, I agree with the author but with several disclaimers and nuances. If I have to pick between two wallpapers, I will pick the one that is more appealing to me, regardless if it was made by someone in Gimp or by an AI. (I never check where my wallpapers come from, so I may have already done this without knowing)

    Some of my nuances are:

    • a LOT of works (be it text or images) made by AI are lazy. Whatever the reason, I do notice that a lot of people just grab a prompt, run it, and then copy paste the output as-is. This rarely comes out well, and shows a lack of care for their “work”. There are exceptions, there are people that create outputs and work on several iterations until they get the result that they want (and some really do go the distance), but these are the exceptions and not the rule

    • if I’m reading a blog, or comments or posts in a social media website, my expectation is that I am reading and/or talking to a human. I will be disappointed - if not pissed - if I find out that you were using an AI to talk to me. I don’t mind if you use an AI to aid you in writing (I do it as well), but I mind if it was the AI that thought and replied to me. If I wanted to talk to an AI, I would just go to aistudio

    Personally, I’m writing an interactive fiction game for fun. My writing style is - and I’m not being humble - akin to writing an informational guide. LLMs such as Claude and Gemini help me immensely at flourishing my writing, but they are by no means perfect. For example:

    • If I ask them for ideas, they just come out with the most generic, predictable, irrational, B-movie-tier plot points. They are terrible at this

    • If I ask them to point out "tell instead of showing" passages, they do a fine job, but they are terrible at providing alternatives (they understand the difference between showing and telling, but they always fall back on the same types of reactions, like knuckles turning white and jaws tensing; it's as if they don't know any other type of human reaction)

    • they are “intelligent” but they can also be pretty dumb in some scenarios. For example, imagine that your character doesn't know the name of a machine but another character does, and you're writing from the perspective of the former: the AI doesn't realize that the former shouldn't know, and writes the name anyway.

    But before I bore you with more examples, here’s where I’m getting at: if instead of all that, I just write the passages myself, and then ask the AI for help to review and flourish what I already wrote, with instructions in the system prompt that I’ve been developing over the months, it really is a “life saver”. It grabs my “informational guide” and makes it a lot more presentable. Obviously, it doesn’t always come out good, but it helps a lot.

    If I were to read a book or game where the author did this, personally I wouldn't mind at all. The secret is whether the author gave a crap about it or not, and whether it's good.

    That’s why, at least in concept, I agree with the author

    21 votes
  4. [3]
    skybrian
    (edited )
    Link
    I can partially agree: I usually don’t mind if someone got some help from ChatGPT, so long as the writing is good and it doesn’t show. But that assumes it doesn’t show. Your standards should be at...

    I can partially agree: I usually don’t mind if someone got some help from ChatGPT, so long as the writing is good and it doesn’t show.

    But that assumes it doesn’t show. Your standards should be at least as high as when you write something unassisted. You should understand and endorse everything you wrote. The writing style should sound like you, but better. Having assistance should improve your work, as if you were working with a good (human) editor.

    If you use tools carelessly, it’s still on you.

    (The trouble is, there are a lot of careless people out there. Good writers are badly outnumbered.)

    11 votes
    1. [2]
      ShroudedScribe
      Link Parent
      100% agree. The thing that baffles me the most is when people don't proofread or validate AI output. This is applicable to text (both long and short form) and images. If you write a book, then...

      You should understand and endorse everything you wrote. The writing style should sound like you, but better. Having assistance should improve your work, as if you were working with a good (human) editor.

      100% agree. The thing that baffles me the most is when people don't proofread or validate AI output. This is applicable to text (both long and short form) and images.

      If you write a book, then pass it off to a human editor, you should object to significant changes presented, as it still should be your writing that you claim authorship of.

      4 votes
      1. sparksbet
        Link Parent
        This feels like it comes from a place of inexperience. Human editors suggest significant changes that are incredibly valuable to the creation of the best text you can, and it would be a foolish...

        If you write a book, then pass it off to a human editor, you should object to significant changes presented, as it still should be your writing that you claim authorship of.

        This feels like it comes from a place of inexperience. Human editors suggest significant changes that are incredibly valuable to creating the best text you can, and it would be a foolish waste of an editor's time to ignore anything significant they suggest. An editor suggesting significant changes also does not mean that it isn't your writing -- sufficiently significant changes usually entail the editor telling you what changes would work and you actually implementing them with your own writing.

        4 votes
  5. [10]
    DefinitelyNotAFae
    Link
    I mean I think we'll just note that the writing seems off and to me this last paragraph, the last sentence in particular, does not make sense. Am I failing to parse it? I keep trying and I can't...

    And do you know why it doesn’t matter? Those who read this blog can feel, deep down, that an AI could not generate what is written here. And if the reader is unable to grasp this kind of subtlety, I’m sorry to say, but they will ignore the disclaimer that AI was used — at most, finding it a bit funny.

    I mean I think we'll just note that the writing seems off and to me this last paragraph, the last sentence in particular, does not make sense. Am I failing to parse it? I keep trying and I can't get it to work but I didn't sleep last night so maybe it's me. /gen

    I also think calling it "preciousness" is dismissive and the argument misses the point of objections. But fundamentally for me, I disagree with the idea that one should set their ethics aside and somehow that makes it a better decision? (What does strict sense mean here?) Ethics are fundamental to my evaluation of whether I "care" if someone does something. (And I think the author cares too given they wrote a blog post)

    Ultimately I think the translation let them down, or if the LLM was also formulating the points it failed them. To me, the existence of clickbait websites and badly written notes doesn't mean it doesn't matter who or what wrote a thing. But perhaps I'm missing something here.

    9 votes
    1. [2]
      TMarkos
      Link Parent
      It's poorly worded, but I think his use of "preciousness" is intended as a jab at the notion that human-generated text is better in some fundamental way that stands independent of the merits of...

      It's poorly worded, but I think his use of "preciousness" is intended as a jab at the notion that human-generated text is better in some fundamental way that stands independent of the merits of the actual generated text, as though using AI to write were some sort of sacrilege. That sounds like extreme wording, but I don't think you'd have to go far to find people who more or less agree with the sentiment, especially if you also consider image-generating AIs as part of the conversation.

      The last paragraph is just saying that those who care will know without disclaimers, and those who don't care will neither know nor care that AI was used regardless of disclaimers. His entire contention is that disclaimers about AI are performative, a purity test unrelated to the actual functional use of the text, and do not serve a purpose.

      I'm not sure I agree. From a plagiarism/attribution standpoint alone, using externally sourced text is problematic without a citation. It also gives information on the sourcing of facts in the document - not that a human is perfect or without malice in that regard, but the errors an AI makes are different than those a human would commit or insert, so it's useful to know that.

      I don't think the article is nonsensical or poorly done, but I do think it is useless in the same sense that he's deriding disclaimers for being - it's just performative in the other direction.

      9 votes
      1. DefinitelyNotAFae
        (edited )
        Link Parent
        I understood its intent to be deliberately dismissive, yes. And the people I've seen make that particular argument are either artists or people who support artists and who don't appreciate having...

        It's poorly worded, but I think his use of "preciousness" is intended as a jab at the notion that human-generated text is better in some fundamental way that stands independent of the merits of the actual generated text, as though using AI to write is some sort of sacrilege - which sounds like extreme wording, but I don't think you'd have to go far to find people who more or less agree with the sentiment, especially if you also considered image-generating AIs as part of the conversation.

        I understood its intent to be deliberately dismissive, yes. And the people I've seen make that particular argument are either artists or people who support artists and who don't appreciate having all of their artwork scraped since that's their livelihood, this is inclusive of writers as well.

        I think the dismissiveness is unwarranted was my point.

        I'm not sure I agree. From a plagiarism/attribution standpoint alone, using externally sourced text is problematic without a citation. It also gives information on the sourcing of facts in the document - not that a human is perfect or without malice in that regard, but the errors an AI makes are different than those a human would commit or insert, so it's useful to know that.

        I don't think the article is nonsensical or poorly done, but I do think it is useless in the same sense that he's deriding disclaimers for being - it's just performative in the other direction.

        I'd broadly agree.

        Edit: fixed a sentence, operating on zero sleep so let me know if there's more

        2 votes
    2. [7]
      rodrigo
      Link Parent
      Maybe the AI-powered translation failed me in the last paragraph :) I always review it and try to fix eventual errors, but since I'm not a native English speaker, I'm prone to fail as well. What I...

      Maybe the AI-powered translation failed me in the last paragraph :) I always review it and try to fix any errors, but since I'm not a native English speaker, I'm prone to fail as well.

      What I tried to say in the last paragraph is that for those who care, a good human-written text shows itself. For those who don't, disclaimers saying that AI was used are useless, since they don't care and may not even understand what “AI was used in this article” means.

      When I spoke about ethics, I meant regarding how LLMs are built. I'm not downplaying that issue, which I think is important, just setting it aside to tackle another more specific, higher-level issue.

      I hope you can understand me. This answer was written by me only, without any AI assistance.

      6 votes
      1. [6]
        DefinitelyNotAFae
        Link Parent
        That is why I asked for clarity, please take my efforts to make sure we are understanding each other to be as genuine as my other comments. I don't agree with this take, I think there are plenty...

        Maybe the AI-powered translation failed me in the last paragraph :) I always review it and try to fix eventual errors, but since I'm not a native English speaker, I'm prone to fail as well.

        That is why I asked for clarity, please take my efforts to make sure we are understanding each other to be as genuine as my other comments.

        What I tried to say in the last paragraph is that for those who care, a good human-written text shows itself. For those who don't, disclaimers saying that AI was used are useless, since they don't care, even can't understand what “AI was used in this article” means.

        I don't agree with this take, I think there are plenty of people who don't care about a disclaimer yet still understand it. And as someone who dislikes LLMs, I find it useful as well. If you mean ultimately that it only matters if the final work is "good" or "bad" I find that too subjective to say, but I'd rather read "bad" work by a person than most of what comes out of an AI. Especially when you can tell.

        When I spoke about ethics, I meant regarding how LLMs are built. I'm not downplaying this issue, which I think is important, just setting it aside to tackle other more specific, higher-level issue.

        I understand wanting to focus on a specific issue, though I would consider ethics to be one of the "highest level" issues, since they are foundational to my decision-making process. (Though I appreciate the irony of saying "highest level" and "foundational" in the same sentence.) Personally I cannot divorce the environmental and copyright concerns from my decision-making.

        I hope you can understand me. This answer was written by me only, without any AI assistance.

        I think I did, let me know if I'm clear enough. I meant it when I said I didn't sleep last night and I'm not 100% sure I'm conscious currently.

        5 votes
        1. [5]
          rodrigo
          Link Parent
          I believe your efforts are genuine! I'm trying to expand the scope. It's not only a matter of “good” or “bad”. Think about corporate reports or emails, where what's considered “good” is at the...

          I believe your efforts are genuine!

          If you mean ultimately that it only matters if the final work is "good" or "bad" I find that too subjective to say, but I'd rather read "bad" work by a person than most of what comes out of an AI. Especially when you can tell.

          I'm trying to expand the scope. It's not only a matter of “good” or “bad”. Think about corporate reports or emails, where what's considered “good” is at the same time boring, bad text. In these cases, does it matter whether they were written by a human or an AI? Either way, the result will be bad (as in boring, formulaic) text.

          And in freer contexts, where quality can shine, you (still) can tell when something comes out of an AI, which I take as a signal to read or ignore a piece of text.

          Maybe a more extreme example can help. Think about the shrimp Jesus that infested Facebook a few months ago. For those who can tell it's a gross AI-generated image, it's easy to dismiss them. For those who liked and shared and commented “amem!”, does a disclaimer that they are AI-generated matter? Even if they were drawn by humans, they would still be gross. There are related issues, like scale and the willingness of Facebook's algorithm to spread this shit, but still…

          2 votes
          1. [4]
            DefinitelyNotAFae
            Link Parent
            I don't think I agree - I'm very deliberate in crafting those emails that someone might use something like ChatGPT for. Most of my emails are short enough that it'd take longer to ask an LLM to...

            I'm trying to expand the scope. It's not only a matter of “good” or “bad”. Think about corporate reports or emails, where what's considered “good” is at the same time boring, bad text. In these cases, does it matter if they were written by a human or AI? Regardless, the results will be a bad (as in boring, formulaic) text.

            I don't think I agree - I'm very deliberate in crafting those emails that someone might use something like ChatGPT for. Most of my emails are short enough that it'd take longer to ask an LLM to write them than to write them myself. But for the important messages - resources for students, emails to staff about procedures they need to follow - I work those over and over. Even ones that start as templates are edited individually for the needs of a student. And the ones for staff are legitimately important. I think it'd be obvious to everyone, except maybe the students who don't know me already, that an email wasn't written by me, and those students have the greatest need for clarity.

            (As an aside, I don't think I trust LLMs to navigate the "women are perceived negatively for speaking bluntly in emails" issue. I actively have to work my punctuation to include a few exclamation points, but not too many, and maybe an emoji, depending on the topic. It's exhausting and yet, I need people not to look and assume I'm a bitch for it. A lot of the accusations I get about tone on this website are, IMO, because I don't do that here. So that's why I wanted to assure you I was being genuine and not sarcastic or derisive or anything.)

            And in freer contexts, where quality can shine, you (still) can tell when something comes out of an AI, which I take as a signal to read or ignore a piece of text.

            Which I think contradicts your headline/title: you do kind of care if someone is using ChatGPT, depending on the output.

            Maybe a more extreme example can help. Think about the shrimp Jesus that infested Facebook a few months ago. For those who can tell it's a gross AI-generated image, it's easy to dismiss. For those who liked and shared and commented “amen!”, does a disclaimer that they're AI-generated matter? Even if they were drawn by humans, they would still be gross. There are related issues, like scale and the willingness of Facebook's algorithm to spread this shit, but still…

            I did not see this particular image. But perhaps a counterexample: my mom just recently shared a fake quote from the new Pope. I commented and told her it was fake, and her first impulse was "but it's still a good message." But the next reply was someone who learned it was fake from me and thus didn't spread it. (And my mom took it down later.) Knowing something is completely fabricated is legitimately helpful; it's important to know that the new Pope wasn't Trump despite an apparent photo showing otherwise. A different example where AI is involved: I see some really cool crochet/knit products that look super intricate, and turn out to be super duper fake, AI-generated images. Which means there's no pattern, no way to purchase the cool-looking octopus or whatever it is. It matters a lot whether that thing was created by a person or not, at least to me.

            I do fully agree that Facebook's outrage algorithm is also part of the problem. I just think it does matter, in a lot of different areas, whether something is AI or not.

            3 votes
            1. [3]
              rodrigo
              Link Parent
              I don't think so. My title doesn't indicate that I will read your text regardless of AI usage; simply that I don't care — and, if you used AI to generate slop, I'll probably pass it unless I'm...

              Which I think contradicts your headline/title: you do kind of care if someone is using ChatGPT, depending on the output.

              I don't think so. My title doesn't indicate that I will read your text regardless of AI usage; simply that I don't care — and, if you used AI to generate slop, I'll probably pass on it unless I'm obligated to read it.

              Your counterexample is a different issue, IMHO. Disinformation is bad, AI-generated or not. AI exacerbates the problem, and has a trust issue at its core (“hallucinations”), but disinformation is not a problem created by AI.

              I regret to introduce you to Shrimp Jesus 🥲

              1 vote
              1. [2]
                DefinitelyNotAFae
                Link Parent
                Gonna leave that link blue. Disinformation is bad but I view LLMs as disinformation machines and are inherently unethical. I do like knowing how ethical the things I'm consuming are. Fundamentally...

                Gonna leave that link blue.

                Disinformation is bad, but I view LLMs as disinformation machines that are inherently unethical. I do like knowing how ethical the things I'm consuming are.
                Fundamentally we don't agree, because I care quite a bit and can't set my ethics aside. I don't think that's being overly precious, and overdoing the dismissiveness makes me think less of the overall argument.

                But I truly don't understand the idea that all emails are "bad", are you equating like, "art" and "good" vs "rote" and "bad"?

                3 votes
                1. rodrigo
                  Link Parent
                  Email can be good. I'm generalizing, focusing on corporate environments, as an example of a medium that, on average, produces low-quality text.

                  Email can be good. I'm generalizing, focusing on corporate environments, as an example of a medium that, on average, produces low-quality text.

  6. [2]
    Protected
    (edited )
    Link
    Yeah, I've said this before, but I think a lot of people on both sides of this debate are missing the point. Recent technologies that fall under the umbrella of the marketing designation "AI" are...

    Yeah, I've said this before, but I think a lot of people on both sides of this debate are missing the point. Recent technologies that fall under the umbrella of the marketing designation "AI" are powerful tools with many legitimate uses (setting the energy/cost aside for now). Those uses range from digesting information and bouncing ideas to prototyping, iterating, even translating a blog post. Like all good tools, they can make you faster and more productive. They are no different from, say, a hammer. I don't think anyone who understands these technologies is arguing otherwise.

    But in all those cases you are using the tool to enhance your own work. It's not like you can't ultimately do the work on your own, given enough time. You can hammer in the nail with a few chunks of sandstone, because you understand the concept of the (missing) hammer and the concept of a nail and you can figure out what you're supposed to be doing. The AI, if correctly used, is actually helping you learn by providing missing information that deepens your familiarity with the process - next time you hopefully know what a hammer is and that it's something you can use.

    The way a lot of people want to use these technologies is not that. They're hired to hammer in the nail, so they toss a hammer at the customer. "Here's your hammer!" They don't understand how the hammer works and they don't even want to try. The AI knows what the process is because someone else needed to hammer a nail at some point and the AI has been exposed to this "stolen" (not a definition I exactly agree with) information, but there's often something missing. The solution isn't quite ideal. Nobody was there to actually consider what was unique about the problem and go that extra centimeter beyond what was invariably already done in the past.

    If the human making use of the AI didn't actually do anything, what use are they? The tool isn't enhancing their output. They have no output. Other people solved problems in the past, and they are facilitating a connection between these solutions and one or more third parties - sometimes badly enough that the customer who wants a nail hammered might end up with a screwdriver. The third parties can, with just a little more effort, do just as well themselves. Attempting to pass off wholesale AI output as creative output is bullshit, and if enough people buy into it, no one will be around anymore to actually contribute solutions to power future AIs' training corpora.

    Just recently (less than a week ago), I was looking for news articles on a certain subject. I've noticed lately that it's becoming hard to find interesting, well-written articles on anything. If you look through submissions here that aren't videos or local discussions (or even on the orange website), there are a whole lot of very short articles of dubious quality. I used to think these were articles whose writers, by catering to short attention spans, at least wanted humans to read them.

    This was something else, though. I went through article after article trying to understand more about the news I was interested in. One after the other, I found grievous grammatical errors, an almost complete lack of sources, a disconnect between the arguments being made, stupid takes betraying a lack of research, and more. Some of these texts read like the work of children, and I thought to myself "so this is what those despondent college professors are talking about when they say their students are functionally illiterate." Most of these sorry excuses for articles (proudly?) displayed the name of the writer, too.

    Should these writers be using AI? Probably. If their education (or lack thereof) has failed them, I think there's a lot of room for improvement if they incorporate AI tools into their workflow. If they iterate with AI, they can learn to write better, and they can enhance their output beyond what they're capable of right now - or what I've seen them produce, at least. I guarantee it.

    But will AI, on its own, in any way turn them into good writers? Absolutely not. AI can't read your mind. AI doesn't know everything. The writer must themselves understand what they're trying to say. They must do the research and be able to confirm the information they're trying to convey. They must be able to read their own work and understand whether it's coherent, whether it makes any sense. I think writers who publish stuff like this, that goes beyond low effort and into a kind of flippant disrespect for the reader's time and intelligence, perhaps even for writing itself, irritate a lot of people. And when AI helps these writers hide their rotten non-output under a veneer of grammatically correct professionalism, it really irritates the writers who actually put in the time and effort to write, from scratch, articles that are interesting, logical, informative, and which were used without permission to train this AI in the first place.

    EDIT: I want to make it clear that "stuff like this" refers to the articles I kept running into when trying to find useful information, not OP's text :)

    8 votes
    1. Lia
      Link Parent
      Not just the writers, us readers too. When the gap between competent writing and good research vs. low effort attention-seeking is as wide as it is, it's disappointing that these tools are being...

      I think writers who publish stuff like this, that goes beyond low effort and into a kind of flippant disrespect for the reader's time and intelligence, perhaps even for writing itself, irritate a lot of people. And when AI helps these writers hide their rotten non-output under a veneer of grammatically correct professionalism, it really irritates the writers who actually put in the time and effort to write, from scratch, articles that are interesting, logical, informative, and which were used without permission to train this AI in the first place.

      Not just the writers, us readers too. When the gap between competent writing and good research vs. low-effort attention-seeking is as wide as it is, it's disappointing that these tools are being used to intentionally obfuscate what's what. It doesn't serve any valuable purpose to make scammy presentation easier to apply (although I agree with you that AI has good uses as well). It just results in deteriorating online spaces.

      A comment saying "I don't care" is, then, irritating by extension. Not as irritating as the people who gleefully generate Miyazaki-style image content that glaringly goes against the original creator's values, and when called out, say "Fuck Miyazaki". But it seems to belong to the same broad phenomenon where people sacrifice human values for the chance to try to make themselves minutely relevant, for a split second, in the eyes of some random audience they don't actually even care about.

      3 votes
  7. [2]
    an_angry_tiger
    Link
    I would rather just read the prompt than the output. Anything that's in the prompt is the stuff you meant to say, and anything that gets generated from it is either extrapolated/assumed by the...

    I would rather just read the prompt than the output. Anything that's in the prompt is the stuff you meant to say, and anything that gets generated from it is either extrapolated/assumed by the LLM, or something obvious enough that I could infer it from reading the prompt (I mean, that's what an LLM does anyway, isn't it?).

    I read the other day about someone feeding bullet points of an email to an LLM to turn them into a well-written email, and all I could think was how annoying it must be to read a long, fleshed-out email instead of the short, concise bullet points. I get that they see the LLM output as more "well-written" and fleshed out and proper and all that, but I don't care: the bullet points are what you meant, the rest is formatting. I'd rather know exactly what you're trying to say, no matter how shitty you're trying to say it.

    8 votes
    1. plutonic
      Link Parent
      You just feed that AI written email right back into the AI and get it turned back into a bullet point list. Full circle.

      You just feed that AI written email right back into the AI and get it turned back into a bullet point list. Full circle.

  8. [4]
    Raistlin
    Link
    PSA reminder that there are maintained filter lists that block hundreds of thousands of AI and SEO content farms. They're not perfect since it's a moving target, but they restore the internet to a...

    PSA reminder that there are maintained filter lists that block hundreds of thousands of AI and SEO content farms. They're not perfect since it's a moving target, but they restore the internet to a somewhat useable state:

    https://github.com/NotaInutilis/Super-SEO-Spam-Suppressor

    https://github.com/laylavish/uBlockOrigin-HUGE-AI-Blocklist

    If you have ublock origin, you can copy and paste them into your custom filters and call it a day.
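    For anyone curious what's inside those lists, uBlock Origin filters are plain text, one rule per line. A minimal sketch with a hypothetical domain (the real lists are far larger and, as I understand it, mostly hide matching search results with cosmetic rules rather than blocking the sites outright):

    ```
    ! Comment lines start with "!"
    ! Block network requests to a content farm (hypothetical domain):
    ||seo-spam-farm.example^

    ! Hide Google search results linking to it (procedural cosmetic filter):
    google.com##.g:has(a[href*="seo-spam-farm.example"])
    ```

    Rules pasted into "My filters" are static; importing the list URL under the "Filter lists" tab instead lets uBlock auto-update it as the maintainers add new domains.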

    8 votes
    1. [3]
      lou
      Link Parent
      Those seem useful, thanks!

      Those seem useful, thanks!

      1 vote
      1. [2]
        Raistlin
        Link Parent
        No worries. Yup, they're a huge QOL change. They remove tons and tons of garbage AI and SEO farm search results from your browser.

        No worries. Yup, they're a huge QOL change. They remove tons and tons of garbage AI and SEO farm search results from your browser.

        1 vote
        1. lou
          Link Parent
          It does warn me if I try to access chatgpt.com, which is a little silly because in that case I clearly want to access it and the website itself is not inherently malicious.

          It does warn me if I try to access chatgpt.com, which is a little silly because in that case I clearly want to access it and the website itself is not inherently malicious.

  9. Rudism
    Link
    When I engage with written content (especially on the web) part of what interests me is the insight into the opinions, biases, and assumptions made by the author. Knowing that it was written by a...

    When I engage with written content (especially on the web) part of what interests me is the insight into the opinions, biases, and assumptions made by the author. Knowing that it was written by a human, I can often mentally forgive slop-like writing or content because I can at the very least find value in discovering that there's at least one person out there who thought that what I just read held some kind of value or insight worth sharing--I've learned a bit more about my fellow humans.

    When I can't be sure that a human wrote it, it loses even that small scrap of value to me and instead I just feel like I've wasted my time.

    4 votes
  10. lou
    (edited )
    Link
    I don't have a deeply personal or philosophical reason to dislike generated text. But it is usually not something I find worthy of time. The kind of text AI does somewhat well is also the kind of...

    I don't have a deeply personal or philosophical reason to dislike generated text. But it is usually not something I find worthy of time. The kind of text AI does somewhat well is also the kind of text I generally avoid when humans write it.

    AI often produces good answers to very specific questions, but in that case, I will just ask it directly. I do not wish to read complete articles generated this way.

    I am okay with AI translation as long as the authors themselves fully verify the translation, making the proper fixes and adjustments. They must fully vet the finalized text and take responsibility for it. In my view, AI translation does not change substantially how we translate, since translation tools have been used by professionals for many years (Google Translate is almost 20 years old...).

    2 votes