36 votes

AI slop is killing our channel

8 comments

  1. Liru
    This reminds me of xkcd's Citogenesis comic, which stuck in my mind a long time ago when I was trying to find the original source of some outlandish claim. Except now, we can replace step 1 with "someone asks ChatGPT".

    24 votes
  2. Zorind
    A new kurzgesagt video, about how AI slop is "poisoning" the "library of human knowledge", and how it is affecting Kurzgesagt.

    Includes a bit at the end about how they'll be using AI (LLMs) going forward (and how they won't trust it for the research phase or for fact-checking).

    Sources link from their description: https://sites.google.com/view/sources-aislop

    Sidebar: I used the title I saw, but anyone with edit privileges feel free to de-sensationalize it if you can think of a better one.

    19 votes
  3. [2]
    lackofaname
    Couple thoughts, may as well start with the silly one: Anyone else hear, almost, "Canada" every time the narrator says "Calendar"?

    While overall I'm in line with the video, there are two maybe-minor points I think merit nuance (or I just wanted to share my thoughts):
    -The first is the argument that the article they looked at must have been generated by AI because it was flagged as high-probability by an AI writing detector. I've seen firsthand that those tools are alllllll over the place in their predictions when assessing purely human-written text. (Though maybe they rechecked the articles in other ways to conclude whether they were likely AI slop, and just didn't mention it for the sake of the narration.)

    -The second is about the finding that scientific papers show an increasing trend toward markers indicative of AI. Grain of salt and all, but this isn't necessarily bad. Maybe researchers can now write their manuscript in their native language and use an AI to translate (or write in English and clean up the grammar). Or, sure, they may be plugging in findings to help build the first draft (original research articles tend to follow a very formulaic IMRaD structure). That said, I agree it's important to state which AI tools are being used and how. The above uses are very different from plugging in results and asking the AI to build a discussion or draw conclusions without heavy human brainpower contributing.

    15 votes
    1. JCAPER
      About the second thought, didn’t they say that those papers weren’t disclosing that they used AI?

      If that’s true (and if I’m not misremembering) then it’s fair to assume - or doubt at least - that those papers didn't have proper care when being written.

      I’m of the same mind as you though. In fact I would go as far as to say that I wouldn’t care that the paper was written in its entirety with LLMs, as long as the quality is there. If it’s fact checked and completely on point, then who cares?

      But if you don’t disclose the usage, then yeah… I’m going to wonder if you didn’t on purpose, and if you put in the work necessary to produce that paper.

      7 votes
  4. [2]
    teaearlgraycold
    I realized recently that web search has effectively been ruined for searches of the nature “<superlative> <noun>s in the world”. Everything seemed to be AI slop. I wanted a nice article from a researcher or historian or a Wikipedia page or something that was potentially high quality text written by an expert. The content I found even had a ton of AI images sprinkled in to let me know the text was for sure written by an AI.

    5 votes
    1. JXM
      Search was broken long before AI.

      It's been that way for years. AI is just the latest tool used to create these articles. Before that, they paid humans pennies to write shitty SEO optimized articles that barely made sense but included the right keywords to get clicks on Google. AI has just optimized the process even more for the companies that own those garbage sites.

      13 votes
  5. [2]
    TonesTones
    I’m not very convinced that the problem of “fake information” is new to the age of AI. There are lots of great stories in the study of history about “facts” appearing from thin air. Searching for primary sources and then debating the validity of those primary sources was my favorite part of the few history classes I did take.

    I guess random stuff just being fabricated about today is new. Realistically, I think AI will just reinforce the practice of “trust trusted sources” that you saw before the age of pictures and video.

    4 votes
    1. CannibalisticApple
      It's not new, but what is new is the sheer volume and speed of it. Someone else linked the XKCD comic about citogenesis, where a fake fact is added to Wikipedia, and it gets quoted in an article, which is then cited on Wikipedia as a source for the fact. Wikipedia has a dedicated list of citogenesis incidents, and some are really crazy. The Alan MacMasters one is particularly notable to me, since it went undetected for ten years.

      So, once a "fact" gets quoted in a published article, that can lead to other articles quoting that one, and propagating it that way. Disproving those "facts" can be incredibly complicated and time-consuming as people track down the original source... And AI just flat-out making up facts makes it harder to disprove those since people won't necessarily cite AI. Not everyone will go through layers of citations to find the very first mention of a "fact" or consult experts. I'd wager the average person would never check beyond the first place they read or see it, or stop after seeing it mentioned in two or three different articles.

      In my opinion, the bigger issue from there is that it's really hard, maybe even impossible, to remove erroneous information from AI repositories. This article is about an erroneous term that got picked up by AI. They've figured out the source, and they can prove it's nonsense, but it's now used in so many actual scientific papers that it's pretty much impossible to scrub from the data sets.

      And like I said, not everyone will rigorously check citations and sources. By the time someone finally notices, a false fact will have plenty of time to propagate and become embedded in AI, contaminating it. And with the way people use AI these days, it really does contaminate the overall knowledge base.

      10 votes