Couple thoughts, may as well start with the silly one: anyone else almost hear "Canada" every time the narrator says "calendar"?
While overall I'm in line with the video, there are two minor points I think merit nuance (or I just wanted to share my thoughts):
-The first is the argument that the article they looked at must have been generated by AI because an AI writing detector flagged it as high probability. I've seen firsthand that those tools are all over the place in their predictions when assessing purely human-written text. (Though maybe they checked the articles in other ways to conclude they were likely AI slop, and just didn't mention it for the sake of the narration.)
-The second is about the finding that scientific papers show an increasing trend toward markers indicative of AI. Grain of salt and all, but this isn't necessarily bad. Maybe researchers can now write their manuscripts in their native language and use an AI to translate (or write in English and clean up the grammar). Or, sure, they may be plugging in findings to help build the first draft (original research articles tend to be very formulaic IMRaD). That said, I agree it's important to state which AI tools are being used and how. The above uses are very different from plugging in results and asking the AI to build a discussion or draw conclusions without heavy human brainpower contributing.
This reminds me of xkcd's Citogenesis comic, which stuck in my mind a long time ago when I was trying to find the original source of some outlandish claim. Except now, we can replace step 1 with "someone asks ChatGPT".
A new Kurzgesagt video about how AI slop is "poisoning" the "library of human knowledge", and how it is affecting Kurzgesagt.
Includes a bit at the end about how they'll use AI (LLMs) going forward (and how they won't trust it for the research phase or for fact-checking).
Sources link from their description: https://sites.google.com/view/sources-aislop
Sidebar: I used the title I saw, but anyone with edit privileges feel free to de-sensationalize it if you can think of a better one.