Concerning YouTube short I came across
Short in question
This short popped up in my recommended. It's clearly AI (tinny voice, a random jump in scenes in the middle of one of her sentences, the very awkward "oh yeah, stop me"), but, unlike with older AI videos, virtually nobody in the comments realizes. With how good AI is getting, we'll very probably have actual riots and political conflicts breaking out over AI hoaxes and AI-fueled sentiment campaigns (if the WhatsApp lynchings in 2017-2018 are any measure of how bad social media hoaxes can get). On the other hand, citizen journalism of atrocities may come to be worthless, easily dismissed as AI. Humanity is cooked, as the kids say.
(Reposted as a text post and with a better title. Previous comments can be viewed here)
My first instinct is: are all of the commenters on that video being fooled by AI, or are you?
That is, how many of those comments represent real people, and how many of them are just bots farming engagement?
It's nearly impossible to tell, and the kind of person willing to post blatant lies to mislead people for money is exactly the kind of person who would also pump their engagement with bot farms.
That's not to say this stuff isn't concerning, but the entire Internet is rapidly becoming a series of bots interacting with each other.
In this case I'd say it's pretty self-evident. There are repeated patterns in many of the usernames (like name-[3 random characters]). The bot farm isn't trying very hard to hide it.
Fake content is definitely going to become a bigger problem. But most people don't like being fooled and recoil pretty strongly once they realize it's happened. People who will believe anything will continue to believe anything; everyone else will adapt eventually.
Looks like a click farm trying to find its base. The first video posted was a badly synced singing video a few weeks ago. The next two were pro-Somali-immigrant and anti-Trump. This one hit the jackpot, it seems, with over a million views (bought or not), and will likely set the pattern for what the account continues to post.
The limited duration of AI clips used to be a marker you could use to identify them, but the endless jump-cut style of YT shorts makes that hard to rely on here. I wonder how convincing this is to the generations who have grown up with both? I think Gen Z is generally best at identifying AI content; I'd be interested to hear from them. Not sure whether Gen Alpha is better or worse than millennials at this point.
This has been going on for a while now. During the shutdown, there were a handful of racist AI-made videos of Black women complaining about not getting their welfare checks. Some of them were shared and reported on by Fox News. It's as infuriating as it is horrifying. Unfortunately, human-made content can be just as damaging, seeing as Nick Shirley's "Project Veritas for dipshits" video, which was boosted by Musk, sparked a wave of outrage and harassment in Minnesota and was likely the reason someone made this. We are very much cooked.
This is the plot of Mountainhead, only we don't seem to have a Ramy Youssef. Nothing means anything, and everything is funny.