Is YouTube's use of AI upscaling for Shorts unethical?
Link information (scraped automatically; may be incorrect):
- Title: YouTube Is Using AI to Alter Content (and not telling us)
- Author: Rhett Shull
- Duration: 13:57
- Published: Aug 14 2025
This was uploaded back in August, and YouTube subsequently responded on Twitter to clarify that it was testing AI upscaling, not altering content. I've changed the title here in an attempt to avoid sensationalism while retaining valid concern.
I try my best to avoid YouTube Shorts for all sorts of reasons (ones it shares with other social media video feeds like TikTok, Instagram, etc.), but one of them is the proliferation of what looks to me like AI-generated content. In my rare forays into Shorts (before coming across this video), I see many clips with the same kind of smeary effect Rhett mentions, though surely a lot of them are real videos with some weird filter intentionally applied. Either way, I don't like it, so even seeing upscaled videos that end up with an unintentionally similar filtered look is disgusting to me. Add that on top of mountains of clickbait and propaganda and you can argue that YouTube has, at minimum, been irresponsible with this. It's easy to become paranoid: for example, try searching YouTube for any first-party reporting on Burkina Faso's president Ibrahim Traoré and see how much junk you have to trawl through. By the time I finally found some, I was wondering whether I was imagining that telltale AI stiffness in him, even in regular, non-Shorts videos.
This gets to what I think may be the core ethical question about AI use, mentioned in the latter part of the video: the inevitable erosion of trust when humans cannot easily tell whether they are interacting with AI-generated content, in any format.
My hope is that we implement some sort of requirement that any published media disclose whether it uses AI, and maybe even how much, the same way food products containing GMO ingredients must be labeled. How this could be practically applied or enforced, I have absolutely no clue... we're heading down a road where everyone and everything will need a label. Still, there need to be guardrails around the fundamental assumptions we have relied on for millennia to navigate human society. I'd even go as far as making it illegal for any chatbot, AI agent, etc. to use first-person pronouns.
OK, to be fully honest: would anyone actually get upset about this issue if it hadn't been reported as "AI upscaling"? If it were just called "upscaling" or "enhanced transcoding" or something, people would probably either be unaware or think it's a cool feature. AI upscaling has existed in one form or another for quite a while, and has quietly been a powerful tool for a lot of purposes.
The question is valid, though: if YouTube were using this as an excuse to train AI on all YouTube Shorts, that would definitely be a misleading and scummy thing to do. But I'd have to see some scrap of proof that it's actually happening, not just "what if they were doing this?"
I get that everyone is rightfully a little touchy about AI, since pretty much every major company has trawled the internet and stolen zettabytes of data to train their models, but there isn't any real evidence that this is that. YouTube claims it's traditional machine-learning upscaling and explicitly not generative AI, which is neither new nor damaging to artists and creators.
YouTube absolutely should make this an opt-in service, though, simply because people are so wary of it. There's no reason to push a feature on people who don't want it.