My daughter is at one of the big SoCal film schools. A couple months ago she cast me in her final project and I got to see the future of filmmaking firsthand.
Her coproducer was a young man from Calcutta who dreams of going back to India with Hollywood connections and training, hopefully as the local rep for Netflix. He is all in on AI and is taking every class the school offers on it.
This past fall semester they had one AI class. By spring they had six, all taught by a single professor who is more a programmer than an artist. This young man from India showed me his two-minute “proposal” for his senior thesis, a fully synthetic short film about a man who dreams he is turning into a demon. He made the whole thing on his laptop over a weekend.
In the end, we made a beautiful little film with great acting and a very sweet and sensitive director who is a Nickelodeon child actor all grown up. But the coproducer from India vanished during post-production.
He had no patience for the politics of working with people. He couldn’t handle deadlines and compromise because he is learning he doesn’t need to work with anyone but a few LLMs. The professors yelled at him and gave him poor grades. He may not be back next year to finish his degree.
But he obviously feels he doesn’t need their full program. He thinks he’s got it all figured out with AI. And for his generation, that might be true.
Movie making is such a complex process that it’s really easy to get lost in the weeds. They have LED sound stages that already function as super green screens. For many of the kids, AI is just another step in that direction.
I'm not quite sure what to take from this tale. It sounds like a student who got too big for his britches and thinks he can do everything himself. But at the same time, the most disruptive people in tech didn't take traditional paths either, and it could be the origin story for the next big thing. I guess we'll see.
My personal morals aside, the only thing I warn against here is this: Gates, Jobs, and Zuckerberg didn't rely on a black box to give them power. If you don't have the power to make deals with such tools, you'll see how quickly your own power wanes without them.
I think the point is that there are more and more people like that kid, who think AI is the future for creativity.

Arrested Development Ron Howard Voice: It wasn’t.
And something such people are missing is that even if they're right, they're also not needed. To use the example of the film student above, why would an employer pay him when they could just use the machines without him? For his great "ideas"? They are surely capable of prompting such things themselves.
It depends. If you look at the progression of the music industry, it went from something you needed to pay hundreds of dollars an hour to record in a studio, down to something you can do on a laptop in your basement.
A lot of the money in music these days is not in the recording, it's the distribution and advertising. You can go the traditional route with the music labels, or you can record it in your basement, and try to get publicity through social media and word of mouth. The music label isn't doing anything you can't do, they just have connections and money that you don't.
I think film is following a similar trajectory. We can already record pretty decent stuff on our phones, but the CGI was out of reach for most people. AI is helping to bridge that final gap. A single person can now film, direct, edit, animate, and release a "blockbuster" sized movie all on their own (though they may want some actors). It still won't get all the marketing and publicity, but it's the next logical step in indie filmmaking - CGI/animation that was previously only for big-budget films can now be done on your laptop.
I think film is following a similar trajectory. We can already record pretty decent stuff on our phones, but the CGI was out of reach for most people.
I get the gist of what you're saying, but there is still a huge chasm between what I can do at home with an iPhone, and this, and I'm skeptical diffusion models can actually cross that.
My point was less about the ability to make a thing and more about the ability to make those things for a living. As the required skill and investment go down, the market floods with lots of content, and the differentiator is then often just marketing. Someone "good at prompting songs" is not likely to end up employed by a label, and anything they make independently will just get lost in a sea of prompted songs that are just as good (probably better, given more resources for hungrier models) and that had a marketing budget behind them.
So I don't really see where these people "fit". Their skills don't seem useful to an employer, and no one is clamoring to find "indie" AI hits. If I wanted to start making a living off prompting AI to make songs, I think I'd find that there's no money in it for me. Even if I somehow got lucky enough to gain a degree of popularity, the fact that there is nothing at all differentiating my AI work from what anyone else could produce makes it very likely a bigger name would just copy the styles I was prompting and bury me. An actual artist can try to overcome this by building a loyal fan base, but I don't believe there's a real demographic of people who want to be fans of AI middlemen.
I'm not referring to people who just want to make stuff for fun. I'm referring to people who want a career, like that guy who wants to work for Netflix, though I can't see why anyone would hire him.
Yeah, the entire idea of "prompt engineering" as a career is a complete joke. It's similar to the idea of a "smartphone operator" in the early 2000s. The entire point of both technologies is to be usable by an average person with no technical background. To the extent that they're not, that's an implementation issue, not something inherent to the technology. It was clear, even at the time, that smartphones would improve, and now they can be used intuitively by a four-year-old.
Generative AI has followed the same pattern. Sure, three years ago you'd have to use esoteric language and tricks to get what you wanted, but nowadays you can generally just use normal language, get something close to what you want, and then refine. If prompt engineering was ever a skill, it's one that's rapidly becoming obsolete before it could ever have become a real career.
What that means is that if you can use AI to generate something valuable, so can any prospective boss of yours, which means the ability to create something valuable with AI is basically worthless.
It's a depressing thought, because it means that there's really no longer any room for art in general as a career.
Are you saying you're the parent of someone who was on a Nick show?

No my daughter was the producer. Fancy school where everyone is the child of wealth. Well… except us…

https://archive.is/yvvyy

From the article:

Hollywood has lately been in what media observers have described as an “existential crisis,” an implosion, and a “death spiral.” Studios were making fewer films, and fewer people were watching the ones that got made. Layoffs were mounting. Depending on whom you asked, AI was either hastening the end or offering a lifeline. It had brought a proliferation of tools that were capable, to varying degrees of success, of creating every component of a film: the script, the footage, the soundtrack, the actors. Among the many concerns rattling nerves around town, one significant issue was that nearly every AI model capable of generating video had been trained on vast troves of copyrighted material scraped from the internet without permission or compensation.

When the writers and actors unions ended their strikes in 2023, the new contracts included guardrails on AI — ensuring, for the moment anyway, that their members retained some measure of control over how the studios could use the technology. The new contracts barred studios from using scripts written by AI and from digitally cloning actors without explicit consent and compensation. But they left the door open for certain uses, particularly with generative video: Studios can use models to create synthetic performers and other sorts of footage — including visual effects and entire scenes — so long as a human is nominally in charge and the unions are given a chance to bargain over terms.

Even with that latitude, the industry is hemmed in by a growing thicket of legal uncertainty, especially around how these systems were trained in the first place. Over 35 copyright-related lawsuits have been filed against AI companies so far, the outcomes of which could determine whether generative AI has a future in Hollywood at all. As one producer put it to me, “The biggest fear in all of Hollywood is that you’re going to make a blockbuster, and guess what? You’re going to sit in litigation for the next 30 years.”
Despite this fear, every major studio is forging ahead (though most are not issuing a press release describing their efforts). Along with the generative models developed by tech giants — Google, OpenAI — a number of artist-run studios created specifically with filmmakers in mind are making headway in the industry. That includes Asteria, which launched its model this year and came with an attractive selling point. Lyonne and Mooser say its model was trained only on licensed material; they were touting it as the first “ethical” studio. Runway, a technology and media start-up, was earlier to the scene, and its models are now among the most widely used in Hollywood; the company was also the first to strike a public partnership with a movie studio. Many of these studios are developing sophisticated methods of working with generative video — the kind that, when given a prompt, can spit out an image or a video and has the potential to fundamentally change how movies are made.
This spring, Darren Aronofsky announced a partnership with Google’s DeepMind. James Cameron teamed up with Stability AI, one of the tech companies making inroads in Hollywood. “In the New Year, many people in the studios woke up and said, ‘Okay, in 2025 we need to make a difference,’” said Prem Akkaraju, the CEO of Stability AI. “Because production is down, profitability is down, attendance at theaters is down. It’s harder to make a movie today than it ever has been.” Cameron put it more bluntly on a tech podcast recently: If audiences want more blockbusters, he said, “we’ve got to figure out how to cut the cost of that in half.” Erik Weaver, a producer and technologist who regularly talks with studios on how to use AI, had observed a “radical shift” over the past few months. A few weeks ago, in a meeting with a major studio, an executive told him that “almost every single one of our productions is coming to us and saying, ‘How can I use AI to get back on budget?’” Weaver added, “These filmmakers need $30 million to make their movie and they have about $15 million. They only have so much money, and they’re getting desperate.”
…
Conversations with dozens of workers at every level suggested a different story, of off-the-books experimentation and plausible deniability. Roma Murphy, a writer and co-chair of the Animation Guild’s AI committee, had heard of “rogue actors” at studios — lower-level staffers under deadline pressure — asking workers to use AI without formal clearance. One animator who asked to remain anonymous described a costume designer generating concept images with AI, then hiring an illustrator to redraw them — cleaning the fingerprints, so to speak. “They’ll functionally launder the AI-generated content through an artist,” the animator said. Reid Southen, a concept artist and illustrator who has worked on blockbusters like The Hunger Games and The Matrix Resurrections, ran an informal poll asking professional artists whether they had been asked to use AI as a reference or to touch up their finished work. Nearly half of the 800 respondents said they had, including Southen.
…
Most of these off-the-record asks happen during early development: pitches, mood boards, preproduction. The work is invisible, the stakes low, the temptation high. “Any artist or designer worth their salt would push back on the quality, but we’re all busy fighting the clock, fighting the budget,” said Sam Tung, a storyboard artist and member of the Guild’s AI committee. “And if your back’s against the wall, it’s tempting — even if the result is of dubious quality and dubious ethical makeup.”
…
A recent music video offers a glimpse into how Asteria is blending hand-drawn artistry with AI: For the project, Trillo collaborated with the L.A. illustrator Paul Flores, who hand-drew 60 original images. The team used those to train a custom AI model in Flores’s style, allowing them to generate hundreds of additional images. From there, they used a 3-D generation tool to create a digital version of a city — nicknamed Cuco Town — that enabled more dynamic camera movements and recurring environments. A team of 20 people ultimately transformed Flores’s sketches into a trippy animated short with a layered, painterly look.
To me, that last section is something I want to be excited about. I don't actually hate the idea of bespoke models trained on a tightly-controlled dataset made for a very specific project, and I almost wish projects that actually do that (no idea if the project mentioned here fits the ideal I'm describing) could be separated from the larger conversation, because I don't think the actual tech of LLMs is the problem so much as the ethics of its use. But as the article describes, GenAI is basically inextricable from the context of its use as a "market disruptor" that only exists to entertain shorter deadlines and smaller budgets. I think it's just making me a little wistful that at the same time that GenAI is destroying artistry and artists' livelihoods, it's essentially cannibalizing its own potential for making something interesting.
That wouldn’t be an LLM. The second L in LLM stands for “language”. Models that generate images are generally diffusion models, which are architected very differently from LLMs. They’re generally not transformers, for instance, and they work in lower-dimensional spaces.
LLM is used so frequently as synonymous with "generative AI" that I suspect we've already lost the chance to prevent the general public from using it that way, I'm afraid.
Indie filmmakers and game developers have smaller budgets and hire fewer staff. That's how it works. The people they don't have to hire don't get paid.

It's not all bad. Hopefully it will allow for more, smaller projects, rather than profits being concentrated in risk-averse blockbusters?
But I expect that it will still be hit-driven, with most projects losing money. It's a tough way to make a living.
To me, that last section is something I want to be excited about.
Seems promising to me too, but/and I'd love to see what the actual result is like. I glanced through the article but didn't find a reference or mention of what music video this was. Was I too poorly focused or do they not mention it at all?
If I only consume prepared foods (e.g., frozen dinners, meals from fast food and restaurants living on Sysco), I'm most likely not going to be very healthy. But nothing prevents me from cooking myself, or going to mom-and-pop restaurants for a good meal.
I think AI has the potential to level the playing field. I think it's possible that people with talent who otherwise wouldn't be able to get their vision made will now be able to make it. A lot of it will be crap, but some of it will be really great work that we would not otherwise have gotten to see.
So while I'm not happy about AI in general, I think there are definitely positives here.
I think we’ll eventually end up in a good equilibrium. I would have loved to have these kinds of tools when my friends and I were making crappy videos in iMovie as kids.

But I do think it’s going to be a terrible shitshow between here and there, with a lot of the old world being disrupted at the very least, if not destroyed.
I worry the most that our political system (in the US) is not in a state to properly navigate this change for the common benefit.
Honestly my feelings on AI are kind of summed up by what you touched on.
I love having access to generative AI tools. I just wish no one else had access to them.
That sounds extremely selfish, and it probably is, but the whole situation is just sort of a tragedy of the commons.
It's extremely useful for helping me create software rapidly, search for answers without having to plod through ad-infested junk websites, and cut through pages upon pages of boilerplate in contracts and agreements to get at what the point of the document actually is. However, the sheer amount of low-effort shit it's produced so far, the bugs and flaws I know it will introduce into software, the trivialization of all the art I consume, and the elimination of massive industries that actually add flavor and meaning to the human experience (the arts) are things that I absolutely hate.
In my perfect world, I'd be the only one with access to generative AI. In a slightly less perfect but in my opinion, far better world than we currently live in, it just wouldn't exist at all.
Optimists about the technology, I think, have this tendency to imagine how they would use it, rather than look at how technology has actually been used throughout human history and how generative AI is currently being used. Like yes, it could be used to cure cancer, solve income inequality, stop violence and so forth, but what evidence of human behavior over the past 2000 years would ever make you believe that it would?
I'm ambivalent because I think it probably will cure cancer*, and I think it'll probably do all the bad things as well. Like you say, history suggests pretty much any technology will be used to reinforce power structures, pollute the planet, and/or kill people - but all those same technologies have found their way into the right hands along the way and ended up being the building blocks for positive progress at the same time.
I don't really know where I'm going with this thought. I guess if I really boil it down, I do still believe in technology as a net positive force, but only in a hard-fought, two steps forward, one step back kind of way. We're better off having the tools; otherwise the person who needs them for the transformational breakthrough would never be able to make it. But that seems to necessitate everyone else using those same tools to beat each other over the head in the background.
* Perhaps not literally, but read whatever world-changing scientific breakthrough you want there
Just to briefly say: Got nothing to add, but I agree, I agree, and… I agree. heh
To use your metaphor, I'm more worried about large corporations that lie about what's in my food than the local farmers. At the end of the day, I can't make the right decisions if I'm being lied to. That's what really needs to come under control.
Also, like the article said, it's still a legal minefield. I wouldn't take the risk on a professional project that might get litigated later on. A big company, by comparison, can just settle out of court.
Maybe. But music has also been “democratized”, and for a little longer, so a random person with a laptop can create and edit in ways that previously took skilled musicians and mixers in recording studios. It doesn’t seem like music is any better. I wonder how much the old limitations actually contribute to quality. Maybe all the rare talent was brought together in a way that no longer exists.
I admit that there may be a lot of music that is better than the 1960s-1990s stuff I listen to but it just isn’t available to me.
How could music be "better", like what metric are you even suggesting to measure against? The thing that democratization of art achieves is that there is more art. It's a double-edged sword: there's more of what you like and more of what you don't, so potentially there's a lot more to sift through that isn't to your taste.
The new contracts barred studios from using scripts written by AI
I'm convinced that Disney used AI to generate the script for their centennial flop, Wish, which lost them $130 million. The plot was a mix of dozens of paint-by-numbers stories, with several gaps one could fill in with tropes, almost like it was requiring and expecting the audience to do so. For folks who've used AI to generate a bunch of short bedtime stories, you know what I mean: it's extremely templated, and it doesn't recognize when it's giving you trope gaps, because the whole thing is worse than a trope. It's a Mad Lib of tropes.
The consumer wants to see magnificent landscapes and special effects,
----- as the visual component of a real, human story worth telling.
We’re going to blow stuff up so it looks bigger and more cinematic, [...] Now we can say, ‘Do it in anime, make it PG-13.’ Three hours later, I’ll have the movie.
They're going to blow $3b up and the documentary on this bust cycle will be epic. They don't get it at all.
Have y'all seen the anime short Look Back (2024)? It's 60 minutes of the human spirit these studios will never be able to understand, let alone facsimile in collage. Is Look Back partially computer generated? Maybe, but the story was human-written, directed, and storyboarded first. That's what they're refusing to recognize: human creative labour.
EEAAO used their product as one of many tools to enhance a human story. Among the other tools they used effectively were two rocks on a cliff to convey love, and a suspicious-looking "award" to convey insane comedy. The imagination has to come first, and then the tools can be used to convey the imagined meaning on screen.
What a lot of these dumb-dumb studios are doing is producing an AI trailer from an AI script, showing you incoherent stuff generated out of things other people have already imagined coherently.
And from the article, it sounds like they're very well aware. The Animation Guild contract sounds perfect, but that's not what their greed wants. Nor is the AI mature enough to deliver:
“It was terrible,” he said. “I was trying to prompt it to move the camera up ten degrees, and it gave me a whole new house.”
Mooser promised to provide me with details about where and how exactly the company had managed to pay for and acquire a sufficient trove of data, but in the end, a spokesperson for Moonvalley declined to share that information, claiming it was confidential.
:D Because it wasn't ethically acquired. At best it was a vast team of paid overseas humans reading prompts off a sheet and posing for cameras. At best. If the money is rushing blood to their heads, it's off of existing unethical models rebranded like "free range eggs".
I'm so glad to hear David Lynch's quote at the end. AI is just a pencil. It's just a smartphone. Pretty soon we'll all be tired of the giants who churn out garbage, and we'll turn our collective attention to indie makers who are using those same pencils to create works not just fit for human consumption but gratifying for our souls.