AI video editing helpers are changing my life
If you are like me, then you are kinda over hearing about AI all the time; I get it, believe me. I've written about jobs on here before: my day job is absolutely infested with AI jargon, most of it pretty meaningless (flashbacks to "The Cloud"), and it's a constant everywhere else too. So yeah, it's a lot, and it's largely unimpressive. Image gen has gotten pretty ridiculous in the last 6-12 months, video gen seems to be taking off next, and I've successfully wrangled various chatbots into helping with coding projects, etc.
Probably none of this is news to you, but I just found out that I can get AI to edit video. I've done a lot of short-form editing, and recently picked up some side work that is much lengthier, without realizing that the time I'd spend hunkered over it would grow exponentially too. Painfully so. Cue signing up for a trial of AutoCut, and hot damn, it's like living in the future. As we speak, I am watching it delete gaps, cut to speakers, and add captions that are mostly correct, even formatted, and (unbelievably) doing the VHS-singalong/TikTok "highlight the word being spoken" thing that all the cool kids are doing these days. It's not perfect, and it's kinda finicky; I'm having to use a V1 when V2 is supposedly much better, and I'm having to chunk these beastly Premiere timelines to get it to do anything at all. But wow: if this is your day job, are you worried? It's a game changer for me, though no one is going to replace me, because no one else would bother messing with it lol. On a corporate scale, do people know about this stuff yet? I'm thinking our jobs may not be replaced by AI; our jobs will probably just become AI babysitting.
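(For the curious: the gap-deletion part is conceptually just silence detection on the audio track. A minimal sketch of the idea in Python using pydub; this is my illustration of the concept, not AutoCut's actual code, and the filenames are made up.)

```python
# Sketch of the "delete gaps" idea: find the non-silent spans in the
# audio and keep only those. Illustrative only, not AutoCut's code.
from pydub import AudioSegment
from pydub.silence import detect_nonsilent

audio = AudioSegment.from_file("interview.wav")  # hypothetical input

# Spans louder than -40 dBFS; silences of 700 ms or more count as gaps
keep = detect_nonsilent(audio, min_silence_len=700, silence_thresh=-40)

# Stitch the kept spans back together, dropping the gaps
trimmed = AudioSegment.empty()
for start_ms, end_ms in keep:
    trimmed += audio[start_ms:end_ms]

trimmed.export("interview_no_gaps.wav", format="wav")
```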
The question isn't whether AI can perform tasks a human can; it's whether it can create trends and innovate.
All of the editing you just mentioned comes down to stylistic choices. Contemporary stylistic choices.
If AI were to threaten jobs, it would need to not just do what we've all been doing; it would need to get creative and do better than we've been doing. I do not see AI being innovative by itself. It needs to train on what a human does just to be contemporary.
Now, that isn't to say it won't kill jobs at all; there are plenty of slop producers and trend chasers everywhere who would love to nix their staff and just continue churning out the stuff they've been consistently creating already.
Eventually, somewhere down the line, we will start seeing how it chips away at existing jobs wholesale.
But I firmly believe we aren't there yet, and for that to happen we would need AI to not just learn but improve on itself before any human does.
Who's to say AI can't innovate or create trends?
Most contemporary trends in a post-modern society are based on previous trends anyway; a trend just needs the right marketing to take off. Look at fashion: it's always circling back on itself. Right now, '90s and Y2K styles are huge. I wouldn't call it innovative; the clothes and knowledge have always been around, it just needed the right environmental factors or, unfortunately, influencers to turn it back into a modern trend.
Innovation happens every day; we just miss most of it. I think of it like natural selection, especially when it comes to something like a stylistic choice where the occasional happy accident can tie a piece together. One person doing it doesn't make it a trend, but as other people see it and become influenced by it, they integrate it into their style, and soon it spreads and evolves on its own.
I don't see why AI isn't capable of making a random decision that could spark a trend. AlphaGo and Deep Blue (the chess AI) both made innovations in their respective games in much the same way, I believe: by finding patterns and moves that the human mind, or the collective understanding of the game, just didn't see.
Edit: The more I think about this, the more I think about how nothing I or any human creates is truly innovative by definition. Everything I make, whether intentionally or not, is either influenced or inspired by something or someone I've encountered. LLMs have an advantage here because they're trained on more data than you or I could possibly ingest and recall when creating. I think you're right in that the immediate concern might not be an AI bot selling us the trend directly, but influencers and other creators are absolutely figuring out how to use AI to brainstorm new trends they can spark today.
It cannot make pointillism paintings without learning about pointillism first. It, by design, is incapable of innovative design.
The same can be said for you and me though, right? Only a tiny fraction of the population are truly innovators, but for some reason we accept this with people but not with computers?
I think if I had billions of imperfect AIs, then some of them would eventually stumble on genuine novelty in the same way that the wider human population has, and I think the vast majority of them would be simplistic derivative machines, in the same way that the vast majority of humans are too.
I hate that the majority of AI companies building this billion-dollar industry are built on theft and grifting, but I disagree that there's fundamentally something unique about humans (collectively) that an AI could never match.
something something monkeys typewriters
I feel like following the AI concerns all the way to their natural conclusion is the only way to figure these feelings out: I understand what @CptBluebear is saying, but I'm not claiming that what I'm churning out is "art" lol. It's complicated—if it's 1970 and I won't use a typewriter, insisting on writing out longhand because "machines are making it too easy," then that seems silly, same as insisting on a typewriter instead of a word processor on a PC anytime in the past 30 years would be. So it may be luddite-ish to eschew AI help completely, but also a typewriter or WordStar was never going to churn out something grammatically perfect, all on its own, that looked like a person wrote it but was ultimately just an amalgamation of pattern recognition + RNG & doesn't actually say anything. And I think that's what's scary for some folks about AI, is that they won't be able to tell the difference, so then where's the value in what I'm doing if a computer could do it?
For me, it's saving me hours and hours and hours on things that are technically stylistic choices, but also so mind-numbingly technical that it was actually making me hate the fun parts as well. So that's my goal for AI, and if they descend to rule over us in our ignorance then hey maybe they'll do better than us lol
That's on me; pointillism was an example of something huge, but even a simple concept such as character design has to be created by a human before it can be absorbed. Within that framework, then, it can only draw upon the libraries it has received, and while humans similarly use existing art as inspiration, nearly all artists will iterate upon it with every drawing, eventually creating and continuously evolving a certain "meta" (if you will) purely by doing the thing and adding their own sauce.
I did not use the word "never", though. LLMs in their current form cannot. So far, every AI agent I've seen can't do so either. But if we're talking about the holy grail, AGI, the conversation changes completely and my opinion shifts accordingly. I even alluded to this in the last paragraph of my first post:
Ah, it looks like I started out answering you, but then got lost in my own thoughts and started arguing against some generalised argument I've heard before. Sorry for putting words in your mouth there; I should have been more deliberate in my response.
If you're interested in this concept of emergence vs. design and enjoy non-fiction, I recommend giving The Evolution of Everything by Matt Ridley a go; it goes into this idea in detail and I found it super interesting.
Yeah, I've done some of that contract AI-trainer stuff you see advertised: decent but painfully mindless work. The best thing I got from it was reassurance that there is really not much danger there, because of how absolutely helpless a bot seems when you ask it to do something completely logical that it's just not expecting. "Intelligence" is a far cry from where we are right now, probably. But we already had autocorrect and autocomplete, and pattern recognition applied to a Premiere editing workflow is all it's currently taking to make my Saturday way better.
https://www.smbc-comics.com/comic/sonnet-2
Person:
“You may be smarter than us, machine, but will you ever compose a perfect sonnet? Or a novel that changes the way we think about the meaning of existence?”
Machine:
“…No.”
Machine:
“Fortunately, those people are extremely rare!
We rounded up the 2,562 living people who meet your standard in various arts and sciences and have granted them immortality and perpetual bliss.”
Person:
“But what about—”
Machine:
“We’re opening a petting zoo for all the cute, stupid leftover humans.”
Machine:
“Would you like a handful of corn?”
Person:
“Yes, please.”
I said this to throwdobaggins too, but I'm talking about current state LLMs and the near future. We will not see an AI at the level of that comic for a long time.
While I do think it's coming (though perhaps not in the form that comic is going for), this current method of machine learning is limited by its own intake, by design. This type of LLM will not wholesale take err jerbs.
It's starting to be competent enough to take over a couple of lower-skilled jobs, but even in those jobs it seems to make odd mistakes every now and then that are potentially so damaging to your business, it's probably best to avoid it for now.
I produce longform YouTube history videos that require hundreds of images. I research the topic, write a script, narrate and produce the audio, then start assembling the artwork. After that I compose the video and add music and crossfades etc.
I’m using more AI tools at each step of the process. But the real time sink is finding appropriate imagery that isn’t AI-generated, isn’t problematic with its rights, and closely matches the ideas presented in the script.
Apart from that, I’m now teetering on the brink of allowing AI generated imagery, music, and video in the videos. The subjects are really obscure, and we just don’t have any paintings or drawings of Olmec priests talking to kings for example. The temptation to have a historically-accurate (as much as possible) fifteen second scene of Hammurabi actually having his code written is only growing.
My unsolicited advice, as an enjoyer of the type of history videos you describe, is "be careful", because AI-generated images tend to have a very recognizable, and frankly, shit aesthetic. Like, they actively detract from "the experience" when placed into a larger work alongside human-made art.
So yeah, if I were in your shoes, I would have extremely high standards for what AI-generated imagery I would allow. The images couldn't have any weird defects. The aesthetic couldn't be "obviously AI". Again, the images would have to add to, or at least fit with, the "experience" of the larger work, not diminish it.
The images from these links for example:
https://lumenor.ai/ai-generated-images-gallery/s/historical%20portraits/
https://lumenor.ai/ai-generated-images-gallery/s/historical%20paintings
These are all shit. Like, universally, they're all just terrible art. The vast majority of those images have the garbage "oil sheen" that plagues AI images, and are obvious slop. The ones that don't immediately stand out aesthetically still have obvious defects, like tables just floating in the air, or characters in the same image being vastly different sizes.
I wouldn't use any of those, for example.
To add onto your point, I feel that AI art would decrease the credibility of the rest of the work. The script may be accurate and factually correct, but if I see AI art accompanying it, I start to doubt the rest of the video; the artwork would undermine the credibility rather than reinforce the point.
ok but why? If it was a drawing that said "artist's rendition" at the bottom, because obviously no one was around to take Solomon's picture, why would that be better? Thus far I haven't had anyone say anything other than "Well, someone got paid to draw that," which is true but not an indicator of historical accuracy. I feel like if @EarlyWords knows enough about the subject, and enough about AI art to get it to look like it should based on what we know right now, then that's the same as someone drawing whatever their interpretation would be, right? To @hungariantoast's point, if Joan of Arc had a written description or contemporary portrayals that looked just like Emma Watson, then those images would kind of make sense lol, but otherwise, yes, hot garbage. But I feel like there's a use case here that wouldn't upset anyone (@Lia mentioned one here), or at least wouldn't upset anyone any more than Spielberg using CGI Not Real Dinosaurs for Jurassic Park instead of just stop-motion Not Real Dinosaurs. And he used both, and the fear then was also about putting animators out of work, when the real danger turned out to be forcing animators to pivot to CGI, then slowly squeezing the life out of animators as a workforce, then outsourcing CGI to cheaper animators elsewhere, then... oh, it's AI again. So yeah, I think my suspicions lie elsewhere.
If someone is using AI art, the possibility of them using AI in other ways is higher. If they're using AI to write the script, the likelihood of inaccuracies is higher than otherwise. Thus, AI art casts a shadow of doubt across the face of the whole production.
Yeah, if I am seeing terrible DALL-E-level shiny people with goofed-up fingers eating hamburgers or whatever, ima peace out of there. But I feel like what you're saying could be applied to any aspect of content creation, especially in the early stages of whatever is at hand. Four years ago, if someone had put their "serious" content on TikTok, that would have been crazy, right, because it would have devalued their message; in 2025 it's almost unavoidable. The AI images I'm describing would be indistinguishable from the real deal, not a crummy facsimile. But that would make an entirely different group of people uncomfortable.
I think it discredits not because of AI itself, but because of a few value judgments people may ascribe to AI:
This is AI-generated content, so how much of it is AI-generated and how much has been fact-checked? We are all aware of AI hallucinations and the need to fact-check AI output.
AI is currently thought of as low-effort/cheap, so using AI for a visible part of the process can cheapen the end result.
Comparison to, or categorization as, AI-generated slop.
Maybe this is what @MimicSquid meant. I think there are valid concerns, and the general public definitely has a bad taste re: AI due to the baseline they've been presented with on Facebook or wherever. So if not done correctly, it could be the same as using animated captions or something else not cohesive with the material.
It's just gotten so much better than most people are aware of: not at thinking or taking over the world, but at writing sonnets, making videos of cavemen riding bicycles, and editing the boring parts out of videos, for sure.
Yeah, my thoughts and @MimicSquid's are very similar; I just did not see their response before writing my own. And your comment challenging the why created an opportunity for MimicSquid and me to better expand on and articulate my original point about how using AI art can undermine credibility. I think AI art that is done well and not obviously low-effort can help in videos, and your response to Mimic shows the potential for AI that can be incorporated well. However, if it is the baseline oily-sheen, six-fingered AI that most people think of, I do think using those images would weaken the credibility of the video as a whole, for the reasons Mimic and I previously stated.
I understand the desire to make use of an easy way to get visuals for your content. I very much agree with @hungariantoast though that using AI generated images that try to replicate historical paintings will most likely reduce the overall enjoyability of the work.
How about generating more abstract visuals that won't steal the show from the few more complex, authentic images that you are able to source and use?
As an example, part of the storytelling in 'Monty Python and The Holy Grail' happens via hand-drawn animations. You could try generating something equivalent that looks stylistically appropriate to the context/era of your video, rather than try to achieve a photographic presentation of some event where photos couldn't realistically have been taken (or worse, a historical painting that doesn't actually exist). This way your audience won't have to deal with the dissonance from gauging and deciding what to believe or not while trying to enjoy the content.
I feel like that's a pretty solid use-case; I totally get wanting purely "real" images, but 50 years ago an artist would have drawn illustrations for a textbook on that & they could very well have been historically... loose haha. I don't know that that's any consolation, but even if your video gets absorbed into the AI hivemind/mothership and spit back out, it will probably be more accurate than some 1960s clipart of a white Hammurabi.
So I'm sort of curious here, because what you are saying is an argument for AI (or for paying someone else to do the labor you can't), but it is also an argument that your work shouldn't be in video format at all. Sounds like you could just remove a lot of the hassle and make it a podcast instead.
Well that’s a long story. It really isn’t my channel. The founder drafted me to narrate some of his old history books during the pandemic and the YouTube channel took off.
Then he tragically died and one of his final wishes was that his widow keep the channel going to provide for their two young daughters. I’ve been working hard to keep the channel afloat and respect his wishes ever since.
I’m pretty sure that this crappy version of AI art and video that we currently have wouldn’t be up to his standards. But some of the stuff that is about to become available looks indistinguishable from other forms of video. For him, it was always about making sure it looked like quality. So I guess I keep waiting…
That is an interesting development. When you say his old books, do you mean books he wrote?
He began as an amateur historian who collected antique and public domain books. In the interest of sharing them he began the channel. Most of my early episodes featured archaeologists on subjects like the Hittites and ancient Egypt.
Depending on the periods covered, the backing pictures and artwork can be critical to the format, and even when they're not strictly necessary, they're still desirable for making the content appealing to the audience in a way a podcast might not be. This is why video essays are so popular even in cases where the audio alone would stand as its own work just fine. Also, if EarlyWords' audience is accustomed to the format, I can see not wanting to break away from it even when it doesn't serve the subject of the video as well as it usually does. You wouldn't want to incur the wrath of The Algorithm™, after all...
If you're talking about something where there are no photographs, and maybe not even pictures drawn by a first-hand witness, I don't think AI is going to hurt anything.
With generative AI ultimately being influenced by the bias introduced in its training, I wouldn't be so sure there is truly no concern with using it to depict events where no existing work can be used for comparison, especially if it's for educational content about history. Not to the point of dismissing the idea out of hand, but given what failing to account for bias in machine learning is already known to lead to, this is definitely something to keep in mind.
Obviously a human artist is perfectly capable of letting their own bias get in the way when creating artwork as well, but I've seen people think that because its creation is delegated to a computer, genAI content (and machine learning output in general) is immune to bias. That is absolutely not the case, given that the whole point of machine learning is to induce a desired behavior, and that, in the case of generative AI, the training process involves very much non-impartial, human-created and human-selected work. There are many examples out there, ranging from the comical (an AI tasked with finding how to run as fast as possible in a simulation instead learns how to break the physics engine and reach ludicrous speeds, because maximizing that metric was prioritized over realistic movement) to the extremely worrying (automated racism in charge of decision-making).
I make some of my art in Photoshop and Krita. Part of my process is very detailed selection masks, and another is photo manipulation (adding elements to an image that didn’t exist there in the first place, and making them look natural). The AI tools in Photoshop have saved me countless hours I would have spent on monotonous, repetitive actions; I can focus more on stylising the image and on creative choices, and less on “grunt” work.
Before we had computers (and it is still valid anyhow, but on a smaller scale) virtually all big artists who were creating complex works had a small team behind them who would perform the grunt work. For painting: prepping canvases, underpainting, mixing paints. For metal sculptures: basically everything. A good example when it comes to lithography is Salvador Dali. He “created” complex, multi-layered lithographs and his only involvement was designing & drawing the final print. Afterwards, it was up to his many assistants to actually do the hard work & create the prints.
I am neither a big artist, nor a rich person, so this is something I can’t have for myself. However, when I create digitally I now have some AI tools at my disposal that can serve the same purposes. And I recognise that these tools can be something you use to create without thought (not my thing) or an actual equaliser, because for now they give me the chance of creating more complex digital works in a shorter span of time. And I don’t need to hire any assistant for that.
If they ever come out with affordable robots that can help me with non-digital works, I will be the first in line to use them.
I can see handing off the most basic steps of certain processes to AI so the user can focus on the actually creative part; it definitely has the potential to be a significant help in cases where automating the "dumb" part is harder to figure out with traditional algorithms. I've seen it attempted for managing "boilerplate" code on the tech side of things, but AFAIK LLMs aren't any better than traditional parsers in this role.
Regarding generative AI more specifically, as opposed to machine learning as a whole, it'll also be useful for anything that involves placeholder or very-low-importance generation at massive scale, since in those situations it really doesn't matter that the content only shows a simulacrum of skill that can easily be noticed for what it really is. In a way, it could be seen as raising the "skill floor" for which situations call for a professional rather than asking a random intern to fiddle with a prompt, and I think there will be some shades of that. But I think it's more likely to result in AI-generated content showing up in situations where, had it not been a possibility, there would simply have been nothing, rather than someone being paid to make it.
As a recursive example, I quickly stopped by my ComfyUI install, which I'm using to toy with image generation, and prompted it for a "thanos car", as a way to represent silly usages of image generation in, say, instant messaging. The process, image generation and uploading to imgur included, took about a minute (EDIT: ...though I then wasted over an hour aimlessly toying with the image generation model, but that's down to me being unable to direct my focus appropriately to save my life, not anything to do with the technology). I personally have absolutely no skill whatsoever in digital art, and you can plainly see in the picture the kind of inaccuracies typical of image generation models that would make it useless for anything serious, but would that really matter in the context of a throwaway joke? Probably not, and the image generation model worked faster than whipping up Paint for the same purpose would have. I think I can safely say that this use case doesn't constitute a threat to the digital art field and that, in at least this specific context, AI image generation is benign.
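(For reference, the equivalent one-prompt generation can be done in a few lines of Python with the diffusers library. This is a sketch rather than my actual ComfyUI graph, and the model checkpoint named here is just a common example, not necessarily what I used:)

```python
# One-prompt text-to-image generation, roughly equivalent to the
# ComfyUI experiment above. The checkpoint is an example, not
# necessarily the model actually used.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("thanos car", num_inference_steps=30).images[0]
image.save("thanos_car.png")
```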
EDIT:
I swear I'm not doing this on purpose but this became an entire essay yet again
I went back and wrote the following after submitting the initial comment above, because I think it's important to mention in this context, as it's *currently* an unavoidable aspect of image generation despite not being inherent to the technology... and of course what was meant to be an aside edit ended up becoming the largest portion of my comment by a significant margin. Typical.
Outside of concerns regarding the technical impact of its usage in isolation, the whole "countless privacy and intellectual property violations have been committed in the process of training the image generation model I'm using" problem does leave a bad taste in my mouth, which is why I'd refrain from doing this in any context other than "hey, look what this generative AI model can do". While I believe strongly in generalizing freedom of information to its logical extreme (abolishing copyright in the process), I also consider that a creator's consent matters regarding the ways their work is reused, and the current attitude of AI companies, merrily downloading the entire Internet to feed it into the hallucination machine and pretending they're entitled to do whatever they want with it, robs creators of that choice.
I do know of an initiative that promotes using ethically sourced datasets (even then, I don't believe they go far enough, but it's at least a step in the right direction) and gives examples of models trained on them. This shows that generative AI isn't inherently morally corrupt and doesn't deserve to be wholly discarded on that front, but I find it shameful that this is not the expected default in the first place, and that, as of now, creators have no reliable way to say "no, I don't want you to feed my work into your electricity-to-nonsense converter" other than straight up not publishing their work at all. As in, not even just off the Internet: anywhere. Even if it's a physical work, there's nothing guaranteeing that a digital copy of it won't show up on the Internet and promptly get siphoned by web crawlers. Hell, even keeping it private might not be safe, because the moment a computer factors into the creation process, the work might just straight up be stolen by the software itself. Adobe, I'm looking at you. Generative AI as a technology technically isn't at fault, but in practice that's irrelevant to the undeniable fact that its deployment is by and large causing harm.
Knowing this, would making use of generative AI make me, and anyone else doing it, complicit in this generalized racket? Sure, I never asked for this and highly disapprove, and since almost literally every actor in the market is doing it, it's impossible to avoid if you want to interact with the technology at all (and with training a model from scratch still being ludicrously expensive in terms of power consumption, leading by example and making your own as an individual is a no-go)... but it's not like I'm forced to use generative AI in the first place. For now, at least.
Making a parallel with another cause that is close to my heart and is itself related: some would argue that not committing to free software makes you complicit in the generalized abuse embedded in the business model of the proprietary commercial software that free software aims to fight against. And while I agree up to a point that apathy on the part of the general population is helping the tech corporations get away with it, it would be unrealistic to expect the average user to sacrifice their quality of life for the sake of reversing the current trend and eventually improving everyone's experience. Sure, there's great free software out there, including some that has achieved industry-standard status, but pretending the FOSS options are always a viable alternative, let alone a moral imperative, compared to the industry standard would be disingenuous. Even when migration is viable, the commitment and concessions required to move to another piece of software are rarely, if ever, trivial, so equating the use of proprietary software with a moral failing is out of touch with reality, and actively harmful to the free software movement.
As far as genAI is concerned, however, it's in many ways a mirror image of the above. There are (with vanishingly few exceptions, and even then the applications for what is available are much more limited) no viable models that weren't trained on data sourced in ways I find morally reprehensible; the abusive behavior of the AI companies is rooted in the same mold that made commercial software the hellscape it currently is (and that's no coincidence, they're largely the same corporations); and the tech itself, while interesting to experiment with, is definitely something I could do without. There are confirmed viable uses for it that are gaining adoption for constructive purposes, but for many people it's a solution in search of a problem. Hell, the biggest reason the AI hype train annoys most people is that companies keep trying to shove it down our throats in contexts where it's obviously useless, if not outright harmful. The public backlash against it is very much there (even if often for completely unrelated reasons, including some I'd consider to be missing the point).
What does it say about my principles if I keep using genAI, in spite of disapproving of how the generation models are made on moral grounds and acknowledging the harm committed in creating the current ones, when I have the option to just not use it, especially in trivial contexts? It certainly feels hypocritical on some level, but if I commit to refusing to use generative AI (at least until a viable model emerges, trained on a carefully curated dataset with the explicit consent of its original creators in mind), what about others who don't? Would it be fair to hold it against them? If it's a matter of the context in which you use it, rather than a black-or-white "it's wrong to use it" vs. "it's fine", where does the limit lie? There isn't an answer I'm confident in giving for any of these questions. Overall, I wouldn't dump every use of it, no matter how minor, in the "morally wrong" bucket, but I can't fault someone for thinking otherwise.
Given what I personally use it for, is it even something important to worry about? Right now, "Am I endorsing plagiarism and devaluing art by sending a Thanos car picture made using genAI in a chatroom?" is clearly something silly to rack one's brain about, but the aforementioned unsolicited push for genAI features everywhere is still happening in ways we can't do much about, backed by companies we have no real power to oppose in the long run. Sure, the hype will eventually die down, but whether we like it or not, this technology is now part of the available toolset, and it's going to become even more widespread. I don't have any faith that the AI companies will suddenly have a change of heart and start investing in models trained on ethically sourced data by then. Ignoring that they have no financial incentive to do so, it would also mean throwing away everything they invested in the existing models, because even if they actually could establish a list of every piece of content they didn't have explicit consent to use for training, the way this technology works doesn't allow you to "untrain" the impact that a no-longer-desirable piece of the dataset had on the model. What it "learned" from an individual datum can't be distinguished from everything else that contributed to computing the huge pile of numbers that makes up the end product used for generation. To truly remove its influence on the model's output, you'd have to restart the training process from the very beginning, aaaaaaall the way back to the first iteration of the product. And this, of course, includes discarding anything made with other generative AI models, completely or in part, as valid training data. None of the actors in the AI space would willingly do this unless legally compelled to, which I don't think is a plausible scenario either, given whose interests our political leaders are aligned with.
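To put the "can't untrain" point in symbols (a standard framing, not specific to any one company's model): training finds a single set of weights that minimizes a loss summed over the entire dataset,

$$\theta^{*} = \arg\min_{\theta} \sum_{i=1}^{N} \ell\big(f_{\theta}(x_i), y_i\big)$$

and every update mixes all $N$ examples' contributions into the same parameters, so the finished $\theta^{*}$ contains no per-example term you could simply subtract afterwards; removing example $j$'s influence means re-solving the whole optimization over the remaining $N-1$ examples.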
So, in all likelihood, this issue will not be solved in a way I'd find satisfying by the time this technology spreads enough that I can't reasonably ignore it, but just going "Eh, whatever, it's not like me boycotting genAI would have a meaningful impact given how much it already spread" and dismissing the whole matter as not my problem doesn't feel right either. Overall, I don't know what to make of it.
I feel like it's a very similar conversation to the discussion surrounding piracy. Is it preferable? Definitely not. Is it necessary? I mean, sometimes, I guess so. Is it oftentimes vastly easier and more economical? 100%. Does that mean I don't think people deserve to be paid for their work? And there it is. What I'd like to say to that is: I don't think anyone should be dependent on getting "paid" for something that's infinitely reproducible, because that means they could theoretically be paid infinitely. But we don't have a working model in use to fix that problem yet, so here we are. I think what I'm saying is that AI image gen and downloading things from the internet are both less than preferable, but also at times the best choice available, for reasons caused by larger forces that we as a society have been unwilling or unable to change, or even face honestly. So yeah, taking a stand against a tiny symptom, now, isn't going to affect the disease.
I don't shoot a whole lot anymore, but the idea that AI can cut out pregnant pauses, umms and uhhs is so exciting.
I wonder whether it's able to find the best parts of my clip (maybe based on a script that's loaded) and cut the rest? Or normalize all the audio clips on the voice track?
Now I'm getting excited... Tell me it's so. Tell me.
Normalizing = yes
Best clips-wise = TBD lol; there's a preset that claims it will find the "Viral" shots & dump out a 1-min version, but I haven't tried it yet. Glad someone is excited about the boring parts like I am—prob the divide is between people who have fallen asleep in front of their NLE & people who haven't : )
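(And if you ever want the normalizing part without any AI involved, it's honestly a few lines. A rough sketch with pydub, doing simple gain-matching to a target loudness rather than broadcast-grade LUFS normalization; filenames are made up:)

```python
# Rough sketch: push every voice clip to the same average loudness.
# Simple gain matching to a target dBFS, not true LUFS normalization.
from pydub import AudioSegment

TARGET_DBFS = -16.0  # arbitrary illustrative target level

def normalize_clip(path: str) -> AudioSegment:
    clip = AudioSegment.from_file(path)
    return clip.apply_gain(TARGET_DBFS - clip.dBFS)

# Made-up filenames for illustration
for name in ["intro.wav", "interview.wav", "outro.wav"]:
    normalize_clip(name).export(f"norm_{name}", format="wav")
```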
Haha - I bet. Ingesting and processing footage is the absolute worst, and for me at least, I find very few creative moments come out of that process.
This articulates nicely what I've been feeling about AI