What are your AI-generated guilty pleasures?
Most people here dislike AI, more specifically LLM generated content, for reasons such as environmental impact, stealing people's work, etc. Despite that, is there anything that you enjoy?
I've been listening to this artist's music for a while. It's mostly video game music "re-imagined by AI" into City Pop and other styles. The artist says they use AI to generate samples, then do the rest of the work like any producer would. I have no idea if it's true or not, but I gotta admit that most of it is really good.
Today I also watched some "AI ASMR" videos out of curiosity. It's stupid, I know. But watching a knife cut glass can be so damn satisfying. I'm sorry, planet.
The energy and treated-water costs of running a model to generate something (for some reason water gets counted separately; I guess because there's no nice way to express water-collection effort as energy) are often lamented in comparison to the assumed zero cost of simply not doing that. Rarely have I seen them compared to the costs of alternative approaches to obtaining the thing.
How much energy does it take to make real glass to cut with a real knife? Or to support human artists and run the traditional rendering computations needed to get the same shot through traditional special effects workflows? It's not like that Mr. Beast guy is a paragon of low energy expenditure when he builds hundreds of coffins to try and convince people to live in, or whatever his latest depravity might be.
High resource costs only make sense as a criticism in cases where huge models, because of subsidies or hype, are being deployed instead of more efficient approaches.
Musk's Grok data centre in Memphis currently has about 15 gas generators running constantly (with capacity for up to 35) and could use up to 1,000,000 gallons of water per day, according to Memphis Light, Gas and Water.
Even worse, Memphis relies almost exclusively on groundwater for its water supply, which has a whole bunch of problems that I've already mentioned: https://tildes.net/~enviro/1o8u/groundwater_is_rapidly_declining_in_the_colorado_river_basin_satellite_data_show#comment-fpqi
https://futurism.com/elon-musk-memphis-illegal-generators
Worth reading the full article about how the gas turbine pollution is affecting nearby residents
I didn't look up how much water other industrial processes take; I'm sure you could. But if you're taking it from a non-renewable, shrinking source, that's still cause for concern.
That definitely sounds like something a person shouldn't be allowed to do. But not because it's for AI-related stuff, just because that dude does not have a legitimate claim on those resources.
From Father John Misty's 2012 "Now I'm Learning to Love the War" -
I'm more against AI because it often outputs false or low-quality information that is being eaten up by the public at large. I sat in on a job-seeking class the other day where the instructor essentially told people to lie on their resumes using Copilot. I at least gently prodded her, reminding the class that they need to fact-check what it puts out and ensure that they actually have the relevant experience.
But, to skip back to the artistic expression that was the song I started this comment off with, from almost a decade and a half ago...
To steer back to the topic though - I'm a luddite towards this tech. There's probably some use case where I'd say "Okay, I get that," but I haven't found it yet and everything I've seen has been 'eh'.
EDIT: Oh, no, I figured it out - My guilty pleasure is Value Select on YouTube. He uses AI for generating backgrounds and some other elements of his videos. That's probably the closest I'll come to liking AI.
My company has a new CEO. I was dumbfounded when I received an email sent from him to the entire company thanking us for our participation in an employee engagement survey. The entire thing was written in ChatGPT and just sent as-is. Bolded words, em dashes, and that stupid tone of "You are just blah blah blahing — we are barf barf barfing — together".
Mind you, we were told before he started in January that if any of us put any information about our company that could be seen as confidential into these LLMs, we could lose our jobs and there would be no saving us. Only for the new CEO to come out and plug our engagement data metrics into it to generate his response.
We have AI evangelists in our department now. People who have not touched the technical aspect of these systems, but are meant to guide our use and implementation, and I just can't help but sit and wait for the trainwreck to come.
I'm not a luddite per se; I use these tools every day for varying tasks to save me time or provide some additional perspective on something I am working on. For example, up.codes, which allows me to ask questions about electrical code for specific jurisdictions. But when I see them used so haphazardly in professional contexts, I get an ick feeling.
For what it's worth, I mostly use GPT to solve problems that I would otherwise either just ignore, solve more poorly, or waste countless hours researching only to maybe come up with a less ideal solution. For example, I have a treadmill on an upper floor that is highly susceptible to vibration and noise while in use. GPT pointed me in the direction of using 1/2" thick, 70 durometer sorbothane pads under the feet to support the needed weight, absorb the shocks, and dampen noise. It also told me to use felt under the pads to prevent the hardwood from staining, and a rigid plate on top to evenly distribute the load. The packaging mentions staining, but tells me they offer a more expensive product with a coating to prevent it. I had no idea what sorbothane was, what durometer means, or how to calculate it. I've wasted money on padding products in the past that honestly did next to nothing and were short-lived. If I search online for how to dampen sound for something like this, the top results are ads for flat mats. If I go into a store, a salesman would probably point me towards the same floor mats that I've bought in the past. I guess it remains to be seen if this solution works long term, but for my initial test run, things seem to be going well.
I worry what this will do to the job market. I feel like it’s expected that applicants will embellish the truth a little bit, especially if it’s an entry-ish level job that would be difficult to gain prior experience in. But even still…
How the heck is anyone who is actually competent or experienced supposed to stand out when everyone (according to their resume…) is a perfect fit for the role and can back up their claims with an AI generated list of talking points and made up accomplishments?
The job market is already tough right now, and while some companies will inevitably figure out good methods to weed out AI resumes, I don't have high hopes for the average hiring manager. If AI continues evolving, smaller and mid-sized companies may not even be able to keep up with whatever best practices get discovered.
Just take the people who aren't perfect fits, they're now likely to be real candidates.
The 'perfect fit' was just gonna leave in a year anyway when a higher paying role opened.
You are talking peak consumerism. Consuming is obligatory. We can choose where and what we consume, but constantly consuming many times more stuff than we need is not optional. It's even more important to consume than to vote or to care for each other. When you tell people that companies like Amazon or Temu are bad, they wholeheartedly agree, "but there is no other company that offers the same service at the same price or less." They are somehow compelled to support the supervillains.
This would just be sad if people made these choices and stood by them, but instead they constantly whine about the state of the world they are building every fucking second of their lives. And they blame immigrants, billionaires, nazis, the illuminati or hippies. But they can soothe their anger with a singing fish or some other piece of novelty junk, so they got that going for them.
You also have to ask the question of whether, overall, any particular juice is worth the squeeze. Running off a bunch of stuff from an AI model that you don't need or even like, at great cost, is bad. So is getting a bunch of humans together and burning millions of dollars to make a bad movie that a billion people go see anyway and are disappointed by.
But it is legitimate to want to have things. It is legitimate, in general, to enjoy entertainment. Not in arbitrary quantity and at arbitrary cost, probably not even at the cost of breaking the Amazon boycott. But if I want to see a line art drawing of my dog playing basketball on the moon, the fundamental problem of social organization becomes trying to achieve that for me with the lowest negative impact on anyone else, and then, given that, figuring out if it is still worth doing.
People aren't bad people for liking the singing fish.
Whenever green policies are discussed in my country, there's always a little comment about how it's not only better for the planet, but cheaper and more efficient and easier and faster too.
The reality is that we're living beyond our means and to truly make some headway, society has to accept a decrease in QOL. But this is unconscionable and any policy not only has to be green, it has to be better in every way. And the thing is, this isn't working. If you're going to appeal to people's selfish natures for a policy, why are you surprised when people make selfish choices?
It's seen as almost childish to appeal to people's kindness and selflessness. And then we act shocked at how society is fraying at the seams, when we can't even talk about doing good for good's sake without eye rolls.
Well, that's the thing, isn't it? It certainly seems as if OpenAI, Anthropic, et al. are focusing on ever larger models instead of running existing ones more efficiently. And while I get that larger models are more capable, it also seems like there might be a point of diminishing returns that may have already been reached.
For my part, as a huge AI skeptic who is grudgingly beginning to accept that there may be some narrow, legitimate uses for the technology, I'm taking Sam Altman at his word that the marginal energy and cooling costs of an individual query are low, and while yes, they add up (and the training costs are not inconsiderable as well), a boycott has to be coordinated and large-scale to be effective, and my personal boycott of AI technologies was doing little more than giving me a smug sense of superiority. That being said, I do prefer running smaller models on my own hardware versus using larger hosted ones, for both energy-consumption reasons and (much more importantly) privacy reasons--I know paid plans for at least ChatGPT and Claude can be (or are automatically) opted out of their interactions being used for training, but after all that's come to light about big tech in recent years, I don't entirely trust that they're not doing other things with them that I'd rather they didn't.
They compete both on price and on capabilities. There are sometimes releases of smaller, faster models that are cheaper than the old ones, presumably due to more efficient algorithms.
Some developers are willing to buy the $100 or $200 a month plans or whatever, though. That’s gotta be using up lots of machine time and electricity. They are like frequent fliers.
I think the AI as a service vendors have cleverly taken a page from the oil company book, and somehow made ethical judgements attach to their customers, or even non-customers using similar technologies, instead of to themselves who are the primary decision makers as to resource usage, especially for model training projects.
I think it's similar to how, when someone came up with more efficient irrigation techniques, we actually started using more water to grow more crops in the United States Midwest.
The whole world is just full of AI junk, just like how the whole world is full of cheap plastic, and just like how real, actual quality products are very rare these days.
Someone else's AI-assisted, genuinely human creative content: There I Ruined It. He writes the lyrics, performs and arranges the music, and uses AI to do some of the vocal ...adjustments (?) to sound more like the spoofed artists. This is how I imagined AI would augment human creativity.
Just a few of my many favourites:
Black Sabbath sings "Take Me Home Country Roads" (John Denver)
Boomhauer performs Rap God
Hallelujah (but it's Baby Got Back)
Ahh, one more: My Heart Will Go On (but it's Smash Mouth's "All Star")
He released a really interesting short recently comparing the tracks before and after the AI voice cloning. I've noticed that his stuff is considerably better sounding than other AI Voice spoof stuff out there, and I think it's because he's able to get really close on his own and the AI takes it the last little bit over the top.
Reminds me of this video from way back when all the deepfake stuff was first breaking into the mainstream. For something that's 5 years old, it's dang good. With modern tech he would be basically indistinguishable.
https://youtu.be/5rPKeUXjEvE
Came here to say this - I think it's worth noting he was also making things like this using Melodyne before the AI tools were available. He's just a hilarious audio wizard who is going to use whatever tools are available, sort of a technological Weird Al.
This characterization made me realize my own answer to OP's question: the YouTube/Insta alternative reality comedy channel neuralviz. They write their own stuff and then use AI to make it come to life, and it's so good.

I... actually enjoy AI-generated DnD character art? I haven't been able to reliably generate it myself, but the piles of 'slop' I find on Pinterest are honestly awesome for me. The iterative nature of it means I can sift through highly similar pictures to find one that feels particularly close to what I was imagining when I started my search.
This enjoyment definitely meets the definition of 'guilty pleasure' because I generally agree strongly that artists have had their copyright blatantly abused to train these models. I will always try to find an artist to pay for anything I want to display or use for business, but I can't help feeling like my consumption is a minor part of the problem too.
Yeah, I'm with you on this. I've been able to find or create some really cool character art that fits my characters better than anything else I could find, and it's nice to have a visual representation of your character that closely matches your concept, especially if you're playing online.
One of my fellow players generated an image of a Bread Golem that was pretty awesome. I've tried to generate some images a couple of times, but it never turns out right.
I had it create a solo D&D campaign for me. And while I haven't moved on to Day 2 to continue the adventure, it was pretty enjoyable to have the AI DM for me; it knows all the rules and works with any random thing I throw at it.
I accidentally petrified my pet fox in battle, which I then promptly sold at the market for 50 gold as a rare, intricate statue.
What model did you use for this? I tried twice a while back with chatgpt and the hallucinations were completely unplayable.
I also used ChatGPT, but I seem to have more success with it than others for my queries, so I'm not sure if it's a writing-style thing or just luck in terms of what I'm asking (though it definitely hasn't been 100%). I think the first few questions of the day are the advanced model, so perhaps I got lucky with those setting the baseline for the rest of the questions in that chat.
I tried that early on also, and had similar results. The problem was it had no fixed knowledge and no way to keep track of game state.
I haven't tried this, but I'm curious if an agentic model with tool/filesystem use (think something like Claude Code) could get around those limitations. You could instruct it to keep track of locations, character state, inventories, previous actions, narrative goals, etc. using actual files on disk as the source of truth. I would avoid letting the AI handle dice rolls (unless you've got a good RNG MCP), but I'd probably prompt something like "Never roll dice, but whenever a roll is needed tell me what dice to roll and I will roll them and report the results to you. After that you may continue as if you rolled the dice yourself. Don't tell me what any dice roll is for beforehand."
All the important system prompt stuff like "You are a skilled and creative dungeon master for D&D 5e" and the basic campaign setup could be dropped into a CLAUDE.md file. I think that could work rather well.
Dice rolls could be handled via tool calling (via the MCP protocol, for example). It's actually the same approach used to edit the local files in some cases (for example, with Claude Desktop).
Having the info stored in files instead of the chat history probably wouldn't directly help that much on its own; it would depend on the helper algorithm that selectively pulls the content of the files into the "chat" context. If it was smart enough about what information to pick at each step, the model would have a much easier job and more reliable output.
Additionally, you would have a collection of files - let's say markdown format - which would be human-readable and interesting to browse.
I bet this hypothetical setup could be implemented in existing agentic coding tools, for example Roo Code or Cline in VS Code, using system prompts alone.
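And the "good RNG MCP" part doesn't need to be much. Here's a rough sketch of a dice tool using the official Python MCP SDK's FastMCP helper - the server name, dice-expression format, and parsing are arbitrary choices of mine, not something I've actually wired into a campaign:

```python
# Sketch of a tiny dice-rolling MCP server an agent could call instead of
# "imagining" its own rolls. Names and the expression format are placeholders.
import random
import re

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dnd-dice")

@mcp.tool()
def roll(expression: str) -> str:
    """Roll dice from a standard expression like '2d6+3' and report the result."""
    match = re.fullmatch(r"(\d+)d(\d+)([+-]\d+)?", expression.replace(" ", ""))
    if not match:
        return f"Could not parse dice expression: {expression}"
    count, sides = int(match.group(1)), int(match.group(2))
    modifier = int(match.group(3) or 0)
    rolls = [random.randint(1, sides) for _ in range(count)]
    return f"{expression}: rolled {rolls}, total {sum(rolls) + modifier}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so the agent can call it
```

The DM system prompt would then just tell the model to use the roll tool for anything random instead of making numbers up.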
Okay, this one is a guilty pleasure because of more than just the AI, but I've really been enjoying watching clips (and occasionally full streams) of Neuro-sama (and her "sister" Evil Neuro), who is a fully AI-controlled VTuber. Her model is handmade, but it's controlled by AI, as are a lot of the elements of her stream. A ton of work has gone into splicing together all the different elements that make her work - the VTuber model, the TTS and voice model, the speech-to-text that lets her interact with collab partners, and all the little modules that let her do things like read and respond to chat and sub messages - and sometimes they interact in ways that are genuinely hilarious or kind of emotionally moving. She often has human collab partners on stream, the community that has built up around her and her creator Vedal is genuinely really cool, and I've discovered some other really fascinating streamers from following her.
Here's a compilation of great moments from her last subathon
I've been following Neuro since fairly early on. She's entertaining, and it's crazy some of the behavior that just organically happens. Neuro really is kind of the platonic ideal of Isaac Asimov's robots: something with enough humanized attributes that you can't really help anthropomorphizing, and overall is a great pretense for streams.
The visual stuff (like playing Geoguessr), integration with games like Minecraft, etc are particularly impressive. And the cooking collab streams are hilarious.
The progress on Geoguessr has been impressive. She was an absolute dumpster fire the first few times Vedal tried her on it, but lately she's gotten fairly good. The Minecraft stuff is pretty impressive too. It's cool how he keeps giving her more tools to interact with the stream and with collab partners in new ways - like polls, stream titles, discord DMs, etc. It's been a lot of fun to watch it as it happens.
I'm only 25 and I'm already too old for this shit. I watched for 2 minutes and I can't make out what I'm seeing; help me out. So the avatar is an AI, but then suddenly there are many other avatars on screen (like at 1:27). Are all of those collaborators also AIs created by the same guy, or different AIs by other people?
I also don't understand what's going on from 1:00 to 1:12. Is that a random viewer of the stream in a shop letting the AI pick what it wants?
Lol I'm 38 so I'm not sure it's an age thing, but I can see how that video could be confusing if you don't know what you're getting into.
Not sure if you are familiar with VTubers, but they are streamers who use an animated avatar, with webcam face tracking puppeting the avatar so its face and mouth movements match the streamer's. It's become a thing on Twitch in the last while. Neuro-sama is an AI VTuber, but other than her "sister" Evil Neuro, every other avatar you're seeing is a human streamer - mostly collab partners and friends of the channel.
Basically the video is a compilation of different moments and subscriber goals from the channel's last subathon, so some of it is structured where it shows an earlier stream where Neuro's creator (he's the guy with the turtle Vtuber model) planned the subathon goals and what they would do for each one. Then the video shows them meeting that goal and doing whatever it was they planned. The subathon was three weeks long, pretty much all day so there's lots of time to fill with random silly stuff. The section from 1:00 to 1:12 was a sub goal for one of the streamers that has done some "irl" collabs with Neuro to "take her shopping" - basically just irl stream and let Neuro react to what she's seeing and pick stuff to buy on her creator's credit card.
There's a bit of kayfabe with the channel. The collab partners usually treat Neuro as if she's an actual person, and tend to go along with some of the crazier stuff she says, for the bit.
I'm not familiar at all with VTubers besides knowing they exist; I still use the internet like it's 2016.
I always assumed VTubers were kind of automated, so the whole thing seems less off-putting knowing there is a person behind them in a more literal sense.
Last question, if you don't mind. As a viewer, what category do you think this channel falls under? Comedy? Just Chatting? Or is seeing the goals other subscribers set the fun part?
The subscriber goals are mostly just a thing that happens during a subathon, which is once or twice a year. I'd say the channel would mostly fall under Just Chatting, but there's a lot of comedy that comes out of the interactions between Neuro/Evil and other folks. They also do weekly "karaoke" streams where either Neuro or Evil sings songs (they have somebody who has created a voicebank from their voice model and uses that to create what are basically Vocaloid song covers). There's a pretty big variety - sometimes there's gaming content, either where Vedal (or another dev) has figured out how to mod a game so that Neuro can play it (they've done Minecraft, Liars Bar, and Buckshot Roulette in the past) or where Vedal is playing the game and chatting with Neuro (Subnautica, Detroit: Become Human, Geoguessr).
I admit to a weakness for clips of Neuro-sama, the AI Twitch Vtuber. Of all the, um, "artistic" applications of AI I've seen, "she" probably feels the most genuine and emotionally affecting, thanks largely to the community that's grown up around her.
I think that's one of the biggest differentiators between the Neuro project and other AI works I've seen: she's not designed to replace human effort; she has actually developed into a focal point for humans to interact with each other around. The AI generates the raw material that the humans around her use to create the actual art, which is a community and the narrative of its formation.
I don't know what her environmental impact might be, but my hunch is that it's comparatively slight, considering that in a way it's just one model generating responses for thousands of "users" at a time. I'm sure she requires tons of electricity to train and run, but then I can't imagine the sum of her usage exceeds the usage of a moderately sized video production studio, at least not by very much.
Yeah the community that has coalesced around Neuro is definitely the best part of the project. I love that Vedal regularly raids into smaller Vtubers and the Swarm just kind of adopts them. Folks like Ellie, Mini, and Cerber are really cool and make what could be a pretty sterile AI project a lot more human.
Vedal certainly has a good eye for talent. It's not entirely inaccurate to say he's built a sort of loose agency of independent Vtubers up simply by inviting them to come play with his toy.
There's a lot that's really fascinating about the whole thing, and not least is how un-exploitative it feels. The collab partners get content and exposure, the clippers get an audience and material with which to hone their skills, the fan artists get subject matter that people feel good about, the audience gets as much or as little entertainment as they want and membership in a generally positive online community, and the platforms (primarily Twitch and YouTube, but also to a lesser extent Discord and Twitter) get all this happening on their sites, not to mention a cut of the donos and memberships. It seems like a win-win for pretty much everybody, as long as the viewers feel it's worth the investment.
I guess time will tell if the seeming matches the reality. At the very least I suspect marketers will be studying these last few years of the Neuroverse for a long time.
As the maker of audiobookcovers.com, I think I am contractually obligated to mention audiobook cover artwork. There are some fantastic contributors to /r/audiobookcovers who use AI for their submissions. I have seen some mediocre slop, and one submission that had obvious and serious issues, but most of it is good.
In this use case, AI inpainting probably works the best. It can remove the stupid publisher marks, the annoying Audible exclusive flags, or the disgusting "now a movie" badges. I don't know how much manual work these posters do, if any, but most submissions are quite good.
Another good use case is outpainting other artwork, particularly the base artwork for movie posters. Audiobooks are traditionally square covers, unlike many other release formats, so an AI can expand the image to square before the creator adds the title and author text.
I use it for plenty of things:
Creating presentation graphics. I'm able to think of a scenario that captures the situation I'm trying to explain, and it will generate a visualization of it that non-domain-experts can follow. I just did that this morning and it was great. You just need to give it the details. In the past I would have looked for Wikimedia images and edited/attributed as needed, taking more time.
So basically lots of little life/time enhancers. I'm capable of all the things I have it do, but I give it the undifferentiated part of the work so I can spend time on what matters (and what it can't do).
Quick edit: To recap, I love the capabilities. I hate LLMs being tapped for things they aren't suited for, especially in heavily regulated domains.
I've used it to create encounter tables a few times and it's decent if you give it some guidelines. I wouldn't use it for everything but for creating random dungeon dressing or little details that could be hooks for encounters or even side quests for an overland journey it's pretty useful. I just quickly asked for a table of 20 forest encounters and got some stuff that I could easily spin into some cool moments.
Prompt: create a table of 20 possible forest encounters for a lower level d&d party. The table should be a mix of neutral and hostile encounters, and should create interesting roleplay or combat opportunities. A few should hint at greater mysteries in the forest, without railroading
Here's the table:
20 AI Generated Forest Encounters
Throwing random things at it when I'm working on my own code, such as thanking it for a custard recipe, or asking what the implications are of whether water is wet or not.
Usually it only gives very bland responses to that, though, and it's really only a thing I do out of boredom. Still, given the probabilistic nature of LLMs, I gotta wonder if it'll eventually spew out something really weird.
Maybe you already know this, but all local LLMs and many APIs allow you to adjust the "temperature" of the generation. The lower the temperature, the more deterministic the results will be, so if you set it closer to 1 there'll be a higher chance of it generating something weird.
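If you want to play with that, most local servers (Ollama, llama.cpp's server, LM Studio, etc.) expose an OpenAI-compatible endpoint, so cranking the temperature up is a one-liner. A rough sketch - the base URL, model name, and prompt below are placeholders for whatever you actually run:

```python
# Sketch: asking a locally hosted model an off-topic question at high temperature.
# Assumes an OpenAI-compatible local server (Ollama's default port shown here).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="llama3.1",  # whatever model you actually have pulled locally
    messages=[{"role": "user", "content": "Thanks for the custard recipe. Also, is water wet?"}],
    temperature=1.8,   # higher = less deterministic, better odds of something weird
)
print(response.choices[0].message.content)
```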
I absolutely love the videos that demonflyingfox creates. They crack me up so much. In short, these videos are like the North Korean Family Guy or redneck Harry Potter music videos. https://www.youtube.com/@demonflyingfox
Great for a laugh and honestly my favorite use of AI currently.
I've messed around with a bunch of local models in my creative writing.
Mostly I use it for first-pass revisions, where I'll copy in a few paragraphs and have it try to identify characters, actions, and a sequence of events. It's a good general indication of what needs work.
Other than that, I've messed around with something similar to Character AI. I set up a context with character sheets, a few paragraphs of dialogue, descriptions of behaviour, and some loose worldbuilding. And then I'll ask dumb questions like how the character takes coffee or what they do on days off.
It never really says anything I'd make canon, and it can't even keep the setting straight, but the angy personification of a sentient black hole does feel like the type of person that would run a rage room as a side gig.
My city is directly affected by this. Thanks.
I'm sorry to hear that. Reading it now, it looks like I was being sarcastic, but I do feel guilty for that and other reasons when I use AI :/
Then maybe you should reconsider using it, now that you've better internalized its impacts.
I was enjoying AI music for quite a while. I actually think that's a good use case for AI: shitposting music with absurd and/or funny lyrics, just to listen to a couple of times. Until I generated a song that used the voice of an artist I enjoy. I always knew that they probably scraped Spotify and other platforms for music to train their model on, but hearing it firsthand was almost surreal. I pretty much stopped using it afterwards.
I believe (based on output alone: 13 albums released so far this year on Apple Music) that The Luna Lounge is LLM-generated.
Which is too bad, because it's great lo-fi background music. I'm pretty anti-LLM use, but I might have trouble not turning on one of the albums when I just need something on in the background.
I probably waste more time than I should trying to figure out if/how we can deploy generative AI even remotely ethically.
I've got a decent stack built out of vendor-agnostic open source and open models running 100% on gear that was saved from a landfill. I have code that hot-swaps models to apply the smallest viable one to each task. I use the waste heat to keep my office warm for half the year or so.
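To give a feel for the hot-swapping bit, this is roughly the shape of the idea, boiled way down - the model names, task labels, and capability table are placeholders rather than my actual config, and the real version also has to juggle loading and unloading models on limited VRAM:

```python
# Toy sketch of "smallest viable model per task" routing. Everything here is
# illustrative; a real setup would also manage model loading/unloading.
SMALLEST_FIRST = ["small-3b", "medium-8b", "large-70b"]  # placeholder model IDs

CAN_HANDLE = {
    "small-3b": {"summarize", "classify"},
    "medium-8b": {"summarize", "classify", "draft_email"},
    "large-70b": {"summarize", "classify", "draft_email", "code_review"},
}

def pick_model(task: str) -> str:
    """Return the smallest model whose capability set covers the task."""
    for model in SMALLEST_FIRST:
        if task in CAN_HANDLE[model]:
            return model
    raise ValueError(f"No configured model can handle task: {task}")

print(pick_model("draft_email"))  # -> medium-8b, never waking the big model
```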
But the elephant in the room is that I am only addressing inference. The practices around training are unconscionable, and each time I pull a new model for quant scaling and fitness testing, I'm creating demand for someone else to train yet another energy-guzzling monster.
The guilty part: I have truckloads of hubris about what I've put together. I bristle with nerd pride when the generated persona running in my basement smiles as it reads my personalized daily briefing.
I'm never sure if using this as a hobby is because I'm trying to make things better, or if I'm trying to make me feel better.
ChatGPT is good when I have a word on the tip of my tongue and I can't figure it out through Googling. It also often puts me on the right track when researching something where Google would usually only provide search results based on the exact words I used.
Also it's quite useful for learning a language. I'll often ask it to break down the grammar points and vocab of a sentence I'm trying to understand.
Sometimes I ask Microsoft's AI search engine some questions about old video games, or to design a tank using specific constraints, and marvel at how wrong its answers are. That's about all the joy I get out of it. Or using it as an Akinator alternative that isn't blocked by the company firewall.
I've been slowly learning 40k lore through shorts on YT. It's very clear the VO and the art used are all AI-made.
My primary gripe is that I think the creator could have used and credited real artists, because that fan art probably exists and deserves to be seen.
I don't mind the AI VO itself. I can tell it's not human, but it doesn't try to present itself AS human, so I have fewer problems with it.
I guarantee you that those AI-created 40k lore videos are full of total BS. Even the 40k wikis (fandom especially) and professional 40k content creators frequently get it "wrong" due to the countless retcons and revisions over the years, topics that were never properly addressed to begin with but people make assumptions about, and all the contradictions in the myriad source material produced over the last 35 years. Arbiter Ian is about the only 40k content creator I know of who actually mentions all those uncertainties when discussing a particular piece of lore, and includes caveats about the information he is presenting (e.g., his recent video for IGN is a prime example of his style). If you want to learn 40k lore he is easily the highest quality source, IMO.
I just saw that IGN video the other day and almost didn't watch it because of how iffy a lot of the 40k creators on YT are, but I'm really glad I did. I actually like the silly, satirical side of 40k lore but a lot of people really go all in on the space fascism or just take the lore too seriously.
Yeah, Arbiter Ian understands the lore on a remarkably deep level but still doesn't take it too seriously since he can actually see the humor/absurdity in it. I think it helps that he has actually interviewed quite a few of the 40k authors too, most of whom are satirists at heart and don't take the lore all that seriously either.
He often gets accused of being "woke"... but given how insanely right-wing and misogynist a lot of 40k players/readers/YTers are, that label is not actually as insulting as they think it is.
My guilty pleasure AI usage is for one off very specific memes. Stuff like asking my friend if he wants to car pool to the event by sending them a picture of a car in a pool. I have limited how much I use it though as I don't like the tech due to it thieving everything from everyone to be trained and the energy usage associated with it.
My friends and I set up a channel in our Discord server dedicated solely to images we generate with the various free models, Google's ImageFX being one of the more popular options right now. There's a lot of back and forth with people refining each other's prompts and adding unexpected twists. The channel has its own meme culture at this point.
It has more or less replaced my "social media infinite scroll" behavior when habitually pulling my phone out of my pocket. Like social media, it provides some of that variable-reward feedback that juices your dopamine production. But unlike social media, it's a fundamentally creative activity, easy to put down when necessary, and not built on an algorithm precision-engineered to shove divisiveness in my face. So I think it's a net improvement to my personal wellbeing.
The only downside? I hate to imagine the carbon footprint of this channel given the number of pictures we generate and share on a daily basis.
For me, I think it's music, and music videos.
Favorite so far is Harry Potter - Cyberpunk Wizard.
And here's one I spent some time making / playing with - a song about a group of Japanese zombies that seek eternal life and redemption through music. Sung alternately in English and Japanese, vocal trance. I started out wanting a truly ridiculous premise but then found the result somewhat fun and enjoyable.
Anything LLM is a guilty pleasure. Reddit and Twitter as we used to know them died for this.
The AI companies scrape every single thing on the internet that isn't nailed down to build a massive training dataset and a new billion-dollar industry. In response, Reddit and Twitter closed down the ways people could get their data out without licensing it for a fee. Smaller players (whose names we can't even figure out now) scrape the internet at such a high rate that sites are putting up captchas to stop them or risk being crashed.
Every time I use an LLM I think: was it worth killing Reddit for this? But now that it already exists, you can't singlehandedly stop the world's fastest-growing product of all time anyway.
As someone who open-sources some of my code (that probably only a handful of folks will use), I'm also not happy that my code license asks for only one thing - give me credit - and gets ignored as the code is sucked into all the LLMs anyway. But coding with an LLM removes the boring part, so it's tempting to use it. My rule for now is that nobody gets to see LLM-generated code that doesn't look 100% the way I'd write it, or I'll just write it by hand. Still, I'm not happy that if that code is public, I'm creating new training material for LLMs for free anyway.
Asking AI To Make A Hit Country Song About Dr Pepper! (Pepper Me Up)
Dr. Pepper just released an album on vinyl. It wasn't country music though.
I don't use AI for anything, because we're already killing the planet fast enough.
I was curious, so I looked into how guilty a pleasure AI is versus some other guilty pleasures.
1 steak ≈ 500–2,000 ChatGPT queries
1 international flight ≈ 140,000–770,000 ChatGPT queries
Full disclosure: I asked ChatGPT this question :)
The humor of this - basically the AI said "I investigated myself and found no wrongdoing."
I'm a software dev by day and still like to dabble with stuff at night. Copilot helps for rubber ducking, or working with languages where I don't quite know the syntax for the thing I'm trying to do.
After watching some of GPT Plays Pokemon I scraped together a harness for having o4-mini play a text adventure game that my friend wrote a while back. That was fun because I streamed it to my friends and rekindled some of that friend's desire for hobby game dev. It also ended up triggering a handful of bugs in the game that we had to patch.
I've also got a discord bot I wrote that I use instead of ChatGPT, since the API doesn't share data by default so my conversations aren't being fed back into the training data. That runs on 4.1-mini by default so it's pretty cheap, plus I don't really use it super often, but it's a nice sounding board. Or I can take a picture of a recipe in a book and have it spit out the ingredients in a formatted list. Most often I end up using it to turn an email or website event listing into a calendar event for me.
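For the curious, the recipe trick is basically a single vision call. Here's a stripped-down sketch of it - the prompt wording, file name, and exact model are just illustrative, and the real bot obviously has the Discord plumbing wrapped around it:

```python
# Sketch: turn a photo of a recipe page into a formatted ingredient list.
# Assumes OPENAI_API_KEY is set in the environment; the file name is a placeholder.
import base64

from openai import OpenAI

client = OpenAI()

with open("recipe_page.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "List this recipe's ingredients as a markdown checklist with quantities."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```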
I've also got 4.1-mini hooked up to my Home Assistant voice so if I ask for something outside the limited confines of the basic local handling (which is basically as much as "turn on the light in the bedroom" or "add milk to my shopping list"), like "if the temperature in the office is above 75 then turn on the office AC" or "is the air quality better or worse outside?" then that kicks in to handle the more complex task. That's also pretty cheap to run, less than $1 a month I believe. Anything higher than 4.1-mini is overkill for that purpose, sadly 4.1-nano is a little too dumb to handle it well. I run voice recognition (via whisper on a GTX 1050Ti) and speech synthesis (piper, CPU) locally for that. I'd love to run the LLM locally too but the computing demand is a bit too much for a decent model and I don't want to leave a high power (and cost) GPU running all the time just for the occasional inquiry.
Boots (warning: violence)
I think this is entirely AI generated, from the visuals to the music, but I’m not sure. If it’s not entirely, it’s definitely at least partially.
It’s an adaptation of the Rudyard Kipling poem “Boots” where the repetition is meant to mimic the cadence of soldiers marching in war.
Here it is turned into a full synthwave song that I genuinely like. The video's composition makes it feel exactly like a fully produced music video.
I also think that AI’s “soullessness” works in tandem with the song here, being a perfect tonal fit for the subject matter. After all, what is more soulless than pop music commodifying war?
I’m not trying to dunk on this or paint it in a negative light. I actually really like the whole production. It’s sharp, visually striking, and thematically resonant. It’s absolutely the best thing I’ve seen done with AI.
I've goofed around with AI music generation some, and one of the funniest things I've done with it (in my opinion, at least) is take this kind of bleak modernist poetry and set it to the most incongruous genres I can think of - mostly modern Nashville country or contemporary Christian. The contrast between poetry from someone like Wilfred Owen and the shitty, overproduced sound just really gets me tickled.
I do enjoy taking friends' and coworkers' faces and making a meme with the image in a lighthearted way, usually very quickly in a Teams/WhatsApp chat.
Neural Viz's work on YouTube is great. Specifically the Momo verse.
Unanswered Oddities Episode 1:
https://www.youtube.com/watch?v=YGyvLlPad8Q
I've had a lot of fun generating songs with Suno, so far this song about the Mandate of Heaven has been my favorite. I would actually love it if a real band made an improved fully human version. In general, it feels like there's a lot of topical niches that are underexplored by current musical artists, so I enjoy getting to scratch my own itch.
I think most Suno-generated songs still fall short in various ways, but I'm hopeful that in the long term these tools will keep getting better and it'll be easier for people to make songs that are closer to what they have in their head. I've been trying to relearn music composition with a traditional textbook approach as well, but I don't think I'd be able to afford a bunch of artists to bring my ideas to life, so AI gives me an avenue to make it real, even if it's imperfect.
I'm fully on board. Have been since I first heard of all these tools.
I use ChatGPT to help me write stuff and write code. I search with Perplexity. I use Suno for music. I use Stable Diffusion for making pictures, and when I edit them I use the AI tools built into Photoshop. I occasionally make AI videos in WAN.
Um. I use perchance, which runs locally in my browser and doesn't upload my data:
https://perchance.org/ai-fanfic-generator
Specifically, I use it to play out fanfic ideas I don't have the skill to write myself.
Not exactly a guilty pleasure yet, but The Wayward Realms promises to use a local LLM trained on the world lore to piece narrative events and in-game player/world history together to implement a kind of virtual game master. It's still a very constrained environment with a select-a-topic type dialog system, so as to not end up with a chatbot that inevitably hallucinates bullshit in an immersion-breaking way. The intent is a Daggerfall-scale open-world RPG with more depth to the NPC interactions and a dynamic narrative.
Seems like a reasonable application to me, since their ambition would basically be impossible by any other means, and it addresses some of the main failings of Daggerfall: its skin-deep quest system and NPC interactions.