AI doomers: What uses of generative AI are you actually excited about?
I almost certainly come off as aggressively anti-AI to anyone who gets into a discussion about it with me, but I think a lot of that is just a reaction to the insane hype period that generative AI is going through right now. There are probably potential uses that would get me excited, so I figure as a change of pace (and to keep me from spiraling), I'll go through the mental exercise of exploring some of the things that AI could lead to that don't make me want to shake my fist and yell at clouds:
- Accessibility for the hearing impaired: The idea of AR glasses with microphones that could generate live closed captions or translated subtitles, or alert someone who's hearing impaired to environmental sounds (fire alarms, crying babies, stuff like that), feels both achievable with current tech and pretty awesome (a rough sketch of the captioning piece is below). I didn't bother ripping subtitles from any of my non-foreign Blu-rays/DVDs to my Plex server, and this product category gives me hope that I won't completely regret that if/when my hearing eventually starts to go.
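For what it's worth, the captioning half already feels within hobbyist reach. Here's a minimal sketch using the open-source `whisper` speech-to-text package (`pip install openai-whisper`); `record_chunk()` is a hypothetical helper standing in for whatever captures a few seconds of microphone audio to a file:

```python
import whisper

model = whisper.load_model("base")  # small model, adequate for rough captions

def caption_loop(record_chunk):
    """Transcribe short chunks of mic audio and print them as captions."""
    while True:
        audio_path = record_chunk()            # hypothetical: saves mic audio to a .wav
        result = model.transcribe(audio_path)  # speech-to-text
        print(result["text"])                  # AR glasses would render this instead
```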
Actually, after ruminating for a while, that's all I can come up with. So I guess I'll pose the question to everyone else because I'm curious what kind of examples and discussion it could generate: What uses of generative AI excite you that aren't already being aggressively (over)hyped by all the tech companies and AI enthusiasts right now?
The problem with a question like this is that "AI" doesn't have a unified definition. Everyone hears "AI" and maps it to something different.
Is it just LLMs? Is it LLMs plus denoising models (which is where AI art comes from)? Is anything with a linear model whose weights are derived from data "AI"?
Is something like CFRM (counterfactual regret matching) not AI because it isn't a neural network? It was AI when I was studying it in college! (The core update is tiny; see the sketch below.)
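For reference, here's a minimal sketch of the regret-matching update at the heart of CFR-style algorithms, applied to rock-paper-scissors. The payoffs and self-play loop are a toy example of my own, not from any particular source:

```python
import random

# Regret matching for rock-paper-scissors. Full CFR builds this same
# update into a walk over game trees; this is just the self-play core.
PAYOFF = [[0, -1, 1],   # rock     vs rock/paper/scissors
          [1, 0, -1],   # paper
          [-1, 1, 0]]   # scissors

def strategy(cum_regret):
    """Mix actions in proportion to positive cumulative regret."""
    pos = [max(r, 0.0) for r in cum_regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

regret = [[0.0] * 3, [0.0] * 3]  # cumulative regrets, one list per player
avg = [[0.0] * 3, [0.0] * 3]     # running sums of strategies played
T = 100_000

for _ in range(T):
    s0, s1 = strategy(regret[0]), strategy(regret[1])
    a = random.choices(range(3), weights=s0)[0]
    b = random.choices(range(3), weights=s1)[0]
    for i in range(3):  # regret = payoff of each alternative minus actual payoff
        regret[0][i] += PAYOFF[i][b] - PAYOFF[a][b]
        regret[1][i] += PAYOFF[i][a] - PAYOFF[b][a]
    for p, s in ((0, s0), (1, s1)):
        for i in range(3):
            avg[p][i] += s[i]

# In two-player zero-sum games the average strategy approaches a Nash
# equilibrium: here, roughly [1/3, 1/3, 1/3].
print([round(x / T, 3) for x in avg[0]])
```

No neural network anywhere, just a couple of lists and a counter.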
So first, what is AI to you?
Good question. I guess I'm just thinking in very general terms. Deep learning, training generative models on large data sets--basically the underlying technologies that the big tech companies are dumping ungodly amounts of money into right now.
If you just mean anything that can be called a "neural network", the category is so broad that you can find as many positive examples as you want.
Some easy ones: AlphaFold revolutionized protein structure prediction and made parts of drug discovery significantly cheaper and faster, and there are drug candidates in development now that likely wouldn't exist without it.
Much better OCR has not only made many business functions more efficient, but has also been a great benefit to archival, digitization, and accessibility services.
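For a sense of how accessible this has become, here's a tiny sketch using the `pytesseract` wrapper around the Tesseract OCR engine; it assumes the Tesseract binary is installed plus `pip install pytesseract pillow`, and `scan.png` is a placeholder filename:

```python
from PIL import Image
import pytesseract

# Extract text from a scanned page; Tesseract 4+ uses an LSTM-based recognizer.
text = pytesseract.image_to_string(Image.open("scan.png"))
print(text)
```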
Much better machine translation has really made communicating between languages much easier, even in live settings.
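Same story here: a minimal sketch with the Hugging Face `transformers` pipeline, which downloads whatever default model the library ships for the task (assumes `pip install transformers` plus a backend like PyTorch):

```python
from transformers import pipeline

# English -> German with the library's default translation model.
translator = pipeline("translation_en_to_de")
print(translator("Where is the train station?")[0]["translation_text"])
```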
The technology that underpins generative images is denoising models, and those same models have made a huge difference in, well, denoising photographs for photographers. That has made certain pictures usable that otherwise wouldn't be, made cheaper cameras more competitive against expensive ones, and overall made photographers' lives much easier, since you don't have to worry as much about your ISO.
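For anyone curious what "denoising model" means concretely, here's a toy PyTorch sketch (invented shapes and noise level, not any production architecture): a small network is trained to map a noisy image back to the clean one, the same objective that diffusion-style generators build on.

```python
import torch
import torch.nn as nn

# Toy denoiser: a tiny conv net trained to undo synthetic sensor noise.
class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(8, 3, 64, 64)               # stand-in batch of clean images
noisy = clean + 0.1 * torch.randn_like(clean)  # simulate high-ISO noise

opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), clean)  # learn to recover the clean image
loss.backward()
opt.step()
```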
But again, most of these have very little to do with, say, ChatGPT. Grouping together "everything that involves matrices" is like grouping missile control software with Doom because they're both written in C.
I don't think it's a very useful classification, especially in the context of ethics or human development. Imagine trying to determine if "programs made in C" are ethical or not.
I guess the spirit of the question is: if you think stuff like ChatGPT, Claude, and Sora sucks, what are some ChatGPT-, Claude-, or Sora-adjacent uses of the same kind of technology that you could see not sucking? Maybe that's a bit vague, but I do think a lot of people (myself included, to some degree) have strong anti-AI biases without a detailed understanding of the breadth and nuance of the underlying technologies that the big "AI" products and companies are actually referring to when they use that term, which is why I'm interested to see what kind of opinions people have.
I also think the question is a bit... misguided? There are plenty of genuinely cool AI applications that could become revolutionary, but it's the lack of oversight and the privatization that are turning us into doomers. Medical advancements? Cool when they're not just in the hands of the elites and don't come with an invasion of privacy (being added to the data set). Advancements in law that let you scrub thousands of documents in an instant and parse out inconsistencies? Cool if it's not affordable only by the rich; otherwise it just makes fighting court battles near impossible for those who can't pay for the premium version.
I don't think there's anything wrong with asking people what kind of Jetsons future they think AI could bring. But it has a similar ring to "Vegans, what recipes would taste best with veal?" I'm sure a vegan could come up with a hypothetical answer for you, but you're not going to get nuanced answers from that demographic that don't carry caveats. They're instead going to feel like you're trying to lull them into submission, which may not be the question's intention at all, but it feels the same in the spotlight.
The AI that I approve of is the AI used in medical research, science research, and anything that advances humanity overall and helps it get better.
Anything else can fuck right off. Not interested in why AI ABC is good because of XYZ. It. Can. Fuck. Off.
/rantover
/yesihateaithatmuch
The thing that worries me about the use of AI in medicine is that so far the tech companies who would be able to best assist in that kind of research aren't really inspiring confidence in their attitudes towards privacy.
None? I acknowledge that AI will improve a lot of people's lives. But it's also severely worsening a lot of people's lives. I'd just as soon leave it out entirely.
I'm not sure about alarms, but I know someone with hearing issues and looked into this recently: there are at least a half dozen companies making live-caption glasses for hearing-impaired people. I agree, it's a fantastic use case.
A few others: research, in so many fields but especially genomics. Medicine, not to replace doctors but to improve the process (screening, data management, diagnostic aids). Modeling (weather, climate, geology, etc.). Gaming (the most popular speculation is MUCH better NPCs). Also world building in general, for games of course, but the possibilities for "world models" in a variety of areas are nearly unlimited: if you can train on the "world" of a domain (say, an ecosystem), the resulting inference could be remarkably useful. I expect there will be a lot of annoying hype around world models this year.
Regarding spiraling, I get it. I don't want to take this thread into the downsides direction but they're as unlimited as the upsides.
In spite of that, it's going to happen no matter what we think about it. We can hopefully support the creation of regulation and guardrails but one way or another the technology is going to keep exploding in all directions.
If some day we look back and collectively decide that AI was a huge mistake, not one of us will be able to realistically say "I should have done more to stop it" because there's just not currently anything an individual can do relative to the unprecedented amount of capital involved. Even large groups of individuals don't stand a chance.
I don't say that from a defeatist perspective. At any point in history there are large scale developments and circumstances beyond an individual's control. With just the one life each, I think we should appreciate the upsides where we can and focus our energy in the places we actually can make an impact. There have always been too many problems for one person to solve.
Excited about? Not a whole lot, but I think that's mostly because the "exciting" use cases for AI aren't heavily publicized or really brought to media attention. I think there's a lot to be gained in the medical fields with pattern recognition, early detection, and other general improvements to making accurate diagnoses (when used right, of course, and not relied on solely as a crutch without oversight). Anything that can genuinely improve society as a proper assistive tool, applied thoughtfully, would get me excited.
Unfortunately, I feel like the most publicized use cases are what make me the angriest and push me further into the anti-AI camp: boring generated videos/art with no soul behind them (and ethically questionable copyright issues), increasingly annoying and useless customer chat bots, and every other product you can imagine forcing "AI" branding into itself for no apparent reason.
Effectively nothing. My opinion will probably change once the bubble pops and things stop being so absurdly damaging and patently unethical, as I do recognize the technology is capable of being a niche good thing were it not for the... you know, everything. But sadly the reality of it thus far makes me loathe it such that I'd sooner just get rid of all of it, at least for a good long while. It's just done too much damage.
In the spirit of the thread though, I can at least say that 15.ai was fun, way back when. Small-scale, not paid, no cancerous growth targets, environmental impacts, job loss, etc. Just silly goofs of popular characters and nothing more.
I'm against generative AI as a wholesale replacement... But I do think it has potential as a tool even in creative projects. This article about Cyberpunk 2077 working with a voice cloning company to use AI, in conjunction with an actual voice actor, to replicate a deceased actor's voice for one of the dubs is a good example. (And here's a mirror if it's paywalled.) The company, Respeecher, is one of the more ethical AI-based companies I've seen since they A) explicitly get permission to use someone's voice, and B) work with voice actors to still do the lines and then modify their voices, rather than try to outright replace voice actors.
Since you mentioned captions, another potential use in the accessibility category is tools for blind and low-vision people. There are multiple apps that identify objects and read text aloud for blind users, and I'm sure there are many other uses. In a similar vein, glasses with AI facial recognition could be helpful for people with severe face blindness.
One other creative use I'm not too upset about: using it for singing. Specifically for vocals, NOT the lyrics, melody, instrumentals, or any other parts that go into music. I talked about it more in-depth here on a post about AI music, but I had the realization that songwriters can have a hard time breaking into the industry if they can't sing or their voice doesn't suit the song. Typically, the name most attached to a song will be the singer's, not the songwriter's, so they can't necessarily hire a freelance singer the way you'd hire a voice actor.
With that mindset, AI vocals don't seem too different from Vocaloid to me. Both provide a voice for your song that won't necessarily overshadow you specifically, and allow people to pay more attention to your work. So AI vocals can provide another way for songwriters and composers to break into the industry.
I like it for situations where an absolutely gargantuan amount of data needs to be sifted through and perfect accuracy isn't the most important thing in the world; better-than-human accuracy is good enough.
Things like email spam filtering, phone call screening, poring over huge data sets of telescope images looking for certain types of atmospheric phenomena, weather modeling, sensor fusion, and so on.
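Spam filtering is the classic example, and the classical version really is tiny. Here's a sketch with scikit-learn, using toy data invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training set; a real filter learns from millions of labeled messages.
emails = [
    "win a free prize now", "cheap meds limited time offer",
    "meeting moved to 3pm", "please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["free prize offer now"]))     # likely [1]: spam
print(clf.predict(["can we move the meeting"]))  # likely [0]: ham
```

Occasionally wrong, but for this kind of bulk sifting, occasionally wrong is fine.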
The thing is that "AI" has been quietly used for that sort of thing for decades. It never got the kind of hype LLMs have gotten lately thanks to ChatGPT, mostly because it doesn't try to impersonate a human chatting with you, which I think is a monumentally bad use of AI.
As far as LLMs... honestly, I always scratch my head at this one, trying to think of a way they could be used that makes people's lives better. Maybe in some situations they could serve as an aid for trawling through lots of data. The issue there is that they quickly become a crutch: people start relying on them and throw expertise out the window.