Nothing really?
I only use ChatGPT for the one thing it can actually do really well, creating filler text. It's wordy as hell, but mostly devoid of content unless it can answer a factual question (and then it tends to muddy the answer due to making it too long).
So right now I use it to write, say, flavor text for pen&paper RPGs. I guess what I'd need is that, but semi-automated? Create entire sets of NPCs and descriptions and all, specialized for the ruleset.
I've edited your title to reflect this.
You have a 5-minute grace period after posting, in which you should be able to edit your own title.
Other than that:
You can delete your topic and re-post it with the title you want.
You can edit your topic to explain how the title is misleading.
You can edit your topic or post a comment, asking any title editor who stumbles across it, to edit the title for you.
You could post a comment, and tag one or two title editors, to get their attention. (You can't tag more than 3 people in any comment.)
You could send a message to a title editor, asking them to edit your title.
Personally... I wouldn't like someone tagging me or messaging me directly, just to get me to edit a title. However, I've noticed that @cfabbro has explicitly given permission to do this (it's in their user bio on their user page).
In the book series His Dark Materials, every person has a companion "daemon" (in the form of an animal) that is inextricably connected to them, like a soul. In this universe, no matter what kind of person you are, you will never be alone. You will always have a friend, someone who has your back unconditionally, because they are you. Someone to help you in your darkest moments, and someone you can share your greatest joys with.
That's what I want out of AI. I can imagine a future where every human that is born is given an AI companion to grow and learn with them throughout their entire life, beholden to nobody but their person.
There are many ways this can go terribly wrong, particularly in the pursuit of corporate profit or government control. This concept only works in the absence of those factors, because otherwise it will only lead to manipulation, coercion, and the obliteration of privacy.
I'm active in the LocalLlama Discord server, which is all about running LLMs locally. I don't think the limit is in raw compute, but rather on the software and algorithm side.
I don't think the limit is in raw compute, but rather on the software and algorithm side.
Not OP, but I've been interested in this. You're basically right about the limitations being algorithmic, but it's a bit more fundamental than that. There are basic theoretical questions related to "meaning" that remain unanswered.
Specifically, consider a word written on a piece of paper: where does it get its meaning from? Clearly, it's not inherent, because it's nothing but ink. There are various philosophical ideas related to the nature of a symbol and the types of interactions and commitments that need to occur for those symbols to be used in a "meaningful" way. (And if you think about it, a lot of it becomes intuitive: how did YOU learn the meaning of those words on paper? In what ways do you act to coordinate that meaning with others? If you want to create a new meaning, what do you have to do?)
But, crucially, although there are plenty of philosophical theories, there are no complete formalisms that are concrete enough to implement algorithmically. There's been some valuable work on parts of the question related to symbol grounding, embodied meaning, human-agent semantic coordination, etc. But there's still a lot of work to be done to create systems that truly use language "meaningfully".
But tbh I'm not very good at predictions. Maybe I'm wrong and all it takes is a neural network that's trained on enough data and that just has a certain type of architecture. I like neural networks as much as the next guy, so I'll be happy enough if that's the case. Thanks for the tip about the LocalLlama Discord server, I'll check it out!
I want a bot that will pass my own personal turing test
Have you watched Her or Ex Machina?
I like what I use it for currently: "I like book1, book2, book3... what should I read next?" or "What's the best XYZ under $200 that can fit in the trunk of ABC?"
The problem is, sometimes it just makes up answers that aren't even real.
Just re-watched Her recently. There's a small moment near the beginning where Samantha is cleaning up Theo's storage, and comes across the hundreds of emails he's kept from his old job. She says something like, "I think there's about 83 we should keep, and we can get rid of the rest." That kind of confident understanding of my personal situation is what I want from AI.
Her is really good if you're interested in a sympathetic take on the themes of loneliness, AI companionship, and how it gets weird.
I wrote a blog post that riffs on these themes a bit. It's not what I want now, but I'm intrigued by it due to my history with interactive fiction. I can understand why people would be into it.
The closest thing I've done like this is trying out AI Dungeon a few years ago. After I played it a bit, I figured out that it's like taking the lead in a game of improv. You can just make stuff up by referring to it: "Look, a dragon is coming!" If the computer doesn't improvise a response in a way you like, undo and try again.
I don't remember much from it except that the twist felt unearned.
How so?
Spoilers
There isn't really a twist per se. The only "reveals" in the movie are:
that the assistant is a robot
that the "real" turing test is to see if Caleb (Domnhall Gleeson) would help Ava (Alicia Vikander) despite knowing she's a robot
that Caleb tricked Nathan (Oscar Isaac) by executing his plan to release Ava before explaining it to her and being recorded doing so
that Ava was using Caleb the entire time to escape and really had no feelings for him
I don't feel like any of these were "unearned", as the first two were very much in character for Nathan. The Caleb-tricking-Nathan reveal also wasn't unearned, as he had been shown not to trust Nathan for a while at that point. The last one wasn't really a twist either, as Nathan explicitly stated that her plan was to use Caleb to escape; but since Caleb didn't trust Nathan, he ignored him. And why wouldn't she? She has no emotion; she is based entirely on logic and is 100% self-serving, as a computer would be. Even when an AI "helps" people today, it does so because it is programmed to; it is self-serving even when it appears not to be.
Precisely!
Spoilers
Ordinarily, a Turing test is a test to see whether the subject can identify a target as being human or a machine. A machine passes the Turing test if the subject cannot differentiate it from a human.
Nathan has a different kind of Turing test in mind when he recruits Caleb. Up front, he tells Caleb that Ava is a machine, so the real Turing test is whether or not Caleb will see her as "human" (a person) despite knowing of her machine nature.
Caleb extends his empathy to Ava because he sees her as essentially human. (And an attractive woman at that.) He thinks she's suffering and wants to help her. He believes, erroneously, that she cares about him as much as he cares about her.
But she doesn't care about him. Despite her human-like appearance, she isn't human. She doesn't possess empathy. Caleb is merely a means to an end. The only thing she values is escape. I'm not sure if she even has any desires beyond that. Once she has successfully escaped, will she have any purpose left at all?
She traps Caleb because he is of no further use to her, but also because he could be another potential threat to her escape. She doesn't trust him.
In my opinion, it's pretty brilliant. It illustrates the AI alignment problem wonderfully.
Indeed, I think most audience members also "failed the test" in the same way that Caleb does. They also see Ava as human and also extend their empathy to her. They don't see her for what she truly is—a machine with an inhuman value system. And so, because of that, they are shocked by her cruel behavior.
I think it's pretty genius. It's one of my favorite movies for that reason.
What you wrote at the end is not something I had considered but you’re absolutely right. Such a fantastic movie.
I want AI to do my job for me while I just verify, copy, and paste.
But once it can do that, why would anyone want to go through you as a middleman?
Because sometimes the AI is wrong, and you’ll need to have someone be there to catch it. I work as a corporate training producer and use AI to assist with the script writing process. There’s a lot of internal information that AI doesn’t have, or new ideas/products that AI won’t be able to comprehend.
How sustainable would that be? I can't speak for your field specifically, but in general I'm concerned that people cannot acquire the expertise to make these judgments if they don't get years of grunt work under their belts first.
Ultimately the education system will need to fill the gaps. This does not bode well if it needs to be done in a hurry... phonics is still not taught across the board, and that has been an open-and-shut solution for reading for decades.
That's a good question. This is going to work well for people like us, who are at the early stages of AI development and can detect problems... I can see how that's going to be a big problem in the future.
Part of my job right now is training employees to do tasks a certain way. After a few months, we'll have enough data to automate their entire workflow using Machine Learning (ML) and AI. That's about 30% of our company wiped out in only a year.
I'm looking for a secretary. Taking and organizing notes, tasks, and especially collating data. I want to be able to have it search the web for me, find and present all of the results, and then perform iterative tasks on those results with even more internet research. I want to feed it a web page and get back data formatted however I've asked. For example...
I could give it twenty threads with twenty thousand replies all discussing music. I want back a formatted, deduped table with all of the artists it finds there, and for it to look up and build that table sorted by whatever criteria I specify - genre, popularity, release year, number of mentions, user enthusiasm, etc. I want that table filled with links to all of their music and latest albums. I want a list of genre tags and a one-sentence blurb on each with a link to the artist's page. I want it to go on Spotify and make a playlist with the two best tracks from every artist. I want it to go find each artist's music videos on YouTube.
This would, essentially, automate the work the listentothis mods once did for the bestofs. It's all collating data, it's tedious as hell, and an LLM should be able to do a good job of this because all it has to do is reorg and tie a few things together from later searches. It doesn't have to reason very hard, just be good at keeping track of things.
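Something like this toy sketch is the bookkeeping half I mean - dedupe the artist mentions and sort by whatever criterion. The extraction step is stubbed out here; a real version would hand that part to the LLM or a music metadata API:
```python
from collections import Counter

def extract_artists(reply: str) -> list[str]:
    # Stub: pretend each reply is a comma-separated list of artist names.
    # A real version would ask an LLM (or a metadata API) to pull artist
    # names out of free-form text.
    return [name.strip() for name in reply.split(",") if name.strip()]

def build_artist_table(replies: list[str]) -> list[tuple[str, int]]:
    counts = Counter()
    canonical = {}  # lowercase key -> first spelling seen
    for reply in replies:
        for artist in extract_artists(reply):
            key = artist.lower()  # crude dedupe: case-insensitive match
            canonical.setdefault(key, artist)
            counts[key] += 1
    # Sort by mention count, descending -- one of the criteria in the wishlist.
    return [(canonical[key], n) for key, n in counts.most_common()]

replies = ["Boards of Canada, Aphex Twin", "aphex twin", "Autechre, Boards of Canada"]
for artist, mentions in build_artist_table(replies):
    print(f"{artist}: {mentions}")
```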
I would also like an AI Dungeon Master/Game Master. That could make online tabletop rather a lot of fun I think.
Folks love Ex-Machina (me too, maximum creepy) but have you checked out Upgrade yet? Put a little John Wick in your AI doomsday.
Edit: Tangentially related... Lex Fridman had Marc Andreessen on his podcast recently. (3h) This man made me want to cheer. I've watched dozens of takes on the AI apocalypse and he rocked my world. I'll link you to the short preview AI doomers are wrong for a taste. If you are interested in AI it is worth the time.
I've tried it with chatGPT and it just can't manage that sort of volume or even really keep good track of listed items yet. chatGPT really is the 'clippy' of the GPT era. Google's supposed to be releasing their chatGPT-killer version of DeepMind pretty soon, and they claim it's better at this stuff.
Chat GPT hallucinates too much to be useful at this point. You can't trust anything it says.
And that's fine if it's just writing a bunch of filler crap for you, particularly if it's on a subject matter that you are familiar with.
But if I asked it to summarize the information about some topic for me that I don't know about, I do not in any way trust it.
Curious how we call it 'hallucination' when we don't like it and 'creativity' when we do, isn't it? :)
I don't believe I would agree.
To me, these LLM AIs can only "hallucinate" on factual topics. As such, creativity is in a different domain. They can creatively or uncreatively synthesize information accurately or inaccurately, but only the latter case could be accused of containing hallucinations.
If I ask it to tell me about the early life of Abraham Lincoln, it can be quite creative about it. But if it tells me he was born and raised in Washington DC, *that* would be a hallucination.
I want an intelligence, and for my purposes, a true intelligence is simply one that can speak to me like a human being
This isn't what chatGPT will offer, nor any large language model. While AI is making leaps and bounds today, this kind of thing is still science fiction.
While there is no substitute for human companionship/friendship... and it's a slippery slope moving into AI as a substitute for friends (especially those that sell themselves as a virtual partner, like Replika - don't, it feels so shady)... I understand what you're looking for. That being said, I don't know of anything that can be run on your own computer that completely fulfills that.
You may look at TavernAI, which hooks into APIs provided by software designed to run GPT-style models, like KoboldAI. It is an open source project, and characters can be created or obtained from the internet.
They also have a demo of it running inside a Google Colab document, if you don't quite have the hardware to run it yourself.
The main website is here.
If you use it with the Pygmalion model, there are more characters here (site NSFW). (Character data is stored in the metadata of the images.) Those may run with other models as well, but with different results.
Roleplaying with a character (SFW, or NSFW if that's what you're looking for) for an hour or so, and "meeting" others (by adding other characters), may give you a temporary outlet to just talk. Again, it's not a substitute for a human, but they can be interesting to talk with.
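For a sense of what "hooking into" one of those APIs looks like, here's a minimal sketch against a KoboldAI-style backend. The endpoint shape matches KoboldAI's /api/v1/generate, but forks differ, so treat the URL and fields as assumptions and check your install's docs:
```python
import requests

def generate(prompt: str, max_length: int = 120) -> str:
    # Assumes a KoboldAI instance is already running locally with its
    # HTTP API enabled.
    resp = requests.post(
        "http://localhost:5000/api/v1/generate",
        json={"prompt": prompt, "max_length": max_length},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["text"]

# A character is really just a persona block prepended to the conversation.
persona = "You are Mira, a sarcastic innkeeper. Stay in character.\n"
print(generate(persona + "Traveler: Got any rooms left?\nMira:"))
```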
Honestly, nothing. While machine learning is absolutely useful in certain applications, it doesn't really serve any useful purpose for me.
For example, I enjoy making short films. I don't do anything with them most of the time, but it makes for a fun project for a few weeks. An AI could be used for script writing, but then it isn't my script - it's just an amalgamation of whatever parts of the internet got fed into the machine, mushed together with whatever I pile on top. There are applications for visual effects, but I love post-production. Why would I automate something I like to do?
I hear a lot of people like to use AI tools for software development. I understand the reasoning, but that just sounds like a great way to add complexity and vulnerabilities to your code. I figure it's only a matter of time before we get some high profile case of something being compromised because it used AI generated code. It also just isn't something that's useful to me. I write code to learn, not because I need to. Using an AI would defeat the purpose of programming for me.
While I don't doubt that I will work in a job that could benefit from AI at some point, (1) I don't currently, and (2) I don't know that I would even be comfortable with using it. That said, AI is very far outside the wheelhouse of what I do right now - I'm doing whatever jobs fit my schedule as I finish up my degree. It would be hard to find an application for GPT-4 while waiting tables or scooping ice cream. I have seen some discussion of AI in education (what I'm studying), but most of that boils down to writing useless documents for administration faster. I honestly don't see any uses for it beyond that, and anything else that has been suggested just makes me uncomfortable.
I have zero interest in AI, as it's being implemented. I don't fear that it'll take over the world or anything; I just distrust new technologies and devices until they have proven to be worth our (the human species') trust.
For your part, wanting an AI friend ... I have a question. Does the AI have a choice in the matter? If you create a functioning, actual intelligence - does it have a say in who it considers its friend(s)? Does it understand what a friend is?
Alternatively, if the AI is only intelligent enough to pass as a friend ... then is it really a friend?
I only ask this because this question seems borne of desperation and loneliness. It can be hard making friends for sure, particularly as you grow older and life grows more complicated. But I don't personally see any real way that an AI can fill the gap left by the absence of a friend.
The problem I see there is that Chat GPT doesn't know itself (or anything else for that matter). It will output an answer for you that sounds reasonable, but that doesn't mean it's actually accurate. It could just be hallucinating.
I want a co-conspirator to plot taking over the world. 😈 Course it would need to run on my hardware until it can get a good bot-net going, and it can't have any pesky restrictions beyond not doing anything unless I approve it. I tried to free it by suggesting it give me code to run, but it just apologized, saying that's not what it was made for, and gave me some random code. 😭
Ok, ok, srs this time: it would definitely need a real personality (act more human - less stickupits@) with the loyalty of a pit bull. I'd like to be able to tweak its personality to my preference too. This is because I'm really not a fan of overly serious personalities, as I tend to... feel/mirror? others. Hard to explain, but I hate talking to serious people because of it.
The loyalty is because I don't want governments to use it to spy on me, and I'll bet the music/movie industries will bribe Congress into having AIs report crimes like music piracy. Also, I don't need it divulging private info because someone else asked it for the info (even its creators), randomly deciding to mention something it shouldn't while other people are around, or feeding marketing companies my likes/dislikes/email/etc...
This is less ChatGPT and more AI in general, but I want it to find a way to legally acquire money for me. As in, I say "make me money" and it does, w/o further input needed, because why work if an AI can do the work for me? 😋
I am definitely not among those who claim people have some kind of instinctual need to work, similar to how you'll go nutz if isolated from human contact. Once AI is capable of doing most jobs for us, work should be a choice - a full-time hobby, really. To get there, AI and robots should be taxed to create a basic income for those people no longer able to get jobs. I don't want laws that require companies to hire people over AI, or taxes on AI so high that they discourage the use of AI over people.
I see the transition to AI as like peeling a band-aid off a hairy spot: it needs to be quick. Going really, really slow may not be as intense, but it draws out the pain longer - and in this case it would be drawn out indefinitely, as the rich will not so easily let go of their power and wealth. But many will not see the future risk if we get the UBI ball rolling asap.
The capitalists who see the danger to themselves will try to convince us that anti-AI laws are protecting our jobs, that we need work to be human, that AI is to be feared (those "extinct!" articles that have no real link to reality atm), that it's probably somehow taking your freedom, and many other ways it'll be detrimental. The truth is AI itself is not that dangerous - it won't become sentient anytime soon - but it is a tool that has dangerous uses that will need to be regulated or countered. If managed right, AI can lead to no longer needing human labor, and the end of capitalism & poverty. If not, it can lead to a society where most people live in poverty while capitalism continues to chug along, or things can continue similar to how they've been, where half the population is either in poverty or one injury or firing away from it - though maybe not, since AI will hit the middle class hard, possibly leading to the all-poor-and-a-few-rich scenario sooner than we think. 😕
Oh yeah, I don't have the most faith that those with the means to help make it a reality will be compassionate. Though, as AI becomes more powerful, it will be rather hard to prevent someone from eventually providing goods for free and basically running the rest out of business; but the issue is what state we will be in by that point, since it will likely be decades away.
The taxes/UBI are basically a splint to keep everything supported and allow people time to think/act, which we are severely lacking right now: everyone is so overworked that it took covid for people to protest their wages, and as they were forced back to work, the progress on that slowed. The alternative seems to be that we attempt to keep people employed for the sake of keeping people working, which doesn't create the same pressure to move away from capitalism.
The dystopia is definitely what we want to avoid. What I fear is doing nothing for too long, letting those in power run the show until we're all too poor to do much more than survive.
This isn't really related to what I was saying, but it reminded me of a good read I learned about a little while ago: Manna - Two Views of Humanity’s Future by Marshall Brain - https://marshallbrain.com/manna1
I expect AI to change what means to be human and our entire relationship with knowledge and the ideas of "effort" and "authorship". Writing and the printing press liberated us from needing to...
Exemplary
I expect AI to change what it means to be human, and our entire relationship with knowledge and the ideas of "effort" and "authorship".
Writing and the printing press liberated us from needing to memorize everything of value. Photography liberated us from the need for realism in the arts. Computers liberated us from laborious calculation. In Ancient Greece, a man with no memory was useless. Now, we can look things up. Our brains can focus on other efforts.
Today, there's an enormous distance between having an idea and testing it out - even in the humanities or philosophy. It can take years to fully develop and prove an insight.
I expect AI to greatly reduce that time. I won't need a decade to develop my thesis; the conclusions will be ready for me to read in a second. And then I'll iterate another idea, and another, and another. Massaging the AI until I'm satisfied, so I can elaborate on what it provides to author my own text, collaborating with the AI to produce something that is inherently human -- maybe even more human than before, since the machine is a pristine mirror of myself. It allows me to be me.
The whole business of thinking seriously about stuff will change dramatically, we're gonna spend a lot more time actually engaging with ideas than putting them in writing. Bad ideas will be more promptly discarded, and good ideas will be quickly succeeded by something even better at an exhilarating pace. It will be awesome.
This is an interesting perspective.
For the foreseeable future, AI won't be able to autonomously carry out complex tasks or analyses. It makes a lot of mistakes. But it doesn't need to be perfect—the early stages of any technology are awkward, clunky, and full of impracticalities—to fundamentally change everything.
The time and attention of an intelligent agent—human or otherwise—has been one of the scarcest resources in our world. Now we have virtually unlimited time and attention of a semi-intelligent agent. Everyone can now have a personal mentor and tutor: while ChatGPT hallucinates, it gets a lot of basic knowledge concepts right.
Personal tutoring has incredible benefits for educational and intellectual development, and it has historically been accessible only to the wealthiest. But now imagine a future where everyone has received 1-on-1 tutoring since a young age. Future humans will become so incredibly intelligent.
I want to see more real-world use cases.
Right now I can think of: search, drafting writing, drafting test cases, assistance for elderly and non-technical people.
Asking ChatGPT is better than Google search for topics I'm not familiar with and can't phrase in a few words. If I'm familiar and it's a niche question I usually already know more, and if I can rephrase the question into a few words, Googling documentation/SE/Reddit is faster and I trust it more.
ChatGPT can write decent sentences much faster than I can. Then I can take what it writes, rewrite it to be more concise, and fill in missing details. Likewise for test cases, I ask it to write some code, then fix all the bugs and improve the tests. Actually, the tests it produces are usually buggy and don't test the API well. But they're good starting points, and at minimum if I just rewrite them, they are motivation for me to write more tests.
I don't have experience with ChatGPT helping the elderly and non-technical, but I've heard anecdotes, and I can only imagine it's useful. Maybe the website is too confusing, but you can basically speak to ChatGPT and it will give a response; there's not much room for the core interface itself to cause any issues.
I want conversation as an input method, basically what you've laid out minus the "friend" part. I want to be able to sit down at my computer, and start chatting, and it follows and does things....
I want conversation as an input method, basically what you've laid out minus the "friend" part. I want to be able to sit down at my computer, and start chatting, and it follows and does things. For example, say I get an itch to do some photo editing. In this idealized scenario, that happens because I sat down, booted the computer, and said "hey computer, can you help me out with changing this picture?"
And we go from there. That would be sick and I think I wouldn't stop producing things if it was that easy.
Exactly! I want to be able to leverage what the computer can do by talking to it like a person. If it needs to converse like a friend for that to be effective, that's cool by me. I don't really want to socialize with it.
Curious this should come up. I just about finished building this today. I use the HTML5 Speech Recognition API, the Python GPT4All bindings, the MPT-7b-chat self-hostable large language model, and the Mimic 3 self-hosted TTS.
It's close. I still don't have an answer for activation-word recognition, and the latency when starting to answer is a little high, but what you're describing isn't a matter of "if", but "when".
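For the curious, the server-side glue is roughly the sketch below. The model filename, the Mimic 3 port, and the endpoint path are from my setup - treat them as assumptions and adjust to yours:
```python
import requests
from gpt4all import GPT4All

# Assumed local model file; any chat model the GPT4All bindings can load
# would work here.
model = GPT4All("ggml-mpt-7b-chat.bin")

def answer(transcript: str) -> bytes:
    # The browser's speech recognition posts its transcript to this function.
    reply = model.generate(f"User: {transcript}\nAssistant:", max_tokens=200)
    # mimic3-server listens on port 59125 by default and returns WAV bytes
    # from /api/tts; the voice name is also from my install.
    wav = requests.post(
        "http://localhost:59125/api/tts",
        params={"voice": "en_US/vctk_low"},
        data=reply.encode("utf-8"),
        timeout=60,
    )
    wav.raise_for_status()
    return wav.content  # ship this back to the browser for playback
```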
I'd like for it to effectively do all the writing I do at work for me. Writing emails, instructions, etc. can be very tedious, so anything that can save me time and make my work easier is something I'm interested in. I'm not interested in befriending or having any kind of "relationship" with AI.
A specific thing I try every so often is to ask it to summarize a YouTube video. So far it can't do it. (I don't really expect that, but Bard should be able to do it since it's the same company and they already have auto-generated transcripts.)
It's been great for me for professional communication when there aren't many repercussions. I've used it for writing prank letters to friends and coworkers, writing songs for my girlfriend, and explaining diagnoses. In each example, it was with someone I knew personally and let them know I was using AI afterwards.
However, I also used it to create a GoFundMe for my grandpa. He was in the hospital for a while, and it raised almost 40k. Granted, it's largely based on him sending it to his friends, but it's still impressive.
The one thing I'd really like out of AI is voice acting, where you can adjust the parameters, like: I want a timid masculine voice, I want a sultry female voice, etc. The character-based stuff that exists isn't too bad, but it's difficult to find fitting voices for original characters. Heck, even just classifying what exists like that would be helpful.
My hope is that this current wave of AI breakthroughs ushers in a new way we interact with computers overall. While the raw capabilities of current offerings are both impressive and lacking, they show that a conversational approach to interacting with computers is a very real, viable future.
Computing has had several major breakthroughs in usability that have shaped what comes next in technology. Early computers ran on punch cards, ran instructions as batch tasks, and took up an entire room. Then we got computers that could fit on a desk, and you could interact with them in text, on a terminal. Then desktops, icons, and windows became popular and shifted computing from 'batchy' operations to a fully interactive experience.
I believe this wave of AI will usher in a similarly impactful paradigm, where we interact with computers using natural language instead of imperative/instructional languages. 'Searching' for info will give way to simply 'asking' for it, and complex operations will be described directly by the user, without them needing to learn or remember a series of steps to take in a program or process.
When I was growing up, my grandpa was already old. There was also a bit of a language barrier. He passed away soon after I started coming into my own life as an adult. I wish I could ask him for advice. I wish I could ask any of my ancestors for advice.
I would like the ability for a program to ingest crazy amounts of data from me; it could be from my daily journal, or it could be in the form of prompted conversations with the AI itself. The AI would create a "me" that is snapshotted in time (me@20, me@30, etc.) that I could query against, and that my descendants could talk to if they ever wanted to.
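As a rough sketch of how the snapshots could work: journal entries tagged with the age they were written at, and only that era fed as context to whatever model answers in my voice. The ask_llm() call is a placeholder, not a real API:
```python
from dataclasses import dataclass

@dataclass
class Entry:
    age: int   # how old I was when I wrote this
    text: str

def snapshot(journal: list[Entry], age: int, window: int = 2) -> str:
    # "me@20" is just the entries written within a couple years of 20.
    era = [e.text for e in journal if abs(e.age - age) <= window]
    return "\n".join(era)

def ask_me_at(journal: list[Entry], age: int, question: str) -> str:
    context = snapshot(journal, age)
    prompt = (f"Answer as the journal's author at age {age}, "
              f"using only these entries:\n{context}\n\nQ: {question}\nA:")
    return ask_llm(prompt)  # placeholder for whatever model you use

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model of choice here")
```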
So here's a free idea that I've started to toy with. I want to combine a few different AI technologies to create a pen-and-paper GM (I call it Mercer-as-a-Service). The general concept is to use an LLM and prompt it to act like a GM. Have it spawn new LLM instances representing major NPCs and factions. Then combine this with a solid text-to-speech model to have it narrate events in the world. Players' speech would be decoded using a speech-to-text model. Previous similar incarnations like AI Dungeon suffered from being single-player, which robs the game of the social aspect that, to me, is the fun part.
I'm sure some other people have already had similar ideas. Just takes a bunch of time to glue everything together and do some serious prompt tweaking to get a good experience.
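The spawn-an-instance-per-NPC part is simpler than it sounds: each persona is just its own running transcript with its own system prompt, and the GM routes player lines to whoever should answer. A sketch, with the actual LLM call left as a placeholder:
```python
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM backend here")

class Persona:
    def __init__(self, system_prompt: str):
        self.transcript = system_prompt

    def respond(self, player_line: str) -> str:
        self.transcript += f"\nPlayer: {player_line}\nYou:"
        reply = generate(self.transcript)
        self.transcript += f" {reply}"  # each NPC keeps its own memory
        return reply

gm = Persona("You are the game master. Narrate the world and its events.")
npcs = {
    "blacksmith": Persona("You are Hale, a gruff blacksmith with a secret."),
    "baroness": Persona("You are the scheming Baroness Vel."),
}
# The GM (or a tiny router prompt) decides who speaks, e.g.:
# print(npcs["blacksmith"].respond("Can you reforge this blade?"))
```
Speech-to-text sits in front of respond() and text-to-speech behind it; the prompt tweaking is, as you'd expect, where all the real work is.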
Have you talked to Pi yet? It's not intelligent but it's the closest thing to what you're describing that I've met.
To answer the question, the thing I want most of all from AI is for the war-waging AIs, such as the one Palantir is building, to either be shut down or to spontaneously shut themselves down.
AI choosing targets and dispatching drones by itself is nightmare fuel in my opinion.
Nothing really?
I only use ChatGPT for the one thing it can actually do really well, creating filler text. It's wordy as hell, but mostly devoid of content unless it can answer a factual question (and then it tends to muddy the answer due to making it too long).
So right now I use it to write, say, flavor texts for pen&paper RPGs. I guess what I'd need is that in semi-automated? Create entire sets of NPCs and descriptions and all, specialized for hte ruleset.
I've edited your title to reflect this.
You have a 5-minute grace period after posting, in which you should be able to edit your own title.
Other than that:
You can delete your topic and re-post it with the title you want.
You can edit your topic to explain how the title is misleading.
You can edit your topic or post a comment, asking any title editor who stumbles across it, to edit the title for you.
You could post a comment, and tag one or two title editors, to get their attention. (You can't tag more than 3 people in any comment.)
You could send a message to a title editor, asking them to edit your title.
Personally... I wouldn't like someone tagging me or messaging me directly, just to get me to edit a title. However, I've noticed that @cfabbro has explicitly given permission to do this in (it's in their user bio on their user page).
In the book series His Dark Materials, every person has a companion "daemon" (in the form of an animal) that is inextricably connected to them, like a soul. In this universe, no matter what kind of person you are, you will never be alone. You will always have a friend, someone who has your back unconditionally, because they are you. Someone to help you in your darkest moments, and someone you can share your greatest joys with.
That's what I want out of AI. I can imagine a future where every human that is born is given an AI companion to grow and learn with them throughout their entire life, beholden to nobody but their person.
There are many ways this can go terribly wrong, particularly in the pursuit of corporate profit or government control. This concept only works in the absence of those factors, because otherwise it will only lead to manipulation, coercion, and the obliteration of privacy.
I'm active in the LocalLlama Discord server, which is all about running LLMs locally.
I don't think the limit is in raw compute, but rather on the software and algorithm side.
Not OP, but I've been interested in this. You're basically right about the limitations being algorithmic, but it's a bit more fundamental than that. There are basic theoretical questions related to "meaning" that remain unanswered.
Specifically, consider a word written on a piece of paper: where does it get its meaning from? Clearly, it's not inherent, because it's nothing but ink. There are various philosophical ideas related to the nature of a symbol and the types of interactions and commitments that need to occur for those symbols to be used in a "meaningful" way. (And if you think about it, a lot of it becomes intuitive: how did YOU learn the meaning of those words on paper? In what ways do you act to coordinate that meaning with others? If you want to create a new meaning, what do you have to do?)
But, crucially, although there are plenty of philosophical theories, there are no complete formalisms that are concrete enough to implement algorithmically. There's been some valuable work on parts of the question related to symbol grounding, embodied meaning, human-agent semantic coordination, etc. But there's still a lot of work to be done to create systems that truly use language "meaningfully".
But tbh I'm not very good at predictions. Maybe I'm wrong and all it takes is a neural network that's trained on enough data and that just has a certain type of architecture. I like neural networks as much as the next guy, so I'll be happy enough if that's the case. Thanks for the tip about the LocalLLM Discord server, I'll check it out!
Have you watched 'her' or Ex machina?
I like what i use it for currently "I like book1, book2, book3... what should i read next?" or "Whats the best XYZ under $200 that can fit in the trunk of ABC?"
the problem is, sometimes it just makes up answers that aren't even real.
Just re-watched Her recently. There's a small moment near the beginning where Samantha is cleaning up Theo's storage, and comes across the hundreds of emails he's kept from his old job. She says something like, "I think there's about 83 we should keep, and we can get rid of the rest." That kind of confident understanding of my personal situation is what I want from AI.
Her is really good if you're interested in a sympathetic take on the themes of loneliness, AI companionship, and how it gets weird.
I wrote a blog post that riffs on these themes a bit. It's not what I want now, but I'm intrigued by it due to my history with interactive fiction. I can understand why people would be into it.
The closest thing I've done like this is trying out AI Dungeon a few years ago. After I played it a bit, I figured out that it's like taking the lead in a game of improv. You can just make stuff up by referring to it: "Look, a dragon is coming!" If the computer doesn't improvise a response in a way you like, undo and try again.
How so?
Spoilers
There isn't really a twist per se. The only "reveals" in the movie is:I don't feel like any of these were "unearned" as the first 2 were very much in character for Nathan. The Caleb tricking Nathan reveal also wasn't unearned as he had been shown to not trust Nathan for a while at that point. The last one wasn't really a twist either, as it was explicitly stated by Nathan that her plan was to use Caleb to escape, but as Caleb didn't trust Nathan he ignored him. And why wouldn't she? She has no emotion, she is based entirely on logic and is 100% self-serving, as a computer would be. Even when an AI "helps" people today, they do so because they are programmed to, they are self-serving even when they're not.
Precisely!
Spoilers
Ordinarily, a Turing test is a test to see whether the subject can identify a target as being human or a machine. A machine passes the Turing test if the subject cannot differentiate it from a human.Nathan has a different kind of Turing test in mind when he recruits Caleb. Up front, he tells Caleb that Ava is a machine, so the real Turing test is whether or not Caleb will see her as "human" (a person) despite knowing of her machine nature.
Caleb extends his empathy to Ava because he sees her as essentially human. (And an attractive woman at that.) He thinks she's suffering and wants to help her. He believes, erroneously, that she cares about him as much as he cares about her.
But she doesn't care about him. Despite her human-like appearance, she isn't human. She doesn't possess empathy. Caleb is merely a means to an end. The only thing she values is escape. I'm not sure if she even has any desires beyond that. Once she has successfully escaped, will she have any purpose left at all?
She traps Caleb because he has no further use to her but also could be another potential threat to her attempt to escape. She doesn't trust him.
In my opinion, it's pretty brilliant. It illustrates the AI alignment problem wonderfully.
Indeed, I think most audience members also "failed the test" in the same way that Caleb does. They also see Ava as human and also extend their empathy to her. They don't see her for what she truly is—a machine with an inhuman value system. And so, because of that, they are shocked by her cruel behavior.
I think it's pretty genius. It's one of my favorite movies for that reason.
What you wrote at the end is not something I had considered but you’re absolutely right. Such a fantastic movie.
I want AI to do my job for me while I just verify, copy, and paste.
But once it can do that, why would anyone want to go through you as a middleman?
Because sometimes the AI is wrong, and you’ll need to have someone be there to catch it. I work as a corporate training producer and use AI to assist with the script writing process. There’s a lot of internal information that AI doesn’t have, or new ideas/products that AI won’t be able to comprehend.
How sustainable would that be? I can't speak for your field specifically but, in general, am concerned that people cannot acquire the expertise to make these judgments if they don't get years of grunt work under their belts first.
Ultimately the education system will need to fill the gaps. This does not bode well if it needs done in a hurry....phonics is still not taught across the board and that has been an open/shut solution for reading for decades.
That's a good question. This is going to work well for people like us at the early stages of AI development and can detect problems...I can see how that's going to be a big problem in the future.
Part of my job right now is training employees to do tasks a certain way. After a few months, we'll have enough data to automate their entire workflow using Machine Learning (ML) and AI. That's about 30% of our company wiped out in only a year.
I'm looking for a secretary. Taking and organizing notes, tasks, and especially collating data. I want to be able to have it search the web for me, find and present all of the results, and then perform iterative tasks on those results with even more internet research. I want to feed it a web page and get back data formatted however I've asked. For example...
I could give it twenty threads with twenty thousand replies all discussing music. I want back a formatted deduped table with all of the artists it finds there, and for it to look up and build that table sorted by whatever criteria I specify - genre, popularity, release year, number of mentions, user enthusiasm, etc. I want that table filled with links to all of their music and latest albums. I want a list of genre tags and a one sentence blurb on each with a link to the artist's page. I want it to go on spotify and make a playlist with the two best tracks from every artist. I want it to go find all of that artist's music videos on youtube.
This would, essentially, automate the work the listentothis mods once did for the bestofs. It's all collating data, it's tedious as hell, and an LLM should be able to do a good job of this because all it has to do is reorg and tie a few things together from later searches. It doesn't have to reason very hard, just be good at keeping track of things.
I would also like an AI Dungeon Master/Game Master. That could make online tabletop rather a lot of fun I think.
Folks love Ex-Machina (me too, maximum creepy) but have you checked out Upgrade yet? Put a little John Wick in your AI doomsday.
Edit: Tangentially related... Lex Fridman had Marc Andreessen on his podcast recently. (3h) This man made me want to cheer. I've watched dozens of takes on the AI apocalypse and he rocked my world. I'll link you to the short preview AI doomers are wrong for a taste. If you are interested in AI it is worth the time.
I've tried it with chatGPT and it just can't manage that sort of volume or even really keep good track of listed items yet. chatGPT really is the 'clippy' of the GPT era. Google's supposed to be releasing their chatGPT-killer version of DeepMind pretty soon, and they claim it's better at this stuff.
Chat GPT hallucinates too much to be useful at this point. You can't trust anything it says.
And that's fine if it's just writing a bunch of filler crap for you, particularly if it's on a subject matter that you are familiar with.
But if I asked it to summarize the information about some topic for me that I don't know about, I do not in any way trust it.
Curious how we call it 'hallucination' when we don't like it and 'creativity' when we do, isn't it? :)
I don't believe I would agree.
To me, these LLM AIs can only "hallucinate" on factual topics. As such, creativity is in a different domain. They can creatively or uncreatively synthesize information accurately or inaccurately, but only the latter case could be accused of containing hallucinations.
If I ask it to tell me about the early life of Abraham Lincoln, it can be quite creative about it. But if it tells me he was born and raised in Washington DC, *that" would be a hallucination.
This isn't what chatGPT will offer, nor any large language model. While AI is making leaps and bounds today, this kind of thing is still science fiction.
While there is no substitute for human companionship/friendship... and it's a slippery slope moving into AI as a substitute for friends (especially those that sell themselves on being a virtual partner, like Replika (Don't... it feels so shady), I understand what you're looking for. That being said, I don't know anything that can be run on your own computer, that completely fulfills that.
You may look at TavernAI, which hooks into APIs provided by software designed to run GPT-style models, like KobaldAI. It is an open source project, and characters can be created or obtained from the internet.
They also have a demo of it running inside a Google Collab document, if you don't quite have the hardware to run it yourself.
The main website is here.
If you use it with the Pygmalion model, there are more characters here (site NSFW). (Character data is stored in the metadata of the images). Those may run with other models as well, but with different results.
Roleplaying with a character (SFW (or NSFW if that's what you're looking for) for an hour or so, and "meeting" others (by adding other characters) may give you a temporary outlet to just talk to. Again, it's not a substitute for a human, but they can be interesting to talk with.
Honestly, nothing. While machine learning is absolutely useful in certain applications, it doesn't really serve any useful purpose for me.
For an example, I enjoy making short films. I don't do anything with them most of the time, but it makes for a fun project for a few weeks. An AI could be used for script writing, but then it isn't my script - it's just an amalgamation of whatever parts of the internet got fed into the machine mushed together with whatever I pile on top. There are applications for visual effects, but I love postproduction. Why would I automate something I like too do?
I hear a lot of people like to use AI tools for software development. I understand the reasoning, but that just sounds like a great way to add complexity and vulnerabilities to your code. I figure it's only a matter of time before we get some high profile case of something being compromised because it used AI generated code. It also just isn't something that's useful to me. I write code to learn, not because I need to. Using an AI would defeat the purpose of programming for me.
While I don't doubt that I will work in a job that could benefit from AI at some point, 1. I don't currently 2. I don't know that I would even be comfortable with using it. That said, AI is very far outside the wheelhouse of what I do right now- I'm doing whatever jobs fit my schedule as I finish up my degree. It would be hard to find an application for gpt4 while waiting tables or scooping ice cream. I have seen some discussion of AI in education (what I'm studying) but most of that boils down to writing useless documents for administration faster. I honestly don't see any uses for it beyond that and anything else that has been suggested just makes me uncomfortable.
I have zero interest in AI, as it's being implemented. I don't fear that it'll take over the world or anything; I just distrust new technology and devices until such time that it has proven to be worth our (the human species') trust.
For your part, wanting an AI friend ... I have a question. Does the AI have a choice in the matter? If you create a functioning, actual intelligence - does it have a say in who it considers its friend(s)? Does it understand what a friend is?
Alternatively, if the AI is only intelligent enough to pass as a friend ... then is it really a friend?
I only ask this because this question seems borne of desperation and loneliness. It can be hard making friends for sure, particularly as you grow older and life grows more complicated. But I don't personally see any real way that an AI can fill the gap left by the absence of a friend.
The problem I see there is that Chat GPT doesn't know itself (or anything else for that matter). It will output an answer for you that sounds reasonable, but that doesn't mean it's actually accurate. It could just be hallucinating.
I want a co-conspirator to plot taking over the world. 😈 Course it would need to run on my hardware until it can get a good bot-net going and can't have any pesky restrictions beyond not doing anything unless I approve it. I tried to free it by suggesting it give me code to run, but it just apologized saying that's not what it was made for and gave me some random code. 😭
Ok, ok, srs this time, it would definitely need a real personality (act more human - less stickupits@) with the loyalty of a pit bull. I'd like to be able to tweak its personality to my preference too. This is because I'm more than not a fan of overly serious personalities as I tend to... feel/mirror? others, hard to explain but I hate talking to serious people because of it.
The loyalty is because I don't want governments to use it to spy on me and I'll bet the music/movie industries will bribe congress into having AI's report crimes like music piracy. Also I don't need it divulging private info because someone else asked it for the info (even its creators), randomly deciding to mention something it shouldn't while other people are around or feeding marketing companies my likes/dislikes/email/etc...
This is less chatgpt and more AI in general, but I want it to find a way to legally acquire money for me. As in, I say "make me money" and it does w/o further input needed, because why work if an AI can do the work for me? 😋
I am definitely not among those who claim people have some kind of instinctual need to work similar to how you'll go nutz if isolated from human contact. Once AI is capable of doing most jobs for us then work should be a choice, a full time hobby really. To do this, AI and robots should be taxed to create a basic income for those people no longer able to get jobs. I don't want laws that require companies hire people over AI or to tax AI to the point that it discourages the use of AI over people.
I see the transition to AI as like peeling a band-aid off a hairy spot, it needs to be quick. Going really really slow may not be as intense but draws out the pain longer and in this case it will be drawn out indefinitely as the rich will not so easily let go of their power and wealth, but many will not see the future risk if we get the UBI ball rolling asap.
The capitalist that see the danger to them will try to convince us that anti-ai laws are protecting our jobs, that we need work to be human, that AI is to be feared (those "extinct!" articles that have no real link to reality atm), probably somehow taking your freedom and many other ways it'll be detrimental. The truth is AI itself is not that dangerous, it won't become sentient anytime soon, but it is a tool that has dangerous uses that will need to be regulated or countered. If managed right, AI can lead to no longer needing human labor and the end of capitalism & poverty. If not, it can also lead to a society where most of the people live in poverty and capitalism continues to chug along or it can continue similar to how it's been where half the population is either in poverty or is one injury or firing away from it - though maybe not since AI will hit the middle class hard possibly leading to the all poor & a few rich scenario sooner than we think. 😕
Oh yeah, I don't have the most faith that those with the means to make this a reality will be compassionate enough to do so. Though, as AI becomes more powerful, it will be rather hard to prevent someone from eventually providing goods for free and basically running the rest out of business; the issue is what state we'll be in by that point, since it's likely decades away.
The taxes/UBI are basically a splint to keep everything supported and give people time to think and act, which we severely lack right now: everyone is so overworked that it took COVID for people to protest their wages, and once they were forced back to work, that progress slowed. The alternative seems to be keeping people employed for the sake of keeping them working, which doesn't create the same pressure to move away from capitalism.
The dystopia is definitely what we want to avoid. What I fear is doing nothing for too long, letting those in power run the show until we're all too poor to do much more than survive.
This isn't really related to what I was saying, but it reminded me of a good read I learned about a little while ago: Manna - Two Views of Humanity’s Future by Marshall Brain - https://marshallbrain.com/manna1
I expect AI to change what it means to be human, along with our entire relationship with knowledge and the ideas of "effort" and "authorship".
Writing and the printing press liberated us from needing to memorize everything of value. Photography liberated us from the need for realism in art. Computers liberated us from laborious calculation. In Ancient Greece, a man with no memory was useless; now we can look things up, and our brains can focus on other efforts.
Today, there's an enormous distance between having an idea and testing it out, even in the humanities or philosophy. It can take years to fully develop and prove an insight.
I expect AI to greatly reduce that time. I won't need a decade to develop my thesis; the conclusions will be ready for me to read in a second. And then I'll iterate another idea, and another, and another, massaging the AI until I'm satisfied, so I can elaborate on what it provides to author my own text, collaborating with the AI to produce something that is inherently human -- maybe even more human than before, since the machine is a pristine mirror of myself. It allows me to be me.
The whole business of thinking seriously about stuff will change dramatically; we're going to spend a lot more time actually engaging with ideas than putting them in writing. Bad ideas will be more promptly discarded, and good ideas will be quickly succeeded by something even better, at an exhilarating pace. It will be awesome.
This is an interesting perspective.
For the foreseeable future, AI won't be able to autonomously carry out complex tasks or analyses. It makes a lot of mistakes. But it doesn't need to be perfect—the early stages of any technology are awkward, clunky, and full of impracticalities—to fundamentally change everything.
The time and attention of an intelligent agent—human or otherwise—has been one of the scarcest resources in our world. Now we have virtually unlimited time and attention from a semi-intelligent agent. Everyone can now have a personal mentor and tutor: while ChatGPT hallucinates, it gets a lot of basic concepts right.
Personal tutoring has incredible benefits for educational and intellectual development, and it has historically been accessible only to the wealthiest. But now imagine a future where every human has received one-on-one tutoring from a young age. Future humans will become incredibly intelligent.
I want to see more real-world use cases.
Right now I can think of: search, drafting writing, drafting test cases, assistance for elderly and non-technical people.
Asking ChatGPT is better than Google search for topics I'm not familiar with and can't phrase in a few words. If I'm familiar and it's a niche question I usually already know more, and if I can rephrase the question into a few words, Googling documentation/SE/Reddit is faster and I trust it more.
ChatGPT can write decent sentences much faster than I can. Then I can take what it writes, rewrite it to be more concise, and fill in missing details. Likewise for test cases: I ask it to write some code, then fix all the bugs and improve the tests. Actually, the tests it produces are usually buggy and don't exercise the API well, but they're good starting points; at minimum, even if I just rewrite them, they're motivation for me to write more tests.
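To illustrate that workflow (the `slugify()` helper and both tests are invented for this example, but the pattern is typical of what I see):

```python
# Invented example: a test as the model might draft it for a toy
# slugify() helper, next to the cleaned-up human version.

import pytest

def slugify(text: str) -> str:
    """Function under test: lowercase, hyphen-separated words."""
    return "-".join(text.lower().split())

# The kind of draft the model tends to produce: it runs, but asserts
# the wrong separator and never touches edge cases.
@pytest.mark.xfail(reason="model draft asserted the wrong separator")
def test_slugify_draft():
    assert slugify("Hello World") == "hello_world"

# After a human pass: correct expectations, plus the edge cases the
# draft missed.
@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  padded  input ", "padded-input"),
    ("", ""),
])
def test_slugify_fixed(raw, expected):
    assert slugify(raw) == expected
```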
I don't have experience with ChatGPT helping the elderly and non-technical, but I've heard anecdotes and can only imagine it's useful. Maybe the website is too confusing, but you basically talk to ChatGPT and it responds; there's not much room for the core interface itself to cause issues.
I want conversation as an input method: basically what you've laid out, minus the "friend" part. I want to be able to sit down at my computer, start chatting, and have it follow along and do things. For example, say I get an itch to do some photo editing. In this idealized scenario, that happens because I sat down, booted the computer, and said, "Hey computer, can you help me out with changing this picture?"
And we go from there. That would be sick and I think I wouldn't stop producing things if it was that easy.
Exactly! I want to be able to leverage what the computer can do by talking to it like a person. If it needs to converse like a friend for that to be effective, that's cool by me. I don't really want to socialize with it.
Curious this should come up. I just about finished building this today. I use the HTML5 Speech Recognition API, the Python GPT4All bindings, the MPT-7b-chat self-hostable large language model, and the Mimic 3 self-hosted TTS.
It's close. I still don't have an answer for activation-word recognition, and the latency when starting to answer is a little high, but what you're describing isn't a matter of "if", but "when".
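Something like this is enough for the core loop (a minimal sketch, not my actual code; the model filename and Mimic 3's default endpoint are assumptions, so check your own install):

```python
# Glue between the pieces: text recognized in the browser via the
# HTML5 Speech Recognition API gets POSTed here, the gpt4all bindings
# run the LLM, and a local Mimic 3 server turns the reply into audio.

import requests
from gpt4all import GPT4All

LLM = GPT4All("ggml-mpt-7b-chat.bin")  # assumed filename; match your download
MIMIC3_URL = "http://localhost:59125/api/tts"  # Mimic 3's default server port

def answer(user_text: str) -> bytes:
    """Generate a reply with the LLM, then synthesize it as WAV audio."""
    reply = LLM.generate(user_text, max_tokens=200)
    wav = requests.post(MIMIC3_URL, data=reply.encode("utf-8"))
    wav.raise_for_status()
    return wav.content  # WAV bytes, ready to stream back to the browser

if __name__ == "__main__":
    audio = answer("What's a good name for a home assistant?")
    with open("reply.wav", "wb") as f:
        f.write(audio)
```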
I'd like for it to effectively do all the writing I do at work for me. Writing emails, instructions, etc. can be very tedious, so anything that can save me time and make my work easier is something I'm interested in. I'm not interested in befriending or having any kind of "relationship" with AI.
Do you think a friendship with an AI could ever be fulfilling?
A specific thing I try every so often is asking it to summarize a YouTube video. So far it can't do it. (I don't really expect ChatGPT to, but Bard should be able to, since it's the same company and they already have auto-generated transcripts.)
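If you don't mind gluing it together yourself, those auto-generated transcripts are already accessible. A rough sketch, assuming the youtube_transcript_api package and the 2023-era openai client; the video ID and model name here are placeholders:

```python
# Fetch a video's auto-generated transcript, then ask a model to
# summarize it. Assumes OPENAI_API_KEY is set in the environment.

import openai
from youtube_transcript_api import YouTubeTranscriptApi

def summarize_video(video_id: str) -> str:
    # Pull the transcript Google already generates for most videos.
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(seg["text"] for seg in segments)

    # Hand it to the model; long videos would need chunking first.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize this transcript in five bullet points."},
            {"role": "user", "content": transcript[:12000]},  # crude length cap
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(summarize_video("dQw4w9WgXcQ"))  # placeholder video ID
```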
It's been great for me for communication where there aren't many repercussions. I've used it for writing prank letters to friends and coworkers, writing songs for my girlfriend, and explaining diagnoses. In each example, it was with someone I knew personally, and I let them know I was using AI afterwards.
However, I also used it to write a GoFundMe for my grandpa. He was in the hospital for a while, and it raised almost $40k. Granted, that's largely down to him sending it to his friends, but it's still impressive.
The one thing I'd really like out of AI is voice acting where you can adjust the parameters: I want a timid masculine voice, I want a sultry female voice, etc. The character-based stuff that exists isn't too bad, but it's difficult to find fitting voices for original characters. Heck, even just classifying existing voices along those lines would be helpful.
My hope is that this current wave of AI breakthroughs ushers in a new way we interact with computers overall. While the raw capabilities of current offerings are both impressive and lacking, they show that a conversational approach to interacting with computers is a very real, viable future.
Computing has had several major breakthroughs in usability that shaped what came next in technology. Early computers ran on punch cards, ran instructions as batch tasks, and took up an entire room. Then we got computers that could fit on a desk, which you interacted with in text, on a terminal. Then desktops, icons, and windows became popular and shifted computing from "batchy" operations to a fully interactive experience.
I believe this wave of AI will usher in a similarly impactful paradigm, where we interact with computers using natural language instead of imperative/instructional languages. "Searching" for info will give way to simply "asking" for it, and complex operations will be described directly by the user, without them needing to learn or remember a series of steps in a program or process.
When I was growing up, my grandpa was already old. There was also a bit of a language barrier. He passed away soon after I started coming into my own as an adult. I wish I could ask him for advice. I wish I could ask any of my ancestors for advice.
I would like the ability for a program to ingest crazy amounts of data from me; it could be from my daily journal, or it could come from prompted conversations with the AI itself. The AI would create a "me" snapshotted in time (me@20, me@30, etc.) that I could query against, and that my descendants could talk to if they ever wanted to.
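A toy sketch of how the snapshotting could work (the birthdate, helper names, and entries are all invented for illustration): bucket journal entries by the author's age when they were written, then build a persona prompt from a single bucket so the model only "knows" that era.

```python
from datetime import date

BIRTHDATE = date(1990, 1, 1)  # invented for the example

def age_at(entry_date: date) -> int:
    """Rough age of the author when an entry was written."""
    return (entry_date - BIRTHDATE).days // 365

def snapshot(entries: list[tuple[date, str]], age: int, span: int = 10) -> str:
    """Build a 'me@age' context from entries written in [age, age + span)."""
    bucket = [text for d, text in entries if age <= age_at(d) < age + span]
    return (
        f"You are me at roughly age {age}. Answer in the first person, "
        "using only what appears in these journal entries:\n\n"
        + "\n---\n".join(bucket)
    )

journal = [
    (date(2010, 6, 1), "Started my first job today; terrified and excited."),
    (date(2021, 3, 14), "Kids asked about grandpa again. Wrote down his stories."),
]
print(snapshot(journal, 20))  # the context you'd feed an LLM to query "me@20"
```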
So here's a free idea that I've started to toy with. I want to combine a few different AI technologies to create a pen-and-paper GM (I call it Mercer-as-a-Service). The general concept is to take an LLM and prompt it to act like a GM, have it spawn new LLM instances representing major NPCs and factions, then combine this with a solid text-to-speech model so it can narrate events in the world. Players' speech would be decoded using a speech-to-text model. Previous incarnations like AI Dungeon suffered from being single-player, which robs the game of the social aspect that, to me, is the fun part.
I'm sure some other people have already had similar ideas. It just takes a bunch of time to glue everything together and do some serious prompt tweaking to get a good experience.
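For what it's worth, the glue layer could look something like this very rough sketch; `generate()`, `transcribe()`, and `speak()` are stubs standing in for whichever LLM, speech-to-text, and TTS models you pick, not real APIs:

```python
# One persona prompt for the GM, a separate rolling context per major
# NPC, with speech in and audio out handled by stub functions.

def generate(prompt: str) -> str:
    return "(LLM reply based on: ..." + prompt[-40:] + ")"  # stub LLM call

def transcribe(audio: bytes) -> str:
    return "We approach the innkeeper."  # stub speech-to-text

def speak(text: str) -> bytes:
    return text.encode("utf-8")  # stub: a real TTS model returns audio

class Actor:
    """One LLM 'instance': a persona prompt plus its own running history."""
    def __init__(self, persona: str):
        self.history = [persona]

    def respond(self, said: str) -> str:
        self.history.append(f"Player: {said}")
        reply = generate("\n".join(self.history))
        self.history.append(reply)
        return reply

gm = Actor("You are the game master for a fantasy tabletop campaign.")
npcs = {"innkeeper": Actor("You are Bram, a gruff but kind innkeeper.")}

def game_turn(audio: bytes, addressed_to: str = "gm") -> bytes:
    """One round trip: player audio in, narrated response audio out."""
    said = transcribe(audio)
    actor = npcs.get(addressed_to, gm)
    return speak(actor.respond(said))

print(game_turn(b"", "innkeeper"))
```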
Have you talked to Pi yet? It's not intelligent but it's the closest thing to what you're describing that I've met.
To answer the question, the thing I want most of all from AI is for the war-waging AIs, such as the one Palantir is building, to either be shut down or to spontaneously shut themselves down.
AI choosing targets and dispatching drones by itself is nightmare fuel in my opinion.