While neat, the fact that autoregressive language models don't do any kind of hierarchical planning, and have no real understanding of the vectors they're ingesting or the vectors they're outputting, rears its head when it comes to actually being a DM. Namely, it basically never tells you "no". But saying "no" is quite important for playing within a ruleset... otherwise there aren't any rules.
Sometimes you just want a narrative partner, but to actually be a DM it needs to understand the rules of DnD 5e and enforce them, if nothing else.
If you try it out, you'll notice that you really need to be DM-ing yourself, because you have to restrain yourself from cheating towards whatever your objectives are; it will just go along with whatever you tell it.
Anecdotally, from hanging around on some of the local-gpt discords and lurking some LLaMa stuff, it's not actually as bad as you'd think. ChatGPT, yes, but if you use the API to 'train' it on a set of documents like in that demo video OP linked, then you can get it to have some amount of, like, idk, domain awareness?
apparently the biggest hurdle is doing it in such a way that ur not spending most of ur time playing with datasets and indexing them to make it return tokens (sometimes a word, sometimes less than a word) faster than once every few minutes. Someone was talking to me yesterday about how they've been banging their head against the wall to get pinecone (some database thing?) and langchain (no clue lol) to play nice with the leaked facebook models.
That said, I'm picking this up via idle curiosity and letting my friends passionately infodump at me, so feel free to correct me if I'm conflating things or misunderstanding.
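For what it's worth, the pinecone + langchain setup mentioned above is basically retrieval-augmented prompting: embed your documents once, find the chunks closest to the player's question, and paste them into the prompt. A minimal sketch of the idea follows; the `embed()` stand-in and the chunk text are invented for illustration, not taken from any of those tools or the linked video.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a character-frequency vector. In practice this is
    where a real embedding model (sentence-transformers, an embeddings API,
    etc.) would go."""
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v

# 1. Chunk the rulebook / campaign documents once and index their embeddings.
chunks = [
    "Grappling: the target must be no more than one size larger than you...",
    "The baron of the valley hosts a feast on the first night of every month...",
]
index = np.stack([embed(c) for c in chunks])   # shape (n_chunks, 256)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity)."""
    q = embed(question)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-sims)[:k]]

# 2. Stuff the retrieved chunks into the prompt before asking the model.
question = "What are the rules for grappling?"
prompt = (
    "Answer as the DM, using only the context below.\n\n"
    + "\n---\n".join(retrieve(question))
    + f"\n\nQuestion: {question}"
)
print(prompt)
```

The vector database (pinecone) is essentially doing the `index`/`retrieve` part at scale, and langchain is glue code around the prompt-assembly part.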
Fine-tuning the LLM will merely make it better at generating text that looks like it could be from a campaign. You'd really want reinforcement learning, so that the model actually has a different objective than merely finding the tokens that are most likely (that was, in many ways, the important difference in ChatGPT: after RLHF it was, to some degree, optimized to present text that makes humans happy, as opposed to text that is merely more likely).
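To make "a different objective" concrete: RLHF-style training (roughly, as described in the InstructGPT write-up) optimizes a learned human-preference reward minus a penalty for drifting too far from the base next-token model. This is a loose sketch of that signal, not anyone's actual training code, and the numbers are made up.

```python
import numpy as np

def rlhf_reward(rm_score: float,
                logprob_policy: np.ndarray,
                logprob_ref: np.ndarray,
                beta: float = 0.02) -> float:
    """Reward for one sampled response: the score a human-preference reward
    model gave it, minus a KL-style penalty for drifting away from the plain
    next-token-likelihood model it started from."""
    kl_penalty = float(np.sum(logprob_policy - logprob_ref))
    return rm_score - beta * kl_penalty

# The base LM's training signal is just "maximize the probability of the next
# token". The RLHF signal is "maximize rlhf_reward over whole responses",
# which is why the tuned model leans towards what raters like rather than
# towards whatever continuation is statistically most likely.
example = rlhf_reward(rm_score=1.3,
                      logprob_policy=np.array([-0.2, -1.1, -0.4]),
                      logprob_ref=np.array([-0.3, -0.9, -0.6]))
print(example)
```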
No matter what, though, it doesn't do hierarchical planning. When your meatspace DM has the innkeeper you just met tell you their life story, that's one element of an overall plan for what they want the campaign to be. The innkeeper is talking about all the food shipments that go missing because the DM wants you to investigate, and they want you to investigate because it feeds into a larger plot around a vampire who is impersonating a local baron.
An LLM is talking about the shipments that go missing because that's the token sequence that came out after taking a softmax over its next-token scores.
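That really is the whole generation step; a toy sketch of it (the vocabulary and the scores are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    """Turn the model's raw scores for every vocabulary token into a
    probability distribution (softmax) and sample one token id from it."""
    z = logits / temperature
    z -= z.max()                          # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(probs), p=probs))

# Made-up scores the model might assign after
# "The innkeeper mentions the missing food ..."
vocab = ["shipments", "dragon", "taxes", "weather"]
logits = np.array([3.1, 1.2, 0.4, -0.5])
print(vocab[sample_next_token(logits)])   # most often "shipments"
```

There's no plot thread anywhere in that loop; "the vampire baron" only shows up if it happens to be a likely continuation of the text so far.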
Additionally, you'll always run into the exponential error compounding of autoregressive models. The more tokens you generate, the more likely it is that some prior token makes no sense in context, and the more those errors compound. It's one of LeCun's major arguments against autoregressive models. In practice, the deeper your DnD session goes, the more likely it is that the LLM has gone completely off the rails by then.
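The back-of-the-envelope version of that argument: if each token independently "goes wrong" with some small probability e, the chance the whole output is still on the rails decays like (1 - e)^n. The error rate below is made up purely for illustration.

```python
# If each token independently derails with probability e, the chance a
# generation of n tokens has stayed fully on the rails is (1 - e)^n.
e = 0.001          # assumed per-token error rate, purely illustrative
for n in (100, 1_000, 10_000, 50_000):
    print(f"{n:>6} tokens: {(1 - e) ** n:.3f} chance of no derailment yet")
# A multi-hour DnD session easily runs to tens of thousands of tokens,
# which is why long sessions drift even when each step looks locally fine.
```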
Of course, you can prevent this by being the ground truth and reorienting it periodically, but then you're essentially DM-ing yourself.
Yeah, it's never going to be able to compete with a real human, but for people wanting to get into tabletops and have some single-player fun for a bit, I think the bar gets lowered. There are already examples using the gpt4all model and 8-bit 13B models that I would happily accept as a DM, even if there's no real underlying planning or grand scheme to things.
Except for the fact that you're right, though, and things always go off the rails after a certain amount of time. Although based on some of the stuff people have been showing off after they figured out memory (albeit for short sessions), there's certainly something nearly usable there. Even if it's not fundamentally sound or consistent or whatever, it just needs to be fun and enjoyable.
I think it can be fun, but it is basically a different game than DnD. That's fine, it's not like DnD is the perfect embodiment of gaming or anything. But it is different. More of a creative writing partner. AI Dungeon. And certainly it's a useful augmentation to a meat-space DM, which is infamously a role highly prone to burnout.
"such a way that ur not spending most of ur time"

Sorry in advance, I'm not trying to call you out, but it struck me as odd that for the rest of your post you aren't using txting abbreviations, yet here you chose to use "ur" twice. Was it intentional, or just habitual?
(Mark this as #noise.)
ahhhh yeah thats habits slipping through i think. I zhuzh up my typing on tildes because it just seems like the plast to do that. But yeah they're not so much texting abbreviations as much as like, a path of least resistance when it comes to my mild dyslexia and major dyspraxia. it'll be inconsistent even within short messages to friends on discord just based on what my hands want to do at any given time.
I do have a lot of wierd typing habits like that. I'll type prolly instead of probably 100% of the time - but u, you, ull, urs, yours, and all of those are basically a coinflip. But also I have to force myself to use pronouns beacuse i often just omit them like saying "going shops". Except i also omit trailing punctuation so that could be a question for someone else or a statement about what im doing. I also very frequently type the word AFTER the one i'm trying to type (trying type type for ex). and last one that my friends have pointed out. I'll sometimes get my left and right hands our of sync when it comes to the spacebar - s om ymessage sloo klik ethis.
(similarly, this is noise)
Thanks for explaining that! I'd somehow never encountered dyspraxia before, so I've learned something new today!
Some “prompt engineering” might help to override the defaults? Explicitly tell it that sometimes actions should fail and that stories shouldn’t always have happy endings. And it seems they tried to make it do that with their prompt.
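For example, a setup message along these lines might push back on the always-say-yes default. The wording here is mine, not their prompt, and only the message structure is shown since the exact API call depends on which client and model you're using.

```python
# A chat-style setup message; swap in whatever client call your model uses.
messages = [
    {
        "role": "system",
        "content": (
            "You are a strict DnD 5e Dungeon Master. Enforce the rules: ask for "
            "ability checks and saving throws, respect the dice results I give "
            "you, and say no to actions my character cannot actually perform. "
            "Failure, setbacks, and unhappy endings are allowed and expected."
        ),
    },
    {"role": "user", "content": "I try to persuade the guard to hand over his keys."},
]
```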
Though it might go the other way too. When I played AI Dungeon it would occasionally kill you randomly.
Another trick someone told me that might help with games that have secret info is to ask it to base64-encode a secret in advance.
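That is, have it commit to the secret up front in a form you won't accidentally read, then decode the commitment later to check it hasn't quietly retconned anything. With the standard library, the check on your side is just this (the encoded blob below is a made-up example, not model output):

```python
import base64

# At session start, the model is asked to output its secret base64-encoded,
# e.g. it might reply with a blob like this one.
committed = "VGhlIGJhcm9uIGlzIHNlY3JldGx5IGEgdmFtcGlyZS4="

# Later, when the secret comes up in play, decode the commitment and compare
# it against what the model now claims the secret was.
revealed = base64.b64decode(committed).decode("utf-8")
print(revealed)   # "The baron is secretly a vampire."
```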
I agree that you’re mostly leading the AI. It seems like leaning into the no-rules, Calvinball, “Alice in Wonderland” aspects of it instead of doing a traditional RPG would be a more promising creative use.
His first video on making his prompt: https://www.youtube.com/watch?v=i5H638t4_eU&ab_channel=DeckofDMThings
His prompt to make this work is halfway down the page. I wonder if you could change the prompt to give it a campaign setting to pull from, so it could run detailed settings? I've seen it used in many ways for D&D, but making it a DM would literally change the game.
So I just found a guy who has been working on code that gives ChatGPT a repository of PDFs to cross-analyze, so it can produce analysis across thousands of PDFs.
https://youtu.be/Ix9WIZpArm0?t=293