So I've been following this since the very beginning, back before Nick Walton founded Latitude and AI Dungeon was just a Google Colab notebook rigged up to run GPT-2.
I know this may not be a popular opinion, but one should never assume content created/stored online is in any way private, even when it should be. I downloaded AIDungeon back when it was open source (before the current online version was up). It doesn't run the best, sometimes taking several minutes to generate text, but at least I know it's only on my machine.
I agree that Latitude had to take action against the CP stuff, I'm just not sure they went about it the right way.
Their stance used to be “We won't read your private stories, unless they happen to come up during debugging. Our guidelines only apply if you decide to publish to the Explore page.” For them to suddenly do a complete 180 on this felt like a huge violation of trust.
I understand why these filters are needed, but they should be optional and transparent. As it stands, the filter even interferes with playing the game normally: it's a simple keyword match, so cursing, or flying a plane (cockpit) around children, is a no-go. All you get is a vague (and frankly condescending) error message, which makes it hard to figure out what to edit to continue playing.
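The “cockpit” problem is the classic failure mode of substring filtering (the Scunthorpe problem). A minimal sketch, with an invented banned-word list, of why a naive filter trips on innocent text, and how word-boundary matching avoids at least that case:

```python
# A minimal sketch of the kind of naive substring filter described above;
# the banned-word list is invented for illustration.
import re

BANNED = ["cock", "damn"]

def violates(text: str) -> bool:
    # Plain substring matching -- this is what causes the false positives:
    # "cockpit" contains "cock", so innocent text gets flagged.
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

def violates_wordwise(text: str) -> bool:
    # Matching on word boundaries instead avoids the "cockpit" case,
    # though it is still easy to evade, and the player still needs an
    # error message that says which word actually tripped the filter.
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(w)}\b", lowered) for w in BANNED)

print(violates("You climb into the cockpit."))           # True: false positive
print(violates_wordwise("You climb into the cockpit."))  # False
```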
It's really unfortunate that there are no proper alternatives at the moment. OpenAI's ToS prohibits the kind of free interaction with the AI that AiD allows, so you won't be able to start a competitor using the same models. (Latitude got an exception by virtue of getting in early.) The current freely available models are much worse than GPT-3. EleutherAI is currently gearing up to train a competing model, but even when that gets released, it'll be much too large to run on consumer hardware.
The game touted its use of the GPT-3 text generator. Then the algorithm started to generate disturbing stories, including sex scenes involving children.
Title gets it right. An AI dungeon master is a pretty cool idea, but I can see how that would turn out poorly when a small subset (I hope) of players want to play the game erotically, are trolling the algorithms, or have overall fucked-up role-play fantasies. Really an instance of how a few bad apples can spoil the bunch.
I imagine this is a huge challenge for the developers, because you basically need to sequester the community with different... interests to keep the game enjoyable for everyone else. Sounds like that manifested as heavy moderation and content control, which most platforms hosting shared, community-generated content have to deal with. Only this time it's far more random, because that supposedly sequestered content can get served up to other players depending on how the ML model (GPT-3) was trained.
AI is what you train it on. That's why the text-analyzing and tweeting Twitter bot from Microsoft became racist, with the caveat that it was fairly tame compared to a game that manages to write fantasy child porn.
I don’t think it’s much like what Microsoft was doing, because GPT-3 itself can’t learn anything new from users. It was trained on datasets from before the pandemic, and it’s probably cost-prohibitive to retrain. It doesn’t know any news that happened since then, or what users are doing with it. Whatever it knows about writing sex scenes, it got from Common Crawl, and it was in there from the beginning.
Instead I think of this as users getting better at exploring and exploiting what’s already in there. It’s going to put together combinations of user input, sex scenes and other writing it got from the Internet and doesn’t know it should avoid some combinations.
Then again I don’t know how Microsoft’s chatbot worked. Did it have updatable memory?
Are you sure about that? I'm pretty sure you can add additional training material to the AI without retraining on the entire data set. At least that's my understanding.
With GPT-3, it’s normally done by putting a short text file ahead of user input and then letting autocomplete do its work. For example, if you want it to do Q&A you can add some example questions and answers showing it what you want. For a game, you might add a section introducing whatever characters will be in the story.
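As a rough sketch of that pattern (the example text here is made up, and the actual API call is omitted), the “added training material” is just string concatenation ahead of the user's input:

```python
# Few-shot prompting: prepend worked examples so the model continues
# the pattern. No weights are updated; this is just string assembly.
FEW_SHOT_PREFIX = """\
Q: What is the capital of France?
A: Paris

Q: Who wrote Hamlet?
A: William Shakespeare

"""

def build_prompt(user_question: str) -> str:
    # A completion endpoint would receive this whole string and
    # autocomplete the text after the final "A:".
    return FEW_SHOT_PREFIX + f"Q: {user_question}\nA:"

prompt = build_prompt("What is the tallest mountain?")
```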
This text doesn’t survive from session to session though, unless you copy the text yourself, or the UI that you’re using adds it automatically. AI Dungeon adds different prefix text to start different genres of games, but they don’t share how they do it.
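One hypothetical way a client could handle that (the names and structure here are my assumptions, not Latitude's actual code): re-attach the fixed prefix plus a trimmed window of recent turns on every request, since the model itself remembers nothing between calls.

```python
# Hypothetical client-side session: the model is stateless, so the UI
# must resend the prefix plus recent history with every request.
class Session:
    def __init__(self, prefix: str, max_turns: int = 6):
        self.prefix = prefix            # e.g. a genre/character introduction
        self.history: list[str] = []    # most recent turns only
        self.max_turns = max_turns

    def prompt_for(self, user_input: str) -> str:
        self.history.append(user_input)
        # Keep only the newest turns so the prompt fits the model's
        # context window; the prefix is always re-attached in full.
        self.history = self.history[-self.max_turns:]
        return self.prefix + "\n" + "\n".join(self.history)

s = Session("You are in a fantasy tavern.")
first = s.prompt_for("> look around")
second = s.prompt_for("> order an ale")
```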
Maybe AI Dungeon is doing something more sophisticated now? It’s been a while since I played with it.
This is an important cautionary tale for me. I work with a GPT-3-based product at work. I doubt things will get as bad as AI Dungeon, since we focus on marketers and business owners for now. But it’s still a somewhat free-form content creation tool.
Interesting article, disturbing implications.