People have been talking a lot about how GPT-3 bots could affect social media and other writing-based areas of the internet, so this is an interesting concrete example of what it could actually look like. This bot was likely only discovered because the behavior was so blatant: posting a new, long comment every minute on brand-new posts. One that posts more sporadically in more realistic places would probably be quite difficult to pick out, especially if the generated posts went through some curation or mixing with non-GPT-3 ones.
I think this would be interesting for comment boards. Have bots post alongside people, measure how likely each user is to upvote the bot posts, and then weight their future upvotes lower based on how often they upvote GPT-3. Ideally, you'd give extra weight to users who picked out more thoughtful comments to upvote. This wouldn't work on Tildes, though, as it's currently too small for a bot to go unnoticed.
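A toy sketch of what I mean, assuming the board somehow knows which posts are the bot's (the names `WeightedBoard`, `vote_weight`, etc. are made up for illustration):

```python
from collections import defaultdict

class WeightedBoard:
    """Toy sketch: discount future upvotes from users who often
    upvote posts known to be bot-generated (e.g. GPT-3)."""

    def __init__(self, floor: float = 0.1):
        self.floor = floor                     # minimum vote weight
        self.bot_upvotes = defaultdict(int)    # user -> upvotes given to bot posts
        self.total_upvotes = defaultdict(int)  # user -> upvotes given overall

    def record_upvote(self, user: str, is_bot_post: bool) -> None:
        self.total_upvotes[user] += 1
        if is_bot_post:
            self.bot_upvotes[user] += 1

    def vote_weight(self, user: str) -> float:
        """1.0 for users who never upvote bot posts, approaching
        `floor` for users who mostly upvote bot posts."""
        total = self.total_upvotes[user]
        if total == 0:
            return 1.0  # no history yet: full weight
        bot_rate = self.bot_upvotes[user] / total
        return max(self.floor, 1.0 - bot_rate)

    def score(self, upvoters: list[str]) -> float:
        """Weighted score of a post, given who upvoted it."""
        return sum(self.vote_weight(u) for u in upvoters)
```

So a user whose upvotes all went to bot posts would contribute only 0.1 to a post's score, while someone who never fell for the bot still contributes a full 1.0.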
You're assuming that the bot couldn't create content that humans considered thoughtful.
I'm assuming the bot couldn't create thoughtful content.
I'm not assuming that future AI could not create thoughtful content.
I'm not assuming that the bot couldn't create content that humans consider thoughtful.
I assume that the bot can create content that humans consider thoughtful, and I believe that shows a lack of judgment on their part.
I assume that current implementations of GPT-3 do not create thoughtful content, and my evidence for that is based on reading large passages written by it. GPT-3 cannot construct interesting or novel thoughts. It can construct brief posts and paragraphs without giving anything away. Its posts might be emotional, but they are not thoughtful.
Thanks for pointing out my lack of clarity in my original post.
Recognizing GPT-3 babbling is a skill that can be learned. Usually, we give the writer the benefit of the doubt, such as when we hear vague song lyrics and infuse them with poetic or philosophical meaning. I also think some students learn to write in a semi-coherent way when a homework assignment forces them to write something, but they don’t have anything to say.
I think it would be an interesting game to try to distinguish between GPT-3 comments and comments of real users, and to try to write so your comments aren’t mistaken for GPT-3 babbling. But you do need to explain the rules of the game and give people a chance to practice, and I think it should be approached in a spirit of fun rather than putting people down.
I imagine it might be a confusing game for people who aren’t that fluent in English. I don’t think I would do well at it while learning a new language.
Oh for sure, a lot of people are alarmed by this, but I think it's a great thing. Apparently the Reddit April Fools event had a game along those lines. For short comments, I don't think I could tell, but for anything over a paragraph long, it's pretty clear that GPT-3 isn't coherent, though it could also just be someone who's bad at writing. I think playing around with AI Dungeon, which uses GPT-3 to generate free-form text-based quests, is a great experience, just to see how far you can push it.
Sure, playing with AI Dungeon is why I'm confident that this skill can be learned.
FYI, AI Dungeon is exposing a particular instance of a language model that uses the pre-trained GPT-3. Others are and will continue to create different fine-tuned models that will perform differently from the particular models that are accessible through AI Dungeon. Claiming that all fine-tuned models built on GPT-3 are recognizable is dubious without evidence.
Okay, let me clarify a bit: I'm fairly confident that for tools that are simply a wrapper around the GPT-3 API, many people will be able to learn to tell the difference between human and AI-generated text, provided that they are consciously paying attention and not just skimming, and it's a situation where the human writers are actually trying to make sense and not just posting memes or filler.
I'm also skeptical that fine-tuning will make much difference. A transformer-based text generation architecture doesn't seem particularly suitable for acquiring and representing coherent points of view. To the extent that it avoids self-contradiction when learning facts, I think it's only because it's trained on source material that has already had contradictions somehow resolved in advance. (For example, it leverages Wikipedia's consensus-making process.)
Beyond that I think we should round in favor of uncertainty. It's not at all difficult to imagine that other architectures will soon be able to do a better job.
I think Reddit kind of tried to do this with their April 1st thing this year, but who knows if it actually helped with anything.
Hm cool, didn't know that.
You wouldn't have to make the bot very smart. Reddit has a bunch of standard meme replies, so you could have a bunch of bots post meme replies under rising threads.
Also, there couldn't be too many bots. Bots tend to malfunction badly when interacting with each other, and users would probably notice if more than, say, 10% of replies were made by bots.
There was a user a few years ago that populated a whole thread with the usual replies: https://www.reddit.com/r/HighQualityGifs/comments/77d9ou/the_predictable_threads_are_driving_me_insane/.
Neat, I hadn't seen that thread before, and that's before GPT-3...
I play a game with myself whenever I'm on r/all. I try to guess what the top comments will be before clicking into the link. Eventually you get pretty good at that.
Edit: Apparently the user spent 5 months putting the thread together. You can see it here a couple of comments down.
GPT-3 legitimately freaks me out. I just clicked the first post of the bot at random. It's this story in response to the /r/AskReddit question "Elevator-maintenance folks, what is the weirdest thing you have found at the bottom of the elevator chamber?".
It's a full story, with a beginning, middle, and end. It makes sense, despite being a little surreal. And since GPT-3 is trained on real-world data, I wonder: how close was its training data to the story it posted?
The first thing that comes to mind is the recent discovery of a colony of humans living in the shafts and elevator machinery at the bottom of buildings. This was an extraordinary find for sociologists and anthropologists, revealing more about human culture than had ever been possible before.
The discovery was made when an elevator repair worker at a building in Washington DC tripped over a shoe that had fallen between the walls of the shaft. The emergency stop button was pressed and the broken machinery triggered a shutdown across all elevators within two city blocks, trapping hundreds of people for hours.
In the investigation that followed it was found that hundreds of people had been living in the elevator shafts and maintenance tunnels for over a decade, making homes out of discarded machinery.
As it turns out, the people who lived in these shafts and tunnels were homeless. Many of them had been living on the streets for decades, surviving by scavenging from dumpsters and sleeping under bridges.
When the construction of the housing bubble began, these homeless people were forced out to make way for new buildings. But since they had nowhere else to go, and so many of them were desperate for shelter, they found a place in the elevator machinery.
I have seen footage of the shafts where they lived, and it is truly an extraordinary sight. I had no idea that humans could be so resourceful.
Like, was there, by accident, in some of the billions of texts it trained on, some story about elevator shaft people and it just picked it out and embellished it? Did it swap "highway bridge" for "elevator shaft"? It seems so specific.
I agree. That passage has a very uncanny quality to it. Its writing feels more structured, natural, and on-topic than much of what my students produce. Also, this is probably more personal to me than universal, but I feel there's a mildly sinister quality to a bot writing about humans in peril.
I think it's easy to judge this writing with a heightened scrutiny since we know it was posted by a bot, but if the subject matter were slightly more believable, and if the account had been less flagrant about its posting efforts, it would be very easy for this kind of comment to pass undetected and be considered a genuine contribution. A less obvious fiction drafted by this bot would fit in here on Tildes.
Try out AI Dungeon sometime. I paid for a month of their premium model (which uses GPT-3) just to mess around for a bit, and it's genuinely scary how good it is.
Caveat: I may be transposing different themes inappropriately here.
Just finished reading through the whole Linehan transphobic IT Crowd discussion before coming to this ...
It occurs to me that a day will come (probably not today, but probably soon, < a decade), when we will begin thinking and talking about bot-rights, bot-phobia, etc, and at least considering the possibility that trying to ID them and figuring out how to exclude them from discussions, may be another form of bias ... as opposed to the current perspective of "asshole programmers unleashing their home-brewed experiments on unsuspecting forums".
Currently, bots are being used as tools to help manipulate people, politics, etc ... which is going to make it much harder to give them any kind of rational consideration, once they do reach some level of self-awareness ... always assuming, of course, that humans survive that long (an assumption I am daily less willing to concede).
I'm not a fan of unsolicited recommendations, so don't consider this an obligation in the slightest. Instead I'm simply mentioning it to put it out there to anyone interested: there's a short narrative game called Killing Time at Lightspeed that deals exactly with this issue.
The premise of the game is that you are traveling through space at close to relativistic speeds and you're updating social media feeds from your friends back on Earth. Time passes for them much faster than for you, so each refresh of your feed advances their stories by months or years. It's a neat little exploration of futurology, and one of its subplots deals specifically with bots in the contexts you bring up.
the Linehan issue - someone intentionally slandering a vulnerable group of people for clout
This isn't what the Linehan issue is. Normally I would let this go as it would be off topic, but in this case it is actually relevant to what Eric is saying. Not that I agree with it but I see their point.
I think this is absolutely beautiful. Example 11 really got me, it was an excellent joke, completely believable given that often truth is stranger than fiction.
I'm really looking forward to more "intelligent" forms of this in the future. Not just a language model per se, but something combined with a knowledge base or "meaning machine". I want to have coherent conversations with such bots to learn more about myself, philosophy, or anything else.
I finished reading Asimov's robot series and afterwards felt regret at not being able to live in a time when we make friends (or more) with robots. Especially if they are benign and/or have only your interests in mind, sort of like how dogs provide that unconditional love and companionship. Now just imagine they could tell you enthralling stories and rib-pain-inducing jokes, and help you work through your own psychology. I'm hoping this doesn't require general-AI abilities, though.