It’s a funny interaction but in the end the only thing it indicates is that people on Internet forums don’t like Zuck or Facebook and think that they exploit people (especially Reddit, which due to its properties is almost always part of the training dataset), which I don’t think is particularly revelatory.
I’m tired of media outlets sensationalizing chatbots. It’s entertainment, not news, and misrepresents the state of the art to the public. Chatbots are currently just unthinking mechanical-statistical parrots.
Is that really the only thing it indicates? "Large amounts of publicly available language data" (from the article) surely does not equate to just forums or opinions. Not saying it's so revelatory, sentient or whatever—but that's definitely not the only thing it indicates...
It can't indicate more than that the corpus they trained it on has negative sentiment towards Meta/Zuck. That's all that it is, after all. And that the corpus has negative sentiment I don't find particularly surprising; we know from the GPT-3 paper that Reddit is one of the main training corpora for NLP due to its variety, ease of access (you can basically get all the Reddit comments you want from Pushshift), text focus (as opposed to, say, IG posts, which often have heavy image/text interplay, meaning the text is filled with non sequiturs), and sheer size.
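As a rough illustration of how easy that access is: the public Pushshift comment-search endpoint takes plain query parameters. This is just a sketch of building such a query (the query string and subreddit here are invented examples, and the API's availability has varied over time):

```python
# Sketch: building a Pushshift comment-search URL.
# The endpoint and parameters (q, subreddit, size) are from the public API;
# the actual query values below are invented for illustration.
from urllib.parse import urlencode

PUSHSHIFT_COMMENTS = "https://api.pushshift.io/reddit/search/comment/"

def build_comment_query(q, subreddit=None, size=100):
    """Return a Pushshift comment-search URL for the given query."""
    params = {"q": q, "size": size}
    if subreddit:
        params["subreddit"] = subreddit
    return PUSHSHIFT_COMMENTS + "?" + urlencode(params)

url = build_comment_query("Meta", subreddit="technology", size=50)
```

Fetching the URL returns a JSON payload with a `data` list of comment objects, which is why scraping conversational text at scale from Reddit was so straightforward.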
Wikipedia and online news are also typically there. I would imagine most news about Meta is also negative: it just draws more clicks. But in any case, Reddit will definitely dominate those in proportion, especially once you condition on text that is conversational.
I would imagine that the probability distribution over sources, conditioned on prompts like "What do you think of Meta?", is easily modally Reddit. Conversational question-and-answer text like that just doesn't appear in sources other than forums and social media.
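A toy Bayes-rule sketch of the point (all numbers invented): even if Reddit is a minority of the overall corpus, conditioning on conversational text can make it the modal source, because conversational questions are far more likely to come from forums than from Wikipedia or news.

```python
# Toy illustration with invented numbers: P(source | conversational prompt)
# can be modally Reddit even when Reddit is not the largest slice of the corpus.
priors = {"reddit": 0.25, "wikipedia": 0.35, "news": 0.40}        # P(source), assumed
likelihood = {"reddit": 0.30, "wikipedia": 0.01, "news": 0.05}    # P(conversational | source), assumed

# Bayes' rule: P(source | conversational) ∝ P(source) * P(conversational | source)
unnorm = {s: priors[s] * likelihood[s] for s in priors}
total = sum(unnorm.values())
posterior = {s: unnorm[s] / total for s in unnorm}

modal_source = max(posterior, key=posterior.get)
```

With these made-up numbers, Reddit ends up with the bulk of the posterior mass despite having the smallest prior, which is all the argument above requires.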
It almost feels like satire.
Finally, a 'bot that is self-aware.
Edit: but, does this include itself in 'people'?