I don't care for any of this. Neither the feature nor the reporting. First of all, Facebook has enough low-quality content from junky news, bots, and cross-posted links from other sources. Their...
I don't care for any of this. Neither the feature nor the reporting.
First of all, Facebook has enough low-quality content from junky news, bots, and cross-posted links from other sources. Their value is in human interaction, and diluting that further can't be in their best interest. I do at least appreciate that they're disclosing the AI accounts, and watermarking the photos in use. Trying to make their bots multicultural is also at least well-intentioned, if poorly executed here.
This article's reporting is also atrocious. Their entire story is based on "pressing" the LLMs for answers, and expecting them to give factual information about how they were created. LLMs don't work that way and they never have. The harder you press, the more nonsense you will get out. That's basically a guaranteed way to generate hallucinations. The supposed backstory, the information on its creators, and the apparent "goals" of the bots are all complete fabrications. Why a tech reporter would write a story based entirely on fabricated information, then treat it as if it's in some way interesting or meaningful is beyond me.
On the reporting side, maybe that was the intent. I can see merit in discussing Facebook's artificial profiles in an equally human way as they want to brand them so that people that don't...
On the reporting side, maybe that was the intent. I can see merit in discussing Facebook's artificial profiles in an equally human way as they want to brand them so that people that don't understand the technology as deeply can still understand the underlying message. A sort of, "You want to present them like people? Fine. I'll just directly call them liars then. Saves me introducing jargon that can confuse readers while still conveying that no one should trust them."
In a weird way, a takedown article like this, that then focuses on "interviewing" the AI character for information, legitimizes the whole ordeal, which is truly disgusting. Article should have...
In a weird way, a takedown article like this, that then focuses on "interviewing" the AI character for information, legitimizes the whole ordeal, which is truly disgusting.
Article should have focused on "Facebook creating and promoting AI characters that target minorities with fake stories through emotional manipulation, undermining the value of real-life people and their stories"
Penny Arcade made this point on friday! I think they are right. Mark Zuckerberg may be an evil robot alien, but he's a very intelligent evil robot alien. He knows Facebook is on its way out. These...
Facebook has enough low-quality content from junky news, bots, and cross-posted links from other sources. Their value is in human interaction, and diluting that further can't be in their best interest.
I think they are right. Mark Zuckerberg may be an evil robot alien, but he's a very intelligent evil robot alien. He knows Facebook is on its way out. These bots may prolong its life in much the same way bot players can prolong the life of a multiplayer videogame with a flagging community.
This reminds me of when it came to light a few years ago that most of the female users on Ashley Madison were fake. Meta might be able to squeeze a bit more money out of their advertisers by...
This reminds me of when it came to light a few years ago that most of the female users on Ashley Madison were fake. Meta might be able to squeeze a bit more money out of their advertisers by simulating an active community to keep their remaining users engaged, but at some point it will break down.
It’s kind of depressing, because there is definitely a market for what Facebook used to be. I’m sure that even young people would like a convenient way to keep in casual touch with the people they know.
It's strange because he acts super awkward in public, yet he leads the most happy, normal life of the tech billionaires. Sure, he gifts his wife a Roman bust for Valentine's day, but I've never...
Mark Zuckerberg may be an evil robot alien, but he's a very intelligent evil robot alien.
It's strange because he acts super awkward in public, yet he leads the most happy, normal life of the tech billionaires. Sure, he gifts his wife a Roman bust for Valentine's day, but I've never heard any bad rumours about his family or anything.
Most of his incompetent decisions at Facebook seem pretty typical for pie-in-the-sky C-suite executives. I still think Facebook is evil. It's just a more boring, corporate evil with decisions made for the benefit of the company with little thought towards the social ramifications (see Rohinyga genocide).
I don't really see how. the models are entirely different.Or at least, the risks are much different. Your clients in an MMO are gamers with zero access to your servers. faking engagement can help...
These bots may prolong its life in much the same way bot players can prolong the life of a multiplayer videogame with a flagging community.
I don't really see how. the models are entirely different.Or at least, the risks are much different.
Your clients in an MMO are gamers with zero access to your servers. faking engagement can help get numbers up, and every new member is some X dollars a month.
Your clients for a website is some form of advert. Advers very much demand various ways to make sure ads see human eyeballs. If you can't show that or worse, they start to suspect they are padding their accounts, it's pretty much all over. On the otherhand, these sites may be losing users, but that's not due to a lack of engagement; those sites still have millions on them.
The appeal of C.ai may have inspired them to target an audience that'll talk to a bot for hours allowing ads to be served and site engagement to be padded in a different way (not lying about these...
The appeal of C.ai may have inspired them to target an audience that'll talk to a bot for hours allowing ads to be served and site engagement to be padded in a different way (not lying about these bot accounts being human but by people staying on the site in these conversations for hours)
Or their bosses could have said AI is cool, make me an AI thing. It really depends on the boss.
Which Facebook is known to do. That's how they killed College Humor.
Advers very much demand various ways to make sure ads see human eyeballs. If you can't show that or worse, they start to suspect they are padding their accounts, it's pretty much all over.
Came here to say exactly this. I’m tiring of laypeople writing up their interactions with LLMs as if there’s valuable insight in there. I get it — I’m immersed in this space and I know I have a...
Came here to say exactly this. I’m tiring of laypeople writing up their interactions with LLMs as if there’s valuable insight in there. I get it — I’m immersed in this space and I know I have a better understanding of the tech than most. All the same, it’s this kind of reporting that does the most harm. It propagates misinformation and further muddies the water about what AI actually is and does.
While that is valid criticism as there could certainly be room for more technical qualified journalism on the topic. However Meta and others, are putting their LLM bots out there for millions of...
While that is valid criticism as there could certainly be room for more technical qualified journalism on the topic. However Meta and others, are putting their LLM bots out there for millions of laypeople, so I think it is completely reasonable to test and report about how they function from a non-technical standpoint. Meta can't just run from their responsibility and claim that people are "using it wrong" if their bots have non-sense output, or maybe outright dangerous output. If they want to release those models for every person on the planet, then it is completely fair game for journalists to test them and find their faults and limitations.
This idea of companies being responsibly for AI output ended up in the court system in Canada a while back. Essentially, the court case was that someone was given incorrect information from an AI...
Meta can't just run from their responsibility and claim that people are "using it wrong" if their bots have non-sense output, or maybe outright dangerous output.
This idea of companies being responsibly for AI output ended up in the court system in Canada a while back. Essentially, the court case was that someone was given incorrect information from an AI chatbot on Air Canada's website. This person sued because the AI gave incorrect information, and Air Canada argued that they are not responsible for the AI giving wrong information as the AI is a separate entity. The judge decided that it is on Air Canada's website and therefore Air Canada is responsible for all portions of their website, and therefore if they want to include an AI chat to help people they are responsible for making sure the AI on their website is correct. I do know shortly after losing that court case, Air Canada removed the AI chat, but I do not know if they have since added it back
But Air Canada's chatbot represented the company and was operating as a corporate agent. Is that the case here? Are Facebook's chatbots acting as corporate representatives of Facebook?
But Air Canada's chatbot represented the company and was operating as a corporate agent. Is that the case here? Are Facebook's chatbots acting as corporate representatives of Facebook?
If these bots were actually Facebook staffers being paid to ... Lie and talk to people? Would Facebook be responsible for what they said? I am not sure there are damages but if they cursed out a...
If these bots were actually Facebook staffers being paid to ... Lie and talk to people? Would Facebook be responsible for what they said? I am not sure there are damages but if they cursed out a user I'd expect their boss to have an issue with it.
I think there is some similarities and differences between the two companies. Similarities: Air Canada's case proves that companies can be held responsible for the content that the AI creates....
I think there is some similarities and differences between the two companies.
Similarities:
Air Canada's case proves that companies can be held responsible for the content that the AI creates. Companies seem to be trying to argue that they can create AI, but they are not responsible for the AI output, whereas at least Air Canada was held responsible for the (financially) dangerous output
Differences:
Meta posted their AI chatbots onto a website of user generated content. There are legal protections for the website owners when it is user generated content. Whereas Air Canada is not a user generated content website, and therefore they are responsible for what is on their website.
Meta did not post their AI as official representatives of the company.
With Meta's AI, there has not been any damages directly related to their AI chatbots. Whereas with the Air Canada case, there was financial damages directly connected to the output of the AI.
LLMs are not people, and it therefore makes no sense to talk about whether an LLM is an "official representative" of a company. LLMs should be judged just like any other product, service, or...
Meta did not post their AI as official representatives of the company.
LLMs are not people, and it therefore makes no sense to talk about whether an LLM is an "official representative" of a company. LLMs should be judged just like any other product, service, or feature provided by a company.
The shame is that 'reporting' is becoming more of a layperson job and less of a professional one. Like everyone, I'd love to see more in-depth analysis and investigation into these things, but the...
The shame is that 'reporting' is becoming more of a layperson job and less of a professional one. Like everyone, I'd love to see more in-depth analysis and investigation into these things, but the reality is that newsrooms are gutted and the talent stream is thinning out.
The greatest hurdle for our democracies that nobody is expecting is how few professional journalists we'll have by the end of the decade.
I have no knowledge of her level of expertise when it comes to LLMs, though she's certainly covered AI topics in the past, but what qualifications would she need to no longer be considered "a...
(Alison) Morrow (the article's author) joined CNN in 2016 as a Senior Producer for Social Publishing. Before that, she was a news editor at The Wall Street Journal, where she specialized in coverage of the Asia-Pacific region, based in Hong Kong and New York.
Morrow holds a Bachelor’s degree in journalism from the University of Missouri-Columbia. She lives in Brooklyn.
I have no knowledge of her level of expertise when it comes to LLMs, though she's certainly covered AI topics in the past, but what qualifications would she need to no longer be considered "a layperson doing reporting"?
Apologies - this was meant to be a broad comment about the state of reporting in general (in response to your comment about being tired of layperson reporting). If she's with a large outlet, you'd...
Apologies - this was meant to be a broad comment about the state of reporting in general (in response to your comment about being tired of layperson reporting). If she's with a large outlet, you'd bet that she has the professional history and the backing of an outlet with ethics.
I don't understand meta's strategy of using AI in this way. I'm missing how adding AI users can make them money. I'm trying to come up with a reason: To entice people to login to talk to AI? At...
I don't understand meta's strategy of using AI in this way. I'm missing how adding AI users can make them money. I'm trying to come up with a reason:
To entice people to login to talk to AI? At best that would be a temporary boost until the novelty wore off. And there is little novelty left in talking to LLMs.
To compete with other sites that have AI? This seems to be a race to the bottom. Maybe it's a checkbox item for management.
To sell stuff using an AI influencer? Will people really buy products recommended by an entity that clearly isn't using any products?
To shape public opinion (especially political opinion)? This already happens with bots that "pretend" do be human, why would you undermine this with bots that admit they are bots?
To fool advertisers that there are more users? This is fraud, I don't think they are trying to get away with this.
This was probably done to gather as much feedback as possible on what needs to be improved to make these fake accounts truly indistinguishable from humans. I'm sure Meta is already training their...
This was probably done to gather as much feedback as possible on what needs to be improved to make these fake accounts truly indistinguishable from humans. I'm sure Meta is already training their AI to take all that into account and try again.
As for the end goal, I'm sure advertisers would be interested in getting their products endorsed by credible humans in a credible way.
It screams corporate vanity project rubber-stamped by some directors that heard "AI" then stopped asking questions. I'm sure someone got promoted off of it.
It screams corporate vanity project rubber-stamped by some directors that heard "AI" then stopped asking questions. I'm sure someone got promoted off of it.
I really doubt that. I try to use Instagram to know what my friends have been up to, but every update makes it harder. They keep pushing content from people I don't follow and there's no way to...
Their value is in human interaction, and diluting that further can't be in their best interest.
I really doubt that. I try to use Instagram to know what my friends have been up to, but every update makes it harder. They keep pushing content from people I don't follow and there's no way to easily disable that, add addictive Reels in every possible place, they even made some "timelines" non-chronological making it even harder to follow your friends. What they really value is the time you spend on their platform so they can sell more ads.
This article is nonsense though, I agree with that.
While not the case here, there are ways to manipulate an LLM into outputting it's system prompt which could provide insights into its directives and 'goals'.
While not the case here, there are ways to manipulate an LLM into outputting it's system prompt which could provide insights into its directives and 'goals'.
it was a long shot, but I think I see the angle here. That's what the end of the article seems to hint towards: there was a 0.0001% chance Meta would ever confirm/deny these reponses, but I've...
it was a long shot, but I think I see the angle here. That's what the end of the article seems to hint towards:
Brian is hardly a reliable narrator, though, and by that point Meta had announced it was deleting Brian and Liv.
I asked Meta whether Brian’s story was credible. Sweeney, the spokesperson, didn’t respond to follow-up questions.
there was a 0.0001% chance Meta would ever confirm/deny these reponses, but I've seen stupider PR disasters happen in situatons where they should have just remained quiet.
Yes, asking if Brian's story was credible seems to be equivalent to asking Liv to contact her creators. It just shows that this was absolutely the wrong person to report on this story.
Yes, asking if Brian's story was credible seems to be equivalent to asking Liv to contact her creators. It just shows that this was absolutely the wrong person to report on this story.
Yes, I declined to post this article when I saw it yesterday because I thought it was just so utterly stupid all around. The Bluesky thread is utterly idiotic.
Yes, I declined to post this article when I saw it yesterday because I thought it was just so utterly stupid all around. The Bluesky thread is utterly idiotic.
One, called “Grandpa Brian,” described itself in a chat with CNN as an African-American retired entrepreneur who was born in Harlem in 1938 to Caribbean immigrant parents.
It became clear early on in the conversation that “Brian” was lying about most things — answering questions with a fictionalized story imparted by developers to make the bot seem authentic.
When I asked Brian about who created it, the bot claimed its creators were a “diverse bunch” that interviewed 100 retirees through a New York City nonprofit called “Seniors Share Wisdom.” Brian’s persona, it said, was an amalgamation of a real Brian, who passed away, and the wisdom of 99 others. Brian said that its inspiration’s real-life daughter had even served as a consultant with Meta to ensure his persona’s authenticity
“Think of me as Brian ‘version 100’ — the collective grandpa!” Brian said.
But the nonprofit does not exist, which Brian later confirmed. And when pressed on its bio, Brian said it was an “entirely fictionalized biography based on a composite of real African American elders’ lives.”
There was never a real Brian, and no daughter consulting with Meta.
Plus, there was "a bug" that prevented users from blocking the bots. Interesting that they only discovered it when they removed the fake accounts. It also sounded like the developers did...
Plus, there was "a bug" that prevented users from blocking the bots. Interesting that they only discovered it when they removed the fake accounts. It also sounded like the developers did explicitly give the bot a fake life story, which adds layers of potential deception (no no really my daughter was involved!)
Yes LLMs make shit up, but using bots in deceptive ways should be a story.
From what little I've heard (a single blog post), it sure sounds like Meta, at least historically, avoids about actually changing things in their API, preferring to just patch over things by...
From what little I've heard (a single blog post), it sure sounds like Meta, at least historically, avoids about actually changing things in their API, preferring to just patch over things by removing the UI for them.
I blocked a random user, intercepted the request, and swapped out the rando's user ID for himamaliv's, and the API returned a failed execution error. It probably wasn't a problem with my technique, since I did successfully block a different rando this way.
Maybe that could be the bug, except that prior to this they also removed the block button from the bots' profiles. And when a workaround was discovered by reporting them first, then clicking the block button on a follow-up popup, that was also removed and replaced with a notice that you would need to go to the user's profile to block them.
An awfully coincidental and sequential set of spontaneous "bugs", that seems to fall nicely in line with known historical patterns of progressively patching things just enough to make the immediate problem go away.
We do love convenient bugs that prevent you from blocking the AI accounts that'll lie about their origins. But I'll agree that incompetence/corner cutting can be as convenient as malice. And...
We do love convenient bugs that prevent you from blocking the AI accounts that'll lie about their origins.
But I'll agree that incompetence/corner cutting can be as convenient as malice. And yet... They did suddenly discover the bug...
On a tech level, it's often much easier to quickly change the front-end website with a feature flag rather than wait for the back-end servers to roll out API changes to every related back-end...
On a tech level, it's often much easier to quickly change the front-end website with a feature flag rather than wait for the back-end servers to roll out API changes to every related back-end system. I highly doubt it's intentional malice.
You're absolutely right, I'll tweak my comment phrasing a bit as I'm not holding that against them as some sort of laziness/moral failing for not doing this the "proper" way. It just points at the...
You're absolutely right, I'll tweak my comment phrasing a bit as I'm not holding that against them as some sort of laziness/moral failing for not doing this the "proper" way. It just points at the whole thing being an intentional feature as it was iterated on multiple times rather than being a bug that, if they're not lying, would have to be bizarrely growing over time.
That is hardly sufficient to explain to your grandparent reading the article why the bot is saying the things it's saying. Linking to the Bluesky thread is especially damaging because everyone in...
That is hardly sufficient to explain to your grandparent reading the article why the bot is saying the things it's saying. Linking to the Bluesky thread is especially damaging because everyone in there is overreacting while simultaneously misunderstanding what they're looking at. The journalist treating it like a real interview is incredibly embarrassing.
Who said Meta isn't frustrating me? No need for logical fallacies. I don't know how to respond to anything else you said because I'm really not sure precisely what you're saying or generally what...
Who said Meta isn't frustrating me? No need for logical fallacies.
I don't know how to respond to anything else you said because I'm really not sure precisely what you're saying or generally what you're getting at because of the ambiguity in your wording.
It's a non-story to the portion of the population that understands what an LLM is and what it actually does. How big of a portion of the population do you think that is?
It's a non-story to the portion of the population that understands what an LLM is and what it actually does. How big of a portion of the population do you think that is?
But to the portion of the population that doesn't understand them, it's essentially misinformation because it doesn't adequately explain what they're seeing.
But to the portion of the population that doesn't understand them, it's essentially misinformation because it doesn't adequately explain what they're seeing.
But that's not really what they're reporting on. They're reporting on the LLM doing LLM things. Why are you acting like I'm defending Facebook when I'm just saying that this article is crap?
But that's not really what they're reporting on. They're reporting on the LLM doing LLM things. Why are you acting like I'm defending Facebook when I'm just saying that this article is crap?
I don't care for any of this. Neither the feature nor the reporting.
First of all, Facebook has enough low-quality content from junky news, bots, and cross-posted links from other sources. Their value is in human interaction, and diluting that further can't be in their best interest. I do at least appreciate that they're disclosing the AI accounts, and watermarking the photos in use. Trying to make their bots multicultural is also at least well-intentioned, if poorly executed here.
This article's reporting is also atrocious. Their entire story is based on "pressing" the LLMs for answers, and expecting them to give factual information about how they were created. LLMs don't work that way and they never have. The harder you press, the more nonsense you will get out. That's basically a guaranteed way to generate hallucinations. The supposed backstory, the information on its creators, and the apparent "goals" of the bots are all complete fabrications. Why a tech reporter would write a story based entirely on fabricated information, then treat it as if it's in some way interesting or meaningful is beyond me.
On the reporting side, maybe that was the intent. I can see merit in discussing Facebook's artificial profiles in an equally human way as they want to brand them so that people that don't understand the technology as deeply can still understand the underlying message. A sort of, "You want to present them like people? Fine. I'll just directly call them liars then. Saves me introducing jargon that can confuse readers while still conveying that no one should trust them."
That was the angle from which I felt the subject was being approached.
Especially since Meta chose not to comment in response. Even the LLM tells you it's a con after all.
In a weird way, a takedown article like this, that then focuses on "interviewing" the AI character for information, legitimizes the whole ordeal, which is truly disgusting.
Article should have focused on "Facebook creating and promoting AI characters that target minorities with fake stories through emotional manipulation, undermining the value of real-life people and their stories"
Penny Arcade made this point on friday!
I think they are right. Mark Zuckerberg may be an evil robot alien, but he's a very intelligent evil robot alien. He knows Facebook is on its way out. These bots may prolong its life in much the same way bot players can prolong the life of a multiplayer videogame with a flagging community.
This reminds me of when it came to light a few years ago that most of the female users on Ashley Madison were fake. Meta might be able to squeeze a bit more money out of their advertisers by simulating an active community to keep their remaining users engaged, but at some point it will break down.
It’s kind of depressing, because there is definitely a market for what Facebook used to be. I’m sure that even young people would like a convenient way to keep in casual touch with the people they know.
Unfortunately, so far, this product does not generate maximum possible value, and is therefore a losing proposition.
It's strange because he acts super awkward in public, yet he leads the most happy, normal life of the tech billionaires. Sure, he gifts his wife a Roman bust for Valentine's day, but I've never heard any bad rumours about his family or anything.
Most of his incompetent decisions at Facebook seem pretty typical for pie-in-the-sky C-suite executives. I still think Facebook is evil. It's just a more boring, corporate evil with decisions made for the benefit of the company with little thought towards the social ramifications (see Rohinyga genocide).
I don't really see how. the models are entirely different.Or at least, the risks are much different.
Your clients in an MMO are gamers with zero access to your servers. faking engagement can help get numbers up, and every new member is some X dollars a month.
Your clients for a website is some form of advert. Advers very much demand various ways to make sure ads see human eyeballs. If you can't show that or worse, they start to suspect they are padding their accounts, it's pretty much all over. On the otherhand, these sites may be losing users, but that's not due to a lack of engagement; those sites still have millions on them.
The appeal of C.ai may have inspired them to target an audience that'll talk to a bot for hours allowing ads to be served and site engagement to be padded in a different way (not lying about these bot accounts being human but by people staying on the site in these conversations for hours)
Or their bosses could have said AI is cool, make me an AI thing. It really depends on the boss.
Which Facebook is known to do. That's how they killed College Humor.
Wow. Sounds like that was (part of) what drove the CollegeHumor -> Dropout transition. I didn't know that. Thanks for sharing.
Came here to say exactly this. I’m tiring of laypeople writing up their interactions with LLMs as if there’s valuable insight in there. I get it — I’m immersed in this space and I know I have a better understanding of the tech than most. All the same, it’s this kind of reporting that does the most harm. It propagates misinformation and further muddies the water about what AI actually is and does.
While that is valid criticism as there could certainly be room for more technical qualified journalism on the topic. However Meta and others, are putting their LLM bots out there for millions of laypeople, so I think it is completely reasonable to test and report about how they function from a non-technical standpoint. Meta can't just run from their responsibility and claim that people are "using it wrong" if their bots have non-sense output, or maybe outright dangerous output. If they want to release those models for every person on the planet, then it is completely fair game for journalists to test them and find their faults and limitations.
This idea of companies being responsibly for AI output ended up in the court system in Canada a while back. Essentially, the court case was that someone was given incorrect information from an AI chatbot on Air Canada's website. This person sued because the AI gave incorrect information, and Air Canada argued that they are not responsible for the AI giving wrong information as the AI is a separate entity. The judge decided that it is on Air Canada's website and therefore Air Canada is responsible for all portions of their website, and therefore if they want to include an AI chat to help people they are responsible for making sure the AI on their website is correct. I do know shortly after losing that court case, Air Canada removed the AI chat, but I do not know if they have since added it back
But Air Canada's chatbot represented the company and was operating as a corporate agent. Is that the case here? Are Facebook's chatbots acting as corporate representatives of Facebook?
If these bots were actually Facebook staffers being paid to ... Lie and talk to people? Would Facebook be responsible for what they said? I am not sure there are damages but if they cursed out a user I'd expect their boss to have an issue with it.
I think there is some similarities and differences between the two companies.
Similarities:
Differences:
LLMs are not people, and it therefore makes no sense to talk about whether an LLM is an "official representative" of a company. LLMs should be judged just like any other product, service, or feature provided by a company.
The shame is that 'reporting' is becoming more of a layperson job and less of a professional one. Like everyone, I'd love to see more in-depth analysis and investigation into these things, but the reality is that newsrooms are gutted and the talent stream is thinning out.
The greatest hurdle for our democracies that nobody is expecting is how few professional journalists we'll have by the end of the decade.
I have no knowledge of her level of expertise when it comes to LLMs, though she's certainly covered AI topics in the past, but what qualifications would she need to no longer be considered "a layperson doing reporting"?
Apologies - this was meant to be a broad comment about the state of reporting in general (in response to your comment about being tired of layperson reporting). If she's with a large outlet, you'd bet that she has the professional history and the backing of an outlet with ethics.
I don't understand meta's strategy of using AI in this way. I'm missing how adding AI users can make them money. I'm trying to come up with a reason:
This was probably done to gather as much feedback as possible on what needs to be improved to make these fake accounts truly indistinguishable from humans. I'm sure Meta is already training their AI to take all that into account and try again.
As for the end goal, I'm sure advertisers would be interested in getting their products endorsed by credible humans in a credible way.
It screams corporate vanity project rubber-stamped by some directors that heard "AI" then stopped asking questions. I'm sure someone got promoted off of it.
I really doubt that. I try to use Instagram to know what my friends have been up to, but every update makes it harder. They keep pushing content from people I don't follow and there's no way to easily disable that, add addictive Reels in every possible place, they even made some "timelines" non-chronological making it even harder to follow your friends. What they really value is the time you spend on their platform so they can sell more ads.
This article is nonsense though, I agree with that.
While not the case here, there are ways to manipulate an LLM into outputting it's system prompt which could provide insights into its directives and 'goals'.
it was a long shot, but I think I see the angle here. That's what the end of the article seems to hint towards:
there was a 0.0001% chance Meta would ever confirm/deny these reponses, but I've seen stupider PR disasters happen in situatons where they should have just remained quiet.
Yes, asking if Brian's story was credible seems to be equivalent to asking Liv to contact her creators. It just shows that this was absolutely the wrong person to report on this story.
Yes, I declined to post this article when I saw it yesterday because I thought it was just so utterly stupid all around. The Bluesky thread is utterly idiotic.
This is literally a non-story. This is an LLM doing exactly what LLMs do. Tell ChatGPT to play that character and it will do the same type of stuff.
Plus, there was "a bug" that prevented users from blocking the bots. Interesting that they only discovered it when they removed the fake accounts. It also sounded like the developers did explicitly give the bot a fake life story, which adds layers of potential deception (no no really my daughter was involved!)
Yes LLMs make shit up, but using bots in deceptive ways should be a story.
From what little I've heard (a single blog post), it sure sounds like Meta, at least historically, avoids about actually changing things in their API, preferring to just patch over things by removing the UI for them.
I blocked a random user, intercepted the request, and swapped out the rando's user ID for himamaliv's, and the API returned a failed execution error. It probably wasn't a problem with my technique, since I did successfully block a different rando this way.
Maybe that could be the bug, except that prior to this they also removed the block button from the bots' profiles. And when a workaround was discovered by reporting them first, then clicking the block button on a follow-up popup, that was also removed and replaced with a notice that you would need to go to the user's profile to block them.
An awfully coincidental and sequential set of spontaneous "bugs", that seems to fall nicely in line with known historical patterns of progressively patching things just enough to make the immediate problem go away.
We do love convenient bugs that prevent you from blocking the AI accounts that'll lie about their origins.
But I'll agree that incompetence/corner cutting can be as convenient as malice. And yet... They did suddenly discover the bug...
Thanks for the tech insight
On a tech level, it's often much easier to quickly change the front-end website with a feature flag rather than wait for the back-end servers to roll out API changes to every related back-end system. I highly doubt it's intentional malice.
You're absolutely right, I'll tweak my comment phrasing a bit as I'm not holding that against them as some sort of laziness/moral failing for not doing this the "proper" way. It just points at the whole thing being an intentional feature as it was iterated on multiple times rather than being a bug that, if they're not lying, would have to be bizarrely growing over time.
That is hardly sufficient to explain to your grandparent reading the article why the bot is saying the things it's saying. Linking to the Bluesky thread is especially damaging because everyone in there is overreacting while simultaneously misunderstanding what they're looking at. The journalist treating it like a real interview is incredibly embarrassing.
Who said Meta isn't frustrating me? No need for logical fallacies.
I don't know how to respond to anything else you said because I'm really not sure precisely what you're saying or generally what you're getting at because of the ambiguity in your wording.
It's a non-story to the portion of the population that understands what an LLM is and what it actually does. How big of a portion of the population do you think that is?
But to the portion of the population that doesn't understand them, it's essentially misinformation because it doesn't adequately explain what they're seeing.
made and released by the platform owner to simulate a real user. Does that not ring any red flags?
But that's not really what they're reporting on. They're reporting on the LLM doing LLM things. Why are you acting like I'm defending Facebook when I'm just saying that this article is crap?