We don’t know who the hoaxer was here, or their motives. Seems fairly amateur-hour, tbh, given the current state of AI tools. A properly motivated and funded disinformation campaign would know how to make its fake badge and research paper less detectable as such.
The real danger of stuff like this is not that it slanders the good name of Uber Eats, but that it decreases our ability to discern truth from falsehood. In a few years we’re all gonna be so fatigued from questioning everything that we just believe nothing anymore. The zone will be utterly flooded with shit — with zero hope of ever draining it again. Which of course creates an environment for malefactors to prey with impunity, because everyone has conceded their ability to defend against it.
Market manipulation could be an easy motivation. Buy some cheap, out-of-the-money put options on Uber, post the 'whistleblower' report, and hope that the news media picks it up. If the story goes viral, and especially if it catches the attention of regulators or politicians, the stock could drop 10-20% and turn those cheap puts into a tidy profit.
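To make the mechanics concrete, here is a minimal back-of-the-envelope sketch (Python, with entirely made-up numbers for the share price, strike, and premium) of why cheap out-of-the-money puts suit this kind of scheme: if the story fizzles, the position loses only the small premium, but a large enough drop pays out a multiple of it.

```python
# Back-of-the-envelope sketch of the put-option angle described above.
# All numbers are hypothetical, chosen only to illustrate the mechanics.

def put_profit_per_share(strike: float, spot_at_expiry: float, premium: float) -> float:
    """Profit per share of one long put held to expiry:
    intrinsic value max(strike - spot, 0) minus the premium paid."""
    return max(strike - spot_at_expiry, 0.0) - premium

spot_now = 80.0   # hypothetical share price today
strike = 70.0     # out-of-the-money strike, below today's price
premium = 0.50    # hypothetical cost per share of the cheap put

for drop in (0.10, 0.20):  # the 10-20% drop mentioned above
    spot_later = spot_now * (1 - drop)
    profit = put_profit_per_share(strike, spot_later, premium)
    print(f"{drop:.0%} drop -> spot ${spot_later:.2f}, P/L ${profit:+.2f} per share")
```

With these hypothetical numbers, a 10% drop still leaves the $70 put worthless, while a 20% drop returns roughly eleven times the premium, which is why the hoax would need to catch real regulatory or political attention rather than merely circulate.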
As I see it, AI content generation has gotten good enough that we should no longer trust unauthenticated sources or evidence without a known chain of custody: unverified comments should have all the credibility of "my uncle works for Nintendo and told me how to capture Mew."
It's a crying shame, but I think that this is the inevitable consequence of having a giant mixing vat of human attention without other defense mechanisms; blaming AI for this is like blaming the salmonella for colonizing the raw chicken left out on the counter.
You might be onto something there. Honestly we’re so up to our eyeballs in scams, grifts, and pump-and-dumps these days that it’s legitimately tempting to join in. Feels like shooting fish in a barrel, and these modern fraudsters rarely face any consequences (on the contrary, they get elected President and amplify their grifts by orders of magnitude… but I digress). I’m over here struggling to make an honest living for my family when I fully believe I could be raking it in by ripping people off if I just ignored my conscience. I wouldn’t be able to sleep at night, but damn if I’m not a teensy bit jealous of the crooks these days.
I mean, reddit has been a place for creative writing exercises for way longer than AI has been around, longer than I've been on this site, hell, longer than most of my adult life. People love going on there and just writing bullshit (e.g. r/amItheAsshole and such); it doesn't have to be for a coordinated nefarious purpose.
I don't think reddit has been a usable website for a long, long time, outside of some VERY niche hobby subs maybe. Even the likes of r/rpg destroy a lot of discussion because of the upvote mechanism, and of course the most popular ones are just karma farming simulators. I genuinely prefer places with no personalities, no virtual currency, and little moderation. Sadly, there are very few such places left.
What places would meet those requirements on the modern internet? The only places I can think of are various *chans.
Yep, basically. Obscure, small-time forums kind of count as well, or any place where it's not worth having a "persona".
Years ago I posted a fake story on reddit that went viral. I was actually contacted by the podcast Reply All about doing an episode on my story.
But I had the sense to come clean and not try to lie my way through an interview.
I'm deeply curious now what story it was.
I don't remember the exact title and it was on an old alt account I don't use anymore. But basically the story was about a guy using a program that sends messages to people after you die. Except he screws it up and all the messages go out early. So all of his friends and family are getting heartfelt messages that say he's dead.
Ooh, I can see why you got traction with that; it sounds extremely podcastable.
I miss that podcast; they would’ve done a great job with this debunking story.
For what it's worth, Casey Newton (the author of the article) co-hosts the podcast Hard Fork, and I 100% expect them to discuss this story on this week's episode.
but that it decreases our ability to discern truth from falsehood. In a few years we’re all gonna be so fatigued from questioning everything that we just believe nothing anymore
It will hopefully go the way of the phone call.
Phone calls go unanswered and get filtered except from trusted sources, unless you are looking for a job. The entire medium has been rendered nearly useless by garbage spam calls.
So I feel like there are some interesting points worth pulling out of the article (instead of the now-deleted reddit post):
I wish I could tell you that I immediately clocked the document as a fake. The truth is that it initially fooled me. Laden with charts, diagrams, and mathematical formulas, the document closely resembled many AI-related papers that I have read (and perhaps half-understood) over the past few years. I lacked the technical knowledge to discern that, as plausible as it may have looked in some places, the document was nonsense.
and later on:
I shared the document he had sent me with a former ridesharing company engineer I know, and he pointed out various mistakes the whistleblower had made. Companies just don’t talk like this, or work this way, he explained. They run experiments and describe their findings in narrowly focused documents. They do not outline sinister plans for human exploitation and regulatory evasion in writing.
So understandably, reporters do not have the knowledge and experience to tell legitimate documents from fakes, but those in the industry do. So why not just ask for expert opinions right away? Well, the answer is simple:
The whistleblower, for his part, worked to amp up the pressure. He told me he had shared the document with other reporters, putting me into a competitive crunch. He asked when I thought I would publish.
Those who fabricate these documents understand that publishing first matters to reporters. The time crunch limits reporters' ability to send the document off to others who can dissect the jargon used to obfuscate its emptiness.
from the social media age: you should always be at your most suspicious online when someone is baiting you into outrage.
I have never heard this quote before, but it is something more people should be paying attention to. I have been aware of manufactured outrage on the internet and try to steer clear of it (but it sometimes sucks me in), so this is something I should try to be more aware of. But then the question becomes: when does outrage become legitimate?
This is a classic strategy by almost any variety of scammer, so I suppose it's not a surprise to see it being deployed here.
Gabe Newell talked about the Valve philosophy of game making: late is temporary, suck is forever.
With journalism, speed is essential... or is it? Wouldn't it be worthwhile to be the kind of org where people outraged by clickbait would say: hold on a minute, let me wait for X's commentary first, because they always have expert opinions even if they're late? There are very few things we must engage with on the same day aside from evacuations.
Unfortunately, the business incentives of media lean strongly into breaking the story, or at least being the first to publicize it.
This effect predates social media entirely: CNN 'made its bones' with its wall-to-wall coverage of the OJ Simpson car chase in 1994, which spun off into breathless reporting about the criminal trial. Essentially as soon as we had 24/7 news media, producers sought breaking content to keep eyes glued to the screen.
Longer-form, slower content is 'better', but I need to honestly ask the question of whether it's more valuable. IMO, the sad reality is that most news – even "important" news – has nearly zero value for the average citizen in terms of directly informing their actions. The emotional roller coaster is the valuable part of the story: "if it bleeds it leads."
Trade media is an exception to this rule, and I think it's instructive. There, reporting really does inform sober consideration of valuable choices, so accurate and thorough reportage gains value relative to the 'catchy' and emotional.
I think news has an urgency that definitely far outstrips most other fields. And I think you're really underestimating the number of situations where having the news as soon as possible can be really, really important. One of my best friends in the US has family in Venezuela, and they were only able to find out about the initial attacks through him, because the news in Venezuela got locked tf down during that. Similarly, several US newspapers knew in advance of the attack that led to the kidnapping of Maduro and chose not to report it until afterwards, ostensibly for the safety of the soldiers involved -- and whether you believe them or not, I think it's pretty safe to say that having that news earlier would have had some big effects.
Now, this obviously doesn't apply to reporting on reddit threads about poor working conditions at a company. But it does mean that the way news media is incentivized to get stories out as quickly as possible isn't entirely detached from reality.
I like to practice an 'emotional scientific method', where I try to focus on how much a new piece of evidence contradicts rather than supports a narrative.
This has a few different effects, such as forcing me to keep multiple perspectives in mind, but for the purpose of dealing with outrage it helps counter the propensity towards the 'bitch eating crackers' syndrome.
That syndrome comes from our natural confirmation bias: once we've decided that X is bad, nearly everything X does gets interpreted in a negative light, such that even ostensibly neutral things (like the eponymous 'eating crackers') reinforce the negative perception. The application to political discourse is self-evident.
However, that alone doesn't help with outright fake information such as this hoax. Here, I rely on a second emotional scaffold: slow news. Very few events are both important and urgent such that I need to change my actions right now[†], and the intersection of important, urgent, and emotionally resonant is even smaller.
Thus, I try to downweight breaking news, and instead I rely more on longer-form and slower analysis pieces where the authors have had time to look into the totality of then-breaking events and put the news in a broader context. This consciously 'swims against the tide' of the algorithm and so it's hard to keep up the focus, but I truly find that it helps.
[† — The weather forecast is one of the few consistent examples of information that is both important and urgent.]
I won't be so egotistical as to say I totally clocked them immediately, but I was sceptical of the comment when I read it. The sensationalised, dramatic writing put me in mind of a similar anecdote of someone watching a monetised torture video on the dark web. While instantly assuming that wild stories are too dramatic to be true is a mistake, reality is more often than not banal in its goods and evils. I'm reminded of Dan Olson, in his documentary on the GME apes, noting that their conspiracist worldview is doubtless more interesting and engaging than the truth of Wall Street simply being filled with greedy assholes. Then again, it's easy to say this in retrospect.
Bare minimum. It never made sense to be so concerned about opsec while declaring that they gave two weeks' notice in a way that would make them immediately identifiable. Some of the rest of the stuff I wouldn't have known whether or not to believe, as I have no tech background. But it was all too evil for evil's sake.
That was the immediate red flag I saw, but, wanting to sort of believe it, I thought maybe that (and other details) were introduced as red herrings. If I were them, I would also feed the entire story through an LLM to make it harder to detect my writing style. That would explain the difference in writing style between the main post and his comments... but then why not do the same thing to the comments?
Sure, but it was still a red flag that had me wary from the jump. I wasn't sure, just wary.
When I first saw the post, I was suspicious (but didn't wholly dismiss it as fake), mostly because they claimed to know too many things from unrelated parts of the company. Some of the dynamic pricing stuff they mentioned was weird even from the perspective of someone who worked in unrelated ML fields and has friends-of-friends in this industry, and a lot of what they described would be questionably legal at best. But companies like this have done plenty of questionably legal stuff that we know for sure happened, and the ML stuff they described was still possible even if weird af. I was willing to chalk up a lot of the sketchy-or-illegal red flags to US companies being uniquely awful. The red flag that most stood out in retrospect (that is, knowing it's fake) is that the writer claims knowledge of stuff that someone on the software engineering side just would not know about in a big company like this. But this wasn't really enough for me to dismiss the post as a sure fake when I first saw it, I'll freely admit.
But ultimately I shrugged and scrolled on and didn't consciously think more about it until I saw that it was confirmed fake. I suspect a lot of people did that.