Reminder: to get around Washington Post paywalls, you can use Incognito Mode.
The article is also basically repeated on the BBC: "Facebook gives users trustworthiness score".
Alternatively: if you have Amazon Prime you are entitled to six months of free access to WaPo, after which you can keep access for $4/mo.
with ads soon!
/s
I just block the elements with umatrix and ublock.
That's undoubtedly a strictly better, but not quite as simple, solution.
Just pointing out that it's possible.
Not just possible, but I think it's strictly a better solution - if you know how to do it, it removes a step, and it's seamless from then on.
What's different about incognito mode that allows you to get around the paywall?
This paywall depends on cookies, localStorage and/or IndexedDB (I haven't checked which one, but I'm sure it's one of those) to keep track of how many articles you have read. Incognito mode doesn't save those, so the website always thinks you have opened it for the first time when you open it in an incognito window. There are ways to get around this, like storing the IP of the visitor (other news websites do that), but Washington Post is not using any of those techniques. Edit: I just checked and it seems that they are using a combination of techniques; sometimes they block by IP too. Second edit: I was wrong about that last edit; the GDPR message (which was hidden by uBlock, so the page looked blank) confused me.
Reminds me of China's huge "social credit" system.
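A minimal sketch of the client-side metering described in the paywall explanation above, assuming the site counts reads only in cookies/localStorage; the quota and key name here are made up for illustration:

```python
# Hypothetical sketch of a metered paywall that counts reads only in
# client-side storage (cookies/localStorage). An incognito window starts
# each visit with an empty store, so the count never reaches the limit.

FREE_ARTICLES = 5  # assumed free-article quota, not WaPo's real number

def should_show_paywall(client_store: dict) -> bool:
    """client_store stands in for the cookies/localStorage the browser sends."""
    count = client_store.get("articles_read", 0) + 1
    client_store["articles_read"] = count  # persists only in a normal window
    return count > FREE_ARTICLES

# Normal window: the store persists across visits, so the 6th read is blocked.
store = {}
normal = [should_show_paywall(store) for _ in range(6)]

# Incognito: every visit arrives with a fresh, empty store.
incognito = [should_show_paywall({}) for _ in range(6)]
```

Server-side techniques like counting by IP sidestep this entirely, which is why sites that use them are harder to read in incognito mode.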
There's a difference between a government doing this and a private web company doing it in regards to its users.
It's pretty close though. Companies are already paying for social media data on potential employees and basing their hiring decisions on that. This rating system could affect people's livelihoods.
... ? It doesn't sound like the sort of thing that would be available to outside companies, as it is specifically targeted at determining which reports of abuse / fake news should be taken more or less seriously:
“One of the signals we use is how people interact with articles,” Lyons said in a follow-up email. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
The score is one signal among many that the company feeds into more algorithms to help it decide which stories should be reviewed.
That said, FB does have a rather... poor track record of over-exposing user-data over their existence.
Like you said in your own post, Facebook has handled all of the privacy and all that poorly. They will sell you out in an instant.
Yes, but it seems a rather large jump to go from a rating system to identify false / bad user reports directly to:
This rating system could affect people's livelihoods.
I mean, it's about as compelling an argument as saying Silicon Valley's curtailing of employee perks is going to result in less satisfied employees, which could be detrimental to the tools used for hiring in OP's example, and therefore can affect people's livelihoods!!!
Yeah, it could happen, but it's not my first concern.
The biggest difference is that Facebook isn't mandatory. You don't need Facebook to get on a subway, buy an airplane ticket, or enroll in school. If you're worried about the government gathering online information about you, then you should probably stay away from Facebook already, as whether you lie on the internet is one of the least concerning things they know about you.
Considering Facebook stores data about you even if you don't have an account, any sort of profiling like this can be extremely dangerous.
Not when the private company will share the data with anybody who says "pretty please" or offers enough money.
The three letter agencies will cheerfully threaten their execs with prison time if they don't hand it over so I really don't see a chasm of difference between government and US corporate actions at this point.
Maybe, but there are so few (or so many, depending on your perspective) very large corporations that the decisions of a few private companies can effectively act like laws in how they affect your life.
The privacy/Russian ads thing was enough for me. I count them as a MAJOR infringement on western democracy.
Call me cynical, but I can't help but suspect that Facebook is aping China's social credit system in order to have a better shot at being permitted access to the Chinese internet by the Beijing regime. It's not like Mark Zuckerberg or any of his underbosses at Facebook have any principles other than "connection uber alles", so inflicting Chinese "social credit" on the rest of us to get another shot at China would be perfectly in-character for them.
Totally plausible that Facebook wants to go completely "global" and match China's standard in terms of facial recognition and "social" scores. I would count it as at least expected.
From the article: This has been developed over the past year, and it's unclear what determines one's score. The system is supposedly used to stem both fake news and improper reporting of truthful news, since, ultimately, the users of Facebook haven't been reliable in policing themselves.
And, as usual, Facebook is putting their faith in yet another unpublished, untestable algorithm instead of hiring human beings and paying them a living wage. Assholes like these are why Silicon Valley should be razed to the ground and designated a national monument to the folly of technocracy.
I don't think hiring humans to do this task is even an option. If you're trying to assign a trustworthiness score to 2 billion people, it would take an enormous number of people years to sort through, and the judgement of each employee is going to vary to the point that the score will likely be worthless. Humans are good at determining whether something is child porn or not, and those cases are rare enough that perhaps humans can manage those reports.
This case seems much more suited to an algorithm, and it's not untestable. Create a data set by manually looking at a lot of profiles, and labelling them with a trustworthiness score. Divide the data set into a training set and a test set. Train the algorithm on the training set, then see how it does on the test set. If it did great on the training set, but terrible on the test set, then your algorithm needs tweaking. If it's able to predict the scores on both sets, then it learned what the humans were looking for when determining the scores themselves, and your model is ready for production.
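The train-then-test procedure described above can be sketched with a synthetic data set; the single feature (fraction of a user's past reports confirmed by fact-checkers), the 0.6 cutoff, and all the values below are invented stand-ins:

```python
# Toy version of the train/test evaluation loop: label profiles by hand,
# split them, fit on one part, and check generalization on the other.
import random

random.seed(42)
profiles = [random.random() for _ in range(200)]   # one synthetic feature per user
labels = [1 if p > 0.6 else 0 for p in profiles]   # stand-in for "human" trust labels

# Divide the data set into a training set and a held-out test set.
train_x, test_x = profiles[:150], profiles[150:]
train_y, test_y = labels[:150], labels[150:]

def accuracy(threshold, xs, ys):
    return sum((x > threshold) == bool(y) for x, y in zip(xs, ys)) / len(xs)

# "Training": pick the decision threshold that best fits the training set.
best = max((t / 100 for t in range(101)),
           key=lambda t: accuracy(t, train_x, train_y))

train_acc = accuracy(best, train_x, train_y)
test_acc = accuracy(best, test_x, test_y)
# Comparable accuracy on both sets suggests the model generalizes; high
# train accuracy with low test accuracy would be the overfitting case
# the comment above describes, meaning the algorithm needs tweaking.
```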
FYI, I call the algorithm untestable because Facebook won't publish it. Thus, it cannot be independently tested.
Fair enough.
There's probably too much information for a group of people to sift through.
But, equally, we can never trust an algorithm to determine what is true or not, as we struggle to prove more fundamental things in programming, and we know certain classes of problems to actually be unsolvable.
I have a different proposal.
These systems track our likes and dislikes so they can show us things they think we will like (badly), including news they think we will like...
So why not always couple the presentation of news to two opposing sources?
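The pairing proposal could be sketched like this, assuming each outlet is assigned some lean score on an axis (the outlet names and scores below are invented for illustration):

```python
# Toy sketch of the "always show two opposing sources" proposal above.
# Negative lean = one side, positive = the other; values are made up.
OUTLET_LEAN = {"OutletA": -0.8, "OutletB": -0.3, "OutletC": 0.4, "OutletD": 0.9}

def opposing_pair(story_outlet: str) -> str:
    """Pick the outlet on the other side whose lean is farthest away."""
    lean = OUTLET_LEAN[story_outlet]
    other_side = {o: l for o, l in OUTLET_LEAN.items() if l * lean < 0}
    return max(other_side, key=lambda o: abs(other_side[o] - lean))
```

A feed built this way would present, say, an OutletA story alongside OutletD's coverage of the same event, rather than more of what it thinks you already agree with.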
Not using technology that exists in order to keep workers around is a recipe for going out of business.
Also, please know that FB hires tons of people to moderate content for things their algorithms cannot detect yet. The problems they face and their "case-law" approach are, I think, fascinating.
Source: https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms
I don't love FB at all but am getting tired of the vilification.
Judging from their market cap, I don't think Facebook is in any such danger. And if they did go out of business: no great loss.
The vilification will continue until Facebook (and the rest of GAFAM) cease to be vile.
They're working on it I believe. It may take some time. I doubt anyone coming from a /r/StallmanWasRight perspective will ever really embrace what they're doing, and that's OK.
I'm tempted to break my vow against Facebook just to put this bio on my profile page:
It's not quite clear in the article: is the score an integer/boolean or a float? It seems to be a decimal-based system, but putting it in the headline makes it seem more like a boolean (0 you're not trustworthy, 1 you are, versus 0.0, 0.01, 0.05 ... 0.5 ... 1.0).
I hope an integer.
<sarcasm>Only Facebook knows for sure, but having seen Facebook's general lack of competence, I'm assuming the score is actually stored in the database as varchar.</sarcasm>
Got a chuckle out of me. Thx! :)
I think it's clear that this entire system is an ad hoc attempt to address what were already widely understood to be sorely exposed weaknesses to abuse and manipulation. I don't think this is the sort of problem you can easily machine-learn your way out of. It seems reasonable to want to value reports differently based on how likely they are to be effective, but Facebook's need to not share their data keeps getting in the way of what should be a trust-building initiative.
The core feature that we're talking about (a weight assigned to a user's report based on how likely it is to be correct) is something that all sorts of tech sites are experimenting with, including, eventually, this one through reputation. It's something that at minimum requires an input (such as a report) and an output (like a review), and you can maximize its effectiveness when you pair it with timely, positive reinforcement for the reporter.
There might be newer research on this topic, but I'm thinking particularly of Thaler and Sunstein's Nudge when I think about designing structures to encourage "good" behaviors. All structures encourage certain behaviors, and Facebook's design, taken holistically, is at the root of what encourages people to be randomly shitty to strangers, what encourages outside parties to target specific groups for their own purposes (whatever those purposes might be), and what has ultimately eroded users' trust in the site. Adding a trustworthiness feature to user reports may allow accurate reports to trickle to the top faster than they otherwise would, but it doesn't address the more fundamental problems with the platform.
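The report-weighting idea running through this thread (weight a flag by its reporter's past accuracy, as in the Lyons quote) can be sketched like this; the smoothing formula is my own toy choice, not anything Facebook has published:

```python
# Hedged sketch of report weighting: a reporter whose past flags were
# confirmed by reviewers counts for more than one who flags everything.

def reporter_weight(confirmed: int, total: int) -> float:
    # Laplace-smoothed hit rate: a brand-new reporter starts at 0.5, and
    # the weight converges to their true accuracy as reports accumulate.
    return (confirmed + 1) / (total + 2)

def report_score(reports):
    """Sum the weights of everyone flagging an item; higher = review sooner."""
    return sum(reporter_weight(c, t) for c, t in reports)

careful = [(9, 10)]           # 9 of this user's 10 past flags were confirmed
spammy = [(1, 10), (1, 10)]   # two users who flag almost everything
# A single careful reporter outweighs two indiscriminate ones.
```

Pairing such a weight with timely feedback to the reporter (as suggested above) is what would turn it from a ranking trick into a behavior-shaping structure.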