The study in the article has a fair bit of "we really needed a study to show us that?" going on, but it's neat to see that the effect of "outsourcing your memory to the internet" is real and measurable. I know it's a thing that I do (Stack Overflow is a lifesaver for semi-obscure tasks I only have to do once every 6 months), but I'd never considered the implications that easy access has for my own perception of my competence.
There is an old adage. If you make people think, they will hate you. If you make people think they are thinking, they will love you. And I do love Google.
The paper seems to be titled "People mistake the Internet's knowledge for their own" and isn't out yet, but there is an abstract on the author's home page, and if you care about this, you could send an email.
The abstract doesn't mention Dunning-Kruger, and the claim is more specific: "Eight experiments (n = 1,917) provide evidence that when people “Google” for online information, they fail to accurately distinguish between knowledge stored internally—in their own memories—and knowledge stored externally—on the Internet."
In another test, the people who relied on memory were told that they got eight of 10 answers right, regardless of their actual performance. The ones who believed this score came away with an inflated sense of confidence that was roughly equal in magnitude to the people who used Google.
It would be interesting to see if you would get the same result if you gave people the answers on a cheat sheet, then asked them if they would do just as well without the cheat sheet.
Or if you gave people ready access to an expert, then asked them if they would do just as well without the expert. Fundamentally, in my experience, most managers are no smarter than most individual contributors; they are just better at assuming others' thinking is their own. (I am a manager.)
I think you're right, it's often performative.
But also, there is a lot of bad science, more now than ever before. Bad enough that you don't need to be a leader in a field to see basic errors in methodology. Add to that the sensationalism employed by a lot of science writers.
A lot of the takedowns you see come from people who genuinely love science, often with relevant experience or education. Well, maybe less so on Reddit; it depends on where you're reading :) From a scientific perspective, finding flaws in studies is exactly the right thing to do.
Sometimes the best thing is to turn these takedowns into questions. Someone thinks the methodology is bad? Well, strangers on the Internet can be right or wrong. Read the paper and see what you think.
It's also useful to look for more knowledgeable critics, like other scientists in the field.
But in the end, I think we should be humble enough to admit that, as outsiders, we often can't tell good papers from bad. The outside view says that a majority of scientific papers are flawed, but it isn't going to tell us which ones.
Fortunately, it's rare that you need to make a decision about whether a paper is good or not. We're usually just reading about science for fun, so you can move on. If it's important it will come up again.
Suddenly a bunch of "scientists" (?) appear, questioning the sample size (a very popular complaint...), the methodology, the interpretation of the data, whatever.
Related pet peeve: internet folk who point out hypothetical methodological flaws. As in, did a scientist with decades of experience in the field really fail to consider an elementary concept covered in an introductory class on the subject? For example, I recently read a comment on HN from someone who seemed to suggest that the neutron lifetime puzzle might be explained by relativistic time dilation. (In fact, the relativistic corrections are negligible compared to the precision of the experiments.)
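To give a sense of scale, here's a rough back-of-envelope sketch (the neutron speed and the ~9 s discrepancy figure below are my own ballpark assumptions, not numbers taken from any specific paper):

```python
# Back-of-envelope: compare the fractional time-dilation correction for a
# slow neutron against the fractional size of the bottle-vs-beam
# lifetime discrepancy. Numbers here are rough illustrative assumptions.
c = 299_792_458.0  # speed of light, m/s

v = 10.0  # assumed ultracold-neutron speed in a bottle experiment, m/s
dilation = v**2 / (2 * c**2)  # gamma - 1 for v << c, ~5.6e-16

# Assume a discrepancy of roughly 9 s out of a ~880 s lifetime:
discrepancy = 9 / 880  # ~1e-2 fractional

# The time-dilation correction is ~13 orders of magnitude too small
# to account for the discrepancy.
print(dilation, discrepancy)
```

Even if you plug in thermal-beam speeds of a few km/s, the correction only rises to around 1e-11, still hopelessly far from a ~1% effect.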