Representations annihilate detail [...] and it’s not something you can fix with arbitrary computing power and arbitrary time to think. You need to be able to go back to the well of detail and gather the factors you need. Meaning is a product of interaction, not something that springs from a dead dataset.
To act on anything in the world, you have to represent it in a way that lets you imagine which actions to take. We develop this capability before language, as embodied cognition and intuition about physical objects. And part of this capability is the ability to notice when your representations break down and need to be amended. We do this all the time without needing to consciously think about it.
[...]
We’re assaulted by a flurry of charts and articles and studies, and asked to draw meaningful conclusions from them. (If we’re not just told to “trust science!”, as though there were a single correct way to interpret every finding.) That’s where this story has to end. We intuitively know how to open our eyes in our everyday lives, but how do we open the door between us and the myriad static representations that modern life puts before us?
[...]
It’s easy to make these judgments from within Jeremy’s house, but annoying and high-effort to look at “HOUSE: Jeremy. CHAIRS: Six” and try to figure out from there whether there are actually six chairs, or whether some of the things being counted are ridiculous. So mostly, we just hope that the process of collecting the data didn’t produce any ridiculous answers. But because data collection and analysis can be made ever more automatic and scalable, while the anti-ridiculousness work stays stubbornly manual (going to Jeremy’s house and seeing what the six chairs actually are), the balance is getting more and more disturbed by modern norms of science.
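A minimal sketch of that asymmetry, assuming a census-style record format (the extra households and the `audit` function are hypothetical, invented for illustration): the aggregation side is one scalable line, while the verification side is a per-record visit that no amount of compute removes.

```python
# Illustrative sketch only; records beyond Jeremy's are made up.
from dataclasses import dataclass

@dataclass
class Record:
    house: str
    chairs: int  # whatever the collector decided counted as a "chair"

records = [Record("Jeremy", 6), Record("Maria", 4), Record("Ken", 11)]

# The automated side scales essentially for free: summing a million
# records costs about as much effort to set up as summing three.
total_chairs = sum(r.chairs for r in records)

def audit(record: Record) -> bool:
    """The anti-ridiculousness side: someone has to go to the house and
    look at what was actually counted. There is no implementation to
    write here, which is exactly the point."""
    raise NotImplementedError(f"requires leaving the dataset and visiting {record.house}")
```

Running `audit` over every record is where the real cost lives, and it grows in human effort, not compute.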
[...]
[T]hese are problems that humans have experience catching, but only when they’re able to interact with the world and not just with dead representations of things. This story from Chemical & Engineering News is a great case study. A chemist publishes a paper about a supposedly “metal-free” reaction. But other chemists reading it know that palladium is stubborn and hard to get rid of completely. So they rerun the paper’s methods and replicate the original results, then try different treatments that remove the contaminating palladium and show that the reaction no longer works. That’s how this sort of verification has to work. Textual or statistical analysis won’t get you anywhere, since the whole problem is that the original paper represents itself as “palladium-free” and you need to have the experience to say: “Well, I’m sure they didn’t mean to have residual palladium, but that’s not the same as palladium-free.”
This was a fantastic read, thanks for sharing @skybrian. I’m a bit tired, so I need to go over this again and write up my notes, but my main takeaway is that data, statistics, and models are only as useful as our capacity for meaning: we need words for the concepts underlying variation in the world in order to know how to collect data, whether it’s useful, and how to interpret it.
From the article:
[...]
[...]
[...]