What’s going on is that while our notion of causality is useful for some purposes, it doesn’t necessarily say anything about the details of an underlying causal mechanism, and it doesn’t tell us how the results will apply to other populations. In other words, while it’s a useful and important notion of causality, it’s not the only way of thinking about causality. Something I’d like to do is to understand better what other notions of causality are useful, and how the intervention-based approach we’ve been exploring relates to those other approaches.
I think this is a large gap in the kind of formal modeling of causality proposed by Pearl. One tack to fill it is an epistemic notion of causality. A molecular biologist I know pointed me to this paper on the philosophy of causation in the domain of biomedicine, but I think the ways of thinking it presents are relevant outside biomedicine as well. The paper's central idea is an epistemic notion of causality, distinct from the traditional difference-making or mechanistic notions. The basic idea of epistemic causality is that it is a way of modeling which inferences it is epistemologically appropriate to draw.
I think this is a mostly well-written paper (though it gets a little repetitive at some points). One thing Russo and Williamson cite in support of their idea (and even include verbatim in their paper) is Bradford Hill's famous (at least in the field of medicine) guidelines on association and causation.
They also point out that the trend of evidence-based medicine (EBM), beginning in the 70s, promotes relying primarily on randomized controlled trials (RCTs). Evidence gathered in RCTs supports difference-making causality: an RCT for a drug candidate may succeed in showing effectiveness. While that can be valuable, they argue EBM does not adequately address mechanisms: an RCT may show that a treatment works without demonstrating how it works. This aligns with Nielsen's conclusion.
tl;dr If we think about causality in terms of what inferences are appropriate, based on both difference-making claims and mechanistic claims, then we have a decent model for making causal inferences.
I haven't seen anybody fundamentally counter Hume's objections, so I regard causality as a convenient fiction. In other words, I think it's plausible, and sufficient, for science to work without causation. One only needs to look as far as the recent thread on the Kalam cosmological argument for an example of how taking causation too seriously can go awry. Suddenly, God is necessary to 'cause' the universe.
I think several of the premises of the Kalam cosmological argument are obviously and necessarily false. In philosophy, the way in which you abstractly recreate the world is much more important than the subsequent logic.
If the premises are wrong, the argumentation only holds true for a different world than our reality. That's a discussion for a different thread though!
I think that, as a rule of thumb in daily life, a reasonable shorthand for telling spurious correlation from possible/probable causation is the existence of a specific, plausible mechanism.
That's the whole idea behind controlled, randomized experiments: attempting to find individual causal mechanisms by excluding all other explanations.
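To make that idea concrete, here's a toy simulation (everything in it, from the +1.0 effect size to the self-selection probabilities, is invented purely for illustration): an observational comparison where sicker people seek out treatment gets the effect badly wrong, while the randomized version recovers it.

```python
# Toy simulation, not from the article: the true treatment effect is fixed at +1.0,
# and in the "observational" world sicker people are assumed to self-select into
# treatment. All names and numbers are invented for illustration.
import random

random.seed(0)
TRUE_EFFECT = 1.0
N = 100_000

def outcome(health, treated):
    # Outcome = baseline health + true treatment effect + noise.
    return health + TRUE_EFFECT * treated + random.gauss(0, 1)

# Observational world: poor baseline health makes treatment more likely (confounding).
obs = []
for _ in range(N):
    health = random.gauss(0, 1)
    treated = random.random() < (0.8 if health < 0 else 0.2)
    obs.append((treated, outcome(health, treated)))

# Randomized world: treatment assigned by coin flip, independent of health.
rct = []
for _ in range(N):
    health = random.gauss(0, 1)
    treated = random.random() < 0.5
    rct.append((treated, outcome(health, treated)))

def naive_effect(data):
    treated = [y for t, y in data if t]
    control = [y for t, y in data if not t]
    return sum(treated) / len(treated) - sum(control) / len(control)

print("observational estimate:", round(naive_effect(obs), 2))  # far below the true +1.0
print("randomized estimate:   ", round(naive_effect(rct), 2))  # close to +1.0
```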
I think the easiest way to refute Hume is by asking the most important question in philosophy:
So what?
This question bridges the gap between theoretical philosophy and making decisions in one's life.
The question also examines the easiest way to disprove a philosophical argument directly: If the philosophical argument necessitates a reality incompatible with our own, it's the philosophical argument that's wrong, not the observable, demonstrable reality we live in; the purported philosophical map is a wrong description of the terrain, the terrain isn't wrong.
If Hume has come to an incredible point of insight, and we postulate that causality is in fact a convenient fiction tying temporally sequential events together, so what?
How does this change my life? If I can't prove that my pulling the trigger of a gun causes something to get shot, does that mean the courts should let every murderer go free? How could a shooter possibly be morally culpable for firing if we can't prove cause and effect?
In short, the Humean view of causality would preclude a reasonable, functioning world, but we have one, so Hume must be wrong. (This is obviously a gross, gross simplification. Many philosophers have spent their whole lives working to prove it to the best of their abilities.)
The practical implication of correlation not implying causation in our lives is that when we are presented with a claim of causation, we should immediately ask what the mechanism is and whether it's possible, plausible or necessary.
Essentially this is skepticism: If we don't know how, do we actually know?
By what mechanism(s) do essential oils supposedly work?
What mechanism(s) lead people deficient in vitamin D to supposedly get sicker from Covid-19?
What mechanisms cause global warming? Are any of these anthropogenic? Oh, then climate change is anthropogenic and we should do something about our emissions.
Looking for a mechanism is an attempt at separating cause from mere marker in a correlation: do kids who eat breakfast do better in school because they eat breakfast, or because breakfast-eating is a characteristic of children who come from homes/conditions that in other ways lead to doing better in school?
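A similar made-up sketch of that exact breakfast question, assuming a hypothetical "supportive home" confounder and a small direct breakfast effect (none of these numbers come from real data): the naive comparison inflates the breakfast effect, while comparing kids within the same kind of home recovers something close to the assumed effect.

```python
# Made-up simulation of the breakfast question: "good_home" is a hypothetical
# confounder that raises both the chance of eating breakfast and school scores;
# breakfast itself is assumed to be worth only +1 point. No real data here.
import random

random.seed(1)
BREAKFAST_EFFECT = 1.0   # assumed direct effect of breakfast on score
HOME_EFFECT = 10.0       # assumed effect of a supportive home on score

kids = []
for _ in range(100_000):
    good_home = random.random() < 0.5
    breakfast = random.random() < (0.8 if good_home else 0.3)
    score = 60 + HOME_EFFECT * good_home + BREAKFAST_EFFECT * breakfast + random.gauss(0, 5)
    kids.append((good_home, breakfast, score))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison mixes the breakfast effect with the home effect.
naive = mean([s for _, b, s in kids if b]) - mean([s for _, b, s in kids if not b])

# Comparing within each home stratum holds the confounder fixed.
adjusted = mean([
    mean([s for h, b, s in kids if h == home and b])
    - mean([s for h, b, s in kids if h == home and not b])
    for home in (True, False)
])

print("naive breakfast 'effect':", round(naive, 1))     # inflated by the home effect
print("within-home effect:      ", round(adjusted, 1))  # roughly the assumed +1
```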
Looking for causal mechanisms turns out to be a powerful tool for optimizing one's life irrespective of one's goals, so whether or not we can philosophically prove this is rational behavior, it's an effective shorthand in everyday life in any case.
I agree with you for the most part. Certainly don't intend to deny the pragmatism of accepting causation. Hey, even Hume'd fall back on causal reasoning when playing snooker.
However, this line I think sells philosophy short (and I think you're hinting that way when you mention the 'gross simplification'):
"The question also examines the easiest way to disprove a philosophical argument directly: If the philosophical argument necessitates a reality incompatible with our own, it's the philosophical argument that's wrong, not the observable, demonstrable reality we live in."
I expect philosophy to challenge my existing model of reality in ways I can't easily dismiss. That's almost all I expect of a good philosophy, so to dismiss one on those grounds doesn't seem reasonable to me. What would it be like to operate without common-sense causal reasoning? I tried my hardest to do exactly that, for years, after I first read Hume. I can hardly claim that my attempts 'caused' anything, so I'll just claim that, yes, there was a strong subjective difference in how I saw the world during that time compared to pre-Hume.
That’s nice and all, but how do you figure out which light switch turns on a light? Wouldn’t you flip it a few times and see what happens? How long do you need to play with it before you’re convinced? In what sense is knowing that a light switch controls a particular light a convenient fiction? Rejecting all causality as somehow not real seems to depend on a weird notion of truth, demanding more certainty than the everyday notion of truth.
Philosophically, the difficulty is knowing when the light will stop working. Maybe the bulb will burn out the next time you try it, or maybe the power will suddenly go out? Maybe the vaccine will stop working due to a mutation? Our attempts to figure out causality make assumptions that not too much about the environment will change. They can’t rule out that tomorrow we will be surprised. So sure, Hume had a valid point, but we can carry on regardless, admitting that sometimes we will be surprised. Rarely do people know how they will die.
The notion of causality in the article is pretty much about flipping the light switch. (That’s the do operator.) It’s about harder cases when informal testing isn’t enough and we can’t flip the light switch ourselves, but maybe we can watch something else do it. It doesn’t deal with the philosophical problems that come up when the world changes or when the phenomena we’re studying only apply to one time or place, but it seems useful for figuring out static relationships from the data in some cases. It’s also about making some of our assumptions explicit, which helps us understand what happened when we’re wrong.
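As a throwaway illustration of the do operator (nothing here is from the article; the hidden factor, probabilities, and wiring story are all invented), conditioning on seeing the switch up gives a different number than forcing the switch up, once something hidden influences both the switch and the light:

```python
# Invented structural model in the light-switch spirit: a hidden factor U nudges
# both the switch position X and the light Y, and X also genuinely affects Y.
# Conditioning on X and intervening on X (Pearl's do operator) then disagree.
import random

random.seed(2)
N = 200_000

def sample(do_x=None):
    u = random.random() < 0.5                                    # hidden common cause
    x = do_x if do_x is not None else (random.random() < (0.9 if u else 0.1))
    y = (x and u) or (random.random() < 0.05)                    # light on, with a little noise
    return x, y

# Observational: keep only the runs where the switch happened to be on.
obs = [y for x, y in (sample() for _ in range(N)) if x]
p_obs = sum(obs) / len(obs)

# Interventional: force the switch on, do(X=1), regardless of U.
intv = [y for _, y in (sample(do_x=True) for _ in range(N))]
p_do = sum(intv) / len(intv)

print("P(light | switch on)     =", round(p_obs, 2))  # high (~0.9): switch-on runs mostly have U=1
print("P(light | do(switch on)) =", round(p_do, 2))   # much lower (~0.52): U stays at its natural 50/50
```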
"That’s nice and all, but how do you figure out which light switch turns on a light? Wouldn’t you flip it a few times and see what happens? How long do you need to play with it before you’re convinced? In what sense is knowing that a light switch controls a particular light a convenient fiction?"
A personal anecdote about my light-switch fiction: there was one light switch in my HVAC room that would fail seemingly randomly. It would work one hour and not the next. Trial and error yielded nothing, so I wrote it off as a failing of the home itself...it's old and only has 100A service. It only seemed to fail when electric usage was high. That usually wasn't a problem; most large appliances are gas, and I only needed to be in that room like 2 times a year.
Fast forward 5 years: I had the switch for the HVAC room on, waiting for it to work, then went to change a load of laundry. I flicked the laundry switch on, swapped the laundry, and as I went to turn the laundry switch off I caught that the HVAC light (on the opposite end of a finished basement) was on. Turns out it was just tapped off the laundry room, despite multiple other unconnected switches and outlets existing between these rooms. Somebody, during a renovation, ran a line specifically across the basement to wire a switch from another switch instead of tapping one of the far closer outlets.
So yea, for 5 years the causal link I jumped to was entirely wrong. Even if a causal link seems obvious, it's entirely possible we're missing the real one, some hidden connection that both things share.
When I speak of truth in a philosophical sense, I'm often after necessary, non-contingent truths, and I don't think that's particularly weird. On the contrary, I think the everyday (for some) pragmatic approach to truth isn't deserving of the word. You're right though that my stance on causation (and others) is deeply intertwined with my stance on truth.
To lean on onxyleopard's post a bit, I think that period of trying to abstain from causal reasoning forced me to separate the epistemic and mechanistic modelling of causality. So I could then reason about your light switch in all sorts of ways without ever believing my flicking it 'causes' the light. I know this probably sounds crazy to you skybrian. All good. I think many psychologists would consider a lack of causal reasoning as some sort of dysfunction too. :P
I think the issue is that there are very different conceptions of causality; the one in the Kalam doesn't have much to do with the causality we usually think about in science. Personally I quite like the conserved quantity account:
Conserved quantity accounts of causation are reductive accounts of causation that are explicitly designed to locate causation within the realm of physics avoiding the vagueness challenge. Most prominent here is the causal process account first proposed by Wesley Salmon (1984) and developed further in Phil Dowe’s conserved quantity account (Dowe 2000; see also Kistler 1999 [2006]). Proponents of conserved quantity accounts take their accounts to contribute to the metaphysical project of determining objective causal structures that serve as truth-makers of causal claims.
Dowe distinguishes causal processes and causal interactions, which he defines as follows:
CQ1. A causal process is a world line of an object that possesses a conserved quantity.
CQ2. A causal interaction is an intersection of world lines that involves exchange of a conserved quantity. (Dowe 2000: 90)
Conserved quantities are those quantities, such as energy, momentum, mass, or charge, that are conserved according to our physical theories. By deriving its inspiration from physics, where conservation laws play a central role, conserved quantity accounts promise to be able to meet the neo-Machian and neo-Russellian challenges. In fact, since, according to Noether’s First Theorem, there is a conservation law associated with each continuous symmetry property of a system, there seems to be a clear formal route for locating causal claims within physics.
https://plato.stanford.edu/entries/causation-physics/#ConsQuanAccoCaus
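For reference, the Noether's First Theorem mentioned in the quoted passage is, in its simplest textbook form (just the standard statement for point symmetries, nothing specific to Dowe's account):

```latex
% Simplest textbook statement of Noether's first theorem (point symmetries):
% a continuous symmetry of the Lagrangian yields a conserved quantity along
% solutions of the equations of motion.
\[
  L(q, \dot q, t) \ \text{invariant under}\ q_i \mapsto q_i + \epsilon\, K_i(q)
  \quad\Longrightarrow\quad
  \frac{d}{dt}\Bigl(\sum_i \frac{\partial L}{\partial \dot q_i}\, K_i(q)\Bigr) = 0 .
\]
% Invariance under spatial translations gives conservation of momentum;
% invariance under time translation similarly gives conservation of energy (the Hamiltonian).
```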