6 votes

The principle of explosion

14 comments

  1. [9]
    mrbig
    (edited )
    Link

    I find the principle of explosion oddly intuitive. Without looking at any logic, it makes sense for the acceptance of a contradiction to detonate all our assumptions about how classical logic is supposed to work. I mean, if 1 = 1 and 1 != 1 are both assumed to be true, why would anything else make the slightest sense?

    That said, I read quite a few explanations and watched some cool videos (1, 2), and at some point I always feel that I've got the concept -- after that my mental patterns return to normal and I feel that I should probably read or watch it again.


    In classical logic, intuitionistic logic and similar logical systems, the principle of explosion is the law according to which any statement can be proven from a contradiction. That is, once a contradiction has been asserted, any proposition (including their negations) can be inferred from it; this is known as a deductive explosion. (Wikipedia)
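    For what it's worth, here is a sketch of the standard derivation behind that definition, written out as a worked sequence (P ∧ ¬P is the assumed contradiction and Q is any proposition whatsoever):

    ```latex
    \begin{align*}
    &1.\ P \land \lnot P && \text{(the assumed contradiction)} \\
    &2.\ P               && \text{(from 1, conjunction elimination)} \\
    &3.\ P \lor Q        && \text{(from 2, disjunction introduction)} \\
    &4.\ \lnot P         && \text{(from 1, conjunction elimination)} \\
    &5.\ Q               && \text{(from 3 and 4, disjunctive syllogism)}
    \end{align*}
    ```

    The only moving parts are disjunction introduction and disjunctive syllogism, which is part of why paraconsistent logics (mentioned further down the thread) typically give up one of those steps.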


    Bertrand Russell is the Pope

    The story goes that Bertrand Russell, in a lecture on logic, mentioned that in the sense of material implication, a false proposition implies any proposition.

    A student raised his hand and said "In that case, given that 1 = 0, prove that you are the Pope."

    Russell immediately replied, "Add 1 to both sides of the equation: then we have 2 = 1. The set containing just me and the Pope has 2 members. But 2 = 1, so it has only 1 member; therefore, I am the Pope." (source).
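    Spelled out as a little worked chain, the argument is just this (with 1 = 0 as the only premise doing any work):

    ```latex
    \begin{align*}
    1 &= 0 && \text{(premise)} \\
    2 &= 1 && \text{(add 1 to both sides)} \\
    |\{\text{Russell}, \text{Pope}\}| = 2 &= 1 && \text{(so the two-member set has exactly one member)} \\
    \text{Russell} &= \text{Pope} && \text{(its two members must therefore coincide)}
    \end{align*}
    ```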

    5 votes
    1. [3]
      PapaNachos
      Link Parent

      You can do a lot of weird shit with "math" (and by extension "logic") if you either divide by 0 or sneak in an invalid equality. It's similar to some of those shitty fake math problems that go around social media every few months or so that supposedly prove 0 = 1 or whatever.

      Basically, in order for "0 = 1" to be true, either "0" does not mean what you think it means, or "1" does not mean what you think it means, or "=" does not mean what you think it means. Or some combination of all 3.

      That being said, you can have interesting scenarios where that's not necessarily wrong. For example, if I have 2.4 of something and 2.3 of something and I add them together, I get 4.7 of something. But if for whatever reason we're rounding, you get scenarios where 2 + 2 = 5. This gets back to the idea that "2", "2", and "=" aren't necessarily representing what you think they're representing. You can't do that on the formal logic side of math, but it might come up in real-world applications, so it's important to understand what the symbols you're using actually mean.
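      To make that rounding illusion concrete, here's a minimal sketch with the same numbers (the rounding step is the sleight of hand):

      ```python
      # "2 + 2 = 5" once the operands and the result are silently rounded
      a, b = 2.4, 2.3
      print(f"{round(a)} + {round(b)} = {round(a + b)}")  # prints: 2 + 2 = 5
      ```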

      And that pope example is fantastic. It's an extremely elegant way of conveying that idea.

      6 votes
      1. psi
        Link Parent

        More formally, the reason "approximately equal to" behaves differently is that it's not an equivalence relation.

        For example, define x ~ y to mean |x - y| < 0.05. Then the transitive property doesn't hold: 0.25 ~ 0.29 and 0.29 ~ 0.33, but 0.25 ≁ 0.33.
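        A quick numerical check of that, in case anyone wants to poke at it (same relation and numbers as above):

        ```python
        # "Approximately equal" within a tolerance is reflexive and symmetric,
        # but not transitive, so it is not an equivalence relation.
        def approx(x, y, tol=0.05):
            return abs(x - y) < tol

        print(approx(0.25, 0.29))  # True
        print(approx(0.29, 0.33))  # True
        print(approx(0.25, 0.33))  # False: transitivity fails
        ```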

        6 votes
      2. awe777
        Link Parent

        An example where "0 = 1" is true while staying consistent with "=" being an equivalence relation, "adding 0 to anything leaves it unchanged", and "multiplying anything by 1 leaves it unchanged" is the zero ring.
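        For anyone curious, a tiny illustrative sketch (not a serious ring implementation): the zero ring has a single element that plays the role of both 0 and 1, so all three of those properties hold trivially.

        ```python
        # The zero ring: one element serves as both 0 and 1, and every
        # operation just returns that element.
        class ZeroRing:
            def __add__(self, other): return self   # x + 0 = x (everything is the same element)
            def __mul__(self, other): return self   # x * 1 = x (likewise)
            def __eq__(self, other): return isinstance(other, ZeroRing)

        zero = ZeroRing()
        one = ZeroRing()
        print(zero == one)          # True: "0 = 1" holds here
        print(zero + zero == zero)  # True
        print(one * one == one)     # True
        ```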

        1 vote
    2. [5]
      ShroudedMouse
      Link Parent

      why would anything else make the slightest sense?

      Paraconsistent logics explore this question earnestly by rejecting the principle of explosion. Contradictions are still an issue, but not necessarily all of them, as classical logic would have it.

      I mention this because numerous philosophies find value in contradiction - think Buddhist koans or the Hegelian dialectic. It's primarily the analytic tradition that bent over backwards avoiding them.

      2 votes
      1. [4]
        mrbig
        Link Parent

        I get the impression that the rejection of contradictions is much older and broader than analytic philosophy, no?

        1 vote
        1. [3]
          ShroudedMouse
          (edited )
          Link Parent

          Probably. Good question I'd like someone else to answer. :P

          Edit: I'd say Aristotle is very much in the analytic tradition (or more like the start of it). I'm sure other philosophy was going on prior to 300 BC though - perhaps with less concrete definitions. Some of those other logics may have developed without explicitly rejecting contradictions simply because contradiction hadn't been well defined.

          1 vote
          1. mrbig
            (edited )
            Link Parent

            I'd say Aristotle is very much in the analytic tradition (or more like the start of it)

            I don't see how that can be true, since the analytic tradition is a product of the 20th century.

            Aristotle was such a giant that his influence on philosophy as a whole is impossible to escape. This is true for both continental and analytic philosophy. He predates the distinction by such a huge margin that I don't think it makes sense to classify his work like that. It sounds wrong, or at the very least reductive.

            1 vote
          2. mrbig
            Link Parent

            From Wikipedia:

            The traditional source of the law of non-contradiction is Aristotle's Metaphysics where he gives three different versions.

            1. ontological: "It is impossible that the same thing belong and not belong to the same thing at the same time and in the same respect."

            2. psychological: "No one can believe that the same thing can (at the same time) be and not be."

            3. logical: "The most certain of all basic principles is that contradictory propositions are not true simultaneously."

            That's about 300 BC.

  2. [5]
    skybrian
    Link

    This problem is specific to formal logic. In real life, most "facts" are only mostly true, which allows for exceptions, uncertainty, and differences in interpretation, making chains of reasoning fragile and direct contradiction rare.

    4 votes
    1. [4]
      mrbig
      (edited )
      Link Parent

      While it is true that real life contains a lot of uncertainty that is often not reflected in classical logic, it is also true that we often must use deduction due to its practicality.

      Also, this seems relevant: https://plato.stanford.edu/entries/logic-manyvalued/

      1. [3]
        skybrian
        Link Parent

        We don't use the principle of explosion though, nor do we normally use multivalued logic. After doing some math, you normally do a sanity check, and if the results are absurd then you double-check your inputs, assumptions, and calculations.

        The contexts within which people do practical calculations are often ignored when people philosophize about logic.

        3 votes
        1. [2]
          psi
          Link Parent

          I think people absolutely do use multivalued logic, and in fact I think they use it in much more mundane ways.

          For example, Bayesian probability can be thought of as an extension of classical logic where propositions can have any truth value between 0 and 1 inclusive (these are often referred to as "credences"). If the weather forecaster predicts it "could" rain today (p ~ 0.5), you might prepare differently than if they predict it "will" rain today (p ~ 0.9).

          Or just consider any game of chance (eg, Poker): if you knew everyone's cards and the order of the deck, you could win with perfect accuracy (the rules of poker follow classical logic). But since you don't, you have to make some intuitive estimate of the credences to predict whether your hand is strong or weak.


          A bit of a tangent, but if you wanted an "empirical" mathematics (ie, something more akin to science, where the "facts" are never perfectly established but you build off them anyway), I think Bayesian reasoning would need to be foundational.

          For instance, imagine an empirical mathematician who assigned some credence to every known conjecture (eg, P = NP with probability 1%). Then a proof of some seemingly unrelated conjecture could cause you to update all your credences (via Bayes' theorem), and maybe you could become more (or less) confident about some other conjecture (eg, update your priors so that P = NP holds with probability 2%).
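          Here's roughly what one such update could look like as an actual calculation; every number below (the likelihoods in particular) is invented purely for illustration:

          ```python
          # Sketch of a Bayesian credence update for a conjecture H (say, "P = NP"),
          # after some seemingly related result E is proven. All numbers are made up.
          prior = 0.01            # prior credence in H
          p_e_given_h = 0.6       # assumed probability of E being proven if H is true
          p_e_given_not_h = 0.3   # assumed probability of E being proven if H is false

          # Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
          p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
          posterior = p_e_given_h * prior / p_e
          print(f"updated credence in H: {posterior:.3f}")  # ~0.020, i.e. roughly 2%
          ```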

          I mean, in practice I think it would be a bit of a mess (scientists don't explicitly write down credences unless they're doing Bayesian statistics). And intuitively this is what mathematicians do, anyway, when they're deciding whether a conjecture is more likely to be true or false. But I wonder what math would look like if mathematicians were free to build off conjectures that hadn't been proven true.

          2 votes
          1. skybrian
            Link Parent

            Yes, I agree that probabilities are used a lot! I was thinking of the other kinds of multivalued logic where there are a finite number of values in addition to true and false. I don’t think any of those are popular enough to see much use?

            But working with probabilities is another kind of calculation that also needs to be sanity-checked using informal reasoning. The biggest problem is the closed-world assumption. See the probability of green cheese.

            Also, in everyday communication we often convert our hunches into numbers like “80% chance” but it’s less common to do actual calculations, or check that we are well-calibrated.

            1 vote