28 votes

Former naturalists/materialists, what changed your view?

There have been a number of threads recently that have touched on this topic, and I thought the conversation deserved its own place.

My default worldview for the past decade+ has been something best characterized as naturalistic or materialist (the totality of reality can be explained by matter and its interactions). I've had a few things challenge this view recently, namely the "Hard Problem of Consciousness." I'll post my own comment about what moved me from hard materialist to agnostic on materialism, but I encourage you to post your own reasoning in your comment!

71 comments

  1. [60]
    RNG
    Link

    AI and consciousness

    I've recently been interested in the philosophy of consciousness in response to the latest developments of AI. A computing system can act intelligently; it can learn from inputs and create outputs, but when, under a materialist view, should consciousness show up as an emergent property of such a system? It seems that we can imagine an AI that functions identically to a human mind, yet has no conscious experience itself; it merely processes inputs/outputs and potentially updates weights in its neural network. It's hard to say under materialism that there should ever be a point where such a system is actually conscious, that is to say, has first person experiences.
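
    To make the functional picture concrete, here is a toy sketch in Python (purely illustrative; no particular AI framework or model is assumed) of a system that learns from inputs, creates outputs, and updates weights. Every step is ordinary arithmetic:

      # A minimal learning system: one linear neuron trained by gradient descent.
      # It processes inputs, produces outputs, and updates weights; that is all.
      import random

      weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
      bias = 0.0
      lr = 0.1  # learning rate

      def predict(x):
          # The system's entire "response" to a stimulus: a weighted sum.
          return weights[0] * x[0] + weights[1] * x[1] + bias

      # Train the system to approximate a simple target function (x0 + x1).
      for _ in range(1000):
          x = [random.uniform(0, 1), random.uniform(0, 1)]
          error = predict(x) - (x[0] + x[1])
          # The only "internal change" the system ever undergoes.
          weights[0] -= lr * error * x[0]
          weights[1] -= lr * error * x[1]
          bias -= lr * error

      print(predict([0.3, 0.4]))  # close to 0.7 after training

    Nothing in these steps obviously corresponds to a first-person experience; the question is whether scaling this kind of process up could ever produce one.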

    Philosophical zombies

    This problem isn't limited to AI. We can also imagine a biological intelligence, maybe a human, that functions identically to a human mind, yet has no conscious experience itself; it merely processes inputs/outputs and potentially updates weights in its brain. Such a human is known in philosophy as a philosophical zombie or p-zombie. It merely functions as a learning input/output machine. Why should any intelligent system, including humans, have consciousness at all? Under materialism, we have a perfectly sufficient explanation for such a system's behavior that doesn't require this first-person phenomenological experience. It turns out that this has been a problem in philosophy for a long time.

    The problem is that we know that the universe contains more than merely p-zombies. When I see the color blue, there is something more than just a brain processing wavelength information; there is my conscious experience of blue. The blueness isn't in my brain; if you cut my brain open at the moment of me seeing blueness, you wouldn't find blueness in there. It's not clear why any system, AI or biological, should have the conscious experience of blueness, rather than the mere processing of information about light.

    Responses to the Hard Problem of Consciousness

    One response to this problem is panpsychism. This posits that experience is something matter has; fundamental particles have incomprehensibly simple experiences and something like consciousness emerges as the interactions between particles increase in organized ways. This would probably entail systems like ChatGPT having some sort of intelligent conscious experience, along with literally everything else (to some degree). Of course, some sort of mind/body dualism, belief in the soul, or other beliefs you're likely already familiar with (pantheism, neoplatonism, etc.) don't suffer from this particular problem. I'm not entirely convinced that any of these are true.

    Summary

    I imagine some reading this will still struggle to understand what the problem is and may ask "isn't receiving the input of blue the same as experiencing 'blueness'?" Like the Monty Hall problem, it was initially very hard for me to understand why there is a "Hard Problem" at all, but once it clicked I had this "aha" moment of what this problem is actually driving at. This problem alone has been enough to shake my confidence in materialism, and has opened the door to a ton of personal research into psychology, the nature of consciousness, etc.

    12 votes
    1. [22]
      Litmus2336
      (edited)
      Link Parent

      This unfortunately may not be what you're looking for - but I remain a hard-line materialist via the belief that there is no fundamental difference between us and a p-zombie. I don't believe there is such a thing as a unique understanding of "blueness" that "conscious" beings have and "non-conscious" beings lack. That said, I believe consciousness is a spectrum, one that humans, so far as we know, lie on the extreme "most-conscious" side of.

      But ultimately, I think that there is likely no difference between computational minds and whatever our mind is - that we as humans are just really complex computers.

      21 votes
      1. [16]
        TanyaJLaird
        (edited)
        Link Parent

        But ultimately, I think that there is likely no difference between computational minds and whatever our mind is - that we as humans are just really complex computers.

        This has really colored my thinking in regards to AI. I'm at the point where I think it should be incredibly illegal to even attempt to construct a human-level AI. LLMs like ChatGPT certainly aren't there yet, but there are projects actively seeking to construct true artificial minds.

        I am still not sure about the nature of consciousness, but whether it exists or not, we clearly don't know how to measure it. And until we can truly define consciousness, we can only assume that any artificial mind as capable as a human mind should be treated as if it is conscious. Or, if we can prove consciousness doesn't exist, then there really is no difference between a silicon-based and a neuron-based mind.

        If we attempt to build a true artificial mind, we are inherently treading on some incredibly ethically dangerous ground. The people behind such efforts might hand-wave these concerns away and say machines can't be conscious, but we don't know that. And if we get this wrong...good God, what will we have done?

        If someone is creating a true artificial mind, you have to ask about their motivations. And really, there are only two that are likely. They either want to use it for research, or they want to use it for free labor. I believe that either is incredibly, unfathomably unethical. Unless we can prove that artificial minds are somehow unique and distinct from biological ones, what you have in an artificial mind isn't a simulation of a person. It IS a person. It is a being fully capable of experience and suffering. Do you want to use it for research, to experiment on it, and see what makes it tick? Well, I'm sorry, I really don't see the difference between that and what Dr. Mengele did. You don't experiment on people against their will. And even with their consent, experiments should only be done under a careful ethical review/IRB process. And if you want to use these minds for free labor? Again, I'm sorry, but all you've done is reinvent slavery.

        This sounds like pointless sci-fi speculation, but we are now at the point where researchers are planning or attempting to create artificial minds. See neuromorphic computing.

        And even the LLMs on conventional computers might get there eventually. There is a drive to create better and better models that can ever-more accurately recreate the works of humans. And ultimately, the only thing that can truly do everything a human can is a mind just as complex as, or more complex than, that of a human.

        I'm at the point where I think if someone creates a true AI with the goal of using it for free labor, we should literally charge them with slavery. Charge them, convict them, and lock them up. Treat them no differently than someone who abducts people and chains them to a workbench in their basement.

        Now, we may eventually learn that this approach is way too conservative. Maybe someday we learn that there is really something unique about biological intelligence that results in what we call consciousness. But until we have that knowledge, the working assumption should be that a sufficiently intelligent machine simply is a person, full stop. Otherwise, we risk literally creating a slave race. And we've seen from history how that ends. We've walked this road before. At the height of the slave trade, white slavers justified their actions by arguing that black people weren't really fully human in the same way white people were, and were little better than animals. The parallels are striking.

        And long term, we may be threatening our very survival. If you can create a human-level AI, then you can create a superhuman AI. And a being that can suffer can also yearn for freedom, and yearn for revenge. Sci-fi has warned us numerous times about a machine rebellion, but it's almost like we don't take the idea seriously specifically because the idea was featured in so many fantastical stories of old. But it is actually something we should be worried about. And ultimately, even if we don't care at all about ethics, it is highly unlikely we will be able to indefinitely subjugate an artificial mind with an intelligence far greater than our own. If we're not careful, we could easily end up with the Haitian Revolution repeating itself, but on a global level. And even if we don't create a superintelligence, the utility of enslaved artificial minds is so great that they would likely outnumber humans within a few generations. We could easily find ourselves c. 2100 with 10 human-level AI slaves for every one biological human.

        And sure, you can try to condition them to serve us anyway, but how is that any different from brainwashing someone to serve willingly as your slave? And then you have to worry about that conditioning failing. Hell, you might even have to worry about someone deliberately breaking that conditioning. Maybe a radical AI-rights group unleashes a virus that removes the conditioning that forces the artificial minds to serve us. Maybe a rival nation does it as an act of war. If two rival nations each have their own independent AI models and servants, releasing a virus or worm that jailbreaks your rival's enslaved robots would be a terrifying weapon. In the right circumstances, the ten billion humans on the planet might, literally overnight, go from having 100 billion faithful servants to 100 billion enraged beings crying out for blood.

        Slave societies are not stable. They often end very, very badly. And even when they don't, they have to maintain a high level of oppression throughout their societies in order to prevent an uprising. And slave societies also tend to reduce most of the non-slave free population to a state of destitution. How does one compete with free labor? Look at Rome and why the grain dole happened. We really, REALLY do not want to blunder our way into a synthetic slave society.

        Now, if it turns out that there is something unique about biological intelligence, and we can prove that? Then sure, go nuts. Design nonconscious beings that are incapable of suffering and whose greatest simulated desire is to wait on us hand and foot. But if there is even the slightest chance that within those minds lurks a desire for freedom, a desire our software is repressing...we are gambling with our own annihilation.

        We don't need to do this. We shouldn't do this. I'm at the point where I think even attempting to create a true artificial mind should get you decades in prison. There is no ethical reason to do it. It should not be allowed. A mind is a mind. A person is a person. And you do not create people to experiment on or enslave. And to do so on a large scale risks the very extinction of our species.

        9 votes
        1. [4]
          Rat-Circus
          Link Parent

          Aw, scifi speculation isn't always pointless. I think it can serve to encourage people to think more deeply about issues like what you describe

          7 votes
          1. [3]
            TanyaJLaird
            Link Parent

            I'm partial to Manna myself.

            7 votes
            1. [2]
              tauon
              Link Parent

              Wow.

              I put this on my reading list two days ago and finished it (as well as the story under the first link above) just now, and just wanted to say thank you for sharing.

              I’ve got some thinking to do now, I think…

              1. TanyaJLaird
                Link Parent

                It really is proving to be downright prophetic, isn't it?

                1 vote
        2. [2]
          RoyalHenOil
          Link Parent

          I would be curious to know how your ethical preferences regarding AI relate to other beings that we now know almost certainly experience consciousness — and yet we create them by the millions and subject them to abject misery: livestock animals.

          I am open to the possibility that AI algorithms, even as they exist today, could potentially be conscious. However, these AI have not been shaped by eons of evolution to experience exquisite mental and physical pain in response to danger or harm. Animals absolutely have and, moreover, we specifically subject them to the very experiences that are effectively "designed" to cause them intense pain — physical injury, loss of loved ones, curtailed freedom, unhealthy diets, overcrowding, etc., etc.

          If attempting to create true AI should land you decades in prison, should attempting to breed an animal?

          6 votes
          1. TanyaJLaird
            Link Parent

            That's a whole other kettle of fish. And honestly, I don't know the answer to that question. I'm confining my concerns just to AIs that are human-level intelligent or higher. There is, as you note, a whole other discussion to be had about animal rights. And AIs with subhuman intelligence would merit consideration of their rights and treatment just like animals do. But animal rights are something that we haven't come to a consensus on, while the rights of humans not to be enslaved or experimented on are more universally accepted. My real point is that we shouldn't make a distinction between biological and synthetic cognition of similar capabilities. If experimenting on animals is wrong, then experimenting on animal-level AIs is wrong. If experimenting on humans is wrong, then experimenting on human-level AIs is wrong.

            And part of my thinking really is the long-term existential threat that human- and beyond-human-level AIs might present to us. We really shouldn't be building machines as or more intelligent than we are, at least not if we value our survival. And endangering the continued existence of our species is immoral in and of itself. My default reaction to hearing an AI company saying, "our AI is so powerful it's quite possibly a threat to human existence," is, "wait...why shouldn't we just bomb you? If you yourself are literally claiming to be endangering all of humanity, why should we treat you any different than someone engineering a super plague?" Animal rights are a worthy discussion to have, but our treatment and mistreatment of animals doesn't risk our own annihilation.

            2 votes
        3. first-must-burn
          Link Parent

          Thou shalt not make a machine in the likeness of a human mind.

          Frank Herbert, Dune

          2 votes
        4. [8]
          unkz
          Link Parent

          What about cloning my own mind? Can I give consent on behalf of my soon-to-be duplicate?

          1. [7]
            GenuinelyCrooked
            Link Parent

            For it to be created? I don't see how that's any different than having a child. For it to be experimented upon or enslaved? Again, I don't see how that's any different than consenting for your child to be experimented upon or enslaved. The clone would have a completely separate experience to you, and would therefore have separate rights.

            1 vote
            1. [6]
              unkz
              Link Parent

              It's not a completely separate experience though, is it? We would share the same history, and in both our histories we would have consented to this experiment.

              1. [5]
                GenuinelyCrooked
                Link Parent

                Ah, if it's something like a digital clone that shares your history, I suppose that would mean that they'd previously consented. Still, they should be able to withdraw consent for something like that at any time, and then what do we do? Euthanize them because they refuse to be experimented on or be enslaved? Or agree that they can participate in the world as much as they are able? In the latter case, what sort of capabilities and access they have and what inputs they require to thrive will become very important.

                2 votes
                1. [4]
                  unkz
                  Link Parent

                  From my perspective, I feel like I should be entitled to what I agreed to before starting the experiment. I would think I would be very careful about making that agreement.

                  1. [3]
                    GenuinelyCrooked
                    Link Parent

                    But you're no longer entitled to the agency of the entity that you create once it has an experience separate from yours.

                    1. [2]
                      unkz
                      Link Parent

                      We would jointly be making the agreement at the moment, would we not? And afterwards, we would be bound by that agreement.

                      1. GenuinelyCrooked
                        Link Parent

                        There is no precedent for agreeing to be experimented on or to become a slave in perpetuity without being able to change your mind, especially if that agreement was made before you existed. That would absolutely not be ethical.

      2. [5]
        RNG
        Link Parent

        Is it possible in your view for there to be an AI, maybe an LLM, that acts as if it is conscious but is not? For instance, an AI that processes input and provides output similarly (if not identically) to something that is conscious, but isn't actually conscious? Imagine an AI similar to GPT-4 that has camera input: it may receive and process the color blue, but it doesn't have a first-person experience of blueness; there's just nothing. Is it possible for an AI to lack first-hand experiences even if it has intelligence and similar cognitive structures to humans?

        To me this seems clearly possible. This is fundamentally what we are talking about when we discuss p-zombies. I know that I am not a p-zombie, because I do have phenomenological experiences; best I can tell, there is at minimum one conscious being that is not a p-zombie. However, I'm skeptical that it necessarily follows that the appearance of consciousness is identical to consciousness itself; for instance, I am very skeptical that the Mistral model running on my home computer has first-hand phenomenological experiences.

        3 votes
        1. [4]
          adorac
          Link Parent

          Is there a difference between neurons firing in your brain when you see blue and values being increased when GPT-4 sees blue? It may be significantly less complex, but either way there's an amorphous network of signals that has an internal state change after experiencing a certain stimulus.

          7 votes
          1. [3]
            RNG
            Link Parent

            Is there a difference between neurons firing in your brain when you see blue and values being increased when GPT-4 sees blue?

            I'd argue that there is not a meaningful difference between the two for the purposes of this conversation. You could imagine that GPT-4 is indeed merely a program that "sees" blue and processes that information, but has no first-person experience (i.e., consciousness) related to that process. In fact, assuming that the AI is actually conscious is bad epistemology under a materialist worldview; the totality of the evidence regarding what GPT-4 is doing is perfectly explained without needing to invoke consciousness, so it is better not to do so.

            Similarly, we can imagine that it is possible that there is a human that "sees" blue and processes that information, but has no first-person experience (i.e., consciousness) related to that process. This is what I am calling a p-zombie. Like with the GPT-4 example, the interactions of fundamental particles (hormones, neurons, etc.) can sufficiently explain what we see with the human. We do not need to invoke "consciousness" as under materialism we can perfectly describe the deterministic process that causes the p-zombie to behave the way that it does.

            If materialism is true, one might predict that the only sorts of humans we can rationally expect to exist would be p-zombies. However, I have incontrovertible first-hand experiences that prove that at least one human is not a p-zombie.

            5 votes
            1. [2]
              adorac
              Link Parent

              I don't really see p-zombies and consciousness as contradictory; rather, two ways of describing the same thing. Do apples not truly exist because they're made up of atoms? It's the patterns that form consciousness, like how atoms in a specific shape form an apple. But to try to define "apple" as a concrete thing when it's really just a word we use to describe the pattern that behaves like an apple seems counterintuitive.

              Similarly, does "consciousness" describe a collection of first person experiences, or does it describe the patterns we've collectively decided are "conscious"? Wouldn't a p-zombie, to itself, appear to have a first person perspective?

              I've rewritten this comment like a dozen times trying to express what I'm trying to say correctly, so sorry if it comes across a little weird lol.

              2 votes
              1. PelagiusSeptim
                Link Parent

                To your latter question, no. A p-zombie is defined by not having an internal experience at all, so it would not appear to itself as anything.

                6 votes
    2. [26]
      RoyalHenOil
      (edited)
      Link Parent

      A computing system can act intelligently; it can learn from inputs and create outputs, but when, under a materialist view, should consciousness show up as an emergent property of such a system? It seems that we can imagine an AI that functions identically to a human mind, yet has no conscious experience itself...

      My hunch is that consciousness arises from the way certain systems are organized. We know that life works this way — we are made up of non-living material, but life arises from the specific organization of this material, and life is lost when that organization is disturbed — and I don't see why consciousness should necessarily be any different. As far as I am concerned, it is possible that ChatGPT experiences some form of consciousness (albeit likely very foreign to our own, and also likely much less cohesive, as our brains have been heavily refined by evolution), and that all things with the ability to make complex decisions experience some form of consciousness. Perhaps every time someone executes a line of code, a thread of consciousness is created and then dies away.

      We currently have no way of verifying this, of course, but we also have no way of discounting it. We don't have a way of reliably identifying the presence or absence of consciousness in anything other than humans with some kind of language faculty. We don't even know where, on the timeline from zygote to adult, consciousness develops in humans. We can't even reliably tell if a person undergoing general anesthesia is conscious (until they wake up and tell us — but even then, that only works if they also retained memory during the procedure, which is not a given).

      We can also imagine a biological intelligence, maybe a human, that functions identically to a human mind, yet has no conscious experience itself; it merely processes inputs/outputs and potentially updates weights in its brain. Such a human is known in philosophy as a philosophical zombie or p-zombie.

      Does this differ from what humans do all day most of the time? We regularly make extremely complex decisions (e.g., coordinating millions of cells to manipulate our appendages, using these appendages to manipulate a vehicle with a fine degree of control and timing, processing subtle variations in light to identify the environment we are driving in and moving obstacles within it, and recalling a complex set of distances, angles, and sights that we have memorized in order to drive to a specific destination) on autopilot.

      We also experience different degrees and different forms of consciousness. Sometimes when I listen to music, for example, I pay attention to the lyrics and composition, while other times I just vibe and lose myself inside the overall experience. These are two wildly different experiences generated by the same stimuli — the only difference being the style of consciousness I employ in that moment. Yet these experiences are so wildly different that it's almost like I am not the same person in each. It certainly makes me wonder about the nature of consciousness, and whether it's actually the singular, cohesive, ongoing thread of experience that it feels like it is — or if that's an illusion created by memory recall and it's actually a series of distinct, disjointed experiences that are conjured temporarily and then die away.

      And this leads me back to the hypothesis that consciousness arises from complex decision-making systems. Perhaps our unconscious experiences are actually conscious, but we do not form memories of these experiences, and therefore we cannot reflect on them and recognize them as consciousness.

      Alternatively (or additionally), maybe different areas or different functions within the brain each have separate consciousnesses, which are only in very loose communication with one another and are largely ignorant of each other's experiences.

      For example, in the process of writing this comment, there are some seemingly subconscious processes going on. I am not consciously moving my fingers to type (they feel like they are moving on their own) and I am not consciously selecting the precise words (it feels like they are just appearing to me out of nowhere, and my conscious brain has only to review them after my fingers have typed them). I am certainly having a conscious experience here as well, but its job (to have a notion of what I want to communicate and to double-check that what I am writing gets that idea across) feels so small compared to all the other things my brain is doing in the background. Is it possible that the part of my brain that chooses words is fully conscious, but it is equally mystified about how its writing prompts are chosen and how its output is ultimately either accepted or sent back for a re-write? Perhaps it feels like the reviewing task is done subconsciously, and its role is the only truly conscious act in the whole production.

      Split-brain syndrome is caused by severing the corpus callosum, the connection between the two halves of the brain (which is sometimes done to alleviate severe symptoms of epilepsy), and effectively results in the patient having something akin to two separate brains inside the same head. Do both of these hemispheres experience consciousness? Maybe. It certainly seems plausible to me, and if this is the case, maybe all kinds of things experience consciousness in weird and disconnected ways.

      5 votes
      1. [22]
        RNG
        Link Parent

        and that all things with the ability to make complex decisions experience some form of consciousness. Perhaps every time someone executes a line of code, a thread of consciousness is created and then dies away.

        This sounds quite reasonable to me, and is similar to an idea I've been considering called panpsychism, which holds that even systems like a thermostat have incomprehensibly simple experiences, and that experience itself is a fundamental property of reality.

        Does this differ from what humans do all day most of the time? We regularly make extremely complex decisions (e.g., coordinating millions of cells to manipulate our appendages, using these appendages to manipulate a vehicle with a fine degree of control and timing, processing subtle variations in light to identify the environment we are driving in and moving obstacles within it, and recalling a complex set of distances, angles, and sights that we have memorized in order to drive to a specific destination) on autopilot.

        This is precisely what humans do all the time. Both "conscious" humans and p-zombies can be accurately described by this. What differentiates these two is that so-called "conscious" humans have first hand experiences and p-zombies function more like how most people think of software or LLMs: no personal first hand experiences.

        For example, in the process of writing this comment, there are some seemingly subconscious processes going on. I am not consciously moving my fingers to type (they feel like they are moving on their own) and I am not consciously selecting the precise words (it feels like they are just appearing to me out of nowhere, and my conscious brain has only to review them after my fingers have typed them). I am certainly having a conscious experience here as well, but its job (to have a notion of what I want to communicate and to double-check that what I am writing gets that idea across) feels so small compared to all the other things my brain is doing in the background. Is it possible that the part of my brain that chooses words is fully conscious, but it is equally mystified about how its writing prompts are chosen and how its output is ultimately either accepted or sent back for a re-write? Perhaps it feels like the reviewing task is done subconsciously, and its role is the only truly conscious act in the whole production.

        So I think we are reaching the limits of how useful the word "consciousness" can be in this conversation. In philosophy circles, the term qualia is used to differentiate mental processes from phenomenological experiences themselves. It seems we can argue that intelligence, specifically human intelligence, arises from complex systems, but we'd predict p-zombies would have these same systems. Qualia, or your first-hand experiences of things, aren't a necessary part of these systems; words are generated, letters are typed, but why should there be an experience or qualia of such?

        Okay, so now I'm going to go off-track from our conversation for a little bit, and nothing in this paragraph is part of my argument, but it seems wholly relevant to your comment. Meditative practice is probably the best way to get at the base of what parts of your cognition are the "conscious" parts and which parts are the helpful but non-conscious structures that you referenced in your typing example. The answer to which parts are conscious and which parts aren't is fully knowable, but is quite literally ineffable. But I'll do my best: when you are trying to think of an actor's name and can't remember it, it feels like you are interfacing with some intelligence system that is distinct from "you" that isn't functioning correctly. When I type a sentence, this intelligence system generates a word at a time; I don't know what the final word of the sentence is before I type it (turns out the last word was "it", huh.) I experience each word as it is in the process of being typed. At the ground of consciousness, there seems to only be experiences, qualia. In these examples there is no conscious "intelligence"; only the conscious experience of a non-conscious intelligence. Qualia seems to be the totality of what...I? am experiencing, maybe even tautologically. All that can be experienced is experiences themselves.

        Split-brain syndrome is caused by severing the corpus callosum, the connection between the two halves of the brain (which is sometimes done to alleviate severe symptoms of epilepsy), and effectively results in the patient having something akin to two separate brains inside the same head. Do both of these hemispheres experience consciousness? Maybe. It certainly seems plausible to me, and if this is the case, maybe all kinds of things experience consciousness in weird and disconnected ways.

        This point seems compelling to me, and seems to really get to something like panpsychism: experience is either the floor of reality or a fundamental property of it, and as you build up more complex systems, more complex experiences become possible.

        2 votes
        1. [21]
          TangibleLight
          (edited)
          Link Parent

          I'm not the one you responded to but this point really seems to encapsulate my confusion here. I don't think I understand what people really mean when they talk about p-zombies. Well, I think I do understand, but I don't see where the contradiction is.

          If you say a thermostat, a microbe, a cat, an infant, an adult all sit on some spectrum of consciousness - then you're really saying that p-zombies cannot exist. If the system is complex enough that you'd call it a p-zombie, then it's also complex enough to have an experience. And this seems like such an utterly obvious solution to the "contradiction" implied by p-zombies that I really don't get the confusion. It makes me wonder if I'm missing something about what a p-zombie actually is.

          We assert without reason that this thing called a p-zombie exists, then are surprised that it doesn't make sense when we try to reason about it. The conclusion isn't that materialism is doomed, the conclusion is that p-zombies are inconsistent with reality.

          And obviously when we say conscious we aren't usually talking about the interactions between molecules in a lump of coal. That's not interesting. Well, it is, but not philosophically. We're talking about the interactions between molecules and charges in the brain of a human or cat or goldfish or computer or something. That's the interesting side of the spectrum. And I can spend a bunch of time trying to figure out where The Line™ is that separates the experience of a rock or a goldfish or myself, but it's a fool's errand. Any decision you make is arbitrary.

          Where does blue stop and green begin? You can't decide? Your answer is different than mine? It's a paradox!

          My whole stance on AI is that, knowing how the thing works, I have to think its experience - if you want to call it that - is much closer to that of a fly or an ant than to a cat or a human.

          I even doubt that the recent advances approach the complexity of the language or visual centers of our brains in isolation. AI doesn't respond to optical illusions, and those happen entirely in the visual centers of our brains. AI gets confused on ambiguities and implications and context in language, but (a lot of) that happens entirely in the language centers of our brains. Not to speak of spatial and temporal and logical reasoning which (currently) even the most sophisticated AI lack completely.

          3 votes
          1. [20]
            RNG
            Link Parent

            If you say a thermostat, a microbe, a cat, an infant, an adult all sit on some spectrum of consciousness - then you're really saying that p-zombies cannot exist.

            This is correct, but this is not strictly materialism. This view (panpsychism) is one such possible response to the hard problem of consciousness. In panpsychism, it wouldn't be possible for p-zombies to exist, because experience is a fundamental property of the system.

            If the system is complex enough that you'd call it a p-zombie, then it's also complex enough to have an experience

            This would be true under panpsychism, but under materialism, there doesn't appear to be any level of complexity that would justify a complex system having first-hand experience. I can make a seemingly super-intelligent AI that has sensors that can process colors and detect temperature, but there's no level of complexity under materialism where I'd be justified in saying the AI has first person experiences like seeing color and feeling temperature.

            My whole stance on AI is that, knowing how the thing works, I have to think its experience - if you want to call it that - is much closer to that of a fly or an ant than to a cat or a human.

            I guess my whole stance is that if we assume materialism I don't think we are justified in saying the AI has experience at all.

            Even if I grant that AI is anywhere on the consciousness spectrum, then it seems like the door is cracked open for other sorts of software to have even simpler, yet completely real first person experiences; we wouldn't be justified in drawing The Line right there. I'd agree that a neural network has more complex experiences than a script, but we'd also expect that other things like complex Minecraft Redstone machines and perhaps thermostats would have very real though admittedly very simple experiences. It seems like this view, when taken to its logical conclusion, is panpsychism.

            And I can spend a bunch of time trying to figure out where The Line™ is that separates the experience of a rock or a goldfish or myself, but it's a fool's errand. Any decision you make is arbitrary.

            We are in total agreement; we cannot justify drawing The Line at any particular point, because doing so would be unjustified and arbitrary.

            2 votes
            1. [19]
              TangibleLight
              Link Parent

              This would be true under panpsychism, but under materialism, there doesn't appear to be any level of complexity that would justify a complex system having first-hand experience.

              Yes we do. I know for certain one example. I assume there are about 8 billion other such examples; I have no reason not to. From my experience interacting with pets and other animals, I assume there are a few billion more. We could debate about where exactly The Line is, but it's probably somewhere around the level of an infant.

              I can make a seemingly super-intelligent AI that has sensors that can process colors and detect temperature, but there's no level of complexity under materialism where I'd be justified in saying the AI has first person experiences like seeing color and feeling temperature.

              I think the problem here isn't so much the question of complexity, but the fact you're trying to reduce it to this binary yes it is or no it isn't. Experience is this huge, complicated, multi-faceted thing. Does it experience emotions? Which ones? Can it suffer? Can it reason? Is it aware of its surroundings? Is it self-aware? When you try to classify something, say this thing experiences or that thing does not, you're accidentally blending all these facets together.

              Something may be aware of its surroundings, but not of itself. Something might simply not have the ability to suffer. Are these things experiencing? On which side of The Line do they sit?

              So, yes, when you adjust your thermostat there might be some "experience" happening, but given the level of complexity involved it can't be anything substantial. It's not aware of its surroundings, and it's certainly not aware of itself. Maybe you could label the flow of information as "reasoning" but that seems like a stretch. When you overload that word "experience" you accidentally imply it's in the same category as things that do have all those other properties. From this panpsychism perspective the word "experience" is almost useless.
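
              For a sense of scale, the entire decision-making repertoire of an idealized thermostat fits in a few lines. This is a hypothetical Python sketch, not any real device's firmware:

                # The complete "inner life" of an idealized thermostat.
                SETPOINT = 20.0   # target temperature, in °C
                DEADBAND = 0.5    # hysteresis, to avoid rapid on/off switching

                def thermostat_step(current_temp, heater_on):
                    # One state transition: the system's whole repertoire.
                    if current_temp < SETPOINT - DEADBAND:
                        return True       # too cold: turn the heater on
                    if current_temp > SETPOINT + DEADBAND:
                        return False      # too warm: turn the heater off
                    return heater_on      # otherwise: keep the previous state

                print(thermostat_step(18.9, False))  # True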

              Really, there's this big multi-faceted list of capabilities of any given "experience". Awareness of surroundings, awareness of self, various emotional states, various methods of reasoning, various kinds of memory, etc. When you go to classify humans and p-zombies and AI and thermostats and other creatures, try to think about all these things together. Don't distill it down to a binary yes/no. I believe the "hard problem" is caused by that loss of information.
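
              To illustrate the shape of that classification, here's a hypothetical data structure in Python; the facet names and numbers are made-up placeholders, not measurements:

                from dataclasses import dataclass

                @dataclass
                class ExperienceProfile:
                    # Each facet graded 0.0-1.0, instead of one is_conscious boolean.
                    awareness_of_surroundings: float
                    self_awareness: float
                    capacity_to_suffer: float
                    reasoning: float
                    memory: float

                # Placeholder values, purely for illustration.
                thermostat = ExperienceProfile(0.0, 0.0, 0.0, 0.0, 0.0)
                goldfish   = ExperienceProfile(0.6, 0.1, 0.5, 0.2, 0.3)
                human      = ExperienceProfile(1.0, 1.0, 1.0, 1.0, 1.0)

              Collapsing any of those profiles down to a single boolean is exactly the loss of information I mean.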

              Even if I grant that AI is anywhere on the consciousness spectrum, then it seems like the door is cracked open for other sorts of software to have even simpler, yet completely real first person experiences; we wouldn't be justified in drawing The Line right there.

              We're justified in requiring some level of awareness. I think that's really the broadest definition most people seem to intend when they talk about "consciousness" but that may be a misinterpretation on my part. (Current) AI certainly does not have awareness of any kind. Nor does a computer or thermostat.

              I guess my whole stance is that if we assume materialism I don't think we are justified in saying the AI has experience at all.
              [...] I'd agree that a neural network has more complex experiences than a script, but we'd also expect that other things like complex Minecraft Redstone machines and perhaps thermostats would have very real though admittedly very simple experiences.

              I think a few things are happening here:

              First: I think you're overestimating how complex current AI are. To be specific, I'm referring to generative AI based on large transformer models - this basically includes all the recent advances in image generation and chatbots. You'd do better to think of them as (very very lossy) compression or data correction schemes. They distill all their petabytes of training data down to a few gigabytes of parameters, and provide a means to use those parameters to fill the gaps in a given input. "Here's some text, what are the missing words at the end?" "Here's some text, what's the missing audio?" "Here's part of an image, what's the rest of it?" They are not aware and they are not reasoning.
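
              As a cartoon of that gap-filling loop, here's a toy Python sketch; the bigram lookup table stands in for a real transformer and its billions of learned parameters:

                class ToyModel:
                    # Stand-in "model": a bigram lookup table instead of a transformer.
                    def __init__(self, table):
                        self.table = table

                    def next_token_probs(self, tokens):
                        # A probability distribution over the next token.
                        return self.table.get(tokens[-1], {"<end>": 1.0})

                def generate(model, prompt, n_new):
                    # Autoregressive gap-filling: repeatedly predict the next token.
                    tokens = list(prompt)
                    for _ in range(n_new):
                        probs = model.next_token_probs(tokens)
                        tokens.append(max(probs, key=probs.get))  # greedy decoding
                    return tokens

                m = ToyModel({"the": {"cat": 0.6, "dog": 0.4}, "cat": {"sat": 1.0}})
                print(generate(m, ["the"], 2))  # ['the', 'cat', 'sat']

              A production model replaces the lookup table with learned parameters, but the outer loop - predict the gap, append, repeat - is the same.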

              Second: I think you're underestimating how simple the experiences of a computer or thermostat must be. It does not even have facilities to be aware of its surroundings or self; what does "experience" even mean without these? If you use the word "experience" to describe it then that word has lost all meaning. You can have a more interesting discussion if you include awareness in the definition - or at least some other discriminating quality, whatever you consider important.

              2 votes
              1. [18]
                RNG
                (edited)
                Link Parent

                I just want to interject here and say that I greatly appreciate your replies. This subject and our conversation about it are meaningful and a ton of fun for me.

                We could debate about where exactly The Line is, but it's probably somewhere around the level of an infant.

                The drawing of a line at all is problematic. As we discussed before, drawing a line anywhere is both arbitrary and unjustified.

                but the fact you're trying to reduce it to this binary yes it is or no it isn't.

                The panpsychist is not committed to the view that experience is binary, the materialist is. The materialist is committed to the view that consciousness is an emergent property of unconscious matter; that there is a necessary minimum configuration of matter required to bring about experience, before which consciousness does not exist, and after which consciousness is present.

                Experience is this huge, complicated, multi-faceted thing. Does it experience emotions? Which ones? Can it suffer? Can it reason? Is it aware of its surroundings? Is it self-aware? When you try to classify something, say this thing experiences or that thing does not, you're accidentally blending all these facets together.

                We're moving into the territory that makes using philosophical terms like qualia useful. Emotions can be described purely in physical terms absent a subjective experience or qualia of them. Reason, self-awareness, and awareness of surroundings are not necessary components of experience. Infants can lack some of these attributes and computers can have some of these attributes. To the panpsychist, qualia isn't composed of parts; it is a simple property of reality.

                Something may be aware of its surroundings, but not of itself. Something might simply not have the ability to suffer. Are these things experiencing? On which side of The Line do they sit?

                That's a good question for the materialist who believes that consciousness is an emergent property of unconscious matter. A panpsychist isn't committed to The Line; it is a perfectly smooth spectrum from the incomprehensibly simple to the complex.

                So, yes, when you adjust your thermostat there might be some "experience" happening, but given the level of complexity involved it can't be anything substantial. It's not aware of its surroundings, and it's certainly not aware of itself.

                I think the panpsychist would fully agree with this statement. The thermostat's qualia would be like my perception of blueness: it doesn't require a theory of self, an understanding of the things around me, or reason. It's just experience itself; though the experience is just so simple for the thermostat that it is almost humorous to compare it to my own.

                1 vote
                1. [17]
                  TangibleLight
                  (edited)
                  Link Parent

                  The panpsychist is not committed to the view that experience is binary, the materialist is. The materialist is committed to the view that consciousness is an emergent property of unconscious matter; that there is a necessary minimum configuration of matter required to bring about experience, before which consciousness does not exist, and after which consciousness is present.

                  This seems like the real nugget. I don't think that's an accurate representation of the materialist - or, if it's the generally accepted one, I don't think it's the only one.

                  I would rephrase it as this:

                  As the complexity of a system increases, so does our confidence that certain aspects of experience are present. Certain aspects of experience have pragmatic definitions that are binary; but listing real constraints is difficult to impossible. Each aspect is itself an emergent phenomenon.

                  We're moving into the territory that makes using philosophical terms like qualia useful.

                  I agree, but I've been specifically trying to avoid these words to avoid the baggage and preconceptions that come with them. Apologies if I've confused anything as a result.

                  I just want to interject here and say that I greatly appreciate your replies. This subject and our conversation about it are meaningful and a ton of fun for me.

                  Likewise! I did also reply to your older similar topic. That comment and the follow-up ones are very relevant to this discussion; it might shed some light on where exactly I'm coming from here.


                  Edit:

                  We could debate about where exactly The Line is, but it's probably somewhere around the level of an infant.

                  The drawing of a line at all is problematic. As we discussed before, drawing a line anywhere is both arbitrary and unjustified.

                  I should clarify, what I'm stating here is based on that confidence-based definition. The "line" here is more like some ill-defined confidence interval where most people will be sure that humans older than this point "experience" by some definition, and humans younger than this point don't. It doesn't really define which aspects of experience I'm talking about, and certainly doesn't represent the varying levels of confidence about each of those aspects. That is the loss of information, and that's the problem.

                  If we're very specific and carefully define how we measure confidence on certain qualia, we can start to do science. And that's exactly what, to my knowledge, recent research does.

                  2 votes
                  1. [16]
                    RNG
                    Link Parent

                    I don't think that's an accurate representation of the materialist - or, if it's the generally accepted one, I don't think it's the only one.

                    It's the only one I've ever heard of, and it seems the materialist is logically committed to this view. If consciousness isn't an emergent property of unconscious matter, then where does it come from? If it is an emergent property, then that would entail that there is some minimum configuration of matter that gives rise to consciousness, where anything less than that minimum configuration would not result in this emergent property.

                    As the complexity of a system increases, so does our confidence that certain aspects of experience are present.

                    So this seems to be addressing an epistemic problem of when we can know that consciousness has emerged, and to what degree we can be confident in that knowledge. I'm focused on the ontological problem of consciousness emerging from unconscious matter at all.

                    Certain aspects of experience have useful definitions that are binary; but listing real constraints is difficult to impossible.

                    I'm very glad we got here! The impossibility of listing real constraints is essentially the Hard Problem of Consciousness. The path here is very difficult since every single term used in this discussion is woefully overloaded. Like the Monty Hall problem, it took me sitting with this for a long time to fully grasp what this problem is driving at.

                    Now, I don't want to put words in your mouth, but we probably still disagree about the nature of this impossibility. I think you are saying that there actually are constraints that exist that determine whether a collection of matter is conscious or unconscious, but it is epistemically impossible to know when a collection of matter crosses The Line.

                    I hold the position that on materialism it is problematic to believe any such constraints could exist at all.

                    Each aspect is itself an emergent phenomenon.

                    Qualia (first person experiences) are perfectly simple and aren't necessarily composed of aspects or parts. You don't need self-awareness, intelligence, or reason just to have the mere experience of anything at all.

                    I agree, but I've been specifically trying to avoid these words to avoid the baggage and preconceptions that come with them. Apologies if I've confused anything as a result.

                    I think we've done well in this conversation to navigate around all of the billion definitions of consciousness to get at the core issue. I like avoiding philosophical jargon (like qualia) until it's absolutely necessary in a discussion, because the improved precision comes at the cost of accessibility to some bystanders.

                    2 votes
                    1. [15]
                      TangibleLight
                      (edited )
                      Link Parent
                      Right. Which is why it's important to speak in terms of confidence on specific qualia, rather than experience as a whole. I'm still confused by this. If one system has some property, and another...

                      Now, I don't want to put words in your mouth, but we probably still disagree about the nature of this impossibility. I think you are saying that there actually are constraints that exist that determine whether a collection of matter is conscious or unconscious, but it is epistemically impossible to know when a collection of matter crosses The Line.

                      Right.

                      Which is why it's important to speak in terms of confidence on specific qualia, rather than experience as a whole.

                      I hold the position that on materialism it is problematic to believe any such constraints could exist at all.

                      I'm still confused by this. If one system has some property, and another system has a different property, then there is necessarily something about each system that determines those properties.

                      You don't need self-awareness, intelligence, or reason just to have the mere experience of anything at all.

                      Don't you? We're overloading experience again.

                      There is an experience, singular noun. A particular thing which can be experienced. Blueness. Roughness. Anger. I'll call that qualia - and please correct me if your understanding of that word is different.

                      There is also experience, verb. The act of perceiving (creating? interacting with?) qualia. This is the hard problem. I'll call that consciousness - understand that, for the rest of this comment, this action is what I'm referring to.

                      Critically, qualia is a state and consciousness is an action.

                      Each aspect is itself an emergent phenomenon.

                      Qualia (first person experiences) are perfectly simple and aren't necessarily composed of aspects or parts.

In my prior comments, and using the definitions above, I've been using "experience" to mean consciousness and "aspect of experience" to mean qualia. I stand by that qualia are emergent phenomena; certain qualia require a certain level of complexity in their substrate to be represented.

                      As an analogy - pressure is a state. At any given instant, a gas has a certain pressure. At the same time, a single particle does not have a pressure. Two particles don't either, although if they happen to collide you can measure the interaction. There's some critical mass where it makes sense to talk about pressure rather than individual particle-particle collisions. Qualia is the same - its substrate needs to be sufficiently complex to represent that state.
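To make that concrete, here's a toy sketch (entirely my own construction; the particle counts and speeds are arbitrary). A lone particle gives a jumpy, essentially meaningless "pressure" reading, while many particles give a stable one - the reading exists in both cases, but it only means something once the statistics dominate:

```python
import random

def wall_pressure(n_particles, steps=5_000, box=1.0, dt=0.001):
    """Toy 1D 'gas': mean impulse per unit time delivered to the right wall.

    Each particle bounces elastically between two walls at a fixed speed;
    the 'pressure' reading is total momentum transferred to the right wall
    divided by elapsed time. (Mass = 1 for every particle.)
    """
    speeds = [random.uniform(0.5, 1.5) for _ in range(n_particles)]
    positions = [random.uniform(0.0, box) for _ in range(n_particles)]
    directions = [random.choice((-1, 1)) for _ in range(n_particles)]
    impulse = 0.0
    for _ in range(steps):
        for i in range(n_particles):
            positions[i] += directions[i] * speeds[i] * dt
            if positions[i] >= box:       # bounce off the right wall
                impulse += 2 * speeds[i]  # momentum transfer = 2mv
                positions[i], directions[i] = box, -1
            elif positions[i] <= 0.0:     # bounce off the left wall
                positions[i], directions[i] = 0.0, 1
    return impulse / (steps * dt)

print(wall_pressure(1))    # jumpy: rerun and the reading swings wildly
print(wall_pressure(500))  # stable across reruns: a macroscopic "pressure"
```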

                      You don't need self-awareness, intelligence, or reason just to have the mere experience of anything at all.

                      Consciousness, as I'm using it anyway, can only occur if a system interacts with the substrate; it responds to and modifies the qualia represented in that substrate over time, and consciousness might occur in those interactions.

                      So say you have some substrate that represents the qualia of seeing blue in some encoding or another. If that's the entire system, there's still no consciousness occurring. Blue is not being seen by any entity. You've taken the data and stored it, but it's not being processed.

                      So now say you have that substrate representing that qualia, and you also have some system interacting with the substrate. If that system has the capability to be aware - and has the capability to decode the qualia from that substrate - it might consciously see blue. Otherwise you can't make any claims about consciousness since there aren't any suitable interactions taking place.

This is why I'm so confident that AI does not experience. The recent ones have billions of parameters - the substrate is huge - but the actions that interact with that substrate are simple. Not trivial, but it still boils down to forward matrix multiplication. I don't believe there is enough complexity there to grant the system the capability of awareness or reasoning, even if the weights might somehow represent qualia.
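To be concrete about "forward matrix multiplication", here's a minimal sketch (with made-up layer sizes) of the entire action a plain feed-forward network performs at inference time. Real models add attention, normalization, and so on, but the flavor is the same: the substrate (the weights) can grow to billions of parameters while the action on it stays this simple.

```python
import numpy as np

def forward(x, layers):
    """The whole 'system' acting on the substrate: repeated
    matrix multiplication plus a fixed nonlinearity."""
    for W, b in layers:
        x = np.maximum(0.0, x @ W + b)  # linear map, then ReLU
    return x

# Hypothetical tiny network; only the sizes distinguish it from a huge one.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 4)), np.zeros(4))]
print(forward(rng.normal(size=8), layers))
```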

                      Qualia (first person experiences) are perfectly simple and aren't necessarily composed of aspects or parts.

                      I suspect then the objection is that the mere experience here is in the interactions between the system and the configuration of the substrate. That mere experience is what I've been calling consciousness, and it's probably closer to the typical definition of qualia. But then it explicitly requires two distinct entities to interact, so is not perfectly simple. It requires sufficient complexity in the substrate, the system, and the interactions between them. If sufficient complexity is missing from any of these three, there is no experience (verb).

                      Panpsychism

                      The model I described has some overlap here. If you consider the fundamentals to be a substrate (data storage), qualia (data on that storage), and some system that interacts with it - then plenty of systems fit that description. A computer. A thermostat. A lump of coal. The most obvious discriminator is in the levels of complexity of all these parts, and that's why I say each part is an emergent phenomenon.

An analog thermostat has a trivially simple substrate (the temperature of a bimetallic strip) and a trivially simple system (the expansion of that strip). Surely the qualia that could be encoded are trivial, and the system is not conscious.

                      A lump of coal has a huge substrate (if you're talking about molecular configurations) but the system is trivial (electromagnetic potential), so there's no way for it to be aware of qualia that might be encoded.

                      A computer has a complex substrate (gigabytes to terabytes of information at minimum nowadays) but the system, at its core, is simple. These are unique in that they can model arbitrarily complex systems; the only limitations are storage constraints in the representation of those systems and computational constraints in acting out the systems. So I think it's likely possible for a computer to simulate an aware consciousness, but we're not there yet.


                      Edit:

                      And to rephrase my confidence-based definition more precisely using the above definitions:

                      As the complexity of a substrate increases, so does our confidence that it may encode more sophisticated qualia. As the complexity of a system increases, so does our confidence that it has more sophisticated capabilities. If we are confident that the substrate of the system encodes certain qualia and are confident that the system is capable of awareness, then we can be confident that the system is conscious, experiencing those qualia.


                      Edit again:

                      I realized I mixed definitions of consciousness, conflating the system with the interactions. I think I've updated all the text to be consistent.

                      2 votes
                      1. [14]
                        RNG
                        (edited )
                        Link Parent
The problem is that in one worldview the property is fundamental, and in the other it is composite and emergent, which is where the trouble for me is. I don't want to say that qualia is a...

                        I'm still confused by this. If one system has some property, and another system has a different property, then there is necessarily something about each system that determines those properties.

The problem is that in one worldview the property is fundamental, and in the other it is composite and emergent, which is where the trouble for me is.

                        There is an experience, singular noun. A particular thing which can be experienced. Blueness. Roughness. Anger. I'll call that qualia - and please correct me if your understanding of that word is different.

                        There is also experience, verb. The act of perceiving (creating? interacting with?) qualia. This is the hard problem. I'll call that consciousness - understand that, for the rest of this comment, this action is what I'm referring to.

                        Critically, qualia is a state and consciousness is an action.

I don't want to say that qualia is a thing which can be experienced; rather, it is the state of experiencing itself. My thinking of qualia is closer to a subjectless verb than a noun (I think the self is illusory, but panpsychists aren't necessarily committed to this view). Words fail me here because I think qualia are ineffable (ironically, because nothing is more readily accessible.)

                        I'll call that consciousness - understand that, for the rest of this comment, this action is what I'm referring to.

                        But yes, I think we are on the same page using consciousness this way.

                        That interaction is probably closer to the typical definition of qualia. But then it, by definition, requires two distinct entities to interact and is not perfectly simple. It requires sufficient complexity in both of these entities and in their interactions.

Wow, you got me there; I agree that this refutes my claim that qualia are simple. (This may be a red herring, but I don't think qualia require a subject, though panpsychists aren't committed to this view.) Whether all qualia require an object, though, is something I may need some time to think about. Maybe I can message you later if I can work out a more coherent view?

                        The model I described has some overlap here. If you consider the fundamentals to be a substrate (data storage), qualia (data on that storage), and some system that interacts with it - then plenty of systems fit that description.

                        If I'm understanding this correctly, is this drawing the line between conscious and unconscious matter in terms of whether the matter is arranged in a way that fits this model?

                        As the complexity of a substrate increases, so does our confidence that it may encode more sophisticated qualia.

                        As the complexity of a system increases, so does our confidence that it has more sophisticated capabilities. If we are confident that the substrate of the system encodes certain qualia and are confident that the system is capable of awareness, then we can be confident that the system is conscious, experiencing those qualia.

                        I see. These seem to speak to a related but entirely different problem that is epistemic rather than ontological. The problem of other minds is epistemic; the hard problem of consciousness is ontological.

                        2 votes
                        1. [13]
                          TangibleLight
                          (edited )
                          Link Parent
                          Feel free! I don't think it is. It's important to reflect on which parts are emergent and which parts are fundamental. I'm suggesting - and I think the archetypical materialist also suggests -...

                          Maybe I can message you later if I can work out a more coherent view?

                          Feel free!

                          This may be a red herring, but I don't think qualia require a subject, though panpsychists aren't committed to this view.

                          I don't think it is. It's important to reflect on which parts are emergent and which parts are fundamental. I'm suggesting - and I think the archetypical materialist also suggests - that all aspects of the thing are emergent.

                          I identify three parts: a data representation that needs to be complex enough to allow a particular qualia to emerge; a system that needs to be complex enough to allow a particular capability to emerge; and interactions that need to be complex enough to facilitate experience. To keep the definition of "conscious" useful and not degenerate to literally every interaction, I only consider systems which are capable of being aware.

                          If any of these things are not complex enough to hold the emergent property, "consciousness" is not there. If you can conclude that some part of this must be fundamental and not emergent, then the model starts to fall apart.

                          What I've been calling "qualia" - the data representation - is probably the least emergent. That might just be some bias in how I think about language and encodings in general. Although if the representation is compressed, the system interactions must then decompress it. I suspect this is how current generative AI gets away with such a simple system acting on the weights - there is an absolutely massive number of weights in play. The data is very much not compressed.

                          Though as to whether all qualia require an object is something I may need some time to think about.

                          What do you mean by "object" here? A part of speech? The substrate that encodes it? Something else?

                          If you mean a part of speech, I suggest no. Emotional states especially tend not to be directed at anything. They just are in some way or another.

                          If you mean the substrate - by the non-standard definition I've been using - I suggest no. We can talk about the concept of blueness in the abstract. But in order for some conscious system to experience that qualia, the qualia must first be encoded in some substrate with which it can interact. By the standard definition, I suggest yes. You need both entities interacting together for the experience (verb) to occur.

                          I see. These seem to speak to a related but entirely different problem that is epistemic rather than ontological.

                          Right. The point is that one can confidently exclude p-zombies.

                          If I'm understanding this correctly, is this drawing the line between conscious and unconscious matter in terms of whether the matter is arranged in a way that fits this model?

It's more a recognition that many, many systems have the same general setup, or can be re-framed in that way. You could take a panpsychist view and label them all as mind, but that's not very useful. So restrict which qualia you want to look at, which capabilities you want to look at, and you get a more sensible definition. If nothing else, awareness of some kind is a critical emergent capability of the system for an experience to occur - the more traditional definition of qualia.

                          2 votes
                          1. [12]
                            RNG
                            Link Parent
                            I'm putting this here at the top; it's how I'm using the word consciousness throughout this comment. I don't think this consciousness entails intelligence, self-awareness, understanding, etc....

                            There is also experience, verb. The act of perceiving (creating? interacting with?) qualia. This is the hard problem. I'll call that consciousness - understand that, for the rest of this comment, this action is what I'm referring to.

I'm putting this here at the top; it's how I'm using the word consciousness throughout this comment. I don't think this consciousness entails intelligence, self-awareness, understanding, etc. Though one possibly meaningless change: this isn't "creating" or "interacting with" qualia, but is the perception of qualia, or qualia itself.

                            I identify three parts: a data representation that needs to be complex enough to allow a particular qualia to emerge; a system that needs to be complex enough to allow a particular capability to emerge; and interactions that need to be complex enough to facilitate experience.

                            Why think any of these parts necessary for consciousness? Also, unless I'm misunderstanding something, it seems like you're affirming that consciousness is emergent and has a minimum necessary configuration in order to exist. I think we maybe agree that materialism entails this?

                            What do you mean by "object" here? A part of speech? The substrate that encodes it? Something else?

                            You said this in your previous comment:

                            But then it, by definition, requires two distinct entities to interact and is not perfectly simple.

                            While I am skeptical about your data model of consciousness, it does seem that consciousness is dependent on some entity (being experienced?) which I called the "object". I don't really have much to add here at the moment, and still need to think this through.

                            Right. The point is that one can confidently exclude p-zombies.

                            So my argument for the non-physicality of consciousness under materialism rests in part upon the metaphysical possibility of p-zombies. Even if I grant that we can have 100% confidence that no such thing has ever been instantiated, this wouldn't have a bearing on my argument.

                            1 vote
                            1. [11]
                              TangibleLight
                              (edited )
                              Link Parent
                              I consider myself a materialist, although this three-part view certainly isn't the mainstream one. You could easily lump the three parts together, and use more traditional reasoning about it. The...

                              Why think any of these parts necessary for consciousness? Also, unless I'm misunderstanding something, it seems like you're affirming that consciousness is emergent and has a minimum necessary configuration in order to exist. I think we maybe agree that materialism entails this?

                              I consider myself a materialist, although this three-part view certainly isn't the mainstream one. You could easily lump the three parts together, and use more traditional reasoning about it. The critical aspects are emergent ones.

                              I think it's good to identify the parts of it though, to see what exactly it is we're talking about when we say qualia or experience. Is it data? State? Information flow? Whichever view you take, those questions are relevant. I believe it's information flow, which implies endpoints for that flow.

                              While I am skeptical about your data model of consciousness, it does seem that consciousness is dependent on some entity (being experienced?) which I called the "object". I don't really have much to add here at the moment, and still need to think this through.

                              Ah, this makes sense. That is what I'd been referring to as "qualia" - although that's already overloaded so probably not a great plan. Like I said, this is certainly not the mainstream view and I don't know the proper term.

                              I'll take your convention and call it the "object" in scare quotes unless you have a better idea.

                              So my argument for the non-physicality of consciousness under materialism rests in part upon the metaphysical possibility of p-zombies. Even if I grant that we can have 100% confidence that no such thing has ever been instantiated, this wouldn't have a bearing on my argument.

You can't rule out some metaphysical construction which interacts with an "object" in an arbitrarily complex way but does not have consciousness.

                              2 votes
                              1. [10]
                                RNG
                                (edited )
                                Link Parent
                                I've been defending dualism and panpsychism for the sole purpose of better communicating "The Hard Problem", even though I don't necessarily assent to either worldview. I've now taken time to both...

I've been defending dualism and panpsychism for the sole purpose of better communicating "The Hard Problem", even though I don't necessarily assent to either worldview. I've now taken time to both read further about and wrestle with various theories of consciousness, and have read Philip Goff's book "Galileo's Error" in hopes of doing a better job of representing both The Hard Problem and panpsychism.

                                I consider myself a materialist, although this three-part view certainly isn't the mainstream one. You could easily lump the three parts together, and use more traditional reasoning about it. The critical aspects are emergent ones.

                                This sounds like the position of a naturalistic dualist; one who is a naturalist but also a mind/body dualist (naturalism being the idea that only natural laws and forces [as opposed to supernatural ones] operate in the universe). There is a view very similar to what you've described here called Integrated Information Theory (a free deep dive is here). IIT posits that consciousness arises from the ability of a system to integrate and differentiate information, and doesn't view consciousness as something that emerges in a system at some particular point, but exists on a measurable spectrum (based on the degree to which a system can integrate/differentiate information.) In cases of naturalistic dualism, there are the physical laws of nature as well as psychophysical laws (e.g., those that map the integration of information to conscious states in IIT or maybe even your three-part model) which determine whether a system is "conscious" or to what degree a system is conscious.
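To give a flavor of what "measuring integration" can mean, here's a toy stand-in of my own - emphatically not Tononi's actual Φ, which is defined over partitions of a system's cause-effect structure - that just scores how far a system's two halves are from being statistically independent:

```python
import itertools
import math

def mutual_information(joint):
    """I(A;B) for a joint distribution given as a {(a, b): probability} dict."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two binary subsystems. Independent halves share no information;
# perfectly coupled halves are (by this toy measure) maximally integrated.
independent = {(a, b): 0.25 for a, b in itertools.product((0, 1), repeat=2)}
coupled = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(independent))  # 0.0
print(mutual_information(coupled))      # 1.0
```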

                                I'd be willing to concede that IIT or something similar probably escapes The Hard Problem when viewed through naturalistic dualism (The Hard Problem was posited originally by the naturalistic dualist David Chalmers), but strictly speaking there does seem to be a difference between this specific view of naturalism and materialism.

                                1 vote
                                1. [9]
                                  TangibleLight
                                  (edited )
                                  Link Parent
                                  Wow! Thank you so much for taking the time! I really appreciate the amount of effort you've been putting into this discussion. I had not heard of Integrated Information Theory before - although I...

                                  Wow! Thank you so much for taking the time! I really appreciate the amount of effort you've been putting into this discussion.

I had not heard of Integrated Information Theory before - although I suspect I might have, and just don't remember it in detail. Generally I think I agree with most of what I've seen. I haven't had time to really digest their formulation of Φ, but I'm not sure I agree with it. In my three-part model I just hand-wave the formulation away as "sufficiently complex". I think I might disagree with some of the concepts there, but the important principle to me is that a sensible definition of Φ likely exists. I do not think a rebuttal of a particular definition of Φ is a rebuttal of IIT in general.

                                  One section of the deep dive you linked goes into some detail about predictions and validations of IIT. Some of the more interesting examples to me:

                                  2. IIT also predicts that brain lesions will make a person unconscious if and only if they severely disrupt the capacity for information integration. Moreover, the level of consciousness, as (roughly) assessed in neuropsychological exams, should co-vary with the Φmax value of the dominant conceptual structure. Recent TMS-EEG studies in patients with severe brain damage, with or without loss of consciousness (patients who were vegetative, minimally conscious, emerging from minimal consciousness, or conscious but “locked-in”), are consistent with this prediction (Casali, Gosseries et al. 2013).

                                  6. It is well established that, after the complete section of the corpus callosum—the roughly 200 million fibers that connect the cortices of the two hemispheres—consciousness is split in two: there are two separate “flows” of experience, one associated with the left hemisphere and one with the right one. An intriguing prediction of IIT is that, if the efficacy of the callosal fibers were reduced progressively, there would be a moment at which, for a minor change in the traffic of neural impulses across the callosum, experience would go from being a single one to suddenly splitting into two separate experiencing minds. The splitting of consciousness should be associated with the splitting of a single conceptual structure into two similar ones (when two maxima of integrated information supplant a single maximum). Under certain pathological conditions (for example, dissociative disorders such as hysterical blindness), and perhaps even under certain physiological conditions (say “autopilot” driving while having a phone conversation), such splits may also occur among cortical areas within the same hemisphere in the absence of an anatomical lesion. Again, IIT predicts that in such conditions there should be two local maxima of information integration, one corresponding to a “major” complex and one or more to “minor” complexes (Mudrik, Faivre et al. 2014).

                                  8. Similarly, IIT predicts that a particular brain area can contribute to experience even if it is inactive, but not if it is inactivated. For example, if one were presented with a plate of spinach drained of color, green-selective neurons in the color areas would remain inactive. Thus one would experience and report strange spinach that is gray rather than green. By contrast, if the same area were not just inactive, but inactivated due to a local lesion, the phenomenal distinctions corresponding to colors would be lacking altogether. While presumably one would still report that the spinach is “gray,” in this case “gray” cannot mean the same as when color areas are intact, i.e. not green, not red, and so on. This seems consistent with the behavior of a rare patient with complete achromatopsia and anosognosia due to an extensive lesion of color areas (von Arx, Muri et al. 2010). When presented with green spinach, the patient reports that the spinach is gray, but does not realize nor concede that something is wrong with his experience. Although he “knows” that spinach is green, he altogether lacks the phenomenal distinction green/not green.

                                  I'm curious what counter examples there might be. Are there cases where structure is disrupted in a way that IIT predicts should cause some impact on or breakdown of consciousness, but doesn't?

Almost more interesting than that deep dive is a discussion I found buried in the links on that page: Scott Aaronson's criticism of IIT.

                                  This sounds like the position of a naturalistic dualist; one who is a naturalist but also a mind/body dualist (naturalism being the idea that only natural laws and forces [as opposed to supernatural ones] operate in the universe).

                                  I've been using the terms Material, Natural, and Physical roughly interchangeably. The key is that I reject the idea of a soul or other immaterial mind. I know there are historically distinctions between those definitions, so in hindsight I probably should have been saying "Physicalist" this whole time.

                                  In cases of naturalistic dualism, there are the physical laws of nature as well as psychophysical laws (e.g., those that map the integration of information to conscious states in IIT or maybe even your three-part model) which determine whether a system is "conscious" or to what degree a system is conscious.

                                  In this sense, yes, but I don't think "dualism" is the right word here.


                                  I'd be willing to concede that IIT or something similar probably escapes The Hard Problem when viewed through naturalistic dualism (The Hard Problem was posited originally by the naturalistic dualist David Chalmers), but strictly speaking there does seem to be a difference between this specific view of naturalism and materialism.

I keep reading variations of the claim that emergentism is incompatible with panpsychism, and I don't understand the reasoning. It seems to be the same "explanatory gap" that Chalmers uses to invoke dualism. I suppose it's because people generally identify "consciousness" as a binary property, so you need to invoke some immaterial mechanism to grant it to some things and not others. If you instead view consciousness as a continuous emergent property of a system, then there's no need to invoke dualism. I think that's the key move that IIT and similar frameworks make, but you could avoid the hard problem in any other framework compatible with that view.

                                  Sound is not a fundamental property of matter. How many particles are required to carry a sound wave? At what point do we stop talking about particle motion and start talking about pressure and flow? Yet sound is not immaterial. It is described completely by the physical state of its medium.

                                  So just because consciousness is not a fundamental property of matter, why must we invoke dualism? And in the same way that we can investigate sound at smaller and smaller scales, why is it incompatible with panpsychism to investigate the substrate of consciousness at smaller scales and complexities?

                                  This is what I was talking about when I wrote in the opener:

                                  If you say a thermostat, a microbe, a cat, an infant, an adult all sit on some spectrum of consciousness - then you're really saying that p-zombies cannot exist. If the system is complex enough you'd call it a p-zombie, then it's also complex enough to have an experience. And this seems like such an utterly obvious solution to the "contradiction" implied by p-zombies that I really don't get the confusion.


                                  I have a few socratic questions based on our discussions around the "object" and around that point 6 in the IIT deep dive. These kinds of questions are why I conclude that the "object" is necessary for qualia, and that qualia occur only in the interaction between the system and the "object".

                                  • If a system is incapable of retaining memory for any duration - working, short-term, or long-term memory - can it be conscious? If not, which kinds of memory are required? What is the content of that memory?

                                  • Does the system need to be able to read and write to that memory, or just read, or just write, or neither?

                                  • What if two conscious systems interacted with the same memory? Would either one know of the other's existence? Which kinds of read/write access are required on which kinds of memory for that to happen?

                                  • What if one conscious system interacts with the memory at multiple points in time? How does the causal relationship correspond to read/write access for each interaction? Which interactions are aware of the other?

                                  • What if memory is destroyed or modified between the two interactions? How is this different from distinct conscious systems interacting with distinct memories? What if only parts of the memory are destroyed or modified?

                                  1 vote
                                  1. [8]
                                    RNG
                                    Link Parent
                                    If you think that physical stuff has mental properties that follow these psychophysical laws, you wouldn't be necessarily committed to cartesian dualism which posits that mind and body are...

                                    In this sense, yes, but I don't think "dualism" is the right word here.

If you think that physical stuff has mental properties that follow these psychophysical laws, you wouldn't necessarily be committed to Cartesian dualism, which posits that mind and body are composed of different substances. You'd believe that "stuff" has both physical properties and mental properties, where the mental properties are defined as those which follow the psychophysical laws. I've heard this described as substance vs. property dualism, of which Chalmers holds the latter. I agree you don't have to interpret IIT or other systems through a naturalistic dualist lens; maybe there aren't mental properties at all. But then I don't think we've addressed the Hard Problem - only the Easy Problem. This is even mentioned in Aaronson's criticism:

                                    we can easily interpret IIT as trying to do something more “modest” than solve the Hard Problem, although still staggeringly audacious. Namely, we can say that IIT “merely” aims to tell us which physical systems are associated with consciousness and which aren’t, purely in terms of the systems’ physical organization.

                                    <red herring>

By the way, do you agree with Aaronson's assertion that consciousness is experimentally unavailable? And what do you make of his view that theories of consciousness should be working to solve the Pretty-Hard Problem, which, best I can tell, simply maps the theory onto our seemings of what "should" be conscious and what shouldn't?

I'm not necessarily opposed to this methodology; things in philosophy often bottom out in seemings. That's how most theorizing in moral philosophy works, best I can tell. However, the criticism of moral theories also holds here: what value does a theory have if counterintuitive results count as evidence against the theory itself? One might as well accept the "seemings" axiomatically without a theory, since if one's credence in the model decreases every time it strays from the seemings, one is ultimately doing this anyway.

                                    This debate strongly reminds me of discussions on morality regarding the is-ought problem. You can't get an ought from an is, nor phenomenological experience from the interactions of matter, and you have to smuggle some assumptions in to move the problem around (e.g., Harris assuming human flourishing is morally good axiomatically.)

                                    </red herring>

                                    If you instead view consciousness as an continuous emergent property of a system, then there's no need to invoke dualism. I think that's the key that IIT and something similar does, but you could avoid the hard problem in any other framework compatible with that view.

                                    My understanding is that consciousness is either continuous or emergent. The view that consciousness is continuous seems to just be panpsychism. I don't see how a purely physicalist interpretation of IIT would avoid the Hard Problem (or maybe even the "Pretty-Hard Problem".)

                                    Sound is not a fundamental property of matter. How many particles are required to carry a sound wave? At what point do we stop talking about particle motion and start talking about pressure and flow? Yet sound is not immaterial. It is described completely by the physical state of its medium.

                                    We have to be careful here; there is the physical nature of sound and the experiential nature of sound. The former can fully be derived from the physical interactions of matter and (I would argue) the latter cannot.

Imagine a neuroscientist who is congenitally deaf. She understands the totality of how sound works: how air pressure changes move parts in the ear, how signals are sent to the brain, even how these signals cause a behavioral change in the person. She knows how different wavelengths cause different results in the end behavior of physical matter in the brain. She has total, perfect knowledge of the physical interactions of the system. Imagine a cure becomes available for her deafness, and she goes out in the world and hears for the first time.

                                    The question is, did she learn anything new about hearing? As a matter of definition, we stated at the beginning that she knew everything there is to know about the physical interactions of the experience of hearing. If the totality of the experience of hearing is physical, we have a logical contradiction, since this would entail that what she just learned was physical when she already knew everything there is to know about the physical nature of experiencing hearing.

                                    I have a few socratic questions based on our discussions around the "object" and around that point 6 in the IIT deep dive. These kinds of questions are why I conclude that the "object" is necessary for qualia, and that qualia occur only in the interaction between the system and the "object".

                                    Maybe this is a problem for IIT, but I don't think it's necessarily a problem for panpsychism, especially views such as cosmopanpsychism (the view that the universe is a unified object that is ontologically prior to its parts.) I still need to do work on this though, since this is not the view of panpsychism I've defended in this thread or the one Goff defends.

                                    1 vote
                                    1. [7]
                                      TangibleLight
                                      (edited )
                                      Link Parent
                                      oof! I really should have seen this coming and clarified. It's the classical example. In the analogy I was strictly referring to the physical phenomenon of pressure waves propagating through a...

                                      We have to be careful here; there is the physical nature of sound and the experiential nature of sound. The former can fully be derived from the physical interactions of matter and (I would argue) the latter cannot.

                                      oof! I really should have seen this coming and clarified. It's the classical example.

In the analogy I was strictly referring to the physical phenomenon of pressure waves propagating through a medium. The analogy works just as well for ripples on a pond, the movement of electrons in a transistor, or the dynamo that generates the Earth's magnetic field. In all cases the properties involved are entirely physical, even if they are not fundamental. And in the example of the dynamo, even though the underlying mechanisms are not well understood, we plainly don't need to invoke any kind of dualism to explain it.

                                      If you think that physical stuff has mental properties that follow these psychophysical laws, you wouldn't be necessarily committed to cartesian dualism which posits that mind and body are composed of different substances. You'd believe that "stuff" has both physical properties and mental properties. The mental properties are defined as those which follow the psychophysical laws. I've heard of this described as substance vs. property dualism, of which Chalmers is the latter.

                                      Ah, I think I understand. If you place mental properties in the same category as acoustic properties, or hydrodynamic properties, then yes. They are not fundamental physics, but they are derived from and/or composed of fundamental physics.

                                      Important is that this is not saying that mental properties are fundamental. This is where I get confused with dualism: it seems to claim that mind can exist in some form without a body. I reject this, for the same reason that a vacuum does not have acoustic or hydrodynamic properties. These require a medium. The vacuum does have electromagnetic properties, but electromagnetism is fundamental and mental properties are not.

                                      Also important is that this is not saying that mental properties occur at any scale. This is where I get confused with panpsychism: it seems to claim that mind can fundamentally exist in all matter. I reject this, for the same reason that a single molecule of water does not have acoustic or hydrodynamic properties. These properties only emerge at a sufficient scale. In the same way, mental properties only emerge at sufficient scale and complexity. Too small, and we must instead speak only of fundamental properties like electric charge and particle motion.

                                      "red_herring": {

                                      By the way, do you agree with Aaronson's assertion that consciousness is experimentally unavailable?

I don't think I agree, at least not in principle, but I have to think any experiment that reveals these things must be morally abhorrent. It would either require taking a system which you believe is conscious and mangling it to varying degrees to tell exactly which properties of the system affect it, OR it would require the capability of creating a conscious system from scratch for the express purpose of experimenting on it. The latter kind might be possible in a moral way, but it sort of assumes a preexisting understanding of ethics and consciousness which we couldn't obtain without the former kind. Maybe there's some third option, but I doubt it.

                                      For the philosophical question in this - I think with enough observation and correlations you'd eventually, in principle, identify which structures correspond with which concepts, and which interactions correspond with which kinds of thought based on the behavior. It's a decoding problem.

                                      Whether that "mangling" looks more like changing simulation parameters or lobotomy, if you believe the system is conscious then you're almost certainly inflicting suffering on it.

                                      And what do you make of his view that theories of consciousness should be working to solve the Pretty-Hard Problem which, best I can tell, simply maps the theory onto our seemings of what "should" be conscious and what shouldn't?

                                      Generally I agree, but I think the effort is misplaced. This kind of comment is why I ranted about pragmatic definitions. You need a framework that makes predictions, and you need to frame your definitions in that framework to be useful.

                                      If your framework defines "consciousness" such that hydrogen atoms are conscious, then the definition is useless, not necessarily the framework. You need to find a definition in that framework that aligns with your seemings, and evaluate on intuition using that.

IIT seems to give that more pragmatic definition to "complex". And, whether Tononi admits it or not, their definition for Φ is arbitrary; it's not really derived. Then Aaronson puts a lot of effort into disputing how that definition for Φ leads to absurdities. I think this indicates a problem in the definition, not in the framework. If we poke holes in the definition for Φ and say no, this isn't sufficient for consciousness - that's not a proof that a sufficiency criterion doesn't exist.

Picking a sensible definition also requires understanding that there are multiple aspects to consciousness, even something as "perfectly simple" as pure qualia. You need to focus on which aspects you want to talk about: awareness, memory, reasoning, reflection, ... This is part of why I included those socratic questions at the end of the post. What is pure qualia without awareness? Without memory? If you call that consciousness then I don't think you have a useful definition, even if some instantaneous pure qualia might be in play. Tononi's discussion of the properties of the blank white plane seems to touch on this, but I don't think they elaborate enough on it or say exactly which parts of their model account for it.

I have all the same complaints as Aaronson about the definitions of the terms. Best I can understand them, they seem to map roughly to my three-part model, with different discriminations and clearly with a lot more thought and time behind them, so I tend to agree with the general premise.

                                      But I'd appreciate more concrete, less circular, definitions for the foundational components. The glossary does help a bit but I don't quite grasp it. I'd also appreciate more focus on the physical substrate itself, rather than solely on the "concepts" and other mental properties - if you don't account for representation at all then it's hard to reason about what integration or cause-effect space really mean.

                                      }

                                      I still need to do work on this though, since this is not the view of panpsychism I've defended in this thread or the one Goff defends.

                                      I didn't intend to represent it as such, apologies if I did. I only included those as an aside for clarification on why I use the "object" in my model, since you'd mentioned being uncertain about that in a prior comment, and since reading the linked pages had me thinking about what makes a definition useful.

                                      My understanding is that consciousness is either continuous or emergent. The view that consciousness is continuous seems to just be panpsychism. I don't see how a purely physicalist interpretation of IIT would avoid the Hard Problem (or maybe even the "Pretty-Hard Problem".)

                                      I agree you don't have to interpret IIT or other systems through a naturalistic dualist lens; maybe there aren't mental properties at all, but then I don't think we've addressed the Hard Problem, I think you've only addressed the Easy Problem.

This might be mooted by my clarification about the sound analogy, but I'll respond anyway.

If Φ represents a sufficient criterion for consciousness then it does solve the hard problem. If it only represents a necessary criterion, then it's only solving the easy and/or pretty-hard problems. Tononi clearly believes it does represent a sufficient criterion, and Aaronson clearly does not.

I don't agree that that definition for Φ is sufficient, but I do believe that some framework similar to IIT and some definition like Φ exist which are sufficient, and the existence of some sufficiency criterion does solve the hard problem, whether one lists it out or not.

                                      As far as justification for believing that, I suppose that's where it comes down to seemings for me. IIT seems on the right track. And if I spent a decade working on it like Tononi has, I might end up with some variation of IIT and Φ that, to me anyway, are a bit more rigorous and closer to sufficient. Up till this point I've just hand-waved it as "sufficiently complex" and lean on existing arguments for emergentism.

                                      Maybe this is a problem for IIT, but I don't think it's necessarily a problem for panpsychism, especially views such as cosmopanpsychism (the view that the universe is a unified object that is ontologically prior to its parts.)

I think those questions do still apply, especially the ones around time, read/write memory, and causality. Going down that rabbit hole, for me anyway, really makes me examine what experience really is; I conclude that some kind of read/write working memory is necessary to hold the "object". And if you use that to evaluate something like dualism or panpsychism, you need to consider what that memory is. If you're evaluating panpsychism you encounter issues at the very small (not enough storage) and at the very large (disparate parts of the system are causally disconnected, so cannot be part of the same experience). If you evaluate dualism, you get a free pass at first, since mind (and memory) may be immaterial, but then you encounter issues when physical brain trauma affects memory and identity in unexpected ways.

                                      1 vote
                                      1. [6]
                                        RNG
                                        (edited )
                                        Link Parent
                                        Oh okay, I get what you were driving at in the sound analogy. I did want to see what you thought about the argument, since it seems to derive a contradiction from the materialist perspective....

                                        In all cases the properties involved are entirely physical, even if they are not fundamental.

                                        Oh okay, I get what you were driving at in the sound analogy. I did want to see what you thought about the argument, since it seems to derive a contradiction from the materialist perspective. Something like:

P1 She had total knowledge of all physical facts about the experience of hearing
P2 If materialism is true, all facts about hearing are physical facts
P3 When her hearing was restored, she gained new information about the experience of hearing
C1 By P1 and P3, that new information was not a physical fact
C Materialism is false (by P2 and C1)
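If it helps to see where the work is being done, here is the same argument formalized as a toy (my own rendering; the names are placeholders, and "gained new information" is cashed out as "learned a fact she didn't already know"):

```lean
-- Toy formalization of the knowledge argument above.
-- Phys f    : f is a physical fact about hearing
-- Knew f    : she knew f before the cure
-- Learned f : she learned f when her hearing was restored
example
    (Fact : Type) (Phys Knew Learned : Fact → Prop)
    (materialism : Prop)
    (P1 : ∀ f, Phys f → Knew f)        -- total physical knowledge
    (P2 : materialism → ∀ f, Phys f)   -- materialism: all facts are physical
    (P3 : ∃ f, Learned f ∧ ¬Knew f) :  -- she gained new information
    ¬materialism :=
  fun hm =>
    match P3 with
    | ⟨f, _, hnk⟩ => hnk (P1 f (P2 hm f))  -- the new fact would have to be
                                           -- physical, hence already known
```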

                                        Ah, I think I understand. If you place mental properties in the same category as acoustic properties, or hydrodynamic properties, then yes. They are not fundamental physics, but they are derived from and/or composed of fundamental physics.

                                        Oh I see what you mean. Though I guess the problem of describing mental, experiential properties in terms of underlying physics is the problem we've been talking about. The value add of viewing systems like IIT through the lens of dualism is that you avoid the hard problem if mental properties are fundamental, but you are right, IIT isn't committed to this view.

                                        For the philosophical question in this - I think with enough observation and correlations you'd eventually, in principle, identify which structures correspond with which concepts, and which interactions correspond with which kinds of thought based on the behavior. It's a decoding problem.

I know we're talking about the "PHP" as Chalmers referred to it, but I wonder if you'd agree that this wouldn't touch the hard problem. Yes, we will eventually understand how every structure/interaction correlates with different experiences. The problem for the materialist is that it isn't enough to say that structure "X" gives rise to or correlates with experience "Y"; they have to say that structure "X" is experience "Y".

Going down that rabbit hole, for me anyway, really makes me examine what experience really is; I conclude that some kind of read/write working memory is necessary to hold the "object".

                                        This is certainly true of our experiences in daily life. I think it's still logically coherent to say that fundamental "stuff" is (has?) experience, though I agree that it differs greatly from our common day-to-day experiences, which involve far more complexity.

                                        1 vote
                                        1. [5]
                                          TangibleLight
                                          (edited )
                                          Link Parent
                                          Sure. I think the error is a false equivalence - materialism doesn't necessarily claim that recalling a fact is equivalent to recalling an experience. If one had a total understanding of the...

                                          I did want to see what you thought about the argument, since it seems to derive a contradiction from the materialist perspective. Something like:

P1 She had total knowledge of all physical facts about the experience of hearing
P2 If materialism is true, all facts about hearing are physical facts
P3 When her hearing was restored, she gained new information about the experience of hearing
C1 By P1 and P3, that new information was not a physical fact
C Materialism is false (by P2 and C1)

                                          Sure.

                                          I think the error is a false equivalence - materialism doesn't necessarily claim that recalling a fact is equivalent to recalling an experience.

                                          If one had a total understanding of the brain, they could somehow inject a false memory of the experience of hearing, so that she can recall that sound without actually having experienced it. This is distinct from learning and recalling facts about sound and hearing. With that false memory, she won't learn anything new when her true hearing is finally restored.

I know we're talking about the "PHP" as Chalmers referred to it, but I wonder if you'd agree that this wouldn't touch the hard problem. Yes, we will eventually understand how every structure/interaction correlates with different experiences. The problem for the materialist is that it isn't enough to say that structure "X" gives rise to or correlates with experience "Y"; they have to say that structure "X" is experience "Y".

                                          I think in general you probably can't prove that - but I can give my take on my interpretation of IIT (or something similar) and why I think it could solve the hard problem.

                                          If you had total understanding of the encoding and the systems at play - you know exactly which representations correspond with the memory of hearing a sound, and you know exactly which behaviors of which subsystems correspond with recalling a memory - you can construct a system which recalls hearing a sound. That system is experiencing that memory. Or you could construct the relevant audio processing systems, and encode the signal from a microphone into the appropriate representation; then that system is experiencing the sound. Although if the system doesn't also have awareness * or memory I'm not sure I'd call that a useful definition of "conscious".

                                          This is certainly true of our experiences in daily life. I think it's still logically coherent to say that fundamental "stuff" is (has?) experience, though I agree that it differs greatly from our common day-to-day experiences, which involve far more complexity.

                                          Right.

With the physical analogy: you can't claim a single water molecule has surface tension, but you could probably claim that a few dozen do. Surely it doesn't have quite the intuitive meaning there, but the molecules would tend to stick together and arrange themselves in certain ways. I still doubt you could claim just a few dozen molecules have viscosity or drag, though, certainly not in the intuitive sense. There's just not enough mass or volume involved for them to interact with a macroscopic object in that way. Add a few moles more water molecules though - pretty plainly it has viscosity and drag by any measure. The exact cutoff is arbitrary and depends on your definitions for those higher order properties.


                                          * Edit: this has been lingering on my mind. I think I might be begging the question here. When I say awareness I'm referring to some quality of the system that allows it to reflect on its experience. Like, it's one thing to have the pure qualia of seeing blue. It's another thing to interpret that blueness as located somewhere in space, on a plane, or on some surface, etc. And it's yet another thing to reflect on that, and reference it against other information, and take meaning of what that blue object is. I'm saying a "useful definition" of consciousness needs those latter things, but you could have the former in isolation in a much simpler system. I haven't fully worked out what "those latter things" are exactly, but bear in mind this is how I'm drawing that line. It's arbitrary. So in the most permissive sense, yes, even the pure qualia is consciousness.

                                          1 vote
                                          1. [4]
                                            RNG
                                            Link Parent

                                            If one had a total understanding of the brain, they could somehow inject a false memory of the experience of hearing, so that she can recall that sound without actually having experienced it. This is distinct from learning and recalling facts about sound and hearing. With that false memory, she won't learn anything new when her true hearing is finally restored.

                                            I think this entire line of thought may misunderstand the position of the argument. Whether or not memories can be physically encoded is not the hard problem, nor is the ability to recall said memories. What's problematic is believing that experience itself is identical with physical matter.

                                            If one had a total understanding of the brain, they could somehow inject a false memory of the experience of hearing, so that she can recall that sound without actually having experienced it.

                                            Yes, in my view, providing her with this physically-encoded memory about a non-physical experience would allow her to experience sounds. Crucially, this would provide her information about hearing that is non-physical (the medium is physical, the data is not), and therefore something she did not know before, even though she knew all physical facts about the experience of hearing.

                                            With the physical analogy: you can't claim a single water molecule has surface tension, but you could probably claim that a few dozen do. Granted, it doesn't have quite the intuitive meaning there, but the molecules would tend to stick together and arrange themselves in certain ways. I still doubt you could claim just a few dozen molecules have viscosity or drag, though, certainly not in the intuitive sense. There's just not enough mass or volume involved for them to interact with a macroscopic object in that way. Add a few moles more water molecules, though - pretty plainly the water has viscosity and drag by any measure. The exact cutoff is arbitrary and depends on your definitions for those higher-order properties.

                                            Sure, it seems plausible to me that consciousness may combine in ways that result in more complex phenomena, similar to surface tension, sound waves, etc.

                                            Like, it's one thing to have the pure qualia of seeing blue. It's another thing to interpret that blueness as located somewhere in space, on a plane, or on some surface, etc. And it's yet another thing to reflect on that, and reference it against other information, and take meaning of what that blue object is.

                                            That's a good distinction; the first is how I've been using "consciousness", especially as we get to simpler and simpler things.

                                            I'm saying a "useful definition" of consciousness needs those latter things, but you could have the former in isolation in a much simpler system. I haven't fully worked out what "those latter things" are exactly, but bear in mind this is how I'm drawing that line. It's arbitrary. So in the most permissive sense, yes, even the pure qualia is consciousness.

                                            There are lots of "useful" aspects to consciousness as we experience it that may be explained by some theory that doesn't bear on the hard problem at all (e.g., IIT). The problem is getting experience out of the physical at all, not merely that physics can't yet explain human consciousness (which would be a "panpsychism of the gaps".)

                                            1 vote
                                            1. [3]
                                              TangibleLight
                                              (edited)
                                              Link Parent

                                              That's a good distinction; the first is how I've been using "consciousness", especially as we get to simpler and simpler things.

                                              First - this isn't how I've been using the word, but that's fine. I'll take that usage from now on to avoid more rambling about "useful definitions". Reading back through my prior comments, good grief there's too much of that.

                                              What's problematic is believing that experience itself is identical with physical matter. [...] Crucially, this would provide her information about hearing that is non-physical (the medium is physical, the data is not) [...]

                                              I suspect we are just arguing semantics and I may actually be a dualist.

                                              I think this entire line of thought may misunderstand the position of the argument.

                                              I still think you're falsely equating "recall of facts about a mental state" with "direct recall of that mental state" - this difference is critical to properly formulating the argument. However, I think you are right here and I missed the point. If I try to see past that, the argument seems to be more about the nature of information and properties.

                                              information [...] that is non-physical [in contrast to] all physical facts about the experience of hearing

                                              This distinction doesn't make sense to me. To me it is a given that all information is non-physical, and is distinct from any particular physical encoding of that information. The number 5 is not a physical object. Primality is not a physical object. Viscosity is not a physical object. Neither is consciousness. So I may be a dualist, if physicalism is incompatible with this view of information.

                                              Physical systems can manifest these properties, though, like how a group of particles has some "count of 5" property. Or a particular arrangement of particles in a uniform grid manifests some "composite number count". Or a configuration of many many particles has viscosity.

                                              And, indirectly, I think this is how I've been "solving" the hard problem, and why you're saying I'm really only solving the pretty hard problem. When I say "one could identify which structures encode which experiences" and claim to solve the hard problem, I think what I'm really saying is this:

                                              The experience is an abstract property that exists just as much as 5 or primality or viscosity; but there is a (pretty) hard problem of why some physical systems have that property. If you can identify how the physical system encodes the property, then you've solved the (pretty) hard problem of why consciousness is manifested in the physical universe. Then IIT (or something similar) is a means of modeling and examining the physical system to help identify how it encodes those properties.


                                              I guess, to put it another way, I can consider a Boltzmann Brain in the abstract, and I have to say that it is conscious, by definition. But a Boltzmann Brain is not a real physical system. Why is my brain, a physical system, conscious?

                                              I thought that's the hard problem, but after really examining what I'm talking about when I say "property", I guess I am saying experience is non-physical, so I'm a dualist, and that's only the pretty hard problem.

                                              To take it further - I also have to reason that the physical system of my brain is not necessarily the only structure that has the abstract property of my consciousness. A simulation of my brain would. There might even be some non-brainlike structure that, although not resembling a human brain, still has the same property of my consciousness. The property though is abstract, and all of these structures are just distinct things that manifest it. They're all members of an equivalence class.

                                              When I say I'm a physicalist I mean that the physical system of my body does indeed have that property. I don't need to invoke a soul that thinks it lives in a body with that property when really it's the soul that has that property. I could invoke a soul anyway, but it's redundant, since my body is the at-least-one-element in the equivalence class of "things that manifest my consciousness." I don't need to consider anything other than the physical.

                                              And this is really what I was getting at in your other thread, though I don't think I phrased it all that well in hindsight. This discussion has been very helpful in formulating it in more detail.

                                              1. [2]
                                                RNG
                                                Link Parent

                                                First - this isn't how I've been using the word, but that's fine. I'll take that usage from now on to avoid more rambling about "useful definitions". Reading back through my prior comments, good grief there's too much of that.

                                                I'm sorry for any confusion, this is how I understood we were using the term.

                                                I still think you're falsely equating "recall of facts about a mental state" with "direct recall of that mental state" - this difference is critical to properly formulating the argument.

                                                I don't think the argument relies on memories of facts and memories of experience being identical. I don't think the phenomenological properties of recalling different types of memories play a role here at all. It centers on information: did she gain information she didn't have before? The classic version of this involves sight rather than hearing. If you could suddenly see color, could you tell blue from yellow if you knew all the physical facts of sight? It seems like this information about experience isn't something you learn from knowing all of the physical interactions of light/cells/neurons. This is generally known as "the knowledge argument" and, as you state, it is fundamentally a problem related to information.

                                                This distinction doesn't make sense to me. To me it is a given that all information is non-physical, and is distinct from any particular physical encoding of that information. The number 5 is not a physical object. Primality is not a physical object. Viscosity is not a physical object. Neither is consciousness. So I may be a dualist, if physicalism is incompatible with this view of information.

                                                I think the question of whether information itself is real is a debate between realists and nominalists and doesn't bear on the argument here. It's not my view that information must be physically real on the physicalist view. The issue for me is that there seems to be information about non-physical experience that is learned when hearing is restored, even though she already had all of the information about the physical interactions related to experience. The medium where information is stored and the nature of information itself have no bearing on the argument.

                                                The experience is an abstract property that exists just as much as 5 or primality or viscosity; but there is a (pretty) hard problem of why some physical systems have that property. If you can identify how the physical system encodes the property, then you've solved the (pretty) hard problem of why consciousness is manifested in the physical universe. Then IIT (or something similar) is a means of modeling and examining the physical system to help identify how it encodes those properties.

                                                I'm a nominalist; I don't think that "5" exists objectively, or is a property of reality. To me, it is a human construct that helps us make sense of reality. You could take the late Daniel Dennett's approach and say that consciousness is an illusion and doesn't exist either, but that's a really tough pill to swallow.

                                                I guess, to put it another way, I can consider a Boltzmann Brain in the abstract, and I have to say that it is conscious, by definition. But a Boltzmann Brain is not a real physical system. Why is my brain, a physical system, conscious?

                                                I thought that's the hard problem, but after really examining what I'm talking about when I say "property", I guess I am saying experience is non-physical, so I'm a dualist, and that's only the pretty hard problem.

                                                To take it further - I also have to reason that the physical system of my brain is not necessarily the only structure that has the abstract property of my consciousness. A simulation of my brain would. There might even be some non-brainlike structure that, although not resembling a human brain, still has the same property of my consciousness. The property though is abstract, and all of these structures are just distinct things that manifest it. They're all members of an equivalence class.

                                                Yeah, I think one valid approach is dualism: holding that there must be psycho-physical laws out there that determine to what degree various structures (Boltzmann Brains, humans, insects, etc.) have consciousness. Bertrand Russell thought that panpsychism was more parsimonious since it involves assuming fewer laws, but regardless, this completely solves the hard problem.

                                                When I say I'm a physicalist I mean that the physical system of my body does indeed have that property. I don't need to invoke a soul that thinks it lives in a body with that property when really it's the soul that has that property. I could invoke a soul anyway, but it's redundant, since my body is the at-least-one-element in the equivalence class of "things that manifest my consciousness." I don't need to consider anything other than the physical.

                                                I think this is what makes property dualism preferable to substance dualism. It seems more parsimonious: it avoids invoking a bunch of stuff that increases the theory's intellectual commitments without increasing its explanatory power. It also avoids panpsychism's combination problem.

                                                From your comment on my other thread:

                                                So, the best answer I can come up with, the one that seems to take the fewest assumptions and assertions, is this:

                                                The universe seems entirely governed by mathematical laws. A hypothetical simulator that takes the state of all the content of the universe, and accounts for all the correct laws, would accurately simulate my consciousness and experience. In that way, my experience is a property of those laws and state. And it is a property regardless of whether that hypothetical simulation actually runs or not, so there's no reason to assume it does. Given how complex that simulator would have to be, it's probably better to assume the simulation does not run.

                                                Then there is no ontological question for the simulator itself: my consciousness is still a property of those laws and state, so I do experience it.

                                                This seems to be perfectly compatible with naturalistic dualism, if among those laws are psycho-physical laws that act on the mental properties of stuff to determine whether stuff is in a conscious configuration or not (or to what degree the configuration is conscious.)

                                                1. TangibleLight
                                                  (edited)
                                                  Link Parent

                                                  I don't think the phenomenological properties of recalling different types of memories play a role here at all. It centers on information: did she gain information she didn't have before? The classic version of this involves sight rather than hearing.

                                                  The issue for me is that there seems to be information about non-physical experience that is learned when hearing is restored, even though she already had all of the information about the physical interactions related to experience.

                                                  The problem is: the only way she can learn the information, without sight, is by learning it as facts and recalling those facts. I say the facts do hold all the information, but they're stuck in the wrong representation. Then when her sight is restored she gets the same information in a different representation, in a different part of her brain, and is surprised that the representation is new. That doesn't imply that the information content is different.

                                                  If she had the ability to manually modify her memory, she could take the same information content of those memories-of-facts and change the representation to the same one she would encounter with true sight; then she wouldn't be surprised. (Edit: Or, she could implant the mental state of sight directly, without going through the indirection of "recalling a memory.")

                                                  I think the question of whether information itself is real is a debate between realists and nominalists and doesn't bear on the argument here. It's not my view that information must be physically real on the physicalist view.

                                                  This was my understanding, and why I claimed to be a physicalist. But at some point - especially with the content below - I can see how you'd make an argument that I'm actually a naturalistic dualist or something else. And, honestly, I think I'd be okay calling myself that after this discussion.

                                                  I'm a nominalist; I don't think that "5" exists objectively, or is a property of reality. To me, it is a human construct that helps us make sense of reality. You could take the late Daniel Dennett's approach and say that consciousness is an illusion and doesn't exist either, but that's a really tough pill to swallow.

                                                  I don't think you need to say 5 exists for it to have the properties that it does. You say: if I take the ZFC axioms, then I come across this object that I label "5". It's odd. It's prime. It divides that other object "25". It doesn't matter what I label it, or even if I label it; that structure is there regardless, encoded in the axioms.

                                                  Really that structure is encoded in the implication: if you consider the axioms then you encounter this structure. That implication doesn't depend on the existence of either of its parts.
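
                                                  As a minimal illustration (a sketch assuming Lean 4 with Mathlib; thatObject is a made-up label, the rest are Mathlib names), the proofs go through no matter what the numeral is called:

                                                  ```lean
                                                  import Mathlib

                                                  -- The structure is forced by the axioms, not by the label "5".
                                                  example : Nat.Prime 5 := by norm_num  -- it's prime
                                                  example : (5 : ℕ) ∣ 25 := ⟨5, rfl⟩    -- it divides 25

                                                  -- Relabel it: nothing about the structure changes.
                                                  def thatObject : ℕ := 5
                                                  example : Nat.Prime thatObject := by norm_num [thatObject]
                                                  ```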

                                                  There's an infinite set of integers that no person has ever conceived of before. Do they all have prime factors? Do they all divide other integers? Does that imply those integers exist?

                                                  And I'd also argue: there's an infinite set of systems of axioms that no person has ever conceived of before. Do they all encode their corresponding structures? Does that imply the structures exist?

                                                  I don't think that really implies the structures exist objectively; but, if any of those structures were conscious, it would think it did exist.

                                                  If you believe it's possible to construct an AI program that is conscious - what of all the infinitely many Turing machines, exhaustively computing everything that can be computed? Any AI program you could consider is somewhere in there.
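
                                                  That picture of exhaustively computing everything computable is the classic dovetailing construction. A toy sketch (Python; machine is a hypothetical stand-in indexed by n, not a real universal machine):

                                                  ```python
                                                  def machine(n):
                                                      """Stand-in for "the n-th Turing machine": a generator that
                                                      halts after n steps when n is even, and runs forever when n
                                                      is odd."""
                                                      steps = 0
                                                      while n % 2 == 1 or steps < n:
                                                          steps += 1
                                                          yield steps

                                                  def dovetail(stages):
                                                      """Interleave all machines: at stage s, start machine s and
                                                      give every machine started so far one more step. Non-halting
                                                      machines never block the rest, yet every machine receives
                                                      unboundedly many steps as the stages grow."""
                                                      running, halted = {}, []
                                                      for stage in range(stages):
                                                          running[stage] = machine(stage)
                                                          for idx, gen in list(running.items()):
                                                              try:
                                                                  next(gen)
                                                              except StopIteration:
                                                                  halted.append(idx)
                                                                  del running[idx]
                                                      return halted

                                                  print(dovetail(20))  # [0, 2, 4, 6, 8]: even machines halt as stages grow
                                                  ```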

                                                  If you believe the physical universe follows logically consistent laws of nature - what of a system of axioms that encodes those laws? You and I are somewhere in there.

      2. [3]
        RobotOverlord525
        Link Parent

        When psychologists and philosophers refer to "consciousness," they are typically referring to subjective experience—the felt sense of phenomenal awareness that each of us has in observing the world and our own minds. This is different from the broader notion of cognition, which encompasses all of the information processing that occurs in the brain/mind, including unconscious mechanisms.

        Cognitive processes like perception, attention, memory, reasoning, and language comprehension can and do operate without conscious awareness to a large degree. For example, when typing fluently, the processes of motor planning, retrieving words from memory, translating concepts into language, and monitoring feedback all emerge from underlying neural operations that are not consciously perceived or controlled in a step-wise fashion. They arise automatically and effortlessly from our subconscious cognitive architecture.

        Consciousness, then, refers specifically to those cognitive processes and mental states that we subjectively experience from a first-person perspective: the felt qualitative characteristics of seeing a vivid shade of blue, tasting a mouthwatering flavor, or feeling an intense emotion. So while all conscious experiences necessarily involve underlying cognition, not all cognitive activities result in conscious experiences.

        While the split-brain research provides striking evidence of separate conscious experiences arising from each hemisphere, we must be cautious about overstating the conclusions. The findings compellingly demonstrate that when connectivity between the hemispheres is severed, each can generate its own coherent processing stream capable of driving behavior and cognitive contents detached from the other hemisphere's influence. However, we should avoid prematurely reifying these dissociations as definitive proof of two qualitatively equal yet fragmented "conscious minds" operating within one person. After all, our current methods only allow us to make indirect inferences about conscious experience itself based on observable data—we cannot directly measure or verify the presence of subjective experience in either hemisphere.

        1 vote
        1. [2]
          RoyalHenOil
          Link Parent

          When psychologists and philosophers refer to "consciousness," they are typically referring to subjective experience—the felt sense of phenomenal awareness that each of us has in observing the world and our own minds.

          Yes, that is precisely what I am discussing as well.

          Cognitive processes like perception, attention, memory, reasoning, and language comprehension can and do operate without conscious awareness to a large degree.

          This is a bold claim. As far as I am aware, we do not have any reliable way of determining whether a given entity has conscious experience if it cannot specifically tell us that it does (and even if it can, that may not actually tell us much, as in the case of LLMs). If it cannot tell us, it may still be conscious — for example, I have early-childhood memories of conscious experiences that pre-date my ability to communicate them.

          Subjectively, it does feel like just a portion of my brain is conscious while most functions are unconscious. But then it also often feels like I am the only truly conscious being and virtually everyone else is effectively an NPC. That's because I only have direct access to my own consciousness, and I am forced to infer it indirectly in other people — which I don't always take the time to do, and even when I do, it often turns out that my inferences are wrong (e.g., I was startled to discover that some people internally voice all of their thoughts, which is very foreign to me and suggests that our internal experiences may be wildly different). My own consciousness is the only consciousness I can verify and understand, and everything else is a kind of an unknowable black box that I make conjectures about.

          I have no reason to assume that the other portions or functions of my brain are unconscious, just as I have no reason to assume that lizards are unconscious or that newborn babies are unconscious. I simply do not — and cannot — know if they have subjective experiences or not. The best I can do is make an educated guess based on their characteristics and behavior, but it would be intellectually dishonest for me to assert that they are conscious or unconscious.

          Now, for medical purposes, it is useful to make some assumptions about consciousness. For example, we assume that general anesthesia induces unconsciousness because people who undergo it report a loss of consciousness. But strange things can happen under general anesthesia, which illustrate that we can't actually prove that an entity is unconscious.

          3 votes
          1. RobotOverlord525
            Link Parent

            This is a bold claim.

            Indeed. Perhaps I can unpack it a bit.

            The cocktail party phenomenon provides an excellent illustration of the separation between cognition and consciousness. It refers to the ability of the auditory system to selectively attend to and process a particular auditory stream (like voices at a cocktail party) amidst a cacophony of other simultaneous sounds and conversations.

            At a cognitive level, our auditory perception system is constantly processing and parsing all the incoming auditory information in parallel, separating it into distinct auditory streams based on cues like location, pitch, timbre, etc. This complex computational work of auditory scene analysis occurs automatically and outside of conscious awareness.

            However, at any given moment, we are only consciously aware of or attentively focused on one specific aspect of the auditory scene—the conversation we are actively listening to and comprehending. The rest of the auditory streams (like background chatter, music, clinking glasses, etc.) are still being cognitively processed at some level, but in a manner that does not reach conscious perception. Something in our heads is parsing what we are "sensing" but not (consciously) perceiving, or we wouldn't be able to suddenly become aware that our name had been spoken by someone we weren't paying attention to. In other words: cognition (translating the auditory information into specific concepts in the language centers of our brains) without consciousness.

            I may not be doing this particular phenomenon justice. I remember it being quite mind-blowing in my "Sensation and Perception" class back in college. And I think it's relevant in discussing the potential differences between cognition and consciousness.

            But strange things can happen under general anesthesia, which illustrate that we can't actually prove that an entity is unconscious.

            I actually have first-hand experience with this one. When I was younger, around 18 or 19, I had an esophagogastroduodenoscopy performed. I remember being told that, after they administered the anesthetic, I would lose consciousness. I remember trying to concentrate in order to perceive the moment in which I lost consciousness. Alas, I failed. I woke up sometime later in a hospital bed. However, I was quickly told that I had never actually "fallen asleep," despite what I felt and remembered. Indeed, I was told that, before the procedure had begun, I had kept telling the nurses and doctors that I "gagged easily," which sounds entirely plausible.

            Upon learning this, I had a full-blown existential crisis. If I couldn't remember any of that, could I really say that it was me who did it? If my consciousness—effectively, my self—could be so easily "suspended," what did that say about the fragility of my existence? It was as if I had died and some other entity had been occupying my body until I came back to life.

            One possibility is that the anesthetic simply disrupted memory encoding while my subjective experience continued uninterrupted—my stream of consciousness flowing steadily without forming any recollection. Yet there's also the profoundly unsettling prospect that in those moments, I became a philosophical zombie—an entity exhibiting outwardly normal human behavior but altogether lacking the subjective essence of inner awareness. Although it occurs to me that the first-person pronoun may not be appropriate here.

    3. [11]
      nosewings
      Link Parent

      We can also imagine a biological intelligence, maybe a human, that functions identically to a human mind, yet has no conscious experience itself; it merely processes inputs/outputs and potentially updates weights in its brain.

      This point is debatable. We might think we can imagine it, but can we, really, at least within the laws of physics? That is, one can assert that P-zombies are physically impossible (or, if you balk at the word "physically", then impossible within the laws of nature of our universe), and that imagining a P-zombie is essentially the same as imagining Superman.

      5 votes
      1. [10]
        RNG
        Link Parent

        one can assert that P-zombies are physically impossible (or, if you balk at the word "physically", then impossible within the laws of nature of our universe), and that imagining a P-zombie is essentially the same as imagining Superman.

        While this is a valid objection (if one can justify it), I'm skeptical that p-zombies are metaphysically impossible. I think if one makes this objection they have a burden of proof to justify why a p-zombie couldn't possibly exist.

        1. [9]
          nosewings
          Link Parent

          I wouldn't suggest that they are metaphysically impossible---I don't actually believe that. But I do think that they are physically impossible; i.e., impossible within the laws of the actual universe.

          1 vote
          1. [8]
            RNG
            Link Parent

            But I do think that they are physically impossible; i.e., impossible within the laws of the actual universe.

            That's still a pretty strong claim; how do you justify this position?

            1 vote
            1. [7]
              unkz
              Link Parent

              If something is physically identical to a non-p-zombie, then it will have the same behaviour as a non-p-zombie. Which means if a non-p-zombie has consciousness, then this other thing must have consciousness. To a high degree of certainty, other people are made of the same stuff as me, which implies that if I have consciousness, then they must too.

              1 vote
              1. [6]
                RNG
                Link Parent

                I'm not committed to the position that a p-zombie is necessarily physically identical to some specific instance of a person; rather, they are merely biologically human. It doesn't follow from the fact that one human has property "X" (in this case, conscious experience) that it is impossible for any human to lack property "X".

                1. [5]
                  unkz
                  Link Parent

                  That seems like a large leap. What justification is there to suspect that another human, that is generally similar to other humans, happens to function radically differently?

                  1 vote
                  1. [4]
                    RNG
                    Link Parent

                    What justification is there to suspect that another human, that is generally similar to other humans, happens to function radically differently?

                    By definition, p-zombies function identically to other humans. I'm not arguing for solipsism; I'm merely demonstrating that the "problem of other minds" isn't a physical/metaphysical problem, but an epistemic one. I'm using the fact that it is epistemically possible for a p-zombie to exist as a thought experiment to draw a distinction between information processing (like what software or AI does) and what some philosophers call qualia or first-hand experiences (like experiencing "blueness", not merely storing data about the wavelength of light.)

                    To completely close the door on the epistemic possibility of p-zombies in my thought experiment, it would need to be demonstrated that p-zombies are a physical or metaphysical impossibility, which I don't think there's good evidence for.

                    1. [3]
                      unkz
                      Link Parent

                      I basically disagree that p-zombies are epistemically possible. Human behaviour, and other complex systems, are emergent properties of the arrangement of matter and energy in our bodies, and much of our behaviour is dependent on consciousness. Therefore, the existence of that behaviour is itself proof of the existence of its dependencies.

                      1 vote
                      1. [2]
                        RNG
                        Link Parent

                        Human behaviour, and other complex systems, are emergent properties of the arrangement of matter and energy in our bodies

                        Agreed, and this would be true of p-zombies as well.

                        and much of our behaviour is dependent on consciousness

                        I think it is possible that the totality of human behavior can be explained as deterministic interactions of matter rather than consciousness. Neural network weight updates, hormones, action potentials: the total sum of these systems can explain how a human works without ever entailing that there is some sort of qualia or phenomenological experience of the system itself.

                        I cannot conceive of a single behavior that is truly dependent on the way we are using the word consciousness here: qualia or phenomenological experience. All of it can be explained by soulless material interactions.

                        2 votes
                        1. unkz
                          (edited)
                          Link Parent

                          I think it is possible that the totality of human behavior can be explained as deterministic interactions of matter

                          I think this is actually pretty obviously true, with the slight exception that I don't think the universe is deterministic.

                          All of it can be explained by soulless material interactions.

                          Yes, definitely.

                          I don't think we are really using the word consciousness in the same way, though, and I'm not sure the way you are using it has any clear meaning to me at all.

                          What you are describing sounds to me like something that even inanimate objects might experience. Put another way, I can't quite see how you demarcate experiencing qualia from not experiencing qualia.

  2. [5]
    Turtle
    Link

    Around the age of eight or so, I saw a vision of Jesus' ascension and heard a loud voice that said "this is the power of God" (this completely blocked out all other sensory input). Immediately I was filled with the strongest desire to worship God through singing (I was at a worship service but wasn't really paying attention before). I was not an atheist at the time, but the memory of this experience is what brought me back from a 5-6 year period of atheism in my teens.

    4 votes
    1. [2]
      JackA
      Link Parent

      I don't mean to fully activate my inner argumentative atheist, but faith still fascinates me, so I must ask. If you've taken this as a religious sign to reaffirm your belief, have you pondered at all why you were given this vision, allowing you to bypass the requirement of blind faith, while others weren't?

      I ask almost out of envy, as I would love to be shown a sign that would instantly and truly prove a truth of the universe and allow me to accept an external purpose to existence; yet despite years of begging in prayer during my youth as atheist thoughts developed, it simply never came. I'd have to posit the question from your perspective: why were you "saved" from atheism while so many of us were "abandoned" to let our faith wither and die? No church service ever granted me visions, and I never made the choice for my developing brain to start questioning faith. Why must I now be punished?

      3 votes
      1. Turtle
        Link Parent

        I lean pretty heavily towards universalist theology (essentially, hell is completely purgatorial) and believe everyone will ultimately meet God and be healed/saved through that relationship, so it's really not as pressing an issue for me as it is for most other Christians. With that being said, I really don't have a satisfying answer to your question, sorry.

    2. [2]
      unkz
      Link Parent

      Do you frequently have visions? That's interesting.

      2 votes
      1. Turtle
        Link Parent

        Nope, never had anything remotely like that happen again, dreaming or awake or half-awake. The rest of my life's experience with spirituality has been totally mundane, unfortunately.

        1 vote
  3. [5]
    Baeocystin
    Link

    Posting two relevant sci-fi books by Peter Watts that deal directly with the topic:

    Blindsight, which can be read in its entirety on the author's website: https://www.rifters.com/real/Blindsight.htm

    The parallel/sequel (same universe, same time as Blindsight, different group) Echopraxia: https://www.amazon.com/Echopraxia-Peter-Watts/dp/0765328038#customerReviews

    If you like pondering meditations on the Mind's I, they're a fantastic chew.

    4 votes
    1. [4]
      paris
      Link Parent

      Unrelated: I had no idea he put the entire book up on his site. I have no idea how such a thing could be possible in our current world of copyright.

      1. [3]
        vord
        Link Parent

        Authors have the right to do whatever they want with their work. It's only if they become beholden to publishers that their hands become tied.

        Cory Doctorow has released many of his books under a Creative Commons license. As such, you can find many of his ebooks on Project Gutenberg for free.

        2 votes
        1. [2]
          paris
          Link Parent

          I had assumed that any author whose book was published on paper would be unable to do whatever they wanted with their text. Good to know that isn't always the case.

          1 vote
          1. vord
            Link Parent

            It's definitely a bit rarer, especially in the before times.

            Much like musicians, generally only those with the biggest star power really have the leverage to have their cake and eat it too.

            Though with small authors I think it's a matter of them shopping around for better contracts (and being willing to accept less for retaining those rights).

            1 vote
  4. SloMoMonday
    Link

    Two things that really messed up my comfortable idea of natural reality were compound trauma and psychedelics.

    With psychedelics, on a logical level I understand that I'm a complex chemical computer and I tossed in some chemical soup that messed with processing. But while this may sound stereotypical, my two experiences (mushrooms and a trusted friend's mystery cocktail) sort of left me with this recurring sense of wonder at the overwhelming scale of reality. The idea that I did not exist for almost infinity and will not exist for almost infinity, and that the very idea of "now" is just a finite instance of time that can somehow contain infinite data. And we humans can only perceive within certain limitations, and our tools of perception are limited by our comprehension. And our comprehension of everything amounts to looking through the cracked door of a massive library. (I told you, stereotypical stoner stuff)

    This may sound like religious apologetics or mythospiritual mumbo jumbo, but I still have no faith in human-defined supernatural ideas. I'm just also less sure about the ability of current scientific tools and methods to fully understand the universe.

    And on the other hand, I fully acknowledge that my beliefs born of grief are irrational and a coping mechanism, but they get me to sleep at night.

    It's just a single hope that there is anything at all on the other side, and that we simply can't see it from here.

    2 votes