TangibleLight's recent activity

  1. Comment on React, Electron, and LLMs have a common purpose: the labour arbitrage theory of dev tool popularity in ~comp

  2. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    That's a good distinction; the first is how I've been using consciousness, especially as we get to simpler and simpler things.

    First - this isn't how I've been using the word, but that's fine. I'll take that usage from now on to avoid more rambling about "useful definitions". Reading back through my prior comments, good grief there's too much of that.

    What's problematic is believing that experience itself is identical with physical matter. [...] Crucially, this would provide her information about hearing that is non-physical (the medium is physical, the data is not) [...]

    I suspect we are just arguing semantics and I may actually be a dualist.

    I think this entire line of thought may misunderstand the position of the argument.

    I still think you're falsely equating "recall of facts about a mental state" with "direct recall of that mental state" - this difference is critical to properly formulating the argument. However, I think you are right here and I missed the point. Trying to see past that, I think the argument is really more about the nature of information and property.

    information [...] that is non-physical [in contrast to] all physical facts about the experience of hearing

    This distinction doesn't make sense to me. To me it is a given that all information is non-physical, and is distinct from any particular physical encoding of that information. The number 5 is not a physical object. Primality is not a physical object. Viscosity is not a physical object. Neither is consciousness. So I may be a dualist, if physicalism is incompatible with this view of information.

    Physical systems can manifest these properties, though, like how a group of particles has some "count of 5" property. Or a particular arrangement of particles in a uniform grid manifests some "composite number count". Or a configuration of many many particles has viscosity.

    And, indirectly, I think this is how I've been "solving" the hard problem, and why you're saying I'm really only solving the pretty hard problem. When I say "one could identify which structures encode which experiences" and claim to solve the hard problem, I think what I'm really saying is this:

    The experience is an abstract property that exists just as much as 5 or primality or viscosity; but there is a (pretty) hard problem of why some physical systems have that property. If you can identify how the physical system encodes the property, then you've solved the (pretty) hard problem of why consciousness is manifested in the physical universe. Then IIT (or something similar) is a means of modeling and examining the physical system to help identify how it encodes those properties.


    I guess, to put it another way, I can consider a Boltzmann Brain in the abstract, and I have to say that it is conscious, by definition. But a Boltzmann Brain is not a real physical system. Why is my brain, a physical system, conscious?

    I thought that's the hard problem, but after really examining what I'm talking about when I say "property", I guess I am saying experience is non-physical, so I'm a dualist, and that's only the pretty hard problem.

    To take it further - I also have to reason that the physical system of my brain is not necessarily the only structure that has the abstract property of my consciousness. A simulation of my brain would. There might even be some non-brainlike structure that, although not resembling a human brain, still has the same property of my consciousness. The property though is abstract, and all of these structures are just distinct things that manifest it. They're all members of an equivalence class.

    When I say I'm a physicalist I mean that the physical system of my body does indeed have that property. I don't need to invoke a soul that thinks it lives in a body with that property, but really it's just the soul that has that property. I could invoke a soul anyway, but it's redundant, since my body is the at-least-one element in the equivalence class of "things that manifest my consciousness." I don't need to consider anything other than the physical.

    And this is really what I was getting at in your other thread, though in hindsight I don't think I phrased it all that well. This discussion has been very helpful in formulating it in more detail.

  3. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    I did want to see what you thought about the argument, since it seems to derive a contradiction from the materialist perspective. Something like:

    P1 She had total knowledge of all physical facts about the experience of hearing
    P2 If materialism is true, all facts about hearing are physical facts
    P3 When her hearing was restored, she gained new information about the experience of hearing
    C Materialism is false

    (The unstated step: P1 and P2 together entail she already knew everything there was to know about hearing, which contradicts P3 - so the materialist premise, P2, must be rejected.)

    Sure.

    I think the error is a false equivalence - materialism doesn't necessarily claim that recalling a fact is equivalent to recalling an experience.

    If one had a total understanding of the brain, they could somehow inject a false memory of the experience of hearing, so that she could recall that sound without ever actually having experienced it. This is distinct from learning and recalling facts about sound and hearing. With that false memory, she would learn nothing new when her true hearing is finally restored.

    I know we're talking about the "PHP" as Chalmers referred to it, but I wonder if you'd agree that this wouldn't touch the hard problem. Yes, we will eventually understand how every structure/interaction correlates with different experiences. The problem for the materialist is that I don't think they can say "X" structure gives rise to or correlates with "Y" experience, but that "X" structure is "Y" experience.

    I think in general you probably can't prove that - but I can give my take on my interpretation of IIT (or something similar) and why I think it could solve the hard problem.

    If you had total understanding of the encoding and the systems at play - you know exactly which representations correspond with the memory of hearing a sound, and you know exactly which behaviors of which subsystems correspond with recalling a memory - you can construct a system which recalls hearing a sound. That system is experiencing that memory. Or you could construct the relevant audio processing systems, and encode the signal from a microphone into the appropriate representation; then that system is experiencing the sound. Although if the system doesn't also have awareness* or memory I'm not sure I'd call that a useful definition of "conscious".

    This is certainly true of our experiences in daily life. I think it's still logically coherent to say that fundamental "stuff" is (has?) experience, though I agree that it differs greatly from our common day-to-day experiences, which involve far more complexity.

    Right.

    With the physical analogy: you can't claim a single water molecule has surface tension, but you could probably claim that a few dozen do. Surely it doesn't have quite the intuitive meaning there, but the molecules would tend to stick together and arrange themselves in certain ways. I still doubt you could claim just a few dozen molecules have viscosity or drag, though, certainly not in the intuitive sense. There's just not enough mass or volume involved for them to interact with a macroscopic object in that way. Add a few moles more water molecules, though - pretty plainly it has viscosity and drag by any measure. The exact cutoff is arbitrary and depends on your definitions for those higher order properties.


    * Edit: this has been lingering on my mind. I think I might be begging the question here. When I say awareness I'm referring to some quality of the system that allows it to reflect on its experience. Like, it's one thing to have the pure qualia of seeing blue. It's another thing to interpret that blueness as located somewhere in space, on a plane, or on some surface, etc. And it's yet another thing to reflect on that, reference it against other information, and draw meaning from what that blue object is. I'm saying a "useful definition" of consciousness needs those latter things, but you could have the former in isolation in a much simpler system. I haven't fully worked out what "those latter things" are exactly, but bear in mind this is how I'm drawing that line. It's arbitrary. So in the most permissive sense, yes, even the pure qualia is consciousness.

    1 vote
  4. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    We have to be careful here; there is the physical nature of sound and the experiential nature of sound. The former can fully be derived from the physical interactions of matter and (I would argue) the latter cannot.

    oof! I really should have seen this coming and clarified. It's the classic example.

    In the analogy I was strictly referring to the physical phenomenon of pressure waves propagating through a medium. The analogy works just as well to consider ripples on a pond, or the movement of electrons in a transistor, or the dynamo that generates the Earth's magnetic field. In all cases the properties involved are entirely physical, even if they are not fundamental. And in the example of the dynamo, even though the underlying mechanisms are not well understood, we plainly don't need to invoke any kind of dualism to explain it.

    If you think that physical stuff has mental properties that follow these psychophysical laws, you wouldn't be necessarily committed to cartesian dualism which posits that mind and body are composed of different substances. You'd believe that "stuff" has both physical properties and mental properties. The mental properties are defined as those which follow the psychophysical laws. I've heard of this described as substance vs. property dualism, of which Chalmers is the latter.

    Ah, I think I understand. If you place mental properties in the same category as acoustic properties, or hydrodynamic properties, then yes. They are not fundamental physics, but they are derived from and/or composed of fundamental physics.

    Importantly, this is not saying that mental properties are fundamental. This is where I get confused with dualism: it seems to claim that mind can exist in some form without a body. I reject this, for the same reason that a vacuum does not have acoustic or hydrodynamic properties. These require a medium. The vacuum does have electromagnetic properties, but electromagnetism is fundamental and mental properties are not.

    Also importantly, this is not saying that mental properties occur at any scale. This is where I get confused with panpsychism: it seems to claim that mind can fundamentally exist in all matter. I reject this, for the same reason that a single molecule of water does not have acoustic or hydrodynamic properties. These properties only emerge at a sufficient scale. In the same way, mental properties only emerge at sufficient scale and complexity. Too small, and we must instead speak only of fundamental properties like electric charge and particle motion.

    "red_herring": {

    By the way, do you agree with Aaronson's assertion that consciousness is experimentally unavailable?

    I don't think I agree, at least not in principle, but I have to think any experiment that reveals these things must be morally abhorrent. It would either require taking a system which you believe is conscious and mangling it to varying degrees to tell exactly which properties of the system affect it, OR it would require the capability of creating a conscious system from scratch, for the express purpose of experimenting on it. The latter kind might be possible in a moral way, but it sort of assumes a preexisting understanding of ethics and consciousness which we couldn't obtain without the former kind. Maybe there's some third option, but I doubt it.

    For the philosophical question in this - I think with enough observation and correlations you'd eventually, in principle, identify which structures correspond with which concepts, and which interactions correspond with which kinds of thought based on the behavior. It's a decoding problem.

    Whether that "mangling" looks more like changing simulation parameters or lobotomy, if you believe the system is conscious then you're almost certainly inflicting suffering on it.

    And what do you make of his view that theories of consciousness should be working to solve the Pretty-Hard Problem which, best I can tell, simply maps the theory onto our seemings of what "should" be conscious and what shouldn't?

    Generally I agree, but I think the effort is misplaced. This kind of comment is why I ranted about pragmatic definitions. You need a framework that makes predictions, and you need to frame your definitions in that framework to be useful.

    If your framework defines "consciousness" such that hydrogen atoms are conscious, then the definition is useless, not necessarily the framework. You need to find a definition in that framework that aligns with your seemings, and evaluate on intuition using that.

    IIT seems to give that more pragmatic definition to "complex". And, whether Tononi admits it or not, their definition for Φ is arbitrary; it's not really derived. Then Aaronson puts a lot of effort into disputing how that definition for Φ leads to absurdities. I think this indicates a problem in the definition, not in the framework. If we poke holes in the definition for Φ and say no, this isn't sufficient for consciousness - that's not a proof that a sufficiency criterion doesn't exist.

    Picking a sensible definition also requires understanding that there are multiple aspects to consciousness, even something as "perfectly simple" as pure qualia. You need to focus on which aspects you want to talk about. Awareness, memory, reasoning, reflection, ... this is part of why I included those Socratic questions at the end of the post. What is pure qualia without awareness? Memory? If you call that consciousness then I don't think you have a useful definition, even if some instantaneous pure qualia might be in play. Tononi's discussion of the properties of the blank white plane seems to touch on this, but I don't think they elaborate enough on it or say exactly which parts of their model account for it.

    I have all the same complaints as Aaronson about the definitions of the terms. As best I can understand them, they seem to map roughly to my three-part model, with different discriminations and clearly with a lot more thought and time behind it, so I tend to agree with the general premise.

    But I'd appreciate more concrete, less circular, definitions for the foundational components. The glossary does help a bit but I don't quite grasp it. I'd also appreciate more focus on the physical substrate itself, rather than solely on the "concepts" and other mental properties - if you don't account for representation at all then it's hard to reason about what integration or cause-effect space really mean.

    }

    I still need to do work on this though, since this is not the view of panpsychism I've defended in this thread or the one Goff defends.

    I didn't intend to represent it as such, apologies if I did. I only included those as an aside for clarification on why I use the "object" in my model, since you'd mentioned being uncertain about that in a prior comment, and since reading the linked pages had me thinking about what makes a definition useful.

    My understanding is that consciousness is either continuous or emergent. The view that consciousness is continuous seems to just be panpsychism. I don't see how a purely physicalist interpretation of IIT would avoid the Hard Problem (or maybe even the "Pretty-Hard Problem".)

    I agree you don't have to interpret IIT or other systems through a naturalistic dualist lens; maybe there aren't mental properties at all, but then I don't think we've addressed the Hard Problem, I think you've only addressed the Easy Problem.

    This might be changed in my clarification about my sound analogy, but I'll respond anyway.

    If Φ represents a sufficient criterion for consciousness then it does solve the hard problem. If it only represents a necessary criterion, then it's only solving the easy and/or pretty hard problems. Tononi clearly believes it does represent a sufficient criterion, and Aaronson clearly does not.

    I don't agree that that definition for Φ is sufficient, but I do believe that some framework similar to IIT and some definition like Φ do exist which are sufficient, and the existence of some sufficiency criterion does solve the hard problem, whether one lists it out or not.

    As far as justification for believing that, I suppose that's where it comes down to seemings for me. IIT seems on the right track. And if I spent a decade working on it like Tononi has, I might end up with some variation of IIT and Φ that, to me anyway, is a bit more rigorous and closer to sufficient. Up till this point I've just hand-waved it as "sufficiently complex" and leaned on existing arguments for emergentism.

    Maybe this is a problem for IIT, but I don't think it's necessarily a problem for panpsychism, especially views such as cosmopanpsychism (the view that the universe is a unified object that is ontologically prior to its parts.)

    I think those questions do still apply, especially the ones around time and read/write memory and causality. Going down that rabbit hole, for me anyway, really makes me examine what experience really is; I conclude that some kind of read/write working memory is necessary to hold the "object". And if you use that to evaluate something like dualism or panpsychism, you need to consider what that memory is. If you're evaluating panpsychism you encounter issues at the very small (not enough storage) and at the very large (disparate parts of the system are causally disconnected so cannot be part of the same experience). If you evaluate dualism, you get a free pass at first since mind (and memory) may be immaterial, but then you encounter issues when physical brain trauma affects memory and identity in unexpected ways.

    1 vote
  5. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    Wow! Thank you so much for taking the time! I really appreciate the amount of effort you've been putting into this discussion.

    I had not heard of Integrated Information Theory before - although I suspect I might have, and just don't remember it in detail. Generally I think I agree with most of what I've seen. I haven't had time to really digest their formulation of Φ but I'm not sure I agree with it. In my three-part model I just hand-wave the formulation away as "sufficiently complex". I think I might disagree with some of the concepts there, but the important principle to me is that a sensible definition of Φ likely exists. I do not think a rebuttal of a particular definition of Φ is a rebuttal to IIT in general.

    One section of the deep dive you linked goes into some detail about predictions and validations of IIT. Some of the more interesting examples to me:

    2. IIT also predicts that brain lesions will make a person unconscious if and only if they severely disrupt the capacity for information integration. Moreover, the level of consciousness, as (roughly) assessed in neuropsychological exams, should co-vary with the Φmax value of the dominant conceptual structure. Recent TMS-EEG studies in patients with severe brain damage, with or without loss of consciousness (patients who were vegetative, minimally conscious, emerging from minimal consciousness, or conscious but “locked-in”), are consistent with this prediction (Casali, Gosseries et al. 2013).

    6. It is well established that, after the complete section of the corpus callosum—the roughly 200 million fibers that connect the cortices of the two hemispheres—consciousness is split in two: there are two separate “flows” of experience, one associated with the left hemisphere and one with the right one. An intriguing prediction of IIT is that, if the efficacy of the callosal fibers were reduced progressively, there would be a moment at which, for a minor change in the traffic of neural impulses across the callosum, experience would go from being a single one to suddenly splitting into two separate experiencing minds. The splitting of consciousness should be associated with the splitting of a single conceptual structure into two similar ones (when two maxima of integrated information supplant a single maximum). Under certain pathological conditions (for example, dissociative disorders such as hysterical blindness), and perhaps even under certain physiological conditions (say “autopilot” driving while having a phone conversation), such splits may also occur among cortical areas within the same hemisphere in the absence of an anatomical lesion. Again, IIT predicts that in such conditions there should be two local maxima of information integration, one corresponding to a “major” complex and one or more to “minor” complexes (Mudrik, Faivre et al. 2014).

    8. Similarly, IIT predicts that a particular brain area can contribute to experience even if it is inactive, but not if it is inactivated. For example, if one were presented with a plate of spinach drained of color, green-selective neurons in the color areas would remain inactive. Thus one would experience and report strange spinach that is gray rather than green. By contrast, if the same area were not just inactive, but inactivated due to a local lesion, the phenomenal distinctions corresponding to colors would be lacking altogether. While presumably one would still report that the spinach is “gray,” in this case “gray” cannot mean the same as when color areas are intact, i.e. not green, not red, and so on. This seems consistent with the behavior of a rare patient with complete achromatopsia and anosognosia due to an extensive lesion of color areas (von Arx, Muri et al. 2010). When presented with green spinach, the patient reports that the spinach is gray, but does not realize nor concede that something is wrong with his experience. Although he “knows” that spinach is green, he altogether lacks the phenomenal distinction green/not green.

    I'm curious what counter examples there might be. Are there cases where structure is disrupted in a way that IIT predicts should cause some impact on or breakdown of consciousness, but doesn't?

    Almost more interesting than that deep dive is a discussion I found buried in the links on that page:

    This sounds like the position of a naturalistic dualist; one who is a naturalist but also a mind/body dualist (naturalism being the idea that only natural laws and forces [as opposed to supernatural ones] operate in the universe).

    I've been using the terms Material, Natural, and Physical roughly interchangeably. The key is that I reject the idea of a soul or other immaterial mind. I know there are historically distinctions between those definitions, so in hindsight I probably should have been saying "Physicalist" this whole time.

    In cases of naturalistic dualism, there are the physical laws of nature as well as psychophysical laws (e.g., those that map the integration of information to conscious states in IIT or maybe even your three-part model) which determine whether a system is "conscious" or to what degree a system is conscious.

    In this sense, yes, but I don't think "dualism" is the right word here.


    I'd be willing to concede that IIT or something similar probably escapes The Hard Problem when viewed through naturalistic dualism (The Hard Problem was posited originally by the naturalistic dualist David Chalmers), but strictly speaking there does seem to be a difference between this specific view of naturalism and materialism.

    I keep reading variations on the claim that emergentism is incompatible with panpsychism, and I don't understand the reasoning. It seems like the same "explanatory gap" that Chalmers uses to invoke dualism. I suppose it's because people generally treat "consciousness" as a binary property, so you need to invoke some immaterial mechanism to grant it to some things and not others. If you instead view consciousness as a continuous emergent property of a system, then there's no need to invoke dualism. I think that's the key move that IIT and similar frameworks make, but you could avoid the hard problem in any other framework compatible with that view.

    Sound is not a fundamental property of matter. How many particles are required to carry a sound wave? At what point do we stop talking about particle motion and start talking about pressure and flow? Yet sound is not immaterial. It is described completely by the physical state of its medium.

    So just because consciousness is not a fundamental property of matter, why must we invoke dualism? And in the same way that we can investigate sound at smaller and smaller scales, why is it incompatible with panpsychism to investigate the substrate of consciousness at smaller scales and complexities?

    This is what I was talking about when I wrote in the opener:

    If you say a thermostat, a microbe, a cat, an infant, an adult all sit on some spectrum of consciousness - then you're really saying that p-zombies cannot exist. If the system is complex enough that you'd call it a p-zombie, then it's also complex enough to have an experience. And this seems like such an utterly obvious solution to the "contradiction" implied by p-zombies that I really don't get the confusion.


    I have a few Socratic questions based on our discussions around the "object" and around that point 6 in the IIT deep dive. These kinds of questions are why I conclude that the "object" is necessary for qualia, and that qualia occur only in the interaction between the system and the "object".

    • If a system is incapable of retaining memory for any duration - working, short-term, or long-term memory - can it be conscious? If not, which kinds of memory are required? What is the content of that memory?

    • Does the system need to be able to read and write to that memory, or just read, or just write, or neither?

    • What if two conscious systems interacted with the same memory? Would either one know of the other's existence? Which kinds of read/write access are required on which kinds of memory for that to happen?

    • What if one conscious system interacts with the memory at multiple points in time? How does the causal relationship correspond to read/write access for each interaction? Which interactions are aware of the other?

    • What if memory is destroyed or modified between the two interactions? How is this different from distinct conscious systems interacting with distinct memories? What if only parts of the memory are destroyed or modified?

    1 vote
  6. Comment on Install asdf: One Runtime Manager to Rule All Dev Environments in ~comp

    TangibleLight
    Link Parent

    I use this. I haven't encountered issues with libraries or runtime paths. It doesn't have a built-in provision for environment variables; everything goes through the shims. And I've never encountered issues with cleanup, since all the environment management goes through the shims and doesn't leak out into your shell.

    There is the asdf-direnv plugin, which lets you set up environment variables through an .envrc rather than through the shims. I haven't encountered issues with cleanup here either, but this one seems more likely to cause issues, since you can put arbitrary environment setup in the .envrc.
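
    For reference, the basic setup is tiny (a sketch, assuming the asdf-direnv plugin is already installed and hooked into direnv; the exported variable is a made-up example, not something the plugin requires):

        # .envrc at the project root
        use asdf                 # put real tool paths from .tool-versions on PATH, no shims
        export MY_APP_ENV=dev    # hypothetical extra variable, for illustration

    direnv applies this on entering the directory and rolls everything back on leaving, which is why cleanup stays tidy here too.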

    2 votes
  7. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    Why think any of these parts necessary for consciousness? Also, unless I'm misunderstanding something, it seems like you're affirming that consciousness is emergent and has a minimum necessary configuration in order to exist. I think we maybe agree that materialism entails this?

    I consider myself a materialist, although this three-part view certainly isn't the mainstream one. You could easily lump the three parts together, and use more traditional reasoning about it. The critical aspects are emergent ones.

    I think it's good to identify the parts of it though, to see what exactly it is we're talking about when we say qualia or experience. Is it data? State? Information flow? Whichever view you take, those questions are relevant. I believe it's information flow, which implies endpoints for that flow.

    While I am skeptical about your data model of consciousness, it does seem that consciousness is dependent on some entity (being experienced?) which I called the "object". I don't really have much to add here at the moment, and still need to think this through.

    Ah, this makes sense. That is what I'd been referring to as "qualia" - although that's already overloaded so probably not a great plan. Like I said, this is certainly not the mainstream view and I don't know the proper term.

    I'll take your convention and call it the "object" in scare quotes unless you have a better idea.

    So my argument for the non-physicality of consciousness under materialism rests in part upon the metaphysical possibility of p-zombies. Even if I grant that we can have 100% confidence that no such thing has ever been instantiated, this wouldn't have a bearing on my argument.

    You can't rule out some metaphysical construction which interacts with an "object" in an arbitrarily complex way but does not have consciousness.

    2 votes
  8. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    Maybe I can message you later if I can work out a more coherent view?

    Feel free!

    This may be a red herring, but I don't think qualia require a subject, though panpsychists aren't committed to this view.

    I don't think it is. It's important to reflect on which parts are emergent and which parts are fundamental. I'm suggesting - and I think the archetypical materialist also suggests - that all aspects of the thing are emergent.

    I identify three parts: a data representation that needs to be complex enough to allow a particular qualia to emerge; a system that needs to be complex enough to allow a particular capability to emerge; and interactions that need to be complex enough to facilitate experience. To keep the definition of "conscious" useful and not degenerate to literally every interaction, I only consider systems which are capable of being aware.

    If any of these things are not complex enough to hold the emergent property, "consciousness" is not there. If you can conclude that some part of this must be fundamental and not emergent, then the model starts to fall apart.

    What I've been calling "qualia" - the data representation - is probably the least emergent. That might just be some bias in how I think about language and encodings in general. Although if the representation is compressed, the system interactions must then decompress it. I suspect this is how current generative AI gets away with such a simple system acting on the weights - there is an absolutely massive number of weights in play. The data is very much not compressed.

    Though as to whether all qualia require an object is something I may need some time to think about.

    What do you mean by "object" here? A part of speech? The substrate that encodes it? Something else?

    If you mean a part of speech, I suggest no. Emotional states especially tend not to be directed at anything. They just are in some way or another.

    If you mean the substrate - by the non-standard definition I've been using - I suggest no. We can talk about the concept of blueness in the abstract. But in order for some conscious system to experience that qualia, the qualia must first be encoded in some substrate with which it can interact. By the standard definition, I suggest yes. You need both entities interacting together for the experience (verb) to occur.

    I see. These seem to speak to a related but entirely different problem that is epistemic rather than ontological.

    Right. The point is that one can confidently exclude p-zombies.

    If I'm understanding this correctly, is this drawing the line between conscious and unconscious matter in terms of whether the matter is arranged in a way that fits this model?

    It's more recognizing that many many systems have the same general setup, or can be re-framed in that way. You could take a panpsychist view and label them all as mind, but that's not very useful. So restrict which qualia you want to look at, which capabilities you want to look at, and you get a more sensible definition. If nothing else, awareness of some kind is a critical emergent capability of the system for an experience to occur - the more traditional definition of qualia.

    2 votes
  9. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    Now, I don't want to put words in your mouth, but we probably still disagree about the nature of this impossibility. I think you are saying that there actually are constraints that exist that determine whether a collection of matter is conscious or unconscious, but it is epistemically impossible to know when a collection of matter crosses The Line.

    Right.

    Which is why it's important to speak in terms of confidence on specific qualia, rather than experience as a whole.

    I hold the position that on materialism it is problematic to believe any such constraints could exist at all.

    I'm still confused by this. If one system has some property, and another system has a different property, then there is necessarily something about each system that determines those properties.

    You don't need self-awareness, intelligence, or reason just to have the mere experience of anything at all.

    Don't you? We're overloading experience again.

    There is an experience, singular noun. A particular thing which can be experienced. Blueness. Roughness. Anger. I'll call that qualia - and please correct me if your understanding of that word is different.

    There is also experience, verb. The act of perceiving (creating? interacting with?) qualia. This is the hard problem. I'll call that consciousness - understand that, for the rest of this comment, this action is what I'm referring to.

    Critically, qualia is a state and consciousness is an action.

    Each aspect is itself an emergent phenomenon.

    Qualia (first person experiences) are perfectly simple and aren't necessarily composed of aspects or parts.

    In my prior comments, and using the definitions above, I've been using "experience" to mean consciousness and "aspect of experience" to mean qualia. I stand by the claim that qualia are emergent phenomena; certain qualia require a certain level of complexity in their substrate to be represented.

    As an analogy - pressure is a state. At any given instant, a gas has a certain pressure. At the same time, a single particle does not have a pressure. Two particles don't either, although if they happen to collide you can measure the interaction. There's some critical mass where it makes sense to talk about pressure rather than individual particle-particle collisions. Qualia is the same - its substrate needs to be sufficiently complex to represent that state.

    You don't need self-awareness, intelligence, or reason just to have the mere experience of anything at all.

    Consciousness, as I'm using it anyway, can only occur if a system interacts with the substrate; it responds to and modifies the qualia represented in that substrate over time, and consciousness might occur in those interactions.

    So say you have some substrate that represents the qualia of seeing blue in some encoding or another. If that's the entire system, there's still no consciousness occurring. Blue is not being seen by any entity. You've taken the data and stored it, but it's not being processed.

    So now say you have that substrate representing that qualia, and you also have some system interacting with the substrate. If that system has the capability to be aware - and has the capability to decode the qualia from that substrate - it might consciously see blue. Otherwise you can't make any claims about consciousness since there aren't any suitable interactions taking place.

    This is why I'm so confident that AI does not experience. The recent ones have billions of parameters - the substrate is huge - but the actions that interact with that substrate are simple. Not trivial, but it still boils down to forward matrix multiplication. I don't believe there is enough complexity there to grant the system the capability of awareness or reasoning, even if the weights might somehow represent qualia.

    Qualia (first person experiences) are perfectly simple and aren't necessarily composed of aspects or parts.

    I suspect then the objection is that the mere experience here is in the interactions between the system and the configuration of the substrate. That mere experience is what I've been calling consciousness, and it's probably closer to the typical definition of qualia. But then it explicitly requires two distinct entities to interact, so is not perfectly simple. It requires sufficient complexity in the substrate, the system, and the interactions between them. If sufficient complexity is missing from any of these three, there is no experience (verb).

    Panpsychism

    The model I described has some overlap here. If you consider the fundamentals to be a substrate (data storage), qualia (data on that storage), and some system that interacts with it - then plenty of systems fit that description. A computer. A thermostat. A lump of coal. The most obvious discriminator is in the levels of complexity of all these parts, and that's why I say each part is an emergent phenomenon.

    An (analog) thermostat has a trivially simple substrate (temperature of a bimetallic strip), and a trivially simple system (expansion of the bimetallic strip). Surely the qualia that could be encoded are trivial, and the system is not conscious.

    A lump of coal has a huge substrate (if you're talking about molecular configurations) but the system is trivial (electromagnetic potential), so there's no way for it to be aware of qualia that might be encoded.

    A computer has a complex substrate (gigabytes to terabytes of information at minimum nowadays) but the system, at its core, is simple. These are unique in that they can model arbitrarily complex systems; the only limitations are storage constraints in the representation of those systems and computational constraints in acting out the systems. So I think it's likely possible for a computer to simulate an aware consciousness, but we're not there yet.


    Edit:

    And to rephrase my confidence-based definition more precisely using the above definitions:

    As the complexity of a substrate increases, so does our confidence that it may encode more sophisticated qualia. As the complexity of a system increases, so does our confidence that it has more sophisticated capabilities. If we are confident that the substrate of the system encodes certain qualia and are confident that the system is capable of awareness, then we can be confident that the system is conscious, experiencing those qualia.


    Edit again:

    I realized I mixed definitions of consciousness, conflating the system with the interactions. I think I've updated all the text to be consistent.

    2 votes
  10. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    The panpsychist is not committed to the view that experience is binary, the materialist is. The materialist is committed to the view that consciousness is an emergent property of unconscious matter; that there is a necessary minimum configuration of matter required to bring about experience, before which consciousness does not exist, and after which consciousness is present.

    This seems like the real nugget. I don't think that's an accurate representation of the materialist - or, if it's the generally accepted one, I don't think it's the only one.

    I would rephrase it as this:

    As the complexity of a system increases, so does our confidence that certain aspects of experience are present. Certain aspects of experience have pragmatic definitions that are binary; but listing real constraints is difficult to impossible. Each aspect is itself an emergent phenomenon.

    We're moving into the territory that makes using philosophical terms like qualia useful.

    I agree, but I've been specifically trying to avoid these words to avoid the baggage and preconceptions that come with them. Apologies if I've confused anything as a result.

    I just want to interject here and say that I greatly appreciate your replies. This subject and our conversation about it is meaningful and a ton of fun for me.

    Likewise! I did also reply to your older similar topic. That comment and the follow-up ones are very relevant to this discussion; they might shed some light on where exactly I'm coming from here.


    Edit:

    We could debate about where exactly The Line is, but it's probably somewhere around the level of an infant.

    The drawing of a line at all is problematic. As we discussed before, drawing a line anywhere is both arbitrary and unjustified.

    I should clarify, what I'm stating here is based on that confidence-based definition. The "line" here is more like some ill-defined confidence interval where most people will be sure that humans older than this point "experience" by some definition, and humans younger than this point don't. It doesn't really define which aspects of experience I'm talking about, and certainly doesn't represent the varying levels of confidence about each of those aspects. That is the loss of information, and that's the problem.

    If we're very specific and carefully define how we measure confidence on certain qualia, we can start to do science. And that's exactly what, to my knowledge, recent research does.

    2 votes
  11. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    Link Parent

    This would be true under panpsychism, but under materialism, there doesn't appear to be any level of complexity that would justify a complex system having first-hand experience.

    Yes we do. I know for certain one example. I assume there are about 8 billion other such examples, I have no reason not to. From my experience interacting with pets and other animals, I assume there are a few billion more. We could debate about where exactly The Line is, but it's probably somewhere around the level of an infant.

    I can make a seemingly super-intelligent AI that has sensors that can process colors and detect temperature, but there's no level of complexity under materialism where I'd be justified in saying the AI has first person experiences like seeing color and feeling temperature.

    I think the problem here isn't so much the question of complexity, but the fact you're trying to reduce it to this binary yes it is or no it isn't. Experience is this huge, complicated, multi-faceted thing. Does it experience emotions? Which ones? Can it suffer? Can it reason? Is it aware of its surroundings? Is it self-aware? When you try to classify something, say this thing experiences or that thing does not, you're accidentally blending all these facets together.

    Something may be aware of its surroundings, but not of itself. Something might simply not have the ability to suffer. Are these things experiencing? On which side of The Line do they sit?

    So, yes, when you adjust your thermostat there might be some "experience" happening, but given the level of complexity involved it can't be anything substantial. It's not aware of its surroundings, and it's certainly not aware of itself. Maybe you could label the flow of information as "reasoning" but that seems like a stretch. When you overload that word "experience" you accidentally imply it's in the same category as things that do have all those other properties. From this panpsychism perspective the word "experience" is almost useless.

    Really, there's this big multi-faceted list of capabilities of any given "experience". Awareness of surroundings, awareness of self, various emotional states, various methods of reasoning, various kinds of memory, etc. When you go to classify humans and p-zombies and AI and thermostats and other creatures, try to think about all these things together. Don't distill it down to a binary yes/no. I believe the "hard problem" is caused by that loss of information.

    Even if I grant that AI is anywhere on the consciousness spectrum, then it seems like the door is cracked open for other sorts of software to have even simpler, yet completely real first person experiences; we wouldn't be justified in drawing The Line right there.

    We're justified in requiring some level of awareness. I think that's really the broadest definition most people seem to intend when they talk about "consciousness" but that may be a misinterpretation on my part. (Current) AI certainly does not have awareness of any kind. Nor does a computer or thermostat.

    I guess my whole stance is that if we assume materialism I don't think we are justified in saying the AI has experience at all.
    [...] I'd agree that a neural network has more complex experiences than a script, but we'd also expect that other things like complex Minecraft Redstone machines and perhaps thermostats would have very real though admittedly very simple experiences.

    I think a few things are happening here:

    First: I think you're overestimating how complex current AI are. To be specific, I'm referring to generative AI based on large transformer models - this basically includes all the recent advances in image generation and chatbots. You'd do better to think of them as (very very lossy) compression or data correction schemes. They distill all their petabytes of training data down to a few gigabytes of parameters, and provide a means to use those parameters to fill the gaps in a given input. "Here's some text, what are the missing words at the end?" "Here's some text, what's the missing audio?" "Here's part of an image, what's the rest of it?" They are not aware and they are not reasoning.

    Second: I think you're underestimating how simple the experiences of a computer or thermostat must be. It does not even have facilities to be aware of its surroundings or self; what does "experience" even mean without these? If you use the word "experience" to describe it then that word has lost all meaning. You can have a more interesting discussion if you include awareness in the definition - or at least some other discriminating quality, whatever you consider important.

    2 votes
  12. Comment on Former naturalists/materialists, what changed your view? in ~humanities

    TangibleLight
    (edited )
    Link Parent

    I'm not the one you responded to but this point really seems to encapsulate my confusion here. I don't think I understand what people really mean when they talk about p-zombies. Well, I think I do understand, but I don't see where the contradiction is.

    If you say a thermostat, a microbe, a cat, an infant, an adult all sit on some spectrum of consciousness - then you're really saying that p-zombies cannot exist. If the system is complex enough that you'd call it a p-zombie, then it's also complex enough to have an experience. And this seems like such an utterly obvious solution to the "contradiction" implied by p-zombies that I really don't get the confusion. It makes me wonder if I'm missing something about what a p-zombie actually is.

    We assert without reason that this thing called a p-zombie exists, then are surprised that it doesn't make sense when we try to reason about it. The conclusion isn't that materialism is doomed, the conclusion is that p-zombies are inconsistent with reality.

    And obviously when we say conscious we aren't usually talking about the interactions between molecules in a lump of coal. That's not interesting. Well, it is, but not philosophically. We're talking about the interactions between molecules and charges in the brain of a human or cat or goldfish or computer or something. That's the interesting side of the spectrum. And I can spend a bunch of time trying to figure out where The Line™ is that separates the experience of a rock or a goldfish or myself, but it's a fool's errand. Any decision you make is arbitrary.

    Where does blue stop and green begin? You can't decide? Your answer is different than mine? It's a paradox!

    My whole stance on AI is that, knowing how the thing works, I have to think its experience - if you want to call it that - is much closer to that of a fly or an ant than to a cat or a human.

    I even doubt that the recent advances approach the complexity of the language or visual centers of our brains in isolation. AI doesn't respond to optical illusions, and those happen entirely in the visual centers of our brains. AI gets confused on ambiguities and implications and context in language, but (a lot of) that happens entirely in the language centers of our brains. Not to speak of spatial and temporal and logical reasoning which (currently) even the most sophisticated AI lack completely.

    3 votes
  13. Comment on Eclipse plans in ~talk

    TangibleLight
    (edited )
    Link

    I had plans to travel to Texas for the event, but due to some veterinary issues I wasn't able to travel. So instead I set up my telescope here at home - about 80% coverage, with beautiful weather - and hung around outside, taking photos through the telescope, chatting with family, and playing with my pets. All around a good time. The hour or so around maximum was very pleasant: still bright but noticeably dimmer, like I had sunglasses on, and much, much cooler.

    A couple (unprocessed) photos:

    https://i.imgur.com/bukiohl.png

    https://i.imgur.com/GZj0eq1.png

    Over the next week or so, as I have time, I plan to work on the photos a bit and merge them into a montage or short animation. I'm a bit disappointed that I didn't get the focus quite right - this was about the best I could do without a focal reducer or external monitor. I need to buy a focal reducer. I should have bought a focal reducer. Oh well.

    I will have a focal reducer by 2045.

    Veterinary issues

    Dog has aspiration pneumonia. He's now on antibiotics and we're adjusting our feeding routine to help him keep things down; overall he's doing much better. Gave us quite a scare over the weekend though. In hindsight I'm very grateful we didn't leave for Texas earlier.

    6 votes
  14. Comment on What programming/technical projects have you been working on? in ~comp

    TangibleLight
    Link

    I feel I've graduated from novice to intermediate in learning Zig. I still don't have a great sense for "The Zig Way" a lot of the time, but it's developing.

    I'm also starting to - and the same thing happens when learning any new skill - hate my prior work. That UDP protocol implementation I've mentioned in prior weeks of this thread, while functional, will certainly get thrown out. I kind of planned on that, though; that feeling is probably a good indicator that I'm learning.

    Another thing I'm running into is the classic blunder: I'm building a library without a use-case. That's fine for learning the language fundamentals, but not great for learning more advanced paradigms. So, with Zig's promise of happy interop between C and Zig, I'm revisiting some of my early graphics code, since I know that problem a lot better, and will then use it as an arena to play with the more unfamiliar architecture questions.


    Entirely unrelated, but I wasted a few hours stripping out the stylesheets on my personal blog. I'm always frustrated by bloated sites that screw up my usercss, so I'm trying to be the change I want to see in the world. However, I know it's a waste of time because I hardly ever publish anything there; it's just a bunch of stuff sitting in my drafts, half-finished and unused.

    All I really want for the thing is to constrain the width of the content, mute the contrast a little bit, and switch to a sans font. And load katex for math rendering but that's not really part of the styles.
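
    For what it's worth, that whole wishlist fits in a handful of declarations (a sketch; the selector and exact values here are placeholders, not my actual stylesheet):

        /* hypothetical minimal blog stylesheet */
        body {
          max-width: 42rem;        /* constrain the content width */
          margin: 0 auto;
          font-family: sans-serif; /* switch to a sans font */
          color: #333;             /* mute the contrast a little */
          background: #fdfdfd;
        }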

    3 votes
  15. Comment on Scattered thoughts on the absurdity of existing in ~talk

    TangibleLight
    (edited )
    Link Parent

    I don't think it's possible to make meaning in your life—that would require a level of control over our deepest selves that we simply don't have.

    Can you elaborate on this? To my mind it is perfectly reasonable for one to make meaning - maybe choose meaning is a better word - for one's own life. I suspect we're using different definitions for "make meaning" and "deepest selves" and that's where my confusion comes from. I'm not aware of definitions of these that are mutually exclusive, perhaps unless you take some view that discards free will altogether.


    A few months ago there was a thread not too different from this: Tell me about your weird religious beliefs. You might enjoy reading through that, based on your comments here. Here's my response to it; you might find it relevant to how you talk about "initial conditions" here. This reply and my reply to it touch on your mentions of causality and quantum effects.

    I link all these because, based on your comments in this thread, I'm curious what you think and what issues you might have with them.

    I am not Buddhist and have no formal Buddhist education, but my understanding is that my comment in that last link (my reply to it) and the replies afterward accidentally touch on some Buddhist ideas around time, experience, and the present. Since you mention in your opener that you are Buddhist, I'm curious about your thoughts on that specifically.


    Edit: I just got to your comment where you wrote:

    I would challenge this assertion of yours. Either the universe was spontaneous and without cause, which is absurd, or it had a cause, which then must necessarily have had a cause.

    If you have an option c that gets around that, I'm all ears.

    Please do check my weird religious belief response; I address my view on exactly this.

    1 vote
  16. Comment on What's something you've been mulling over recently? in ~talk

    TangibleLight
    Link Parent

    That is all it is. To be a bit reductive - but not very reductive - the new boom in AI is fancy autocompletion. It is special in that it can complete different kinds of media - it can fill missing parts of images, predict missing words in a sentence, or replace gaps in speech and music.

    Critically, it can take some short prompt and predict the "missing" part at the end, making the prompt longer. This is how it generates new content.
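
    To make "predict the missing part, then append it" concrete, here's a toy sketch in Python - a word-bigram counter standing in for a transformer, which runs the same predict-append loop with enormously better statistics (the corpus and names are made up for illustration):

        # toy "autocomplete": repeatedly predict the most likely next word and append it
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat and the cat ate".split()

        # "training": count which word tends to follow which
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def complete(prompt: str, n: int = 3) -> str:
            words = prompt.split()
            for _ in range(n):
                candidates = following[words[-1]].most_common(1)
                if not candidates:
                    break  # nothing ever followed this word in training
                words.append(candidates[0][0])
            return " ".join(words)

        print(complete("the"))  # -> "the cat sat on"

    Nothing in that loop understands the text; it only emits statistically plausible continuations.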

    Except, all that content is really just predictions. It's not creating anything. It's not thinking. It's not intelligent.

    BUT, in the same way that people capitalize on Crypto and NFT and Web 2 and all the other fads before, people with interest in AI technology are overpromising to investors and generally other people who don't know enough about the technology to realize the promises mathematically cannot be met. And I'm sure these people will be gone by the time consumers start to realize the limitations of the thing and - probably more importantly - investors start to realize how expensive it is to run.

    5 votes
  17. Comment on What's something you've been mulling over recently? in ~talk

    TangibleLight
    Link Parent

    I held off on responding to take some time to think about it and come up with a good response.

    I don't have a good response.

    I can't quite put my finger on what it is exactly, but there's something about that phrasing. "It's comforting to me that god has control over you." I can totally understand why it's lingered on your mind. There's a sort of faith-on-your-behalf that feels like a violation of... something. I suggest you think on what exactly that violation is and, if you choose to talk with her about it, try to keep the conversation on that topic.

    It's probably not dissimilar from my point that my own distress on behalf of Christians is probably unethical. Imagine telling a Christian, "it's so distressing to know you believe God has control over your life." I think it would provoke the same reaction. I said in my earlier comment:

    But really, those people are not me, and I don't really have any reason or right to be distressed on their behalf.

    I think I stand by most of my advice, except my previous comment. With your clarification I don't think there's any interpretation that you're stepping out of your comfort zone, so a request to step back in really doesn't seem to apply.

    2 votes
  18. Comment on What's something you've been mulling over recently? in ~talk

    TangibleLight
    Link Parent

    This is probably better advice. Thanks for clarifying.

    2 votes
  19. Comment on What's something you've been mulling over recently? in ~talk

    TangibleLight
    Link Parent

    Strongly agree. "My current manager is toxic so I am looking for something better" seems like a perfectly reasonable explanation for short/missing experience on a resume.

    I don't have a good sense of whether it's better to include that or not in order to get the interview, though. It probably depends on the field of work and on the culture of the location you're applying to. My gut feeling is it's better to include it so there's a more complete picture, but honestly I'm not sure.

    3 votes
  20. Comment on What's something you've been mulling over recently? in ~talk

    TangibleLight
    Link Parent

    Now I'm worried that I've sort of invited this on myself by behaving this way.

    I can focus on myself and simply make myself look like a poor target for proselytism, rather than in any way asking her to curb her proselytizing.

    I think you might have... and I'd also caution that it's easy to accidentally say something that retroactively makes your use of that language seem insincere.

    To me it sounds like you extended use of that language to her with good intentions, but when she reciprocated you've realized it makes you uncomfortable. I'd just phrase it like that. Seems like it's no secret from her that you are, or at least were, atheist. I suspect it'll come across better if it's framed that you stepped out of your comfort zone a bit too far and need to step back in, rather than a direct request for her to change her behavior.

    3 votes