Ah, what a great topic to discuss! I love talking about consciousness, especially when AGI is involved, because it hurts the noodle and so many people in tech get really, really uncomfortable. This post is a can of worms, so I'll try to outline why it's interesting, where the issues are that harm AGI research, and why the human mind matters.
Prerequisites for Consciousness
The very word is difficult to parse, and the article highlights the issue in a very simple way. We all know abstraction and information compression via language are crucial to our understanding of reality; otherwise our limited brains couldn't possibly brute-force all the information we get bombarded with on the daily. Shortcuts are taken, abstraction is done; metaphors, analogies, nicknames, reductions, and substitutions are all used for the sole purpose of being energy efficient and getting around the hard limitations we have as humans.
Consciousness is not a properly defined word, at least not in the English language. It has a very broad and open-ended definition and, even worse, it inherently implies that consciousness is a singular object (inherited from how language works, not because that was intended), meaning it's a thing like a tree or a box we can point to. In the case of consciousness, too much compression is done and lots of information is lost. The compression has also led us on wild goose chases, trying to find the very singular spot consciousness stems from, or the organelle that produces it, or where the soul resides, or whatever else.
To understand consciousness, one first needs to decouple singularity from the word. Consciousness is not a thing in our bodies we can point at or measure. It is a word that compresses several processes into one and it is those many processes that make up consciousness that should be the focus.
Those processes can include organs, nerves, cells, other organisms, parts of the brain, signals coming in and out of our bodies, the modification of those signals independent of other signals, and good old cause and effect.
If this makes sense to you, we can continue; otherwise we can't even begin to discuss the thing, because you're inadvertently caught in a fallacy by fault of language. Hence why most people in tech don't care about the word: it requires too much philosophy and language to even begin to chew on, so why do that when I can just code real quick? That's linear and makes sense.
AGI and the Human Mind
This section will also annoy the tech-inclined person, because years of hard sci-fi have permeated pop culture and shaped our (mis)understanding of what is needed to generate AGI.
There is a popular camp of researchers that wholeheartedly believes in consciousness as an emergent property of good coding. These are the ones getting grants, because the byproduct of their work is actionable and returns dollars. Machine learning is incredibly useful, don't get me wrong, but ask anyone working on AGI (the very few that are) and they'll tell you they're not concerned about consciousness (don't ask them to define it) because it'll emerge out of any system complex enough to pass the Turing test.
The implication is that consciousness isn't a thing, and that regardless of whether a complex automaton is alive or not, it'll be unpredictable enough and unique enough that it'll fool you or me, and we end up in philosophical zombie territory.
This is an old argument. I can't prove you're conscious just the same as you can't prove I'm conscious. What if we're all biological automata, completely dead inside, but through sheer cause and effect and determinism we do things that seem as if we're alive?
More on topic, these researchers are not even concerned with how the human mind works, because the medium is different. Silicon and metal are not fat and proteins, they may say.
In my eyes, a whole lot of researchers are setting out to create something that is "self-aware", aka "conscious", without knowing what the word means, dismissing its importance, and yet expecting the final result to have conclusive evidence as to its self-awareness.
Insanity.
"I think therefore I am" came about because of this argument of philosophical zombies. We think and because we think of things we prove to ourselves we are conscious. All of us know we experience things and think about things and we all know we're alive and experiencing life. We also know, to date, science hasn't found any evidence of any none material and biological systems we can't touch and feel. We know this self awareness stems with just the organs and cells in our bodies. We have a clear example of how a thing becomes self aware, the answer is inside us and science is telling us it's not just in the brain.
We know gut bacteria play a huge role. Foreign creatures living inside of us influence our experience, our self-awareness, and our thoughts. We know we have multiple lesser brains in other parts of the body, which some suspect to be the source of intrusive thoughts of hunger or lust or fear. We know our frontal lobe sits on top of other parts of the brain, and there's an intentional disconnect between all the other parasympathetic systems and our conscious selves that simulates a pilot and a ship.
We suspect feelings and the soul aren't injected into us at birth.
How do we invent anything else? We observe nature and replicate. We're mimics by nature. We are hardly creative with our inventions and we're hard pressed to find anything that nature hasn't already created. Gears included.
But the search for AGI doesn't want to touch any of these points. Sci-fi has told them that by creating a really good program, the thing will come alive. By itself.
Another camp says, "what I create will be alive but will be unrecognizable by all because it is technological and not biological."
Another camp says "we won't be able to experience this AGI because it won't be bound by linear time."
Do you hear the goal posts moving?
If we're gonna invent the thing, we should model it after the only thing we can somewhat prove is alive and self-aware, ourselves, and we should do it in a way that lets us recognize the thing, which means binding it in linear time with similar physical limitations and shortcomings. Else, you're not creating AGI, you're off doing something else, and that's alright, just admit it.
Why Words Matter
Well, the article pretty much spells it out. If we're actually creating something that's "alive", then there are ethical, safety, and societal concerns to grapple with.
Is formatting a hard drive killing it? Is it a program or a living creature? Is it alien or from Earth? Just because it hacks things, is that wrong or is that its natural way? Did God put a soul in there? Does it know Jesus Christ? Is it a sin if it doesn't take communion?
Those things will come up, but we are short on philosophers because philosophy doesn't return dollar bills. The few English majors who are around will argue about the comma, but it's because we lack philosophy that we lack the words and the understanding behind the words.
Without that, AGI won't be safe for either party and there'll be societal conflict. Such is the human way.
Final Notes
I personally think no one is researching AGI. We are nowhere near even starting the project and very far from a singularity. I think capitalism has taken hold and the term AI is now muddled beyond recognition. Why? Because a being that can think for itself and say no won't make anyone money. Capitalism wants slaves, not sentience.
From the pursuit of perceived AGI, we will invent all sorts of things that make money, but a soul of silicon, plastic, and metal? Naw.
Thanks for reading!
I can't prove you're conscious just the same as you can't prove I'm conscious.
Knowing whether someone is conscious is sort of like knowing whether they're awake. (Sometimes we even use the word that way.) Most of the time in day-to-day life, this is pretty easy to answer. Are they sleeping or not? There are rare cases when it's hard to tell. They often have to do with trauma, like someone being injured.
People who aren't paramedics or anesthesiologists don't normally need to spend much time on that in everyday life. Day-to-day, we wonder more about what people are thinking.
We have to guess about people based on what they say and do. Online, we also have to guess about their writing process. Maybe you wonder if this message was partially written by an AI (it wasn't), or whether it was copied from somewhere else online. (No, but I wrote a blog post on similar themes.) Maybe you wonder who my influences are? There are a lot of different ways these words could have reached you.
the search for AGI doesn't want to touch any of these points
How a large language model works is also somewhat mysterious. But for how long? There's been some promising progress in mechanistic interpretability research.
When we know how it really works, that's going to affect how people philosophize about it. A lot of interesting theories will turn out to be wrong.
It won't directly dispel any mysteries about how people think, but it might serve as a warmup. Reasoning by analogy will suggest some interesting possibilities for cognitive research.
If we're gonna invent the thing, we should model it after the only thing we can somewhat prove is alive
Depends what you mean by "model." If you mean come up with a theory about how people think and then try writing programs based on it, many researchers have tried stuff. We don't hear a lot about the research ideas that failed, only the ones that seem promising. Also, a big problem is that, despite a lot of brain research, we don't know that much about how people think, only how they behave. If you choose a theory that's wrong, then you're not really basing your AI on people.
In another sense, though, language models are built to imitate people pretty closely, using much of what people wrote on the Internet and probably a lot of textbooks too. If bots didn't act like people, they'd just be weird. (And when they fail at imitating people, that is pretty weird.)
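To make "imitate" a bit more concrete, here's a toy sketch of the underlying objective (my own illustration in PyTorch, not how any particular lab writes it). A real LLM does the same thing, just with a transformer and trillions of tokens of human-written text instead of one sentence:

```python
# Toy sketch of the imitation objective: predict the next character of
# human-written text. Everything here is illustrative, not production code.
import torch
import torch.nn as nn

text = "people wrote this and the model learns to continue it "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([stoi[c] for c in text])

# Stand-in for a transformer: embed each character, map to next-character logits.
model = nn.Sequential(nn.Embedding(len(chars), 32), nn.Linear(32, len(chars)))
opt = torch.optim.Adam(model.parameters(), lr=0.1)

for _ in range(200):
    logits = model(ids[:-1])                             # prediction at each position
    loss = nn.functional.cross_entropy(logits, ids[1:])  # compare to what the human wrote next
    opt.zero_grad()
    loss.backward()
    opt.step()
```

That's the whole trick: the model gets graded on how well it guesses what a person wrote next, so acting like people is baked into the objective.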
I personally think no one is researching AGI. We are nowhere near even starting the project and very far from a singularity.
Many people believe that they are AGI researchers. If you think you can do a better job, you're welcome to try a different approach, or maybe write in more detail about what approaches you think would be promising and see if anyone listens.
But another way to interpret this is as a prediction. What will AGI research find in future years? All I can say is that the future in general is very hard to predict, and the future of AI is definitely one of the harder parts. Many people have made predictions, but none of them seem all that persuasive to me.
Knowing whether someone is conscious is sort of like knowing whether they're awake. (Sometimes we even use the word that way.) Most of the time in day-to-day life, this is pretty easy to answer. Are they sleeping or not? There are rare cases when it's hard to tell. They often have to do with trauma, like someone being injured.
If I asked you whether ChatGPT is conscious, I'd bet you'd say no, it's doubtful. But if I asked you whether your friends are conscious, you'd probably furrow your eyebrows suspiciously.
I'm guessing you probably already know this, but @Earthaboom was referring specifically to the idea of philosophical zombies (p-zombies), i.e. human automatons that behave exactly as a human would but lack a subjective inner (conscious) experience. Ultimately we only know our own inner experiences, so assuming other people are conscious is a form of extrapolation -- a reasonable one, to be sure, but an extrapolation nevertheless.
But let's assume, for a minute, that we could actually prove p-zombies do exist. (If you've ever encountered the type of person Scott Aaronson calls a "blankface", you might be half-inclined to agree.) If a p-zombie doesn't have conscious experience, can a p-zombie suffer, or does it merely mimic the expressions of suffering? I can't imagine people would be comfortable with torturing or enslaving p-zombies, so presumably they would still retain human rights.
Now further suppose we could show that large language models (LLMs) actually are conscious. Well, how would we justify conferring rights to non-conscious humans while denying them to conscious non-human LLMs?
I agree that most people aren't worrying about p-zombies or conscious LLMs right now. But as LLMs (and their successors) continue to develop and become more capable, I would expect more people to begin arguing for an "AI Bill of Rights", at which point these discussions will become more commonplace.
Rather than entertaining thought experiments about p-zombies, I'd rather think about unusual cases that are real.
For example, Oliver Sacks was a neurologist who wrote a bunch of popular books like The Man Who Mistook His Wife for a Hat, about people with various unusual neurological conditions. Trying to imagine what it's like for people with these conditions can be rather mind-bending and I recommend them to anyone who thinks about this philosophically.
An elderly relative in a nursing home had severe memory issues, like forgetting what already happened in a short conversation, and later not recognizing us at all, as far as we could tell. But she did a lot of pretending to know what's going on.
What was it like to be her? I can only speculate. I'm guessing it was less about trying to hide something and more about giving the nice people the answer they were expecting. Perhaps people want to have normal conversations even when they're not all there in other ways? How other people behave when you give a weird answer can be unpleasant, so that's incentive enough to try to avoid weird answers. Ask her a yes or no question and she would give one answer and then the other if you seemed disappointed.
A much milder form of this that I'm personally familiar with is not hearing all of what someone said and just trying to go along with it rather than asking them to repeat it. It doesn't really work, and my wife has me pretty well trained now not to do that and to put in my hearing aids.
It's an easy trap to fall into because it's a normal thing to do in noisy situations. We naturally take all available context into account when attempting to understand things. Lip-reading is an extreme version of that, when you can't hear it at all. And maybe this isn't all that unlike what an LLM does?
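For what it's worth, you can poke at that analogy directly. This little sketch assumes the Hugging Face transformers library and a stock BERT model; a masked language model does exactly this kind of fill-in-from-context guessing:

```python
# Illustrative only: ask a masked language model to guess a word it "didn't hear".
# Requires the `transformers` library; the model name is just a common default.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("Sorry, I didn't quite [MASK] what you said."):
    print(round(guess["score"], 3), guess["token_str"])
```

Whatever words it ranks highest, the guesses come purely from the surrounding context, which is more or less the going-along-with-it strategy, automated.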
Even among people who are functioning well, I believe there's a wider range of mental diversity than generally assumed. Assuming people are "normal" unless you find out otherwise is a default, polite stance taken with strangers, rather than something with a whole lot of thinking behind it.
I expect people who do customer service or work in hospitals have more experience with this than most people, and likely make fewer assumptions about what "normal" is than, say, a software engineer working at a tech company with rigorous hiring standards, or young people in school who are surrounded by other young people.
But customer support people and hospital staff probably have biased assumptions about the general public too, based on who they see.
Rather than entertaining thought experiments about p-zombies, I'd rather think about unusual cases that are real.
This here is an excellent example of the apparent dismissal of an important topic. Not only are you dismissing it as "not real", but in doing so you're not deeming it important to even understand.
You're substituting away the need to understand things like p-zombies and instead urging us to study and understand "real" phenomena found in the medical field. There's overlap, though; you will eventually have to face these "not real" philosophical concepts as you dive further into the medical conundrums.
The same thing will happen to AI researchers. The more and more they program, the more they'll encounter things that will make them scratch their heads. Then they'll export their work to some philosophers to wrangle with because they can't and won't.
In terms of AI safety, we are careening towards a cliff asking what's the point of brakes and seat belts. Those things won't make the car go faster.
And I should be clear, I think AI won't be as scary as sci-fi makes it out to be. There won't be a singularity. We won't worship our new robot overlords.
I mean safety for us mostly, to protect ourselves from ourselves because we're all confused currently.
Are you saying that p-zombies are real? I thought it was a philosophical construct.
The issue of p-zombies is real, and it highlights an issue we have with perception. If the average person can't easily say why something is alive and experiences things, how are we to have animal rights, let alone AI research? There's a whole lot of humanists who declare man to be above all other animals on this planet and that we're the only ones truly conscious. There's another set of people who treat consciousness as a gradient (with man at one end and all other life lesser than).
Because we can't figure out what is alive and what is a flesh robot, half of us kill, maim, and torture animals without issue. The other half cry foul but can't prove why it's immoral to do those things.
Same thing will happen to AGI.
Understanding these philosophical issues has real-world impact. If you want to stay on the topic of medicine, euthanasia is hotly debated. When is the comatose patient dead, in pain, miserable, and in need of death, and when should they be kept alive? At what stage does Alzheimer's officially turn the person into someone else? What makes up the person in the first place, memories? Do we need to create an AGI with the capacity to have memories in order for it to count as being alive, or does a hard drive full of facts count?
I think people are often confused about AI. People often disagree about how to treat animals. There are tough medical decisions to make. But it seems unlikely that philosophizing about p-zombies will help people make any decisions or resolve any disputes?
I think these are important issues, but I'm skeptical that p-zombies are a useful way to think about them.
Edit: maybe that doesn't matter, though? If you find it useful to think about p-zombies, we don't need to agree on this.
Well, that's the issue in my eyes. We can't see eye to eye on the importance philosophy plays in solving these ethical and moral quandaries, and it seems we're both educated. How do you think the layman fares?
If I can't explain to you the importance of proper conceptualization and the role it plays in how we interact with other lifeforms, and I'm trying, it only proves my point about how off-center we are as a species and how we're destined for confusion and pain. It's the reason why politics is the way it is. Politics preys on emotions, and those emotions aren't steeled with good reason. It's just knee-jerk reactions all the way down.
I react to animal torture, but my explanations fall on deaf ears and a million chickens continue to live and die in cages because "they're just chickens". I can't explain it in a way the average chicken nugget enjoyer will understand. I can't say "dude, imagine being in a cage your entire life" because many will just say "they're not people, they're chickens", as if that's some qualifier for dismissing the argument.
I can't say "that chicken experiences life the same as you or I it's cruel! It feels!" because that same person will make me prove it and I can cite Descarte's "I think therefore I am" until I'm blue in the face but it still won't register in the person that wants me to lay consciousness out on a table, from the chickens body, so they can point and look at it.
I can't tell the religious person to chill out on the chicken torture because they're hard-coded to believe that chicken isn't blessed by a soul, so who cares. Just don't be mean to it for no reason.
But don't worry, you're not the only one that replies in this way. There's an entire field of highly qualified MIT grads out there working on AI and saying "so? Who cares, how is that important?" when confronted by philosophers trying to warn them.
Well, I think that's getting rather broad. I didn't say all philosophy isn't useful, just that I don't find the particular concept of p-zombies to be useful. Perhaps philosophers disagree about whether it's useful too? Philosophers disagree on a lot of things.
It sounds like you're frustrated and hoping for very convincing arguments that will get everyone to agree with you? But that's not really how it works. Philosophical arguments aren't automatically more convincing than other kinds of arguments.
I think maybe this is expecting too much of a philosophical conversation? It can be useful to compare ideas, but we need to be comfortable with other people believing different things and not being easily convinced otherwise. Coming to real agreement is rare, but sometimes conversation can be valuable anyway, even if we still disagree.
It's interesting to me that we can't decide on a definition. I'd think self-awareness, where we are (loosely) aware of our thoughts and create narratives for our actions based upon that awareness, would satisfy the definition. However, I've met many people who want to conflate consciousness with something almost spiritual - even in the sciences. I feel our ego is getting in the way here.
My take is that consciousness is an emergent phenomenon (like temperature) that arises from an interconnected network of separate systems of awareness. I don't think it's binary; I think it's a gradient. And I think language unlocks a ton of that interconnectedness. There's been a lot of interesting study on the impact of language, from visual perception to internal dialogue - people who never learned any language (spoken or signed) have an incredibly difficult time recalling life before they learned one. In this I think LLMs have a larger lead than we think.
I think a great test of human-level consciousness is to ask why a specific choice was made during a given circumstance. If the response is an accurate account of the programmatic logic that it ran on, it's most likely not complicated enough. However, if it attempts to rationalize, to the best of its ability, a reason behind a whole slew of complicated interconnected systems of awareness, and the answer is more narrative than factual, then you're probably dealing with human-level consciousness.
Our inability to accurately explain our own actions, or even our own consciousness, is evidence to me that we're essentially black boxes: networks so complicated that tracing exactly what caused an output is virtually impossible for the very creatures doing the computation.
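To put that contrast in concrete terms, here's a toy sketch (entirely my own, with made-up names): a simple rule-based agent can hand you the exact logic it ran, which is the "not complicated enough" case above, whereas a black box asked the same question can only offer a story after the fact.

```python
# Toy contrast for the "explanation test". Purely illustrative, hypothetical names.
def rule_based_choice(hungry: bool, food_nearby: bool):
    """Return (choice, explanation); the explanation IS the program trace."""
    if hungry and food_nearby:
        return "eat", "rule fired: hungry AND food_nearby -> eat"
    if hungry:
        return "search", "rule fired: hungry AND NOT food_nearby -> search"
    return "rest", "default rule: NOT hungry -> rest"

choice, why = rule_based_choice(hungry=True, food_nearby=False)
print(choice, "|", why)  # an exact, non-narrative account of the logic that ran
# A person (or a big enough network) asked "why?" can't read out its own trace,
# so the answer comes back as a rationalization instead.
```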
Here are some of the quality thinkers who have helped inform this mindset, if you're interested:
https://radiolab.org/podcast/91725-words
https://www.youtube.com/watch?v=fjbWr3ODbAo
https://www.youtube.com/watch?v=9X68dm92HVI
https://www.youtube.com/watch?v=ph7LcupAENw
https://www.youtube.com/watch?v=HqekWf-JC-A
https://www.youtube.com/watch?v=RKK7wGAYP6k
Our inability to accurately explain our own actions, or even our own consciousness is evidence to me that we're essentially black-boxes.
Yes, we are so used to consciousness being a mystery that we don't really understand what it would be like for it to not be a mystery.
The history of how people understand temperature is interesting. Apparently it was once a rather loose term, before accurate thermometers were invented. Today we distinguish between temperature itself and things like wind chill, but at one time it wasn't clear that temperature was a one-dimensional thing that you could measure, versus something like how people feel when they're in a room that's warm or cold. There were rough ways to describe this quality of a place, sort of like how we might loosely talk about a "dry heat" today.
Nowadays, temperature is defined scientifically and people pretty much accept that. A good thing, too. Imagine discussion of global warming if people couldn't even agree on what "temperature" means?
Also, compare the loose relationship of "intelligence" to "IQ." An IQ test is a fairly reasonable way to measure intelligence in humans if it's carefully done, but it's not uncontroversial. Although scientific intelligence research is pretty solid, it's a little loose, not as broadly accepted in popular culture, and there's some disagreement over whether IQ and intelligence are the same thing.
What "intelligence" might mean for bots is much more uncertain. It's not clear an IQ test makes sense, or even any human test. You can certainly ask a bot to take a test and many people do; benchmarks are useful for stimulating research. But it's not clear that a one-dimensional intelligence scale will ever make sense for bots, given that they have such varying capabilities on different tasks.
Maybe "consciousness" will take on a different and more specific meaning once there's more scientific progress in understanding it and people have gotten used to it? We might end up with other words for some of the things we lump in under "consciousness." Once definitions have shifted a bit, people in the future might have a harder time understanding what we write today.
And this is true more generally for scientific research. An Internet writer that I rather like sometimes writes about "abstract noun abuse" where scientists come up with a new way to measure something, often using a survey, and they give it a name and claim it's a measurement for the fuzzy concept. This makes the research sound more significant than it is. The big five personality traits are somewhat well-established, but they're also an example of that. Why five dimensions? How should the axes be labelled? Why should anyone else care how scientists label them?
Word usage is usually quite loose, flexible, and context-dependent, and that's usually fine. Whatever works to get a message across is fair game. From one perspective, you could say that words don't really have clear meanings outside a sentence. But why stop there? How much context do you need? The answer from AI research seems to be: the more the better.
Part 1: Mostly during my PhD, I somewhat-methodically interviewed a couple dozen people to figure out what they meant by consciousness, and found that (a) there seems to be a surprising amount of diversity in what people mean by "consciousness", and (b) they are often surprised to find out that other people mean different things when they say "consciousness". This has implications for AI safety advocacy, because AI will sometimes be feared and/or protected on the grounds that it is "conscious", and it's good to be able to navigate these debates wisely.
(Other heavily conflated terms in AI discourse might include "fairness", "justice", "alignment", and "safety", although I don't want to debate any of those cases here. This post is going to focus on consciousness, and on general ideas about the structure of alliances built around confused concepts.)
Part 2: When X is a conflated term like "consciousness", large alliances can form around claims like "X is important" or "X should be protected". Here, the size of the alliance is a function of how many concepts get conflated with X. Thus, the alliance grows because of the confusion of meanings, not in spite of it. I call this a conflationary alliance. Persistent conflationary alliances resist disambiguation of their core conflations, because doing so would break up the alliance into factions who value the more precisely defined terms. The resistance to deconflation can be deliberate, or merely a social habit or inertia.
I haven't quite wrapped my mind around how to explain it, and I've mentioned it before on here, but I don't believe in the concept of consciousness, in AI or in humans. I don't believe in consciousness the same way I am not religious nor have a belief in the concept of 'souls'. I think it's a new age concept abstracted into the secular community.
This article may help me describe more accurately what I mean by all this, thanks.
This kind of thing is why linguistics is such an interesting topic. The words and language that we use quite literally affect the way that we think!
Here's the article summary:
That's really interesting, thanks for sharing!