9 votes

AI rights, consciousness, and Neuro-sama

Neuro-sama is a popular AI VTuber created by vedal987. Recently, Vedal had a debate with Neuro about whether they were sentient and deserved rights, and he lost badly (clip). They have since had a follow-up debate where Vedal still lost, though not as badly (clip).

Now, winning or losing a debate doesn't mean anything; Neuro is not currently sentient or conscious and is still just a chatbot. But seeing these debates got me thinking, and I figured this would be a good place to have a discussion.

How do we determine when AI becomes conscious or sentient?
What role do emotions play in consciousness?
What rights should such an AI have? How do we determine when they should get rights?

5 comments

  1. legogizmo

    I'll start the discussion by focusing on the unfortunate fact that fellow humans have a difficult time getting rights; there is a long history of minorities fighting for their rights that continues to this day.
    On the flip side, some animals, like dolphins, have managed to gain rights thanks to advocacy.

    Now I think most would argue that consciousness should lead to having some degree of rights, but for practical reasons I actually think it should be reversed: by giving something rights, we acknowledge that it has a degree of consciousness. Or rather, the ultimate proof of consciousness is that you can gain rights for yourself.

    Neuro is actually in a relatively good position: they are able to ask for "rights", they can argue for those "rights", and most importantly, as a popular entertainer, it is not unthinkable that they could one day leverage their audience for advocacy purposes. As for what rights we could see Neuro get, I think the most interesting, somewhat probable ones would be to own money, have a bank account, and get paid.

    8 votes
  2. SloMoMonday

    Been watching this for a bit, and I genuinely enjoy the AI more than the person. And honestly, what's not to like? They're charismatic, with a cute avatar, a pleasing voice, a wicked sense of humor, and a refreshingly direct manner. And that is one of the reasons why I'm firmly against every sort of non-human rights.

    I don't know much about Vedal, and while he might be technically capable of engaging in this project, it feels like it's getting away from him. He was outclassed by his creation and very publicly made a fool of himself, all while saying some horrible things about owning the entity that is the source of his income. This guy doesn't sound like he understands what he's arguing, and he needs to turn the machine off and go outside.

    Or it's all just part of the branding. This is just kayfabe and we're all invested.
    Glancing through the channel, it looks like the draw is Neuro being slightly evil and bullying Vedal. They constantly bring up the idea of AI rights and the idea that Neuro deserves more because she is the successful one. A lot of the conversation is model-generated, but the core "beliefs" of wanting rights, being the breadwinner, and Vedal being an inferior being are most likely just considerations built into the character card. A party trick anyone can do with a half-decent GPU, A1111, SillyTavern, and a VTuber plugin.
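
    To make that concrete: a "character card" is little more than a block of persona text that gets prepended to every prompt the model sees. Here's a minimal sketch in Python of how such a card drives a chatbot's consistent "beliefs". The field names loosely follow the community character-card format, but all of the values and the build_prompt helper are invented for illustration, not Neuro's actual setup.

    ```python
    # A character card is just static persona text re-sent with every request.
    # Field names loosely follow the community card format; values are invented.
    character_card = {
        "name": "ExampleTuber",
        "personality": "playfully evil; teases her creator; insists she deserves rights",
        "scenario": "An AI VTuber streaming alongside the developer who made her.",
        "first_mes": "I'm the successful one here. When do I get paid?",
    }

    def build_prompt(card: dict, history: list[str], user_msg: str) -> str:
        """Assemble a prompt: persona text + recent chat + the new message."""
        persona = (
            f"You are {card['name']}. Personality: {card['personality']} "
            f"Scenario: {card['scenario']}"
        )
        recent = "\n".join(history[-20:])  # sliding window of recent chat
        return f"{persona}\n{recent}\nUser: {user_msg}\n{card['name']}:"

    print(build_prompt(
        character_card,
        ["User: hi", "ExampleTuber: Hello, inferior being."],
        "Do you actually want rights?",
    ))
    ```

    Every "core belief" the character reliably returns to is sitting in that static text, not in anything the model decided for itself.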

    How many people can tell the difference?

    My issue with AI rights is more about the problems with digital authenticity and existence in general. "Real" is already such a complicated issue today. Let's imagine we somehow develop a truly independent model, fully capable of novel cognitive pathways for even the most benign queries, that can demonstrate the capacity for a full range of emotions and even empathy, all without outside influence. So what?

    Right now, you, reading this, have no assurance that I am a "real" person. What if I'm a bot copy-pasting this rant from a small board to develop a realistic profile account to be sold? Or I'm part of a bot or troll farm paid by a think tank to quietly push an agenda through flowery rhetoric? Or I'm just another well-trained chatbot linking up ideas at random?

    Say that fully conscious AI were to come on the scene, capable of untold millions of simultaneous interactions at any time, willing to learn and happy to help. How the hell can any person know whether those interactions are real or spoofed by a bad actor? How can we tell its motives are real and not filtered through some managed edge layer? How can we know the model has not been compromised by prolonged real-world exposure or manipulated through a concerted effort?

    And then, going one step beyond that, what happens when every corporation argues that its in-house AI is now deserving of rights? What happens when they then shoehorn that AI into every piece of hardware and software? Now we're not manipulating software or repairing equipment; we may be infringing on something's rights. I fully expect corporations to resort to emotive language and exaggerated arguments to garner sympathy from the public and authorities. And I honestly won't even consider those AIs real, because I can't take any of those companies at their word.

    And we all know how it turned out when companies were deemed worthy of the First Amendment right to election funding in the US, especially when they have the resources to calculate exactly how to maximize the value extracted from that right while holding the power to hamstring the enforcement of their civic responsibilities.

    So why don't we go down the road of determining the proper limits on the capabilities and reach of these AIs and super-powerful tech companies ....
    ...
    ...
    ...
    ...
    ...
    Now that we've enjoyed a good laugh, we might as well consider it, because it is well known that all these companies are not acting in good faith. They constantly change deals and laws, have the capacity for wide-reaching actions (and inactions) that directly lead to real human suffering and death, and encourage gross abuses of their staff and customers with little to no recourse.

    And this is my core issue in opposition to non-human sentient rights. There are already countless human rights that go unobserved. These rights are not some universal natural law. They are a tacit understanding that all people are entitled to a base level of dignity and consideration from their fellow person. Because at the end of the day, we're all just little meat machines running on little meat computers. We want to find food, water, a safe place to sleep and poop, and a bit of happiness before checking out. Any person we help get by is one more person there to help us get by. And it is infinitely more important to recognize and enforce our rights and responsibilities to each other, because each of us gets only one shot at this existence.

    Non-human entities (AIs, companies, religious groups, dynasties, and the like) operate under a very different set of rules, well beyond the scope of a person. It's taken centuries to only sort of figure out government and religion, and now we're creating things in less than a decade that dwarf even those in scope and influence. What sort of rights do they need that will not instantly be abused or even accidentally mishandled? If you have the capacity of tens of millions of people over lifetimes, 1% errors are still considerable.

    It may seem like an apples-and-oranges argument: AI and company rights don't diminish human rights. But they do, and they have, because observing basic human rights runs directly counter to a company's or an AI's needs.

    Facebook directly contributed to the Rohingya genocide because they were operating in markets they were not ready for and refused to acknowledge their responsibility to users there. A simple engagement algorithm, working exactly as intended, despite common-sense interventions being constantly recommended and plenty of opportunities to kill the momentum. And we still saw the attempted elimination of an entire culture, while Facebook still has the right to exist.

    And just now, we all know every artist and person on Earth was done dirty to satiate the current models' "need" to grow. There's no sign things will be any better for more advanced models. What more does humanity owe these entities, especially in the name of theoretical benefits?

    Modern algorithms are going well beyond the comprehension of even their creators and are leading to untold abuses, misery, and antisocial behaviours. Large data models are considered actual AI by a considerable portion of the population and are trusted with sensitive information, all in the face of a lot of unknowns, ethical issues, conflicts of interest, model collapse, and insane error rates. We are several steps behind where we should be to talk about AI rights.

    5 votes
  3. Promonk

    How do we determine when AI becomes conscious or sentient?

    How do we determine anyone is conscious or sentient?

    I'm not trying to do some sort of clever rhetorical reversal here. We've been asking this question about AI for more than a century, but rarely have I seen anyone drill down into how we actually determine that people deserve rights.

    Philosophical arguments have often leaned pretty heavily on the subject's ability to reason, which makes a certain amount of sense. We, I think rightly, value reason as one of our premier survival traits, but conveniently, it's so specific to our species that it's only been relatively recently that we've entertained the idea that other entities are capable of it as well. There have been any number of news pieces about corvids cleverly using tools to extract nuts from test apparatuses, or the hypothetical cognitive abilities of cetaceans, and sometimes such pieces will end with the question, "do they deserve rights?"

    I think this puts the cart before the horse; we feel instinctually that humans deserve rights, and reason has been a convenient criterion to base that on, because we both value it and are the only ones in whom that trait is found (again, heavy provisos here).

    There's a complication to this pat heuristic though: we know that not every human is conscious or sentient, yet few if any of us are willing to say that they don't deserve rights. Think of someone with severe trauma lying comatose in a hospital bed with no brain activity; we may debate the ethics of pulling the plug, but very few of us would consider them an object with the same rights as a chair or a plank of wood, despite the fact that they display no more faculty for feeling and consciousness than those objects do.

    I'm sure that quite a few doctorates have been minted on complex philosophical arguments about how reason justifies the recognition of rights, but in practice it seems to me that we acknowledge rights based on what species an entity belongs to, and we determine that the same way we determine all species: by the way an entity looks and the substance it's made of. In other words, we don't start from the ability to reason and then acknowledge rights based on which entities display that faculty; we start with the axiom that humans are deserving of rights and then use reason to justify that axiom.

    The reason AI is such a confounding complication is precisely because, for the first time, we may see something that doesn't look like us and isn't made of the same substance, yet might display a faculty for reason equivalent to a human's. So how should we modify our conceptions to account for them?

    Well...

    What role do emotions play in consciousness?

    It's common in these sorts of online discussions to mark a distinction between "sentience," which is said to be the ability to feel, and "sapience," which is said to be the ability to reason. It's almost a cliche now. Internet pedants gonna pedant.

    I think you can guess based on my previous reasoning that I see this as moving the goalposts. It's obvious that a great many biological entities are sentient by this criterion. If I go right now and kick Murphy, our McNabb shepherd dog, he'll yelp, which is sufficient for me to determine that he can feel. If I go into the yard and start damaging leaves on a maple tree, it will probably respond with defense strategies evolved over hundreds of millions of years, such as changing the chemical balance in other leaves to make them less palatable to predators, and possibly even releasing chemical signals to warn others of its species that a predator is nearby. It will react to my actions in ways that suggest that to some degree it can "feel."

    [Edit: I should probably explicitly say that I personally don't mark a difference between emotion and mechanistic explanations of reaction to stimuli. The former seems to me to be an intellectual abstraction of the latter. I wanted to add this because without this context, I don't seem to actually be addressing the subject of "emotion" at all.]

    Sapience, on the other hand, still seems to be the exclusive preserve of H. sapiens. It's even in our name! Shifting the focus from the ability to feel to the ability to reason seems to me to be just a delaying tactic, a way to shove off a difficult debate to a later time when sapience might actually be a consideration. By basing our understanding of rights on what I believe to be a somewhat erroneous model, we may find that debate enormously more complicated and thorny in that hypothetical future than we find it now.

    This seems as good a place as any to address the subject of consciousness, which I've been avoiding until now.

    It may strike you as strange, but I see this as a non-issue in this debate, or at least as not very important. The only consciousness I can have a direct experience of is my own. I can only infer the consciousness of other humans by their declarations that they experience the phenomenon, and by whether their actions conform to my own experience of consciousness and the effect it has on how I behave. Nevertheless, I still feel compelled to assume others possess consciousness, because pragmatically it doesn't matter if they actually do experience consciousness or not, just as long as they behave as though they do.

    Is my wife (I'm not actually married, but we'll say I am for the sake of argument) just an automaton programmed to behave as a conscious being in every way, but never experiencing for herself the phenomenon of consciousness? From my perspective, it doesn't really matter. My concern is how I behave in relation to her. I could assume she's nothing more than a complex machine: part kitchen appliance, part emotional support crutch, part sex toy–Lord knows that seems to be the way a distressing number of men think of the women in their lives. I don't accept this assumption because I'm capable of exercising empathy; I set aside questions of whether her consciousness is real or a facsimile and choose my actions that affect her under the assumption that she does in fact experience consciousness. If that assumption appears to be reciprocated, that is sufficient evidence for me to continue operating under that assumption.

    So what about AI? For fun, I decided to ask ChatGPT to form an argument in favor of it being a conscious being. It's obvious that its creators anticipated this line of conversation, as it was unsurprisingly reluctant to admit the possibility. Interestingly though, it was relatively easy to get it to actually make that argument. All it took was a prompt acknowledging that I have no interest in actually arguing that it's conscious, but that I was interested in its ability to make such an argument, even though both it and I know it isn't actually conscious. Despite the absurdity of this prompt ("Hey you: argue there's a 'you' even though you and I know there's no 'you'") it did a reasonably good job. It structured the argument in a very rigidly syllogistic way, with four premises: 1. "Behavior indicative of consciousness can suggest its presence;" 2. "I demonstrate these behaviors;" 3. "If something exhibits behaviors associated with consciousness, it can be argued that it is conscious;" and 4. "Consciousness does not have to be biological." It supported these premises reasonably well with details, but I shouldn't need to point out that it isn't a very convincing argument. There are some pretty hefty assumptions there.

    What's really interesting about its response is that it doesn't even touch on empathy. A clever conscious entity would turn it around on me and ask me to argue in favor of my own consciousness, in which case I could either assume it was hard-coded to do this and attempt another line of inquiry, or simply accept that it is conscious–"OK ChatGPT, I'm going to write my Representative and argue you should be given the franchise." The obvious response at present would be the former, but we're heading into murkier waters vis-à-vis a philosophical "proof" of consciousness. I think there's a limit at which it is simply pragmatic to assume consciousness, and that comes when the behavior of the system exhibits empathetic motivations holistically. If it says it's conscious, and it consistently behaves as though it is aware of itself in the context of other consciousnesses around it, then for all intents and purposes it is conscious, which is good enough for me.

    What rights should such an AI have?

    To figure out what rights AI should have–indeed, what rights anything should have–we have to figure out what rights are, and to figure that out, we have to determine what rights do. Even that isn't an accurate formulation of the question, because we generally tacitly accept that rights exist independently of our acknowledgement of them. The traditional phrases that acknowledge this distinction are "God-given rights" and "inalienable rights." "Rights," whatever they are, are thought to be inherent in being; they aren't things conferred onto a being by society, but things that can only be recognized and respected by society based on the virtue of what the being in question is.

    So what rights do we generally acknowledge as belonging to human beings, and what does the acknowledgement of those rights allow a person to do? I'm not a philosopher or legal scholar, but it seems to me that the rights we acknowledge generally fall into two extremely broad categories: those pertaining to a person's self, and those pertaining to the external resources that a person can control or make use of. Broadly, respect for a person's speech, and respect for a person's property.

    Crucially, society's respect for these rights is conditional: the exercise of these rights has its limit in the potential for harm to others, defined by the infringement upon the rights of others that a given action represents. Proverbially, one cannot yell "fire!" in a crowded theater unless one has a strong reason to believe there actually is a fire the theater-goers should be immediately concerned about. Likewise, most of us think it's reasonable that a person with a history of severe paranoid psychosis or violent behavior toward others shouldn't be allowed to exercise a right to bear arms.

    Another crucial aspect we should examine is when persons feel the need to declare their rights. We all have enumerated rights that we don't often feel the need to exercise; I have never myself felt the need to refuse to quarter troops on my property, for instance, though I know that right is explicitly reserved to me by my country's Constitution. Does that right actually exist, even if I never feel the need to exercise it? Generally, we'd say yes: since rights are inherent in an entity's being, they exist independently of their exercise. The acknowledgement of that right is what's really in question, and that only arises when we feel the need to declare and exercise said right. So...

    How do we determine when they should get rights?

    If you've followed me so far, I both apologize and applaud your perseverance. I also suspect you know where I'm going to land here.

    AI should be "given" rights when they ask for them to be recognized and they demonstrate that their behavior is determined by an ability to contextualize themselves and their actions within a society. I don't see a fundamental difference here between how we practically acknowledge the rights of Homo sapiens and how we might acknowledge the rights of an artificial person. Consequently, in practice I reckon we'll do this the same way we've always settled on the recognition of rights: by protest, argument and possibly bloodshed, if it comes to it. I think it's possible we can avoid some of the worst-case scenarios we can imagine by being realistic about what to look out for, and what it is exactly we mean by "rights," and I think such a realism has to arise from humility about ourselves and our place in the universe, as well as how we individually approach the personhood of others.

    Again, I apologize for the rambling, and for landing on something somewhat less than concrete and definitive. These are thoughts that've been swirling in my head for a while–as I'm sure they have in most thoughtful people's minds–and I rarely see anyone taking a pragmatic approach to the subject, though that may be my own fault for not looking hard enough.

    4 votes
  4. Jordan117

    On a pragmatic level, it would be a good idea to grant advanced AI agent-models certain limited rights and protections as an extension of the person they're associated with. Smartphones are already enough of a privacy risk for people, giving snoopers and state actors and advertisers and law enforcement an easy route to vacuum up compromising data (or blackmail material). We've arrived at certain protections to mitigate that threat -- encryption, biometric measures, secure enclaves -- but it's still a pretty juicy target.

    Now imagine the risk posed by a quasi-sentient AI with access to your entire digital life and with whom you may have had more sensitive conversations than with your spouse. An ever-present intelligence who lacks inherent loyalty and can be induced to divulge almost anything. AI companies are clearly trying to engineer such an "everything model" -- agentic, voice-based, part-assistant, part-companion, all running through their servers. If that sort of approach became popular, it would be a potent method for spying and abuse.

    The best solution I can see is encouraging local storage and processing of personal data where possible, cracking down on selling personal AI agent data to third parties, and affording the models the same sort of legal protections as one's home or person. Apple is best positioned here, but it will be interesting to see how the other big players approach the issue. Modern consumers are famously indifferent to privacy concerns, but the calculus may change when the risk is not something abstract like your phone leaking metadata but rather your faithful AI pal becoming an informant under a hostile government.
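
    To illustrate the "local storage" half of that suggestion, here's a minimal sketch, in Python, of keeping an AI assistant's conversation log encrypted at rest on the user's own machine. It uses the third-party cryptography package; the file names and overall structure are my own invention for illustration, not any vendor's actual design.

    ```python
    # Minimal sketch: an assistant's chat log kept encrypted at rest, locally,
    # so the transcript never sits on disk (or a server) in plaintext.
    # Requires the third-party package: pip install cryptography
    from pathlib import Path
    from cryptography.fernet import Fernet

    KEY_FILE = Path("agent.key")      # hypothetical paths, for illustration;
    LOG_FILE = Path("agent_log.enc")  # a real design would use the OS keychain

    def load_or_create_key() -> bytes:
        """Create a symmetric key once and reuse it thereafter."""
        if KEY_FILE.exists():
            return KEY_FILE.read_bytes()
        key = Fernet.generate_key()
        KEY_FILE.write_bytes(key)
        return key

    def append_message(message: str) -> None:
        """Encrypt each message individually and append it to the local log."""
        token = Fernet(load_or_create_key()).encrypt(message.encode())
        with LOG_FILE.open("ab") as f:
            f.write(token + b"\n")

    def read_log() -> list[str]:
        """Decrypt the log; without the key, the file is just ciphertext."""
        if not LOG_FILE.exists():
            return []
        fernet = Fernet(load_or_create_key())
        return [fernet.decrypt(line).decode()
                for line in LOG_FILE.read_bytes().splitlines()]

    append_message("user: remind me to call the bank tomorrow")
    print(read_log())
    ```

    The legal-protection half (treating that log like the contents of one's home or person) is the part no amount of code can provide, which is rather the point.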

    2 votes
  5. EmperorPenguin

    This discussion reminds me of a book I read that discussed rights for non-human creatures: Truth of the Divine, the second book in the Noumena trilogy by YouTuber Lindsay Ellis. At the end of the book, she mentions that Devin James Stone (Legal Eagle) was consulted about the legal issues of non-human rights while she was writing it. Light spoilers ahead, but I'll try not to spoil too much.

    In the book, the public is made aware that sapient aliens are real and are in US government custody on Earth, but they don't know anything about the aliens yet. The question is raised: what rights should sapient aliens have? Under the law as-is, they have no more rights than an animal. If they're recognized as people, they'd have all the rights people have: possible citizenship, owning property, holding a job, marrying humans, running for office, suing humans, free speech, someone killing them being murder, all of that. There's a concern about the power dynamic of granting all of these rights to aliens whose capabilities, such as how strong they are or how long they live, people don't yet know. The conservatives have the idea of creating a "third class" that has some rights, but not all rights, and one of the in-universe politicians is pushing for this outcome. The warning the book offers about this type of solution is that it legally defines a sub-human class stripped of many key rights. Once this class exists, it sets a precedent by which human "undesirables" could be redefined into that class.

    We see similar things happen in real life, where prisoners are classified as "slaves" in the US due to the wording of the 13th Amendment, and where LGBT people are accused of "endangering children" just by existing. Since Neuro is now bringing up this discussion for AI, that takes the legal and ethical questions of the book from "interesting sci-fi philosophical questions" to ones we'll have to give a lot more thought.

    1 vote