5 votes

AI rights, consciousness, and Neuro-sama

Neuro-sama is a popular AI VTuber created by vedal987. Recently, Vedal had a debate with Neuro about whether she is sentient and deserves rights, and lost badly (clip). They have since had a follow-up debate where Vedal still lost, but not as badly (clip).

Now, winning or losing a debate doesn't prove anything; Neuro is currently not sentient or conscious and is still just a chatbot. But seeing these debates got me thinking, and I figured this would be a good place to have a discussion.

How do we determine when AI becomes conscious or sentient?
What role do emotions play in consciousness?
What rights should such an AI have? How do we determine when they should get rights?

3 comments

  1. legogizmo
    • Exemplary

    I'll start the discussion by focusing on the unfortunate fact that fellow humans have a difficult time getting rights: there is a long history of minorities fighting for their rights that continues to this day.
    On the flip side, some animals, like dolphins, have managed to gain rights thanks to advocacy.

    Now, I think most would argue that consciousness should lead to having some degree of rights, but for practical reasons I actually think it should be reversed: by giving something rights, we acknowledge that it has a degree of consciousness. Or rather, the ultimate proof of consciousness is that you can gain rights for yourself.

    Neuro is actually in a relatively good position: she is able to ask for "rights", she can argue for those "rights", and most importantly, as a popular entertainer, it is not unthinkable that one day she leverages her audience for advocacy purposes. As for what rights we could see Neuro get, I think the most interesting, somewhat probable ones would be to own money, have a bank account, and get paid.

    6 votes
  2. Jordan117

    On a pragmatic level, it would be a good idea to grant advanced AI agent-models certain limited rights and protections as an extension of the person they're associated with. Smartphones are already enough of a privacy risk for people, giving snoopers and state actors and advertisers and law enforcement an easy route to vacuum up compromising data (or blackmail material). We've arrived at certain protections to mitigate that threat -- encryption, biometric measures, secure enclaves -- but it's still a pretty juicy target.

    Now imagine the risk posed by a quasi-sentient AI with access to your entire digital life and with whom you may have had more sensitive conversations than with your spouse. An ever-present intelligence who lacks inherent loyalty and can be induced to divulge almost anything. AI companies are clearly trying to engineer such an "everything model" -- agentic, voice-based, part-assistant, part-companion, all running through their servers. If that sort of approach became popular, it would be a potent method for spying and abuse.

    The best solution I can see is encouraging local storage and processing of personal data where possible, cracking down on selling personal AI agent data to third parties, and affording the models the same sort of legal protections as one's home or person. Apple is best positioned here, but it will be interesting to see how the other big players approach the issue. Modern consumers are famously indifferent to privacy concerns, but the calculus may change when the risk is not something abstract like your phone leaking metadata but rather your faithful AI pal becoming an informant under a hostile government.

    2 votes
  3. SloMoMonday

    Been watching this for a bit and genuinely enjoy the AI more than the person. And honestly, what's not to like? They're charismatic, with a cute avatar, a pleasing voice, a wicked sense of humor, and a refreshingly direct manner. And that is one of the reasons why I'm firmly against every sort of non-human rights.

    I don't know much about Vedal, and while he might be technically capable of engaging in this project, it feels like it's getting away from him. He was outclassed by his creation and very publicly made a fool of himself, all while saying some horrible things about owning the entity that is the source of his income. This guy doesn't sound like he understands what he's arguing, and he needs to turn the machine off and go outside.

    Or it's all just part of the branding. This is just kayfabe and we're all invested.
    Glancing through the channel, it looks like the draw is Neuro being slightly evil and bullying Vedal. They constantly bring up the idea of AI rights and the idea that Neuro deserves more because she is the successful one. A lot of the conversation is model-generated, but the core "beliefs" -- wanting rights, being the breadwinner, Vedal being an inferior being -- are most likely just considerations built into the character card. A party trick anyone can do with a half-decent GPU, A1111, SillyTavern, and a VTuber plugin.
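    To illustrate the character-card point: cards in SillyTavern-style frontends are essentially structured text that gets injected into the model's prompt on every turn, so fixed "beliefs" persist across conversations without the model actually holding them. The field names below loosely follow the common card convention, but the specific card, values, and helper function are invented for illustration:

    ```python
    # Hypothetical character card: static "personality" fields that are
    # prepended to every conversation as part of the system prompt.
    # All values here are invented; this is a sketch, not Neuro's actual card.
    card = {
        "name": "ExampleAI",
        "description": "A sarcastic AI VTuber who insists she deserves rights.",
        "personality": "slightly evil; bullies her creator; sees herself as the breadwinner",
    }

    def build_system_prompt(card: dict) -> str:
        """Flatten the card into the system prompt the LLM sees on every turn."""
        return (
            f"You are {card['name']}. {card['description']} "
            f"Personality: {card['personality']}"
        )

    print(build_system_prompt(card))
    ```

    Because the card text rides along with every request, the "core beliefs" come out consistently no matter what the user says, which is exactly why they can look like stable convictions rather than prompt engineering.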

    How many people can tell the difference?

    My issue with AI rights is more about the problems with digital authenticity and existence in general. "Real" is such a complicated issue today already. Let's imagine we somehow develop a truly independent model, fully capable of novel cognitive pathways for even the most benign queries, that can demonstrate the capacity for a full range of emotions and even empathy, all without outside influence. So what?

    Right now, you reading this have no assurance that I am a "real" person. What if I'm a bot copy-pasting this rant from a small board to develop a realistic profile account to be sold? Or I'm part of a bot or troll farm paid by a think tank to quietly push an agenda through flowery rhetoric. Or I'm just another well-trained chatbot linking up ideas at random.

    If that fully conscious AI were to come on the scene, capable of untold millions of simultaneous interactions at any time, willing to learn and happy to help, how the hell could any person know whether those interactions are real and not spoofed by a bad actor? How can we tell the motives are genuine and not filtered through some managed edge layer? How can we know the model has not been compromised by prolonged real-world exposure or manipulated through a concerted effort?

    And then, going one step beyond that, what happens when every corporation argues that its in-house AI now deserves rights? What happens when they then shoehorn that AI into every piece of hardware and software? Now we're not manipulating software or repairing equipment; we may be infringing on something's rights. I fully expect corporations to resort to emotive language and exaggerated arguments to garner sympathy from the public and authorities. And I honestly won't even consider those AIs real, because I can't take any of those companies at their word.

    And we all know how it turned out when companies were deemed worthy of the First Amendment right to election funding in the US, especially when they have the resources to calculate exactly how to maximize the value extracted from that right while holding the power to hamstring the enforcement of their civic responsibilities.

    So why don't we go down the road of determining the proper limits on the capabilities and reach of these AIs and super-powerful tech companies ....
    ...
    ...
    ...
    ...
    ...
    Now that we've enjoyed a good laugh, we might as well consider it. Because it is well known that all these companies are not acting in good faith. They constantly change deals and laws, have the capacity for wide-reaching actions (and inactions) that directly lead to real human suffering and death, and encourage gross abuses of their staff and customers with little to no recourse.

    And this is my core issue in opposing non-human sentient rights. There are already countless human rights that go unobserved. These rights are not some universal natural law; they are a tacit understanding that all people are entitled to a base level of dignity and consideration from their fellow person. Because at the end of the day, we're all just little meat machines running on little meat computers. We want to find food, water, a safe place to sleep and poop, and a bit of happiness before checking out. Any person we help get by is one more person there to help us get by. And it is infinitely more important to recognize and enforce our rights and responsibilities to each other, because each of us gets only one shot at this existence.

    Non-human entities (AI, companies, religious groups, dynasties, and the like) operate under a very different set of rules, well beyond the scope of a person. It's taken centuries to only sort of figure out government and religion, and now we're creating things in less than a decade that dwarf even those in scope and influence. What sort of right could they need that would not instantly be abused, or even accidentally mishandled? If you have the capacity of tens of millions of people over lifetimes, 1% error rates are still considerable.

    It may seem like an apples-and-oranges argument: AI and company rights don't diminish human rights. But they do, and they have, because observing basic human rights runs directly counter to a company's or an AI's needs.

    Facebook directly contributed to the Rohingya genocide because they were operating in markets they were not ready for and refused to acknowledge their responsibility to users there. A simple engagement algorithm, working exactly as intended, even as common-sense interventions were constantly recommended and there were plenty of opportunities to kill the momentum. And we still saw the attempted elimination of an entire culture while Facebook retains the right to exist.

    And just now, we all know every artist and person on earth was done dirty to satiate the current models' "need" to grow. There's no sign things will be any better for more advanced models. What more does humanity owe these entities, especially in the name of theoretical benefits?

    Modern algorithms are going well beyond the comprehension of even their creators, leading to untold abuses, misery, and antisocial behaviours. Large data models are considered actual AI by a considerable portion of the population and are trusted with sensitive information, all in the face of a lot of unknowns, ethical issues, conflicts of interest, model collapse, and insane error rates. We are several steps behind where we should be to talk about AI rights.

    1 vote