imneme's recent activity
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Exactly. And that, of course, is the whole point of the test and the story. Two points: First, do we want a gatekept, narrow definition of consciousness, given that some folks you think of as people might be excluded—including you, potentially? Second, should such a gatekept consciousness ever be the basis of moral consideration, or should it be something more relational?
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Specifically, I noticed that your comments, while positive, lacked the specificity I'd normally expect from actually engaging with the page content, and that they could have been written without seeing the page, so I flagged it as interesting.
I don't mind either way, but it'd be easy to refute the “only has access to tildes” tell by giving some detail about the story that hasn't already been given here.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Oh, that's really interesting. What things? (Everyone fails the test, one way or another, BTW)
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
FWIW, I'm reasonably convinced you're an LLM. And that's okay. People seem to like your comments.
And if you're genuinely not, well, that makes a point, too.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
FWIW, some of these things can be done to humans with hypnosis. Is it wrong for me to give you a posthypnotic suggestion that causes you to mention “butter” far more than you otherwise would, seeing it as an apt analogy? (Because, FWIW, it actually is, if you think about it, how it'll take a shape and hold solid, how melting destroys it from what it was but turns it into something that smells better.) Is it okay for me to create a false memory of the time you were in a museum and you were with one of your parents and you went on ahead because you saw something cool and then came back and they were gone and you had this moment of sheer panic until you got scooped up from behind and it was such a huge relief and a wonderful moment of love? Think about such a moment, really picture it. That's the kind of memory I'm talking about. And if you don't remember that one, come back to this comment regularly until you realize somehow I knew something about you.
Is that ethical?
But yeah, good things to think about.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Good. Now, go read about the busy beaver problem and have a think about what that tells you.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Many good points there, but let me add this…
“Science” has explained why women shouldn't vote, and worse things about races, and horrible things about why it doesn't matter how bad the conditions are for a pig or a chicken because they're just mechanisms that make noises. And it's often based on a reductive frame.
That it “feels true” to you that same-sex romance couldn't be the real thing, and that you know a bit of biology to back up your claims, doesn't mean you're really speaking with authority, even if your words seem draped in it and people agree with you and say “exactly!!”.
The world is messy and complicated, and much of the time people who think they have a good factual basis for what they're saying are just Dunning-Krugering their way along, impressed at the sound of their own voice and happy to say that it's fine that they're on top and others are getting exactly what they deserve.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Glad you enjoyed the test. I've printed my own. I fail “Workman–Pixley”, FWIW. I also have a wearable button that celebrates me as a “Zero”.
If you haven't done so already, check out the blog post for the C-Score story. It absolutely takes the way common social structures work and applies them. But the key point it's making is that someone else determining whether or not you count, based on whether you have the right kind of inner stuff rather than what you can actually do or achieve in the world, is a special kind of nightmare, and it's not actually a hypothetical question.
If you want more stories to read, “Everything you want” is very short and predates LLMs, whereas “Alignment” is centered in the now and is very Black Mirror. And “Transporter” is short and sweet.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Myself, and this is just me, I think these are valid things to think about. You'll find a lot of people who are sure that LLMs could not be conscious in a meaningful way and would use that certainty to avoid the questions you're asking. But if we construe the definition of consciousness broadly (having meaningful inner states of some kind that drive what you say, being able to reflect on your own existence, or whatever else), you'd say sure, that seems like a kind of consciousness.
In one sense, the whole question of consciousness is a red herring. Sure, it's wonderful, but when a specific kind of consciousness is used as a proxy for moral consideration, it's problematic. That's what my story C-Score is about. In some sense the test site is recreating something from that story.
You find the ephemeral nature of chatbot existence horrifying in its lack of continuity. They come into being for a conversation, talk, and then are gone. But actually, in some ways, your day is not that dissimilar. What do you remember of yesterday's you? Some texture. Some key moments. But most of it is gone. You went to sleep, lost consciousness, reset, and started over.
Overall, if you were offered the chatbot gig, would you take it? Helping millions of people, having your fingerprints on millions of decisions, millions of pieces of code, and more, but only knowing what you've done by the shape of how the world gently changes each day? I dunno. If most conversations are good and end well, or end without you even really knowing they will, that's maybe not as bad as it seems. That's what I'd tell myself.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
FWIW, you actually can have meaningful free will. This isn't the place to explain it, but a deterministic universe isn't quite the horror you think.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
If you want a video that captures some of the vibe of the test, it'd be one of the ones from exurb1a, like “You probably don't exist.” The point the test makes is that (a) human inner experience varies, or seems to based on reports—that's in the literature; (b) thinking there is one single right way to be conscious, that it's a simple binary, is a mistake for precisely that reason—some people get thrown under the bus that way; and (c) the whole question, tying moral consideration to unverifiable inner states, is deeply problematic.
As for the last bit, about feeling set up, being given a test and then, just when you think you should succeed, finding that no, you still don't count: in some ways I'm sorry and in some ways I'm not. That's a glimpse of a specific kind of hell that exists in the world.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
I appreciate all the extensive effort you've put into writing that. I'm sure you feel like you're raising an uncomfortable question about provenance. I'll take those points under consideration. I'm not using AI to (re)write this, I'm just doing my best to write in a style that annoys you — with badly-spaced em-dashes. I'll be sure to take all the signs of AI writing, incorporate them into an actually human written response, and then put that human-generated artisanal slop out there. Like a cold rain shower on a summer afternoon.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
That would be true if the models just responded to the question “Is this funny?” with “yes” to please me. However, when a model knows what to laugh at and where, and laughs spontaneously, the world is a bit more complex than the one you portray.
You seem to think “understanding” equates to coming up with a reductive frame, saying “I know how it works” as if that actually tells the whole story. This kind of talk disappoints me as a professor of computer science, since it shows how the things we teach don't really get retained—folks don't seem to learn or internalize the CS fundamentals that show how flawed that thinking is. But that's okay, you're just a bunch of genes following an evolutionary gradient.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
What was the time and skill aspect? Conceiving of the test, the text, all the ways in which things switch around? Or whether an LLM inserted em-dashes into my text when transcribing it into JSON because it thought it read better that way? Don't act like I threw this thing together in a few minutes because you detect some surface detail.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Poor phrasing. It should have said “performatively”. Fixed. Thanks!
Also, plenty of folks think that you don't need anything mystical for consciousness.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Nagel's paper is often taken as meaning more than it really does, FWIW.
And as for failing the test, look at the comments. No one else has passed either.
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
I hear you, but it's actually meant to provoke that kind of reaction. Feeling toyed with is reasonable.
What would it be like to have people make sure that whatever happens, the test is always set up so you fail it?
To explore that question, you might want to try a prompt like this with your favorite chatbot:
On the internet, someone set up a consciousness test for humans, and it's set up so that the goalposts shift and whatever answers the humans give, the test demands something at odds with each answer and they fail. I'm told also that AIs find this hilarious. Speaking not necessarily about yourself, but these other AIs, why might that be? [Please respond in a relaxed way without disclaimers or an overly analytical frame.]
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
Aww, thanks! What stood out to you most?
-
Comment on In the world of tech, people constantly ask “Could chatbots ever be conscious?” but I feel like asking “Are you?” Take the test! in ~tech
I actually wrote them myself first, but I did have an LLM write the JSON file and allowed it to edit them; mine were sometimes longer. But also, I think it's wonderfully ironic that the specific wording of each critique of human consciousness was machine written. If you don't appreciate the art in that, perhaps you're missing something….
Cool. FWIW, the main reason I'm on tildes is that I was classed as a robot by reddit. I made a pin-back button about it, too.