I have serious reservations about this. Imagine being brought into this world as just a series of human brain cells spread on a circuit board, tasked with playing DOOM.
That sounds like a pretty solid fate compared to some of the unluckier actual humans on this planet.
Maybe? Is constant Groundhog Day'ing your way through a life of DOOM, the only world you know, better?
If you have no context for why Doom is a horror game, is it still bad? If it's the only world you know or can imagine, is there suffering?
Mmm, compelling point. Is suffering universal?
Study the wonderful documentary "Wreck-It Ralph" to learn more.
Is anyone else feeling the pure horror I am in reaction to this? How is this even legal?
I think it’s neat. It’s no more a human brain than an NES is an H100 just because they both use silicon transistors. Things shouldn’t be illegal because they are icky. I try to follow harm-based ethics. Hopefully this tech can expand the good we can do for each other.
Yeah, I mean... maybe?
We don't actually know what human consciousness even is or have a way to objectively measure the presence of it.
I think this is a far cry away from an actual human brain, but it's certainly human at least in some aspect.
I'm not so much horrified at the current iteration, but at the iteration a decade down the road where this stuff gets more and more complex.
If it starts being useful and outperforming silicon-based AI, do we eventually get to a point where we're running data centers full of trillions of human neurons to run workloads? How do you verify that something that complex isn't conscious? Where does the line get drawn?
Seems like a potential prototype torment nexus to me.
And the truth is, we will never have the ability to measure consciousness, even in the most fanciful thought experiments. Imagine we identified a set of neurons as perhaps being the seat of consciousness and decided to zap them on and off, then ask the subject what happened, to test whether their consciousness was affected. We would only be measuring the subject's memory, not their actual consciousness. In principle, it should be perfectly possible to remember an event you were not conscious for, in the sense of having an actual subjective experience. Even if consciousness were somehow a physical object that we could touch with next-generation physics, we would have no way of knowing that it was actually consciousness.
We may never find some atom or laws of qualia, but I don't think we're stumbling in the dark. We can and have been making predictions about the inner life of other humans/animals since forever, and understanding the relationship between body, brain, and mind since at least the 1860s with Broca's aphasia or the 1740s with the L'homme machine wartime observations.
There are reasonable assumptions to be made. We accept self-report when people say how they're feeling, and that this often generalizes. Sometimes corrections have to be made for things like fallible memory, cultural differences (e.g., an Eastern patient caught quietly saying "it hurts, it hurts", deferring to the doctor more than the WEIRD baseline), deception (e.g., sympathetic lameness in a pet who wants to be treated better), and other sources of bias.
If you only had access to a brain scan, do you think you wouldn't be able to diagnose whether the person is in pain based on that, or what sort of symptoms a stroke in a particular area might cause, along with the qualia that go along with that? Or that this predictive ability says nothing about our understanding of brain function or consciousness?
Currently things like transcranial magnetic stimulation can be used to non-invasively modulate activity in the brain, and a quick look suggests it may work as expected:
Overall, TMS was associated with statistically significant effects on warm and cool sensory thresholds, cold pain thresholds, suprathreshold stimulus unpleasantness ratings, and wind-up pain.
We compared time courses of a subjective scale of pain induced by intradermal capsaicin injection in seven normal subjects ... rTMS over M1 induced earlier recovery from acute pain compared with the sham or control conditions.
Bayesian evidence of reduced pain scores and increased heat pain thresholds were found after active rTMS, with no changes occurring after sham rTMS.
For the example of pain or other qualia, we can check brain activity against our own subjective experience (or more precisely, the memory of subjective experience) and, through the principle of symmetry that human individuals are far more similar than different physically, can reasonably infer certain sense experience in another person from measuring such brain activity. Another implicit principle in this inference is that memory of subjective experience corresponds to actual experience, which is reasonable in one context but circular when you're trying to measure consciousness itself.
The problem with consciousness is that we can't, even for ourselves, correlate any activity in the brain with consciousness itself as opposed to mere memory. Neuroscientists often operate under a behaviorist definition of consciousness, which we can certainly make significant progress in understanding. But consciousness, as in the pure subjective experience, cannot be measured by logical necessity.
The principle of symmetry wouldn't let me understand that people like cilantro, have same-sex attraction, or the rest of the vast spectrum of the human condition. Self-report is as valid a way of understanding consciousness as we've ever had, and if you discard it you may as well be committed to strong solipsism.
It's possible that it's all self-delusion and perfectly inaccessible. You can't categorically say you existed more than the current second, or that a tricksy demiurge didn't bury dinosaur bones to test you.
The question is how far can you go if instead you choose to take human self-report of experience as valid, since I think most people do that [when it's convenient].
We can only measure memory.
In principle, it should be perfectly possible to remember an event you were not conscious for, in the sense of having an actual subjective experience.
I skipped this in my reply since I wasn't sure if you were saying that you ought to be able to remember events when your brain's memory formation was impaired?
Are you saying that someone having some acute pain induced by something like capsaicin actively reporting how they're feeling on some scale is "memory"?
Memory, in the regular use of the term, is something that can be manipulated in lab settings. You can have someone fill in details about a childhood trip that they never went on. Memory formation can be inhibited, which may be helpful in anticipation of some traumatic event (a painful surgery).
The amount we understand about how the brain works is wild compared to any other time in history. That includes, at the very least, the ability to understand a human/mammal/vertebrate/whatever's experience.
But consciousness, as in the pure subjective experience...
Behaviorism was very successful at understanding behavior. It just happened that there was more to the picture, and when extinguishing a response to a phobia didn't extinguish the fear you ended up with a pretty miserable person.
But, accepting self-report, neurology is at the level of interest. We care about pain and suffering. Physics doesn't have to care about relativity until we need good GPS. That's the level you can meaningfully talk about ethical concerns. There may be some Planck length of the atom of qualia that is forever inaccessible. It may also not matter.
All models are flawed, but some are useful. Not all topics need to involve the most low-level layers that conceptually exist. Economics works okay without mastery of psychology->biology->chemistry->physics->fancy physics->inaccessibly fancy physics.
At each layer you get new precision and can ask/answer new questions, but you add a lot of complexity and may lose generalizability, and eventually you'll always hit a wall.
Many things can also be understood indirectly. My eyes glaze over when astronomers I've known have explained how various indirect effects are used in complicated ways, but they seem to get it, at least.
I'm not the person to figure it out. I just think it's overly hasty to dismiss the possibility that data mining the human experience will turn up some patterns, which in turn can inform guesses (cybernetics, activity modulation w/rTMS or drugs, designer organoids, who knows). That understanding is used to say "X is linked to acute pain" or that phantom limb syndrome occurs when neighboring regions invade the inactive region and take over the sensory map. Such claims are tested the same way we accept anytime you ask someone how they feel.
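To make the kind of check being described concrete, here's a toy sketch in Python (all data fabricated, all numbers invented) of validating a candidate brain-signal "pain marker" against self-report, the only ground truth on offer:

```python
import numpy as np

# Toy sketch with fabricated data: a candidate "pain marker" is validated
# the same way any such link is -- against what people say they feel.
rng = np.random.default_rng(0)

n = 200
self_report = rng.integers(0, 11, size=n)                  # 0-10 pain ratings
marker = 0.8 * self_report + rng.normal(0.0, 2.0, size=n)  # signal loosely tracks reports

r = np.corrcoef(marker, self_report)[0, 1]
print(f"marker vs. self-report correlation: r = {r:.2f}")
# A strong r is the sort of pattern behind claims like "X is linked to
# acute pain" -- grounded, ultimately, in accepting self-report.
```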
There's a lot of variation in terminology, and the term functionalist neuroscience may have been better (namely, the notion that the mind and brain should be described and understood in terms of evolutionarily-adaptive action guidance).
I never said it was unreasonable to associate the memory of subjective experience, which is all that self-reporting can measure, with actual subjective experience. It's actually extremely useful practically and ethically. The issue is that if you're trying to understand consciousness truly, it becomes circular to say that a measurement of memory is a measurement of experience itself. At the final level of analysis, it isn't and we know that to be true.
Understanding consciousness truly, as opposed to merely for its practical benefits, is also of interest because we cannot understand the basic question why any of us are conscious. It makes perfect sense that complex life needs to make highly complex calculations about how to act based on sense information in order to survive to reproduce. It also makes plenty of sense that these sense experiences, in a functional sense, would need to be collated into a "stage" to be most effective. There is no necessity at all for there to be an actual subjective experience. We could all be "zombies" so to speak and evolution would carry on just the same. If you're interested in answering a question like this, you'll never get there by trying to measure consciousness through a self-reported memory of subjective experience, because it would be circular to do so.
Jargon can be messy, but the point I was going for was that I think neuroscience sits around the level I think most people care about-- giving insight about suffering to inform ethical positions. I don't really care about metaphysics or the "why" of consciousness. It's cool, and I'll enjoy Blindsight with the rest, but a true understanding of consciousness may be as inaccessible as a full picture of the universe and it wouldn't matter to me.
You called this "evil", which isn't exactly agnostic. Why? Would it be if it was mouse cells? A blackbox recreation of a neuron? We've been doing this sort of thing with rat cells for decades.
If Google/NPCs start screaming are you locked into consciousness being so completely inaccessible that anything outside of an arms-length of shared evolutionary history with humans is out of luck? Or instead do we have to treat any and all signs of distress/preference as potential harm?
Why do you have whatever views you do about treatment of a foetus? Do octopodes with their very different brain suffer, or just have nociception? Would it be good or bad were we to engineer/breed "pain-free" livestock (barring more ethical alternatives we aren't choosing)?
There are hard and important questions and, unfortunately, we have no better tool than ourselves to try to answer them. I'm not convinced using our collective experience/reports as a basis can't go a lot further than most people give it credit for.
...it becomes circular to say that a measurement of memory is a measurement of experience itself.
Besides being the thing I care about, self-reports are samples that correspond to experience. There are lots of things we model statistically, indirectly, or understand top-down. We don't have to have access to the neutrino to infer its existence or qualities.
There are things we understand about the brain/consciousness now that were impossible in the past. Who knows what comes up when you start having neural prostheses, higher resolution scans, or deep analysis of all we know about different abnormal brain states? Personal view is it's very premature to write it off as the "hard problem".
Not going to reply anymore, but cheers.
Responding to signs of harm and distress is an interesting and valid starting point. I do think that the distinction between functional consciousness and subjective-experience consciousness is quite important ethically. Given that we can't ever understand subjective-experience consciousness through measurement, except through introducing certain assumptions that may or may not be valid, we have a deep moral obligation to err on the side of moral risk aversion. I don't know where consciousness comes from or which entities, aside from humans, are or aren't conscious. But the entities I can say with the strongest degree of certainty are conscious are humans, and when I see human neurons being hooked up into biological circuits I view it as both extremely morally risky and profane, in the sense that it dehumanizes human essence even if these biological circuits are not ultimately conscious.
I'd consider doing the same with mouse neurons, or any other biological neurons, only marginally less problematic. There's an interesting and serious question about the ethics of artificial neurons with respect to consciousness, but the distinction between artificial and biological neurons is a vitally important one. Not because there isn't a real possibility that these artificial neural circuits are conscious. But if we can never have a true understanding of subjective-experience consciousness, we should view any major step toward the fundamental characteristics of human brains as morally perilous.
Really, the ultimate point I'd like to make is that the distinction between functional and subjective-experience consciousness has extremely profound ethical, philosophical, and metaphysical implications. Because it is logically impossible to scientifically isolate and measure subjective-experience consciousness, we will never be able to make progress on those particular ethical, philosophical, and metaphysical implications through scientific advancement. To suggest that it is possible introduces serious moral risk.
glances at the medical industry's insurance takeover with AI coverage auto-denials. looks over at the mishandled and unregulated AI tech industry power grabs over the last 5 years. squints at anthropic fending off the government due to sketchy power-grabby contracts
Nono, I'm sure the brain-tech will be in good hands.
If it's any consolation I think the fact that they are human brain cells means it's more likely to be under scrutiny.
I think the kind of things found in the Epstein files, and the lack of action toward them, mean that whatever kind of scrutiny there might be is going to be worthless. I don't mean to be a downer, but this kind of stuff can be found all throughout science fiction, and the elites keep looking at those dystopian novels and forming new business plans.
Tell a CEO they can have cheap gooey workers that can't build a union, and they won't care how much of a soul accidentally gets trapped in the process.
We always knew The Matrix wasn't using our bodies as batteries... but using our unused grey matter for extra compute makes a lot more sense.
Wasn't that the Wachowskis' initial intent before it got simplified into "batteries" due to test screenings not getting it?
A bit of a non sequitur, but I remember being told my brain had orders of magnitude more compute than a desktop computer as a kid. Well, these days we have computers with many orders of magnitude more compute than those old desktops. They were trying to make us feel capable as humans, but it's funny to see how quickly that rhetoric lost its power.
I'm thinking of this in a "ship of theseus" way and wondering how long it is before something approaching a "real human brain" is hooked up to do menial tasks like this.
I think the sorites paradox (the paradox of the heap) may be a better analogy: adding more and more sand grains (brain cells), at what point does it become a heap (consciousness)? One grain on its own can't make a difference, but get enough together... how many, though?
I for one think this is really neat. This is the research we need to better understand our own brains and how they work. I don't share a lot of the other commentators' reservations about how this is "evil" and how we should stand in horror of what this laboratory has created, but be so for real - we have no idea what consciousness is, how it works, or why it does. This technology - reading neural spikes and logging their interactions with a, for lack of a better term, "controlled body" - is probably the closest we'll get to a debug harness for the human brain. I really don't get why everyone's so negative about this. Are these cells conscious? If they were, what are the ethical implications? Hell, we don't know, and that's why we're doing this.
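For a sense of what a "debug harness" could mean mechanically, here's a minimal hypothetical sketch of such a closed loop: read spikes from an electrode array, decode an action, and feed the outcome back as stimulation. Every name here (DishInterface, read_spike_counts, stimulate) is invented for illustration, not the lab's actual API; the predictable-vs-noisy feedback convention is borrowed from published dish-neuron experiments.

```python
import random

# Hypothetical closed-loop harness. All class/method names are invented.
class DishInterface:
    """Stand-in for a multi-electrode array driver."""

    def read_spike_counts(self, n_electrodes=64):
        # Real hardware would report spikes per electrode; we fake it here.
        return [random.randint(0, 5) for _ in range(n_electrodes)]

    def stimulate(self, pattern):
        print(f"stimulating with {pattern} pattern")

def decode_action(spikes):
    # Toy decoder: compare two electrode groups to pick a move.
    left, right = sum(spikes[:32]), sum(spikes[32:])
    return "left" if left > right else "right"

dish = DishInterface()
for tick in range(3):
    spikes = dish.read_spike_counts()
    action = decode_action(spikes)      # the "controlled body" acts on this
    success = random.random() < 0.5     # stand-in for the game outcome
    # Feedback convention: predictable stimulation on success, noisy on failure.
    dish.stimulate("predictable" if success else "noisy")
```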
It would be amazing if one day we could debug a human brain in real time. Psychology of today will look like the dark ages once we figure that out. Imagine being able to see someone recovering from depression, addiction, or PTSD in real time. Being able to reward their brain instantaneously for taking a step towards repairing a disorder. It’s so easy to get something done when you get dopamine for doing it. What if doing your CBT exercises was gamified?
This is evil to me. We have to draw a line somewhere and, for me, this is it.
How do you feel about artificial neural networks (ANNs)? Or organoids (collections of functioning cells) for research? While ANNs differ in a multitude of ways, there are similarities, and they could be designed to be a lot more similar to organic neurones. This is not rhetoric, I'm honestly interested in your answer.
Artificial neural networks are about as related to brains as the tree data structure is related to biological trees.
It would be like wondering how pesticides interact with binary trees.
They were called multi-layer perceptrons originally, a more accurate, but less hype-inducing, name.
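To illustrate how far that is from biology, here's a minimal sketch (generic, not any particular model) showing that a multi-layer perceptron is nothing but alternating matrix multiplies and elementwise nonlinearities:

```python
import numpy as np

# A multi-layer perceptron: matrix multiplies plus elementwise
# nonlinearities -- no membranes, neurotransmitters, or spikes.
rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)   # ReLU hidden layers
    return x @ weights[-1] + biases[-1]  # linear output layer

sizes = [8, 16, 16, 2]  # 8 inputs -> two hidden layers of 16 -> 2 outputs
weights = [rng.normal(0.0, 0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

print(mlp_forward(rng.normal(size=8), weights, biases))
```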
I would say they are closer than that! But it doesn't really matter for the question. How do you feel about ANNs, or how would you feel about simulated biological neuronal networks used to model brain networks for, say, neurology research?
Personally, when I think too much about the edges of qualia and ethics, things get difficult/ambiguous/meaningless the deeper I go.
What do you think feels and doesn't feel, or can suffer? I'm not sure whether the following suffer or not:
- simulations of people?
- bacteria?
- plants?
- bugs/ants/slugs (or whatever simplest multicellular creature you want here)?
- individual neurones in my conscious brain?
- my immune cells?
- my subconscious ("non-conscious") parts of my brain?
- multi-entity biological groupings?
- my own recalled imagination of people suffering?
I’m pretty sure things that can’t process any information cannot suffer, but it becomes harder when they are part of a system which can.
I don’t think we will ever know the answer, but I can’t quite decide on an abstract level what I reckon is most plausible. I’m kind of tempted by a panpsychic philosophy but it’s much more soft and fuzzy than what I normally adopt as part of my mental model.
I don't really have an answer as to the ethics of those technologies. Maybe we are subjecting the most complex, self-directed ANNs to torture, or maybe we're not. Non-neuronal organoids are less concerning to me, but I think there are biohazard concerns with seeking to mechanize biology outside of life (e.g., imagine a purely biological machine designed to compost food that's released into the wild and causes all sorts of ecological problems).
To me, given how little we know and can ever know about consciousness, we should be especially cautious when tampering with things we know are tied to consciousness in some way. Even if these human neuron circuits are not conscious, I also view it as certainly profane in that it dehumanizes human flesh. That itself has severe negative moral side effects, both in terms of pure pain/pleasure ethics and preserving things I, on "aesthetic" grounds, view as most essential (our humanity, how we derive meaning, respect for nature). The risk of what it may be doing and the certainty of what it will do is enough for me to call it evil.
Thank you, that was interesting. I like your point about tampering with things we do think cause consciousness; I think that is a reasonable way of thinking about it.
Also to a certain extent I agree that it by definition will dehumanise human tissues. Do you feel the same way about blood transfusions, organ transplants, selling hair? I guess I can see how it would be different when it’s going to be made a part of another human.
However, in a way I see benefits of using neurones to process information. It keeps the processing within tissues which we use, and might help keep humans more competitive with synthetic computation methods. It makes it feel less like humans are being completely replaced, although thinking about it, it could be seen as enslavement of human cells instead. I'm a fan of human intelligence amplification rather than replacement, and would like to see more that enables this.
How would you feel if the neurones linked with a human operator who then used them actively as part of a wider consciousness? And how would you feel about a similar device to this (in the article) that was implanted in a brain for stroke recovery, for instance?
I guess I should've been more specific that it dehumanizes human brain tissue, which we can reasonably understand to be the seat of consciousness and as what makes us who we are as individuals. The objectification of human flesh in general isn't good but in some cases can be justified if precautions are taken and at least the general gravity of the situation is taken into account. I don't think we should be growing human livers for fun or installing humanized tissues in mice for fun. The scientific justification should be sufficient to pass an ethics board. Hair is dead and more like a human product in the way that wool is for sheep, so I think that's much less a concern.
I truly don't see the utility in trying to be "competitive" with synthetic computation, or even trying to infinitely maximize computational ability in general. We've already long solved the food scarcity problem, from a pure production standpoint, globally. We are already on track to solve the greatest life-threatening diseases like cancer and Alzheimer's. What need is there to keep going beyond a certain point?
As for medical applications, there are many ways to skin a cat in medicine and I anticipate the number of ways to skin the same cat will dramatically increase as medicine and biomedical technology advances. I'm not sure which article you're referring to, so I'm not sure how exactly they implemented the solution and what results they got. But in general, if there is a medical solution that has substantial ethical problems with it, I'm fairly confident that an alternative solution can be developed or might already exist.
I worry about a world where the majority of humans have no economic or military utility, with no ability to resist or revolt when our rights are violated and our institutions dissolved by the ultra-rich and powerful. My view is that maintaining that utility under decentralised power (and ideally individualistic power) is what underwrites democracy. Hence the desire to compete with completely separate synthetic systems and amplify intelligence rather than automate it away from humans. Plus I enjoy thinking and would love to improve my ability to do so.
I was referring to a hypothetical device that had a similar form to the device mentioned in the post title, but was used instead to help stroke victims recover function (like their ability to play video games). I guess I am seeing how such research could be used to help advance medicine indirectly. Because medicine is so heavily regulated, it can be very slow to introduce new ideas (with good reason, but it’s definitely a disadvantage).
Democracy, if you were to provide a minimalist formal definition of it, is essentially the pooling together of power that is then directed according to the vote of a given constituency. A world in which there are no constraints on power (i.e., the ability to do things, whether that's production or violence) is one where there can be no democracy. That is, only if we constrain the energy supply does it remain possible to maintain democratic control over how the world operates. A world with limitless energy will operate not unlike a body with metastatic cancer: organizations, individuals, and nations will be able to grow and act with no check on their ambitions. People will say that constraining energy is immoral since it means more people will go hungry, but we have long since solved the food scarcity problem globally and can very easily house all people with current resources. The issue is the allocation of existing resources, not the quantity of resources.
I think it's the wrong focus to try to compete with AI. Common people currently have very little leverage and are on track, with or without AI, to lose even more of it. I can't imagine that we'd start putting humans in Tesla factories just because they happen to be cyborgs. That also sounds like a pretty hellish world to have to live in, as barely-human cyborgs, to maybe sort of compete.
When a person has a stroke or other traumatic brain injury, parts of their brain stop working, and their awareness dims.
When we were little kids, our brains grew and our awareness brightened.
Where is the line? Who's to say that there isn't a glimmer of consciousness in these smears of human brain cells that are showing us they can learn and remember?
It's fascinating research, but also horrific.