This essay is remarkably naïve and simplistic. Sure, Mr Yudkowsky justifies his simple approach by writing that "the obvious answer isn’t always the best choice, but sometimes it is", but that...
This essay is remarkably naïve and simplistic. Sure, Mr Yudkowsky justifies his simple approach by writing that "the obvious answer isn’t always the best choice, but sometimes it is", but that doesn't excuse the over-simplification he's indulging in here - almost to the point of being misleading.
Let's go to https://whatistranshumanism.org/ to get a definition of transhumanism straight from the source:
Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.
Similarly, the Wikipedia article on transhumanism starts out by defining transhumanism as:
an international intellectual movement that aims to transform the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.
Let's focus on the key phrases here, the ones most relevant to Mr Yudkowsky's essay: "beyond its currently human form and human limitations", "transform the human condition", and "greatly enhance human intellect and physiology". From these, it becomes clear that the general underlying principle of transhumanism is not simply to preserve human life but to change it.
The opening section of Yudkowsky's essay talks about saving the life of a young girl and curing a sick middle-aged man. These actions are preserving and maintaining those lives. We're returning them to their original condition, or preventing them from moving away from their original condition.
He then moves on to talking about keeping old people alive longer and restoring them to good health. Again, the focus is on preserving and maintaining those lives, rather than changing them.
And, sure the answer to these scenarios is obvious: save the girl, heal the sick man, keep the old man alive, and so on. We're all fine with keeping people alive in the state they're currently in.
Then he takes a step away from preservation and maintenance. He talks about a brother and sister and their IQs. He gets us to agree that it's good to preserve the boy's IQ, and then uses that to try to get us to agree to improve the girl's IQ. But that's a dishonest comparison: it's not like for like, not apples to apples. Until now, we've been agreeing to preserve people in the state they're already in; in other words, preventing change. Then he uses the argument we've agreed to - that preventing change is good - to somehow argue the opposite: that instigating change is good. But they're direct opposites: preventing change and instigating change are different things and require different arguments.
From his early arguments, one might reasonably infer that it's good to use technology to maintain someone's health. For example, if someone has a degenerative brain disease, we could argue that implanting computer chips into their brain to preserve their brain function is a good thing: it's maintaining their life as it was.
However, that's not what transhumanism is about. Transhumanism isn't about keeping people alive as they are. It is explicitly about "transforming the human condition" and "greatly enhancing human intellect". Instead of implanting computer chips to prevent a degenerative brain disease, it's implanting computer chips into a healthy person's brain to improve their thinking (like improving the girl's IQ from 110 to 120).
Yudkowsky is being disingenuous at best and deceitful at worst to equate transhumanism with saving, or even extending, human life. That's not what transhumanism is. Transhumanism is about change, not preservation. It's about making humans stronger or smarter or faster or longer-lived. You can't use an argument for preservation as an argument for change.
We then get into questions about identity. By maintaining the boy's IQ at 120, we are saving him as he is. But when we change his sister's IQ to 120, we are changing her. Is the resulting person still the same as the person we tried to help, or did we change her significantly enough that we created a new person - and, by inference, destroyed the original person? Who are we saving? Who are we helping? Riffing on the old Ship of Theseus question, how many changes can we make before the resulting person is no longer the person we set out to help? What happened to the original person?
I think this essay is flawed and misleading.
He's sort of bouncing back and forth between the applied case and the philosophical case, it seems. From a purely applied perspective, I'd say that even the cyborg-like prosthetics we have now constitute a form of trans-humanism. We might not be giving 8-year-olds superhuman strength, but this technology enables that down the road. Also, we probably could be doing that, and probably are in some secret Chinese lab somewhere. And that's where it butts up against the philosophical case - why is a prosthetic currently seen only as an opportunity to approximate human-like physical capabilities, and not immediately seen as an excuse to enhance/upgrade? In what ways is the collective human condition advanced by enhancing the individual? Or even, is it? Should it?
The issue which pops up quickly with many flavors of philosophical trans-humanism, as with many similar utopian concepts, is that of the organic existence of the individual as a participant in the collective existence of humanity. These ideas about pushing the boundaries of collective humanity in some direction, via planned enhancement of individuals, presume that one's vision of such romanticism is "good" or "valuable." It's these thoughts which get us things like the eugenics movement, which puts the value of some abstract collective above that of the individual.
Meanwhile, in the realm of applied trans-humanism, we seem to avoid such issues by simply saying "if we can, then let's try it." And it may at first blush appear that we are blindly raging against our own divinity - "playing God" - but we've also avoided the pitfalls of "planned humanity" by focusing on individual achievements without any real goal in sight. Even something as simple as saying "we want everyone to live forever if they want to" reeks of prescriptive judgement-making. But "It would be fucking cool to have six arms" makes no such statement. It contains no "oughts", and I think that is paradoxically what terrifies most people. But isn't that how we live today? When we reject ideas like eugenics, we aren't rejecting the technology behind sterilization - we are rejecting the vision in which that technology is used to chart a course for humankind. I think that's a big part of what is being said here - transhumanism is all about pushing boundaries for the human, not for the society, which is why it is fundamentally a humanist endeavor.
I think you and I may have got different things from the reading.
The core of the Transhumanist philosophy, I think, he laid out quite well:
Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.
Ultimate physical limits may or may not permit a lifespan of at least length X for some X – just as the medical technology of a particular century may or may not permit it. But physical limitations are questions of simple fact, to be settled strictly by experiment. Transhumanism, as a moral philosophy, deals only with the question of whether a healthy lifespan of length X is desirable if it is physically possible. Transhumanism answers yes for all X. Because, you see, it’s not a trick question.
So that is “Transhumanism” – loving life without special exceptions and without upper bound.
Yes, it's simplistic. The point of the essay is to distill Transhumanism—quite a broad family of philosophies—into an easy-to-understand nugget. Its purpose was to be simple. Of course it's going to miss a lot of the nuances involved in Transhumanism and its sub-philosophies.
At the center of Transhumanism is the full realization of Humanism: to maximize life and health and wellness for all people, without special cases. From this point we diverge into "how do we do that?" and "what do we do after?", and that's where Transhumanists begin to have their own ideas. Mind-uploading, merging with technology, synthetic biology, cryonics, genetic therapies, and such. But they have that common core. It's a fundamentally progressive ideology; change comes with the package.
But you don't get from "save a girl's life" to "change the girl" without some massive jumps of logic. He can't use the former to justify the latter.
It doesn't seem like such a massive jump to me. Yudkowsky gives us the thesis:
"Life is good, death is bad; health is good, sickness is bad"
We save the girl's life, because life is good and death is bad. That much is simple enough.
This is our linking-logic:
If a young child falls on the train tracks, it is good to save them, and if a 45-year-old suffers from a debilitating disease, it is good to cure them. ...we can follow this general principle to a surprising new conclusion: If a 95-year-old is threatened by death from old age, it would be good to drag them from those train tracks, if possible. And if a 120-year-old is starting to feel slightly sickly, it would be good to restore them to full vigor, if possible. With current technology it is not possible. But if the technology became available in some future year – given sufficiently advanced medical nanotechnology, or such other contrivances as future minds may devise – would you judge it a good thing, to save that life, and stay that debility?
So here is where we really start to touch on Transhumanism. The section about the 120-year-old gives us something quite interesting. The maximum human lifespan is somewhere around 120-125 years. If we invent the technology to heal this sickly 120-year-old, then we should do so. We've begun to improve upon and transcend our natural limits to promote the ideal: Life is good, death is bad.
Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible.
I don't love the IQ example, because IQ is a poor measure of intelligence and it's a prickly topic overall. So let's look past the specific words on IQ and examine the meaning. He is arguing that there are states of being that are better than others. It doesn't have to be IQ, but intelligence (and superintelligence) is a pretty popular topic in Transhumanist circles. We could use eyesight, or metabolism, or something like that, and it wouldn't change the core of the idea. It's better to have 20/20 vision than 20/100. We have a whole field already devoted to improving eyesight. Health is good, sickness is bad. Similarly, if we have the technology to give 20/5 vision to whoever wants it, then that's good.
We should extend life, because death is bad.
We should improve health, because sickness is bad.
If we maximize life and health through our normal means, we shouldn't stop there.
I feel like we're talking past each other.
I'm not disagreeing with transhumanism, as such. I'm saying that the arguments in this essay do not support transhumanism.
I have an antique wooden table. Its legs are rotting away. I can choose to make new wooden legs to replace the old ones, or I can choose to make new metal legs to improve them. But what's my goal? Is my goal to restore the table to its original condition, or is my goal to change the table? Am I going to be a tablist or a transtablist? They're two different and separate goals.
If I argue that we need to use the wooden legs to restore the table to its original condition, does that necessarily imply that we should use the metal legs instead? (Do I heal the sick man to his previous healthy state or do I improve him?)
They're two different goals: make the table the same as it was, or make the table different.
If I install the metal legs, I may have made the table better, but I've also changed it. Did I want to change it? Part of the value of the original table was that it reflected a particular design. It had a certain aesthetic. The shiny metal legs have a different aesthetic. Is it the same table? More importantly: convincing me that replacing the old wooden legs with new wooden legs is a good thing is not the same thing as convincing me that replacing the old wooden legs with new metal legs is a good thing. They're based on different arguments. One is arguing for restoring the antique table to its original condition. The other is arguing for changing the table to a new condition. Agreeing to one is not the same as agreeing to the other.
Let me summarise this essay:
Premise 1: Preserving people is good.
Premise 2: Changing people is good.
Conclusion: If you agree with preserving people, then you must agree with changing people.
But that conclusion does not follow from those premises.
Apples are tasty.
Bananas are tasty.
If you like apples, you must also like bananas.
Can you see the illogic?
I'm not sure why you're so concerned about 'same vs different', or 'preservation vs change'.
It's your table, do with it as you will. Consent is all a part of this. But with your sick man example, we make him as healthy as we have the ability to. Because life and health are good. No exceptions. If his previous healthy state was, say, 80% healthy, but we have the technology to make him as healthy as the most healthy person, then we haven't maximized the amount of good we can do for that person. There's work yet to be done.
Because those are the arguments that Yudkowsky is presenting in his essay: "We agree that keeping this girl healthy (the same) is good, so you must agree that making her smarter (different) is...
I'm not sure why you're so concerned about 'same vs different', or 'preservation vs change'.
Because those are the arguments that Yudkowsky is presenting in his essay: "We agree that keeping this girl healthy (the same) is good, so you must agree that making her smarter (different) is good." And that's a very flawed argument.
If his previous healthy state was, say, 80% healthy, but we have the technology to make him as healthy as the most healthy person, then we haven't maximized the amount of good we can do for that person.
That's a whole different argument.
It's obvious that you agree with the conclusions of this essay, which might be influencing your ability to assess its arguments objectively. But, whether one agrees with the conclusions or not, it's still a badly constructed essay, using flawed arguments to try to support its conclusions.
Apples are tasty.
Bananas are tasty.
If you like apples, you must also like bananas.
Even if you like bananas, you have to see that this argument doesn't make sense.
The only preservation being done in the essay is the preservation of life. Life is good. Do whatever we can to keep people alive.
Health is good. Change isn't of any concern here. Do whatever we can to maximize health.
Where does it end? It doesn’t. Why should it? Life is good, health is good, beauty and happiness and fun and laughter and challenge and learning are good. This does not change for arbitrarily large amounts of life and beauty. If there were an upper bound, it would be a special case, and that would be inelegant.
Health is good. Change isn't of any concern here. Do whatever we can to maximize health.
But I think A_A's point here is that the whole point of Transhumanism is change. At its core, it is about becoming more than human.
Life is good, but humans are senescent. That is part of what makes humans human. Finding a cure for senescence would be a transhumanist goal, but then you are changing what humans are. This is change.
The argument being had here isn't that transhumanism is bad or good, it's that the article you linked has faulty logic and doesn't do a good job of proving what it's trying to prove.
I'm willing to accept the essay is a poorly constructed one.
While I don't know A_A's life and can't see what's going on in their head, I'm interpreting their comments as having missed the point of the essay. That being: transhumanism can be seen as humanism without any special cases. They clearly think transhumanism is distinct from humanism. So that's largely where our personal argument lies, at least.
I got the point that Yudkowsky thinks he's arguing. However, his essay is flawed, and his arguments don't work. He's reached a conclusion without supporting that conclusion with valid arguments.
transhumanism can be seen as humanism without any special cases
That may be the point, but the fact is that the essay does not lead to this conclusion. It asserts it, but the way it gets from beginning to conclusion is not valid. Also, this is not in keeping with the core tenets of transhumanism in the wider sense - beyond the kind of techno-restoration described in the essay. What is described in the essay does not jibe with the larger body of transhumanist theory w/r/t the goals of transhumanism.
I hope this is a decent place to jump in...
Flaws of the essay aside, @CALICO and @Algernon_Asimov seem to have different interpretations of which parts of a human can change. They've talked about this already, but I'd like to expand on it a little.
I think everyone can agree that we -- the vibrant people worth helping -- are stuck inside meat cages. It's easy to see in the case of disease, when malevolent tiny organisms invade our bodies: there's a clear separation of "us" (people) and "them" (weak body infiltrated by foreign harmful entities).
Then we move to aging. Still no problem, because even though slowed aging seems futuristic, we still feel like our souls are comfortably separate from the mechanisms causing aging. (Well, maybe it's not super intuitive. See The Fable of the Dragon-Tyrant.)
The jump to IQ is what causes issues. Is our intelligence level, bad or good, part of who we are? Yikes, that's a difficult question. Yudkowsky assumes that intelligence is not close enough to core personhood to be worth preserving. A_A assumes that intelligence is part of the core person.
In summary: It matters greatly whether a trait (ability to age, intelligence or lack thereof, positive or negative disposition, capacity for empathy, etc.) is part of one's identity or just part of the structure that's holding one back from realizing one's potential. Upgrade the mechanical bodies that we use? Sure, as long as that's really all we're doing. (I think this naturally follows from humanist principles.) Upgrade something that's entwined with our identity? Let's think about that a little longer. (This kind of upgrade wouldn't necessarily follow from humanism.)
Do whatever we can to keep people alive.
Including changing them until they're no longer the people we wanted to keep alive.
If I replace your dying grandmother, piece by piece, with metal and electronics, until there's no flesh left... will she still be your grandmother at the end of the process? Did I keep her alive?
Have you seen Altered Carbon? Because there's a scene in that show with someone's grandma that I think runs this scenario pretty well.
No, I haven't. As I was writing my comment about the hypothetical grandmother, two examples were in my mind:
Bareil Antos in 'Star Trek: Deep Space Nine'. There's an episode where he gets sick, but he's needed for some diplomatic negotiations. So a doctor inserts technological devices into his brain to keep him alive, but each extra piece of technology that's inserted reduces his humanity so that the person that's being kept alive is less and less like the person the doctor was trying to save.
The Tin Man from 'The Wonderful Wizard of Oz' (the book, not the famous musical). He was originally a fully flesh-and-blood human being who worked as a wood-chopper. However, for reasons I won't bother explaining here, the Wicked Witch of the East put a spell on his axe so that it attacked him. It chopped off one leg, and he found a tinsmith who made him a replacement leg out of tin. Then the other leg. Then one arm. Then the other arm. Then his head. The only remaining flesh part of his body was his torso. Then his axe chopped that in half, and the tinsmith replaced his torso - but forgot to include a heart in the tin version.
Check out Altered Carbon if you get the chance then. There's a scene where the mind of someone's grandma is put in the body of a hardened criminal, and then comes to family dinner. It's both comical and a little disturbing.
That show is already on my ever-growing "to watch" list. I'll probably get around to it sometime in the next decade or so. :)
I think the point of transhumanism, though, is to go beyond current human capability. Through artificial enhancements (mechanical or biological or genetic) you make people more capable than would be possible normally. Worries about "the human condition" or the concept of humanity become secondary to the idea of superiority and increased capability. The idea of transitioning past Homo sapiens into Homo potens (or some similar Latin term) is the key to transhumanism. Or at least, that's my understanding of it. That's distinctly different from Humanism.
I consider myself a Transhumanist, and more specifically a Singularitarian. Something I've noticed a lot recently is that there is a group of very loud people who promote the idea that Transhumanism is a very Silicon Valley, individualistic, capital-L Libertarian philosophy that values the ability and efficacy of the individual without regard for the health and well-being of anybody else. This quickly gets No True Scotsman-y, but that's certainly not the dominant view among Transhumanists.
Transhumanists are humanists (wow, that was pretty quick) who recognize that the current state of the species is not an ideal. Humans are fragile, and limited by the circumstances of their birth. These circumstances do not have to be binding.
As technology permits, it is desirable to extend ourselves beyond the boundaries evolution has set for us. We wish to tinker—ethically—with what it means to be human, because the whole of humanity stands to benefit. (Of course, forcing change on anybody is out of the question. That's not ethical.)
Long life is good, and good health is good, and being more capable is good. No special exceptions needed.
Transhumanists are humanists
No, they're not, any more than transplant is the same as plant, or transform is the same as form, or transparent is the same as parent. Don't be misled by the similarity in the words.
A transhumanist is someone who believes in transhumans - people who are beyond humans ("transitional humans", on their way to becoming "post-humans"). Transhumanists want to change people.
We wish to tinker—ethically—with what it means to be human
Which is not the same as preserving humans.
Yes, we are. I'd very much like for you to tell me how I'm not a humanist.
Transhumans aren't something to believe in, and I certainly don't see myself as beyond or separate from anyone. I would change myself if I could, not anybody else. Though if they wish to do so as well, then I think they should be able to.
Because we have a different definition of "human". To misquote the Bible: "What is Man, that thou art mindful of him?" You want to change humans away from what they currently are. That doesn't...
I'd very much like for you to tell me how I'm not a humanist.
Because we have a different definition of "human". To misquote the Bible: "What is Man, that thou art mindful of him?" You want to change humans away from what they currently are. That doesn't seem very humanistic to me.
"I love my dog so much I'm going to replace his meat legs with robot legs, then I'm going to replace his meat body with a robot body, then I'm going to replace his meat head with a robot head, then I'm going to replace his meat brain with a robot brain." Somewhere along the way, he's going to stop being the dog I loved so much and wanted to save. My love for my dog somehow didn't translate into actual care for him. I changed him beyond recognition. He's no longer a dog. If I'm a Doggist, I've failed.
Imagine the Ship of Theseus. Instead of replacing its old wooden parts with new wooden parts, I replace them with metal parts, because I like ships and want to improve them. Eventually, every part of the ship is metal. Have I saved the ship?
Changing something until it's no longer the thing it was is not the same as helping it.
Transhumans aren't something to believe in
My apologies for my unclear language. Let me try again.
A "transhuman" is a transitional human: a human in an intermediate state between a current human and a post-human, which is our potential evolutionary future. A transhumanist is a person who believes in changing current humans into transhumans (transitional humans) as a step towards becoming post-humans. They believe in "transhuman" as a desirable goal to achieve.
Just to make a point here, humanism, as a school of thought, is different from what you're describing.
Humanism is, as I understand it, a line of thinking that uses rational thinking and empiricism as the basis of all decision-making and social structures, as opposed to religious or supernatural motivations. Instead of having supernatural motivations and anchors, humanism tries to make thinking human-centric. This seems to be pretty different from the "humanism" described both in the essay and in the conversation here.
Yeah, I probably got a bit carried away there, with the whole Ship of Theseus issue. It just doesn't seem to me that wanting to change humans and how they are is compatible with human-centric thinking. It reminds me of the title of a musical: 'I Love You, You're Perfect, Now Change'.
To pull from Wikipedia:
Humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism and empiricism) over acceptance of dogma or superstition. ...Generally, however, humanism refers to a perspective that affirms some notion of human freedom and progress. It views humans as solely responsible for the promotion and development of individuals and emphasizes a concern for man in relation to the world.
Transhumanism as a philosophy is much the same. They are not necessarily separate. It does involve changing ourselves, yes. But the goal isn't getting robot parts or synthetic organs for the fun of it.
If you're operating under the notion that I would consider transhumans or post-humans* as distinct from human beings, I must say that I certainly don't think so. Even if there is one day a Homo potens as @kijafa gave us, or Homo deus as Prof. Yuval Noah Harari wrote, they would be as human as H. sapiens, H. heidelbergensis, or H. erectus. Getting into the weeds on "what is a human?" doesn't change what Humanism as a philosophy is.
I'm very familiar with the Ship of Theseus. It invariably comes up in damn near every conversation about transhuman ideas. If we're invoking it to try and pin down when a human becomes a non-human, I think that's futile. Firstly, because it's a philosophical thought experiment without an answer by nature; secondly, because the definition of human—even as broadly as I've defined it in my personal usage—is still too tenuous to really mean anything. When did the very first Homo arise from Australopithecus, and what was the special difference between the mother Australopithecus and her child that made a whole new genus? At the end of the day, that's another debate with no answer. How we define species and the lines between them is ultimately arbitrary. God didn't come down from the sky and hand us a cosmic dictionary. We made up all these labels. All words are made up.
Applying the Ship of Theseus to an individual is easier, but still has no real answer. I imagine our ideas on this are going to be very individual, and really, there are no wrong answers. Just disagreements. Personally, I think the only thing that matters here is the continuous subjective experience. Even when asleep my mind is doing something. I have a subjective experience that stretches back to my first lucid moment** all those years ago. As far as I'm concerned, you could replace all my limbs and organs with synthetic versions and I would still be myself. Even slowly replacing all my neurons with artificial versions would allow me to stay myself, provided there is not a break in subjective experience. I likely am not composed of any of the same molecules that I was made up of a decade ago. But I've had a continuous experience the whole time, so as far as I'm concerned I'm the same person. Unless a subjective consciousness is only possible on meat-based architecture for some reason, I don't see why my conscious experience couldn't transition to another architecture one day.
I certainly think our current "model" of H. sapiens is pretty lacking, and has quite a few bugs that could be worked out. I figure that we might as well take human evolution into our own hands, and improve up what unguided natural processes gave us to work with. What that means for future humans is a big question mark. Some people think we've already been doing this. I often see eyeglasses mentioned as an augmentative technology that already makes us poorly-sighted folks Cyborgs of some kind. Kurzweil thinks our smartphones are acting as an extension of our minds, even if they are apart from us. We're even starting to use more active approaches, like using gene therapies to save children from otherwise fatal genetic diseases. I don't think there's a point where we should just stop trying to improve upon the human condition, or a line we shouldn't cross. So long as everything we do makes us live as long as we'd like, be as healthy as we'd like, as capable as we'd like, and give everyone else the option to do so as well.
*This is a problematic word, because it's so loosely defined. What actually is a post-human? What does that even mean? What would they look like, how would they live? Everyone has a different idea about it. I'd just say it's some future step in human evolution, but I don't necessarily think they would be a new genus.
**I have gone under general anesthetic several times, where I have gaps in which there was not even the sense of time passing. There is an ongoing debate on whether or not this constitutes the death of one subjective experience and the birth of another. I certainly have a feeling of disconnect between pre-anesthesia me and post-anesthesia me, and am willing to entertain the idea that "I" have died during those times. Even if the conscious experience post-anesthesia has all the memories of pre-anesthesia, they might be actually different subjective experiences. I expect this is one of those things that may never be answered for certain, and certainly not before we have a scientific consensus on what consciousness actually is.
There've been myriad science-fictional models of the transhuman: everything from various übermenschen (comic book heroes) and Theodore Sturgeon's More Than Human, to the directed evolution of David Brin's Uplift series, to the half-machine (and corruptible) demigods of Alastair Reynolds' Revelation Space series, and even the AI/human cultural symbiosis of Iain M. Banks.
Ultimately, transhumanism potentially fails by selecting a singular path for human development - we don't know what we're losing by choosing one optimization over another. Is it ethical to optimize "intelligence" to the point where we're no longer capable of socializing with one another? Is it ethical to maximize physical durability at the expense of the ability to feel pain/joy? Is it likely that we can optimize ourselves into an evolutionary dead end which is incapable of adaptation to drastic change in circumstances (like the gender selection resulting from "one-child" policies)?
All of the scenarios in the article imply that the person under consideration has someone else making the decision about what is optimal, with no regard for the consequences of that decision over the course of generations. Likewise, there are many models for what happens if death is postponed indefinitely - exponential population growth, draconian population control, gerontocracy...
We can't become transhuman without also accepting the necessities of consent, civilizational/governance change, resource allocation, and all the other human negotiations which have taken place on historical scales.
This essay is remarkably naïve and simplistic. Sure, Mr Yudkowsky justifies his simple approach by writing that "the obvious answer isn’t always the best choice, but sometimes it is", but that doesn't excuse the over-simplification he's indulging in here - almost to the point of being misleading.
Let's go to https://whatistranshumanism.org/ to get a definition of transhumanism straight from the source:
Similarly, the Wikipedia article on transhumanism starts out by defining transhumanism as:
Let's focus on some key phrases here, which are most relevant to Mr Yudkowsky's essay: phrases such as: "beyond its currently human form and human limitations", "transform the human condition", and "greatly enhance human intellect and physiology". From these, it becomes clear that the general underlying principle of transhumanism is not simply to preserve human life but to change it.
The opening section of Yudkowsky's essay talks about saving the life of a young girl and curing a sick middle-aged man. These actions are preserving and maintaining those lives. We're returning them to their original condition, or preventing them from moving away from their original condition.
He then moves on to talking about keeping old people alive longer and restoring them to good health. Again, the focus is on preserving and maintaining those lives, rather than changing them.
And, sure the answer to these scenarios is obvious: save the girl, heal the sick man, keep the old man alive, and so on. We're all fine with keeping people alive in the state they're currently in.
Then he takes a step away from preservation and maintenance. He talks about a brother and sister and their IQs. He gets us to agree that it's good to preserve the boy's IQ, and then uses that to try to get us to agree to improve the girl's IQ. But that's a dishonest comparison: it's not like for like, or apples and apples. Until now, we've been agreeing to preserve people in the state they're already in; in other words, preventing change. Then, he uses the argument we've agreed to, that preventing change is good, to somehow argue the opposite - that instigating change is good. But they're direct opposites: preventing change and instigating change are different things and require different arguments.
From his early arguments, one might reasonably infer that it's good to use technology to maintain someone's health. For example, if someone has a degenerative brain disease, we could argue that implanting computer chips into their brain to preserve their brain function is a good thing: it's maintaining their life as it was.
However, that's not what transhumanism is about. Transhumanism isn't about keeping people alive as they are. It is explicitly about "transforming the human condition" and "greatly enhancing human intellect". Instead of implanting computer chips to prevent a degenerative brain disease, it's implanting computer chips into a healthy person's brain to improve their thinking (like improving the girl's IQ from 110 to 120).
Yudkowsky is being disingenuous at best and deceitful at worst to equate transhumanism with saving, or even extending, human life. That's not what transhumanism is. Transhumanism is about change, not preservation. It's about making humans stronger or smarter or faster or longer-lived. You can't use an argument for preservation as an argument for change.
We then get into questions about identity. By maintaining the boy's IQ at 120, we are saving him as he is. But when we change his sister's IQ to 120, we are changing her. Is the resulting person still the same as the person we tried to help, or did we change her significantly enough that we created a new person - and, by inference, destroyed the original person? Who are we saving? Who are we helping? Riffing on the old Ship of Theseus question, how many changes can we make before the resulting person is no longer the person we set out to help? What happened to the original person?
I think this essay is flawed and misleading.
He's sort of bouncing back and forth between the applied case and the philosophical case it seems. From a purely application perspective, I'd say that even cyborg-like prosthetics that we have now constitute a form of trans-humanism. We might not be giving 8 year olds super human strength, but this technology enables that down the road. Also, we probably could be doing that, and probably are in some secret Chinese lab somewhere. And that's where it butts up against the philosophical case - what is the reason why a prosthetic is currently seen only as an opportunity to approximate human-like physical capabilities, and not immediately seen as an excuse to enhance/upgrade? In what ways is the collective human condition advanced by enhancing the individual? Or even, is it? Should it?
The issue which pops up quickly with many flavors of philosophical trans-humanism, as with many other similar utopian concepts, is that of organic existence of the individual as a participant in the collective existence of humanity. Theses idea about pushing the boundaries of collective humanity in some direction, via planned enhancement of individuals, presumes that one's vision of such romanticism is "good" or "valuable." It's these thoughts which get us things like the eugenics movement, which puts the value of some abstract collective beyond that of the individual.
Meanwhile, in the realm of applied trans-humanism, we seem to avoid such issues by simply saying "if we can then let's try it." And it may at first blush appear that we are blindly raging against our own divinity - "playing God" - but we've also avoided the pitfalls of the "planned humanity" by focusing on individual achievements without any real goal in sight. Even something as simple as saying "we want everyone to live forever if they want to" reeks of prescriptive judgement making. But "It would be fucking cool to have six arms" makes no such statement. It contains no "oughts" and I think that is paradoxically what terrifies most people. But isn't that how we live today? When we reject ideas like eugenics, we aren't rejecting the technology behind sterilization - we are rejecting the vision in which that technology is used to chart a course for human kind. I think that's a big part of what is being said here - transhumanism is all about pushing boundaries for the human, not for the society, which is why it is fundamentally a humanist endeavor.
I think you and I may have got different things from the reading.
The core of the Transhumanist philosophy, I think, he laid out quite well:
Yes, it's simplistic. The point of the essay is to distill Transhumanism—quite a broad family of philosophies—into an easy-to-understand nugget. Its purpose was to be simple. Of course it's going to miss a lot of the nuances involved in Transhumanism and its sub-philosophies.
At the center of Transhumanism is the full realization of Humanism. To maximize life and health and wellness for all people, without special cases. From this point we diverge into the how do we do that, and what do we do after?, and that's where Transhumanists begin to have their own ideas. Mind-uploading, merging with technology, synthetic biology, cryonics, genetic therapies, and such. But they have that common core. It's a fundamentally progressive ideology; change comes with the package.
But you don't get from "save a girl's life" to "change the girl" without some massive jumps of logic. He can't use the former to justify the latter.
It doesn't seem like such a massive jump to me. Yudkowsky gives us the thesis:
We save the girl's life, because life is good and death is bad. That much is simple enough.
This is our linking-logic:
So here is where we really start to touch on Transhumanism. The bolded section gives us something quite interesting. The maximum human lifespan is somewhere around 120-125 years. If we invent the technology to heal this sickly 120-year old, then we should do so. We've begun to improve upon and transcend our natural limits to promote the ideal: Life is good, death is bad.
I don't love the IQ example, because IQ is a poor measure of intelligence and it's a prickly topic overall. So, let's look past the specific words on IQ, and examine the meaning. He is arguing that there are states of being that are better than others. It doesn't have to be IQ, but intelligence (and superintelligence) is a pretty popular topic in Transhuman-circles. We could use eyesight, or metabolism, or something like that, and it wouldn't change the core of the idea. It's better to have 20/20 vision than 20/100. We have a whole field already to improve eyesight. Health is good, sickness is bad. Similarly, if we have the technology to give 20/05 vision to whoever wants it, then that's good.
We should extend life, because death is bad.
We should improve health, because sickness is bad.
If we maximize life and health through our normal means, we shouldn't stop there.
I feel like we're talking past each other.
I'm not disagreeing with transhumanism, as such. I'm saying that the arguments in this essay do not support transhumanism.
I have an antique wooden table. Its legs are rotting away. I can choose to make new wooden legs to replace the old ones, or I can choose to make new metal legs to improve them. But what's my goal? Is my goal to restore the table to its original condition, or is my goal to change the table? Am I going to be a tablist or a transtablist? They're two different and separate goals.
If I argue that we need to use the wooden legs to restore the table to its original condition, does that necessarily imply that we should use the metal legs instead? (Do I heal the sick man to his previous healthy state or do I improve him?)
They're two different goals: make the table the same as it was, or make the table different.
If I install the metal legs, I may have made the table better, but I've also changed it. Did I want to change it? Part of the value of the original table was that it reflected a particular design. It had a certain aesthetic. The shiny metal legs have a different aesthetic. Is it the same table? More importantly: convincing me that replacing the old wooden legs with new wooden legs is a good thing is not the same thing as convincing me that replacing the old wooden legs with new metal legs is a good thing. They're based on different arguments. One is arguing for restoring the antique table to its original condition. The other is arguing for changing the table to a new condition. Agreeing to one is not the same as agreeing to the other.
Let me summarise this essay:
Premise 1: Preserving people is good.
Premise 2: Changing people is good.
Conclusion: If you agree with preserving people, then you must agree with changing people.
But that conclusion does not follow from those premises.
Apples are tasty.
Bananas are tasty.
If you like apples, you must also like bananas.
Can you see the illogic?
I'm not sure why you're so concerned about 'same vs different', or 'preservation vs change'.
It's your table, do with it as you will. Consent is all a part of this. But with your sick man example, we make him as healthy as we have the ability to. Because life and health are good. No exceptions. If his previous healthy state was say, 80% healthy, but have the technology to make him as healthy as the most healthy person, then we haven't maximized the amount of good we can do for that person. There's work yet to be done.
Because those are the arguments that Yudkowsky is presenting in his essay: "We agree that keeping this girl healthy (the same) is good, so you must agree that making her smarter (different) is good." And that's a very flawed argument.
That's a whole different argument.
It's obvious that you agree with the conclusions of this essay, which might be influencing your ability to assess its arguments objectively. But, whether one agrees with the conclusions or not, it's still a badly constructed essay, using flawed arguments to try to support its conclusions.
Apples are tasty.
Bananas are tasty.
If you like apples, you must also like bananas.
Even if you like bananas, you have to see that this argument doesn't make sense.
The only preservation being done in the essay is the preservation of life. Life is good. Do whatever we can to keep people alive.
Health is good. Change isn't of any concern here. Do whatever we can to maximize health.
But I think A_A's point here is that the point of Transhumanism is change. At the core of it, it is about becoming more than human.
Life is good, but humans are senescent. That is part of what makes humans human. Finding a cure for senescence would be a transhumanist goal, but then you are changing what humans are. This is change.
The argument being had here isn't that transhumanism is bad or good, it's that the article you linked has faulty logic and doesn't do a good job of proving what it's trying to prove.
I'm willing to accept the essay is a poorly constructed one.
While I don't know A_A's life and can't see what's going on in their head, I'm interpreting their comments as having missed the point of the essay. That being: transhumanism can be seen as humanism without any special cases. They clearly think transhumanism is distinct from humanism. So that's largely where our personal argument lies, at least.
I got the point that Yudowsky thinks he's arguing. However, his essay is flawed, and his arguments don't work. He's reached a conclusion without supporting that conclusion with valid arguments.
That may be the point, but the fact is that the essay does not lead to this conclusion. It asserts it, but the way it gets from beginning to conclusion is not valid. Also, this is not in keeping with the core tenets of transhumanism in a wider sense, not just the kind of techno-restoration described in the essay. What is described in the essay does not jibe with the larger body of transhumanist theory w/r/t the goals of transhumanism.
I hope this is a decent place to jump in...
Flaws of the essay aside, @CALICO and @Algernon_Asimov seem to have different interpretations of which parts of a human can change. They've talked about this already, but I'd like to expand on it a little.
I think everyone can agree that we -- the vibrant people worth helping -- are stuck inside meat cages. It's easy to see in the case of disease, when malevolent tiny organisms invade our bodies: there's a clear separation of "us" (people) and "them" (weak body infiltrated by foreign harmful entities).
Then we move to aging. Still no problem, because even though slowed aging seems futuristic, we still feel like our souls are comfortably separate from the mechanisms causing aging. (Well, maybe it's not super intuitive. See The Fable of the Dragon-Tyrant.)
The jump to IQ is what causes issues. Is our intelligence level, bad or good, part of who we are? Yikes, that's a difficult question. Yudkowsky assumes that intelligence is not close enough to core personhood to be worth preserving. A_A assumes that intelligence is part of the core person.
In summary: It matters greatly whether a trait (ability to age, intelligence or lack thereof, positive or negative disposition, capacity for empathy, etc.) is part of one's identity or just part of the structure that's holding one back from realizing one's potential. Upgrade the mechanical bodies that we use? Sure, as long as that's really all we're doing. (I think this naturally follows from humanist principles.) Upgrade something that's entwined with our identity? Let's think about that a little longer. (This kind of upgrade wouldn't necessarily follow from humanism.)
Including change them until they're no longer the people we wanted to keep alive.
If I replace your dying grandmother, piece by piece, with metal and electronics, until there's no flesh left... will she still be your grandmother at the end of the process? Did I keep her alive?
Have you seen Altered Carbon? Because there's a scene in that show with someone's grandma that I think runs this scenario pretty well.
No, I haven't. As I was writing my comment about the hypothetical grandmother, two examples were in my mind:
Bareil Antos in 'Star Trek: Deep Space Nine'. There's an episode where he gets sick, but he's needed for some diplomatic negotiations. So a doctor inserts technological devices into his brain to keep him alive, but each extra piece of technology that's inserted reduces his humanity so that the person that's being kept alive is less and less like the person the doctor was trying to save.
The Tin Man from 'The Wonderful Wizard of Oz' (the book, not the famous musical). He was originally a fully flesh human being who worked as a wood-chopper. However, for reasons I won't bother explaining here, the Wicked Witch of the West put a spell on his axe so that it attacked him. It chopped off one leg, and he found a tinsmith who made him a replacement leg out of tin. Then the other leg. Then one arm. Then the other arm. Then his head. The only remaining flesh part of his body was his torso. Then his axe chopped that in half, and the tinsmith replaced his torso - but forgot to include a heart in the tin version.
Check out Altered Carbon if you get the chance then. There's a scene where the mind of someone's grandma is put in the body of a hardened criminal, and then comes to family dinner. It's both comical and a little disturbing.
That show is already on my ever-growing "to watch" list. I'll probably get around to it sometime in the next decade or so. :)
I think the point of transhumanism though is to go beyond current human capability. Through artificial enhancements (mechanical or biological or genetic) you make people more capable than would be possible normally. Worries about "the human condition" or the concept of humanity become secondary to the idea of superiority and increased capability. The idea of transitioning past Homo Sapiens into Homo Potens (or some similar Latin term) is the key to transhumanism. Or at least, that's my understanding of it. That's distinctly different from Humanism.
I consider myself a Transhumanist, and more-specifically a Singularitartian. Something I've noticed very much recently is that there is a group of very-loud people who promote the idea that Tranhumanism is a very Silicon Valley, individualistic, capital-L Libertarian philosophy that values the ability and efficacy of the individual without regard for the health and well-being of anybody else. This quickly gets No True Scotsman-y, but that's certainly not the dominant view among Transhumanists.
Transhumanists are humanists (wow, that was pretty quick) who recognize that the current state of the species is not an ideal. Humans are fragile, and limited by the circumstances of their birth. These circumstances do not have to be binding.
As technology permits, it is desirable to extend ourselves beyond the boundaries evolution has set for us. We wish to tinker—ethically—with what it means to be human, because the whole of humanity stands to benefit. (Of course, forcing change on anybody is banned. That's not ethical.)
Long life is good, and good health is good, and being more capable is good. No special exceptions needed.
No, they're not, any more than transplant is the same as plant, or transform is the same as form, or transparent is the same as parent. Don't be misled by the similarity in the words.
A transhumanist is someone who believes in transhumans - people who are beyond humans ("transitional humans", on their way to becoming "post-humans"). Transhumanists want to change people.
Which is not the same as preserving humans.
Yes, we are. I'd very much like for you to tell me how I'm not a humanist.
Transhumans aren't something to believe in, and I certainly don't see myself as beyond or separate from anyone. I would change myself if I could, not anybody else. Though if they wish to do so as well, then I think they should able to.
You can be a fan of humanism and transhumanism at the same time, but that doesn't make them the same thing.
Because we have a different definition of "human". To misquote the Bible: "What is Man, that thou art mindful of him?" You want to change humans away from what they currently are. That doesn't seem very humanistic to me.
"I love my dog so much I'm going to replace his meat legs with robot legs, then I'm going to replace his meat body with a robot body, then I'm going to replace his meat head with a robot head, then I'm going to replace his meat brain with a robot brain." Somewhere along the way, he's going to stop being the dog I loved so much and wanted to save. My love for my dog somehow didn't translate into actual care for him. I changed him beyond recognition. He's no longer a dog. If I'm a Doggist, I've failed.
Imagine the Ship of Theseus. Instead of replacing its old wooden parts with new wooden parts, I replace them with metal parts, because I like ships and want to improve them. Eventually, every part of the ship is metal. Have I saved the ship?
Changing something until it's no longer the thing it was is not the same as helping it.
My apologies for my unclear language. Let me try again.
A "transhuman" is a transitional human: a human in an intermediate state between a current human and a post-human, which is our potential evolutionary future. A transhumanist is a person who believes in changing current humans into transhumans (transitional humans) as a step towards becoming post-humans. They believe in "transhuman" as a desirable goal to achieve.
Just to make a point here, humanism, as a school of thought, is different from what you're describing.
Humanism is, as I understand it, a line of thinking that seeks to ground all decision-making and social structures in rational thinking and empiricism, as opposed to religious or supernatural motivations. Instead of relying on supernatural motivations and anchors, humanism tries to make thinking human-centric. That seems pretty different from the "humanism" described both in the essay and in the conversation here.
Yeah, I probably got a bit carried away there with the whole Ship of Theseus thing. It just doesn't seem to me that wanting to change humans away from what they are is compatible with human-centric thinking. It reminds me of the title of a musical: "I Love You, You're Perfect, Now Change".
To pull from Wikipedia: "Humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively, and generally prefers critical thinking and evidence (rationalism and empiricism) over acceptance of dogma or superstition."
Transhumanism as a philosophy is much the same; the two are not necessarily separate. It does involve changing ourselves, yes. But the goal isn't getting robot parts or synthetic organs for the fun of it.
If you're operating under the notion that I would consider transhumans or post-humans* as distinct from human beings, I must say that I certainly don't think so. Even if there is one day a Homo potens as @kijafa gave us, or a Homo deus as Prof. Yuval Noah Harari wrote, they would be as human as H. sapiens, H. heidelbergensis, or H. erectus. Getting into the weeds on "what is a human?" doesn't change what Humanism as a philosophy is.
I'm very familiar with the Ship of Theseus. It comes up in damn near every conversation about transhuman ideas. If we're invoking it to try and pin down when a human becomes a non-human, I think that's futile: first, because it's a philosophical thought experiment that by its nature has no answer; and second, because the definition of human (even as broadly as I've defined it in my personal usage) is still too tenuous to really mean anything. When did the very first Homo emerge from Australopithecus, and what was the special difference between the mother Australopithecus and her child that made for a whole new genus? That's another debate with no answer. How we define species and the lines between them is, at the end of the day, arbitrary. God didn't come down from the sky and hand us a cosmic dictionary. We made up all these labels. All words are made up.
Applying the Ship of Theseus to an individual is easier, but it still has no real answer. I imagine our ideas on this are going to be very individual, and really, there are no wrong answers, just disagreements. Personally, I think the only thing that matters here is continuous subjective experience. Even when asleep, my mind is doing something. I have a subjective experience that stretches back to my first lucid moment** all those years ago. As far as I'm concerned, you could replace all my limbs and organs with synthetic versions and I would still be myself. Even slowly replacing all my neurons with artificial versions would leave me myself, provided there were no break in subjective experience. I'm likely not composed of any of the same molecules I was made of a decade ago, but I've had a continuous experience the whole time, so as far as I'm concerned I'm the same person. Unless subjective consciousness is only possible on meat-based architecture for some reason, I don't see why my conscious experience couldn't transition to another architecture one day.
I certainly think our current "model" of H. sapiens is pretty lacking, and has quite a few bugs that could be worked out. I figure we might as well take human evolution into our own hands and improve upon what unguided natural processes gave us to work with. What that means for future humans is a big question mark. Some people think we've already been doing this: I often see eyeglasses mentioned as an augmentative technology that already makes us poorly-sighted folks cyborgs of some kind, and Kurzweil thinks our smartphones act as extensions of our minds, even though they are apart from us. We're even starting to use more active approaches, like gene therapies that save children from otherwise fatal genetic diseases. I don't think there's a point where we should just stop trying to improve upon the human condition, or a line we shouldn't cross, so long as everything we do lets us live as long as we'd like, be as healthy as we'd like, and be as capable as we'd like, and gives everyone else the option to do the same.
*This is a problematic word, because it's so loosely defined. What actually is a post-human? What does that even mean? What would they look like, and how would they live? Everyone has a different idea about it. I'd just say it's some future step in human evolution, but I don't necessarily think they would be a new genus.
**I have gone under general anesthetic several times, where I have gaps in which there was not even a sense of time passing. There is an ongoing debate about whether or not this constitutes the death of one subjective experience and the birth of another. I certainly feel a disconnect between pre-anesthesia me and post-anesthesia me, and I'm willing to entertain the idea that "I" have died during those times. Even if the conscious experience post-anesthesia has all the memories of pre-anesthesia, they might be genuinely different subjective experiences. I expect this is one of those things that may never be answered for certain, and certainly not before we have any scientific consensus on what consciousness actually is.
There've been myriad science-fictional models of the transhuman: everything from various ubermenschen (comic book heroes) and Theodore Sturgeon's More Than Human, to the directed evolution of David Brin's Uplift series, to the half-machine (and corruptible) demigods of Alastair Reynolds' Revelation Space series, and even the AI/human cultural symbiosis of Iain M. Banks' Culture novels.
Ultimately, transhumanism potentially fails by selecting a singular path for human development: we don't know what we're losing by choosing one optimization over another. Is it ethical to optimize "intelligence" to the point where we're no longer capable of socializing with one another? Is it ethical to maximize physical durability at the expense of the ability to feel pain/joy? Is it likely that we could optimize ourselves into an evolutionary dead end, incapable of adapting to drastic changes in circumstances (like the gender selection resulting from "one-child" policies)?
All of the scenarios in the article imply that the person under consideration has someone else making the decision about what is optimal, with no regard for the consequences of that decision over the course of generations. Likewise, there are many models for what happens if death is postponed indefinitely - exponential population growth, draconian population control, gerontocracy...
We can't become transhuman without also accepting the necessities of consent, civilizational/governance change, resource allocation, and all the other human negotiations which have taken place on historical scales.