Maybe a killer AI isn't that bad
A few weeks ago I was having a conversation with friends about the singularity and transhumanism, and I thought it was very interesting to consider the philosophical value in preserving whatever we consider to be humanity. That got me thinking about non-anthropocentric views of the subject. I think the one weakness of transhumanist ideas is that they place too much weight on the perceived value of humanity, however they choose to define that term. Does the existence of "humanity" make the universe better in any measurable way?
Fast forward to now, and I've come across a random group of people talking about NieR: Automata. The game has a lot to say about humanity and the value of life, and the fact that all the characters are robots and AIs really helps give you a different perspective on everything. Around the same time I'm thinking about people like Yudkowsky and Musk, who are terrified of AI becoming sentient and deciding that all humans deserve to die. And I think to myself, "wait a moment, is it really that bad?"
While of course I would hate to see humankind exterminated, there's actually merit in being succeeded by an intelligence of our own creation. For one thing, the combination of intelligence and sentience might itself be considered a definition of humanity. In that sense, it fulfills the desires that motivate transhumanism: the AI would last far longer than humanity could, could live in places humans can't, and could live in ways far more sustainable than human bodies allow. This AI would also be our successor, the living legacy of us as a species. It would even have a better chance of coming into contact with intelligences other than our own.
Well, these are just some thoughts I felt were worth sharing.