  • Showing only topics with the tag "philosophy"
    1. Atheism and moral realism/objectivism?

      *Disclaimer: I am not an apologist, theologian, or philosopher, just someone interested in the topic. Perhaps this could've been asked in r/AskPhilosophy or maybe even r/changemyview, but I figure the conversation might be good here.*

      The recent post here on Absurd Trolley Problems has had me thinking about ethics again, and I realized I've never been introduced to how one can be an atheist and be not only a moral objectivist, but a moral realist. I remember a debate I watched years ago between William Lane Craig and Christopher Hitchens where Craig asks Hitchens what the basis of morality is, and Hitchens acts insulted, insinuating that Craig meant atheists couldn't "be good without God" (which I think became a famous moment for both of them).

      But I never got the answer to Craig's question that I wanted. Without God, how should we determine what moral facts there are? How should we determine whether there are moral facts at all? I grew up in a fundamentalist religion, found myself deeply interested in apologetics in adulthood, and see similar responses in debates to the one mentioned above. Now, while I believe Hitchens was a moral relativist, I often see and hear cases where atheists do seem to want to say that [insert atrocity here] was objectively morally wrong. Can atheists reasonably claim that there are not only moral facts, but objective moral facts that they can access? Upon examination, aren't you ultimately required to derive an "ought" from an "is"?

      I skimmed The Moral Landscape by Sam Harris some years ago, and it seems to "avoid" (i.e. commit) the "is/ought" fallacy by simply declaring that "human flourishing" (however that may be defined; a separate issue) is an irreducible "ought" in his eyes. The book is great, and I think science should be part of the discussion about how one ought to live if the goal is some end like human flourishing; doctors already prescribe behavior based on a goal both parties presuppose, namely promoting health and well-being. Both of these necessarily presuppose a state of affairs that one "ought" to seek to attain.

      But none of this answers why one "ought" to do anything; sure, there are facts about what one "ought" to do in order to attain a state of affairs, but that isn't morality: that's true of any subject where two people agree to share a goal. It doesn't tell us why they should have that goal. None of this feels like a satisfying answer to the question Craig posed. I don't feel like I'm any closer to these objective moral facts.

      I should say this topic is really meaningful to me. I've thought a lot about veganism and the suffering of non-human animals. I've thought about the impact of my consumption decisions instead of perpetually leaning on the "no ethical consumption" crutch (even though there are reasons why that would have merit in certain circumstances). I literally can't stop thinking about climate change and how powerless, yet simultaneously complicit, I feel. I've read Peter Singer, Scripture, Kant, John Stuart Mill, Rawls, and works from many others, and can't find any reason for an atheist (and maybe even a theist?) to think that there are these moral facts at all, much less objective, accessible ones. This really leaves me with "I guess I should just do whatever it is that I feel like doing", which probably seems as unsatisfying to you as it was for me to type.

      14 votes
    2. Maybe a killer AI isn't that bad

      A few weeks ago I was having a conversation with friends about the singularity and transhumanism, and I thought it was very interesting to consider the philosophical value in preserving whatever we consider to be humanity. That got me thinking about non-anthropocentric views of the subject. I think the one weakness of transhumanist ideas is that they place too much weight on the perceived value of humanity, however they define that term. Does the existence of "humanity" make the universe any better in any measurable way?

      Fast forward to now, and I've come across a random group of people talking about Nier Automata. The game has a lot of thoughts about humanity and the value of life, and the fact that all the characters are robots and AI really helps give you a different perspective on everything. During this time I'm also thinking about people like Yudkowsky and Musk who are terrified of AI becoming sentient and deciding that all humans deserve to die. And I think to myself, "wait a moment, is it really that bad?"

      While of course I would hate to see humankind exterminated, there's actually merit to being succeeded by an intelligence of our own creation. For one thing, the combination of intelligence and sentience might itself be considered a definition of humanity. And in that sense it fulfills the desires that motivate transhumanism: the AI would last much longer than humanity could, could live in places humans can't, and could live in ways far more sustainable than human bodies. This AI would also be our successor; it would be our living legacy as a species. It would even have a better chance of coming into contact with intelligences other than our own.

      Well, these are just some thoughts I felt were worth sharing.

      14 votes