This interview is fantastic. Ted Chiang is able to pull out the essence of the questions he deals with in such an elegant way. I aspire to do the same thing, but feel like I am able to accomplish it so rarely.
Imagine you have some hypothetical AI that is better at accomplishing tasks than humans and that does exactly what you tell it to do. Do you want ExxonMobil to have such an AI at its disposal? That doesn’t sound good. Conversely, imagine a hypothetical AI that does what is best for the world as a whole, even if human beings are asking it to do something else. Who would buy such an AI? Certainly not ExxonMobil.
Such a beautiful way to demonstrate that the alignment problem is not an AI problem. It’s a human problem.
Ted identifies so many areas where we blame AI and technology for a lot of the failings of our society when those tools are just expressions and extensions of the way we behave. I’ve definitely fallen into this trap myself.
Great piece, great title made even better by reading, great stuff all around. Thanks for posting.