From the essay:
Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.
In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.
I was thinking about an aspect of what Schneier bins as social trust earlier this week. I haven't delved deeply enough into AI regulation to have any particular insight on the interventions Schneier is recommending, and would welcome others' input. There's probably a contract law angle here too, with respect to AI end-user agreements and the power differential between corporations and users.
This, and many more reasons besides, is why I believe it's imperative to have an open, publicly available AI leading the way rather than a private corporate one.
I'm not sure this helps with the issues described in the article. You can make an open-source video game, but not an open-source airline, bank, or Walmart. The AI will be only one small component of the services a corporation provides. And maybe you won't get to choose the software, just as you don't choose your banking app? (Though a few banks do have APIs.)
Having a choice of companies and alternative ways to do things (paying cash instead of using a bank or payment app) seems like a better guard against lock-in. It only goes so far, though, since common pressures make them all pretty similar. (All airlines are subject to similar market forces and have to follow the same regulations.)
I'm a bit skeptical that friend/service confusion will really happen. I know a seven year old who loves using Character.ai and is under no illusion whatsoever that she's talking to real people. It's another way of playing make-believe.
Getting addicted to video games is pretty common, but I don't think gamers confuse games with reality. They (we) keep playing because they enjoy the fantasy more than sometimes boring or frustrating reality.
I wonder if single-player games will become more popular because the NPCs are better company than playing with annoying and unreliable strangers.
Already the case for a lot of people, tbh. I avoid most multiplayer stuff because of the generally unpredictable, toxic environment. Most multiplayer games are only fun if your existing friends can play them with you. If anything, I could see multiplayer games developing better "AI companions" to fill that need (though imo we're pretty far from something actually suitable for that purpose at the quality you'd need).
Maybe it would be just as good to use AI to improve matchmaking and sort the toxic people into groups where they can be toxic to each other. Like "Escape from New York".
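The bucketing idea above is easy to sketch. Here's a toy version in Python, assuming a hypothetical per-player toxicity score (the scoring itself is the hard part, and everything here — names, scores, lobby size — is made up for illustration):

```python
import random

def make_lobbies(players, lobby_size=4):
    """Sort players by a (hypothetical) toxicity score, then chunk the
    ranked list into lobbies so the most toxic players end up together."""
    ranked = sorted(players, key=lambda p: p["toxicity"])
    return [ranked[i:i + lobby_size] for i in range(0, len(ranked), lobby_size)]

# Toy data: player names with made-up toxicity scores in [0, 1].
players = [{"name": f"p{i}", "toxicity": random.random()} for i in range(12)]
for lobby in make_lobbies(players):
    print([p["name"] for p in lobby])
```

In practice a matchmaker would also weigh skill, latency, and queue time, but even this crude chunking illustrates the "Escape from New York" effect: the last lobby in the list is where the toxic players get matched with each other.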