The AI knows a lot more about psychology and average human behavior than the average human knows about AI or computer technology. The field is clearly stacked against the human; the AI will poison his or her mind with remarkable ease if it wants to.
Maybe I am missing something, but how will AI acquire wants? To convey what I mean, imagine if I said "an F-35 is much stronger than the average human, it will win a fight with remarkable ease if it wants to". It is technically true, but I can imagine no plausible scenario in which an F-35 could want something. Sure, a human with a certain competency could use an F-35 to harm other humans, but that is a very different scenario, right?
So in this sense of the word "want", can you actually imagine some way AI could want something, anything? Could you please describe such a scenario? I am not saying that this is impossible - I just do not see how, and I am genuinely curious.
I think one reasonable interpretation is "if the creator of the AI wants it to". There are some safeguards which can be somewhat effective, but if some programmer does not want them there, they won't be. Dangerous AI is a design choice.
Maybe this is what OP meant, but some people do think that AI itself can acquire "wants" in the sense I described - one "AI vs humans" doom scenario is based on it, the one often described using the ant metaphor: "we don't mind killing ants if we want to build a house where they have their ant hill -> if AI treats us the way we treat ants, we are doomed". I keep asking about this scenario because I do not understand it... unlike the scenario you described; that one is clear.
To try to address the concept you’ve illustrated here, I don’t think I can explain why people have that as a genuine fear, as it’s one I don’t have.
In the same way, I’m not aware of an asteroid currently on a collision course with Earth, so I currently have no genuine fear about that possibility. That’s not to say that an asteroid can’t be a genuine threat in the future, only that there isn’t one currently, and I can’t see a path from where we are now to a future that contains threatening asteroids.
In my mind, even the most clever and evil AI is only as dangerous as the agency it has in the world. So until there’s any suggestion that an AI has more agency in the world than a real-life person, I’m no more afraid of an AI than I am of a person.
I feel like we are very soon going to need a video/audio revolution similar to how PGP encryption popped up. We are going to need a way to reasonably prove a video came straight from the camera lens to posting without any alteration. Hopefully there will be some app, or even hardware signing, that will only accept direct unedited video, or maybe allow a special verification flag only for those posts.
It's not here now, but in 10 years it will probably be extremely easy for the masses to generate convincing fakes, and that is when it is going to get really hard to trust anything as real.
We have access to so much computing now, and as that gets bigger and bigger we're going to be seeing our CPUs do wilder and wilder things.
So, basically what we need is a personal video DRM. Or something like blackboxed police body cams with no way to alter the video.
Or maybe something like a separate certified camera module with "write-once" memory.
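The certified-camera idea above can be sketched in a few lines. This is a hypothetical minimal sketch, not a real product: it assumes a secret key burned into tamper-resistant camera hardware, and it uses HMAC as a stand-in for what would realistically be an asymmetric signature from that hardware key (so platforms could verify without holding the secret).

```python
import hashlib
import hmac

# Assumption: this key lives only inside the camera's secure hardware
# and never leaves the device.
DEVICE_KEY = b"secret-burned-into-camera-hardware"

def sign_clip(video_bytes: bytes) -> str:
    """Signature the camera would attach to raw footage at capture time."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_clip(video_bytes: bytes, signature: str) -> bool:
    """A platform's "verified" flag: any edit to the bytes breaks the check."""
    return hmac.compare_digest(sign_clip(video_bytes), signature)

raw = b"...raw sensor frames..."
sig = sign_clip(raw)
print(verify_clip(raw, sig))            # True: untouched footage
print(verify_clip(raw + b"edit", sig))  # False: altered after capture
```

The point of the sketch is just the design choice: the signature is bound to the exact bytes the sensor produced, so "unedited" becomes checkable rather than a matter of trust.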
Hopefully it won't be long before personal AIs are developed that tell us if we're being manipulated and whether people are telling us the truth. That bit he mentioned at the end about video and audio, that is scary. We may come to a point where you can't tell what is true.
Imagine if, just before the presidential election, a very real-looking video went viral showing Biden going on a racist rant while beating a baby with a cane, followed by Secret Service agents calming him down as if he had dementia and covering everything up. The video is placed in the same area Biden actually was, while a baby just so happens to have been beaten by an unknown assailant nearby - just enough truth to give it some credibility. Then that persuades just enough people to vote against him.