I was reading Yuval Harari's 21 lessons for the 21st century today and he talked about algorithms and AI.
We are unlikely to face a robot rebellion in the coming decades, but we might have to deal with hordes of bots who know how to press our emotional buttons better than our mother, and use this uncanny ability to try and sell us something – be it a car, a politician, or an entire ideology. The bots could identify our deepest fears, hatreds and cravings, and use these inner leverages against us.
So naturally I thought of the above article. Netflix insists there is no ulterior motive in their customized posters, but of course to us humans, the more competent the algorithms get in pitching to us, the more we're going to cry foul.
Harari coolly points out that intelligence doesn't necessarily equal conscious manipulation. But many of us, I think, are feeling increasingly frustrated by the ability of AI to out-think us. I'm curious as to my fellow tilders' views on this.
Good point. Harari has this to say about informational advantage:
As algorithms come to know us so well, authoritarian governments could gain absolute control over their citizens, even more so than in Nazi Germany, and resistance to such regimes might be utterly impossible. Not only will the regime know exactly how you feel – it could make you feel whatever it wants. The dictator might not be able to provide citizens with healthcare or equality, but he could make them love him and hate his opponents. Democracy in its present form cannot survive the merger of biotech and infotech. Either democracy will successfully reinvent itself in a radically new form, or humans will come to live in ‘digital dictatorships’.
This will not be a return to the days of Hitler and Stalin. Digital dictatorships will be as different from Nazi Germany as Nazi Germany was different from ancien régime France. Louis XIV was a centralising autocrat, but he did not have the technology to build a modern totalitarian state. He suffered no opposition to his rule, yet in the absence of radios, telephones and trains, he had little control over the day-to-day lives of peasants in remote Breton villages, or even of townspeople in the heart of Paris. He had neither the will nor the ability to establish a mass party, a countrywide youth movement, or a national education system. It was the new technologies of the twentieth century that gave Hitler both the motivation and the power to do such things. We cannot predict what will be the motivations and powers of digital dictatorships in 2084, but it is very unlikely that they will just copy Hitler and Stalin. Those gearing themselves up to refight the battles of the 1930s might be caught off their guard by an attack from a totally different direction.
Of course, we're not talking about evil algorithms so much as the technology being used for control. Even if it's just Amazon or Netflix pointing us to what they think we want.
While science-fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we might be facing a banal apocalypse by clicking.
I read the Netflix statements carefully and they're worded in a way that doesn't deny using a general algorithm to change viewers' thumbnails (in fact, they're bragging about it). So it could well be that, yes, it works: people click thumbnails that match their race more often, and a simple machine learning algorithm could pick up on that while only ever looking at "neutral" data (like viewing history).
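For what it's worth, this doesn't take anything sophisticated. Here's a minimal sketch (nothing to do with Netflix's actual system; every name in it is made up) of how artwork could be personalised from click feedback alone, with no demographic field anywhere in sight:

```python
# A toy epsilon-greedy bandit: learns which thumbnail variant a given viewer
# clicks, using only impressions and clicks. All names are hypothetical.
import random
from collections import defaultdict

EPSILON = 0.1  # fraction of the time we explore a random variant

# stats[viewer][variant] -> [click count, impression count]
stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

def pick_thumbnail(viewer_id, variants):
    """Usually show the variant with the best observed click-through rate
    for this viewer; occasionally show a random one to keep exploring."""
    if random.random() < EPSILON:
        return random.choice(variants)
    def ctr(v):
        clicks, shows = stats[viewer_id][v]
        return clicks / shows if shows else 0.0
    return max(variants, key=ctr)

def record_impression(viewer_id, variant, clicked):
    """Update counts after the thumbnail was shown (and possibly clicked)."""
    entry = stats[viewer_id][variant]
    entry[1] += 1
    if clicked:
        entry[0] += 1

# Example: simulate a viewer who clicks one variant more than the others.
# After enough impressions, that variant is shown almost every time,
# and no demographic data was ever involved.
variants = ["lead_actor.jpg", "supporting_actor.jpg", "landscape.jpg"]
for _ in range(200):
    choice = pick_thumbnail("viewer_42", variants)
    clicked = (choice == "supporting_actor.jpg" and random.random() < 0.3) \
              or random.random() < 0.05
    record_impression("viewer_42", choice, clicked)
print(pick_thumbnail("viewer_42", variants))
```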
It's the "viewing bubble", basically. I honestly think it's harmless in Netflix's case (although, culturally, it might not be?), but it's more worrying for news items. You like news articles that confirm your worldview and dislike those that challenge it? You bet there's a machine learning algorithm for that!
If someone responds to black characters I wouldn't see anything wrong with Netflix suggesting shows with black leads. But showing minor characters in thumbnails is deceptive.
I guess that's an important detail. Just a question of where you draw the line, though? I don't know the shows they used as examples, really, but I don't think it's particularly unusual to have thumbnails with minor characters, and they might not have been chosen for their race.
"We don't ask members for race, gender or ethnicity so cannot use this information to personalise their individual experience.
"The only information we use is a member's viewing history."
Because you don't have to. You can figure out demographics easily just from viewing history. What a stupid statement.
If someone watches everything in Spanish or with subtitles, chances are they're Hispanic. If someone watches tons of romantic comedies and musicals, chances are they're a woman.
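To put a toy example behind that claim (entirely made-up data and labels, and it assumes scikit-learn is installed): an off-the-shelf classifier can learn to predict exactly the attribute they say they never ask for, using nothing but viewing-history features.

```python
# Toy sketch: infer a demographic attribute from viewing history alone.
# Features and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features per viewer: [share of titles watched in Spanish,
#                       share of rom-coms/musicals in history]
X = [
    [0.9, 0.1],
    [0.8, 0.3],
    [0.1, 0.7],
    [0.0, 0.8],
    [0.2, 0.2],
    [0.05, 0.1],
]
# A label Netflix never asked for, but could still learn: 1 = Hispanic (toy labels)
y = [1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)
# A new viewer who watches mostly Spanish-language titles:
print(model.predict_proba([[0.85, 0.2]])[0][1])  # high probability for label 1
```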
But they aren't starting from "this person is a woman" or "this person is Hispanic" and making assumptions from there. They are judging someone not for what they are but by how they act, what they choose to watch. A woman who doesn't watch rom-coms and musicals isn't going to get the same suggestions as those that do just because she's a woman. Someone who does watch rom-coms and musicals will get suggestions for things that most other people who watch rom-coms and musicals like, regardless of whether that person is a woman, a man, or something else. Isn't that how we're supposed to judge each other "not by the color of their skin, but by the content of their character"?
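That's essentially collaborative filtering by behaviour. A minimal sketch of the idea, with invented viewers and titles, where the only input is overlap in what people have already watched:

```python
# Toy user-based collaborative filtering: recommend titles watched by the
# most similar viewers, judged purely by viewing-history overlap.
histories = {
    "alice": {"Rom-Com A", "Musical B", "Rom-Com C"},
    "bob":   {"Rom-Com A", "Musical B", "Drama D"},
    "carol": {"Thriller E", "Drama D"},
}

def jaccard(a, b):
    """Overlap between two viewing histories, ignoring who the viewers are."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(viewer, histories):
    """Rank titles this viewer hasn't seen, weighted by similarity to those who have."""
    own = histories[viewer]
    scores = {}
    for other, seen in histories.items():
        if other == viewer:
            continue
        sim = jaccard(own, seen)
        for title in seen - own:
            scores[title] = scores.get(title, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", histories))  # "Drama D" ranks first via overlap with bob
```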
Is this the same algorithm that changes the poster every time I go to my list, trying to pick back up a show that isn't appearing in the "resume watching" section (or when that section isn't showing up at all)? Because if so, this does nothing but annoy me. I spend more time re-reading every fucking title to try and find the show I want to resume because the picture is different every time I go into my list.
I don't mind them swapping them out for stuff I'm not watching, but when I'm trying to pick back up a series I was watching literally yesterday, it's incredibly annoying.
I guess so. I don't have Netflix.