If you have opted into our Face Recognition setting, we will delete the template used to identify you. If you have the face recognition setting turned off, there is no template to delete and there will be no change.
This is interesting. Not only are they disabling it, but they are trashing the data it generated. That's a big step, if true – but excuse me if I somehow don't trust them to keep their word...
Every new technology brings with it potential for both benefit and concern, and we want to find the right balance. In the case of facial recognition, its long-term role in society needs to be debated in the open, and among those who will be most impacted by it. We will continue engaging in that conversation and working with the civil society groups and regulators who are leading this discussion.
This makes it seem as if they are doing this preemptively to avoid coming regulation. Maybe they know about something coming down the pike that the rest of us haven't heard of yet?
https://web.archive.org/web/20211102173015/https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/
We’re shutting down the Face Recognition system on Facebook. People who’ve opted in will no longer be automatically recognized in photos and videos and we will delete more than a billion people’s individual facial recognition templates.
This change will also impact Automatic Alt Text (AAT), which creates image descriptions for blind and visually-impaired people. After this change, AAT descriptions will no longer include the names of people recognized in photos but will function normally otherwise.
We need to weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules.
I wonder what their actual motivation for doing this is. They do seem to acknowledge in the article that it's because of privacy concerns - but why now? Is it because they realize they're scaring people away with it? Is it because they don't want regulators to crack down on them?
Could be they want to fill their rebranding with substance? I mean, what better time to polish up your reputation than right when everyone is talking about your brand?
You're probably right. I mean, it's working - it got my attention, at least. I still don't feel any different about FB after this, but I suspect many people may read about this and feel a little better about using the site.
I'm not even sure this is about the facebook brand. Predictably, everyone went "ohh, they're calling themselves meta now, but it's still fb." But now the meta brand is immediately filled with more positive associations, like making fb less shit. Basically, don't give people time to equate the two; while the rebrand is still fresh, associate meta with positive stuff.
Realistically, of course, this will also affect the fb brand, but I don't think that's too important right now.
I think that's it. It makes a lot of sense and honestly, I'm quite impressed by this move.
Facebook is backing away from facial recognition. Meta isn’t.
Facebook says it will stop using facial recognition for photo-tagging. In a Monday blog post, Meta, the social network’s new parent company, announced that the platform will delete the facial templates of more than a billion people and shut off its facial recognition software, which uses an algorithm to identify people in photos they upload to Facebook. This decision represents a major step for the movement against facial recognition, which experts and activists have warned is plagued with bias and privacy problems.
But Meta’s announcement comes with a couple of big caveats. While Meta says that facial recognition isn’t a feature on Instagram and its Portal devices, the company’s new commitment doesn’t apply to its metaverse products, Meta spokesperson Jason Grosse told Recode. In fact, Meta is already exploring ways to incorporate biometrics into its emerging metaverse business, which aims to build a virtual, internet-based simulation where people can interact as avatars. Meta is also keeping DeepFace, the sophisticated algorithm that powers its photo-tagging facial recognition feature.
“We believe this technology has the potential to enable positive use cases in the future that maintain privacy, control, and transparency, and it’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs,” Grosse told Recode. “For any potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework.”
Several of Meta’s current projects show that the company has no plans to stop collecting data about peoples’ bodies. Meta is developing hyper-realistic avatars that people will operate as they travel through the metaverse, which requires tracking someone’s facial movements in real time so they can be recreated by their avatar. A new virtual reality headset that Meta plans to release next year will include sensors that track peoples’ eye and facial movements. The company also weighed incorporating facial recognition into its new Ray-Ban smart glasses, which allow the wearer to record their surroundings as they walk around, and Reality Labs, Meta’s hub for studying virtual and augmented reality, is conducting ongoing research into biometrics, according to postings on Facebook’s careers website.
Interesting - it's a good move. Sounds like there was a recent settlement/class action lawsuit about it?
Shame about how it affects auto alt text, but that's more of an issue with social media not requiring alt text or human intervention in auto alt text. My ideal scenario (if they insist on automation) would be: 1) the social media site creates auto alt text, 2) the site shows it to the user before they post and asks them to make edits, with suggestions (such as "it looks like there are people in this photo - if it's important information, you can edit the alt text to include their name and/or description"). Social media platforms could do a lot more to educate users about accessibility and potentially even enforce it.
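To make that concrete, here's a rough sketch of the "review before posting" flow I mean. All the names are hypothetical; generate_alt_text just stands in for whatever captioning model the platform runs:

```python
# Hypothetical sketch of a "review alt text before posting" flow.
# generate_alt_text() is a stand-in for the platform's captioning model,
# not a real Facebook API.

def generate_alt_text(image_path: str) -> tuple[str, list[str]]:
    """Return an auto-generated caption plus hints about what the user
    might want to add (e.g. names of people the model detected)."""
    caption = "Two people standing on a beach"  # placeholder model output
    hints = ["It looks like there are people in this photo - if it's "
             "important information, you can add their names."]
    return caption, hints

def review_alt_text(image_path: str) -> str:
    """Show the suggestion, let the user edit it, and return the result."""
    caption, hints = generate_alt_text(image_path)
    print(f"Suggested alt text: {caption!r}")
    for hint in hints:
        print(f"Hint: {hint}")
    edited = input("Press Enter to accept, or type a revised version: ").strip()
    return edited or caption  # the user's edit wins; otherwise keep the suggestion

if __name__ == "__main__":
    final = review_alt_text("beach.jpg")
    print(f"Posting with alt text: {final!r}")
```

The point of the design is that the human stays in the loop: the model does the tedious first draft, but the person who knows the context (and the people in the photo) makes the final call.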
It seems like a basic privacy issue is when you should identify people in photos. A lot of people seem to be saying we shouldn't do it automatically, which makes me wonder if they would also object to people doing it manually.
If it's okay to identify other people in photos, do we need to explicitly get their consent, and how should machine assistance be used?
More generally, how much of a conflict is there between accessibility and privacy? Accessibility features can be used by bad actors. Alt text is useful data for image search engines, making it that much easier to find photos of you, even without face recognition.
In theory, anyway. In practice, if people have to do the work manually, we get a lot less of it.
If the photo is on a private Facebook page, inaccessible to search engines, I don't really see it as an issue. If it's more public, then you should be getting permission to post images of other people anyway, in my opinion. People can recognize you visually, as well.
Plus, alt text relies on context - if you're just sharing an image with friends, you might write "Me and my boyfriend John on the beach," not "Me, Jane Doe, on the beach with my boyfriend, John Doe." Either the people seeing the picture/hearing the alt text already know who you are, or they don't. The information conveyed visually and through alt text is relatively equivalent.
So yeah. I wouldn't consider there to be a conflict between alt text and privacy. (There's probably a conflict between privacy and search engines' ability to find people's names on websites and in alt text.)
I agree that if you don’t identify the people in the photo, it’s not a problem. I was talking about when you do. I guess that’s fairly rare, but it happens for group photos. For example, family photos or class photos.
Ah. But what about the models they trained off of them? They are likely doing a variation on what that iris-scanning cryptocurrency does: keep only a hash of the biometric scan, not the scan itself, so they can call it 'privacy compliant'.
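For what it's worth, here's roughly what "keep only the hash" would mean in practice. This is a hedged sketch, since we don't know their actual pipeline:

```python
import hashlib

# Rough illustration of "keep only the hash" (not Facebook's or the crypto
# project's actual pipeline). A raw biometric template can't be recovered
# from its digest, but the digest can still answer "have we seen exactly
# this template before?"

def template_digest(template: bytes) -> str:
    """One-way hash of a biometric template."""
    return hashlib.sha256(template).hexdigest()

seen: set[str] = set()

def register(template: bytes) -> bool:
    """Store only the digest; return False if this exact template exists."""
    d = template_digest(template)
    if d in seen:
        return False
    seen.add(d)
    return True

raw = b"\x01\x02\x03"              # stand-in for an iris/face template
print(register(raw))               # True  - first enrollment
print(register(raw))               # False - duplicate detected
print(template_digest(raw)[:16])   # the only thing retained

# Caveat: a cryptographic hash only matches byte-identical inputs; real
# biometric systems need fuzzy matching, so this is a simplification.
```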
"Deleting users' facial recognition templates" is pretty clear wording. I don't think there's a lot of creative lawyering you can do here to sidestep that phrasing. If you have these templates,...
"Deleting users' facial recognition templates" is pretty clear wording. I don't think there's a lot of creative lawyering you can do here to sidestep that phrasing. If you have these templates, that means your recognition model must rely on them. Delete them, and your model can't recognize anything. Likely their model is trained to represent likeness between an image (or image region) and a "template". Some ML black magic is used to generate templates from positively identified faces.
It's highly unlikely they cooked the association between your facial structure and your name directly into the monolithic model. That would work quite poorly, I would wager. For example, adding even one new person to the model leaves you with no guarantee that it will still work properly, which means you're going to spend a lot of time supervising that model.
Consequently, deleting the templates but not the models will probably leave them with what is basically an algorithm that can extract some kind of representation of faces independently of photographic artifacts. This could for example be an algorithm that generates the same (or a quite similar) vector representation of a face independently of how it was photographed, while giving a different representation for different faces. That's not nothing, but not remarkable either.
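Continuing the sketch above: what's left after deleting the templates is verification without identification - the model can still judge that two photos show the same face, it just can't attach a name to either of them.

```python
def same_person(img_a: np.ndarray, img_b: np.ndarray,
                threshold: float = 0.8) -> bool:
    """Reusing embed() from the sketch above: with the templates deleted,
    the model can still compare two faces, but has no identity database
    to look either of them up in."""
    return float(embed(img_a) @ embed(img_b)) > threshold
```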