iOS 13 redraws your eyes using ARKit so that you're looking at the camera instead of the screen
@schukin:
How iOS 13 FaceTime Attention Correction works: it simply uses ARKit to grab a depth map/position of your face, and adjusts the eyes accordingly. Notice the warping of the line across both the eyes and nose. https://t.co/U7PMa4oNGN
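For the curious, the "adjusts the eyes accordingly" step can be sketched as a plain displacement-field remap. This is a toy illustration, not Apple's actual pipeline: the circular eye mask, the fixed gaze shift, and the linear falloff are all my assumptions, while the real feature presumably drives the warp from ARKit's 3D face mesh and depth map.

```python
import numpy as np

def gaze_correct(img, eye_center, radius, shift):
    """Toy gaze-correction warp: remap pixels inside a circular
    eye region by `shift` (dy, dx), with a linear falloff toward
    the region's edge so the warp blends into the rest of the face.
    Illustrative stand-in only; a real pipeline would build the
    displacement field from a depth map / 3D face mesh."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - eye_center[0], xs - eye_center[1])
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0)  # 1 at center, 0 outside
    # Backward mapping: each output pixel samples from a shifted source
    src_y = np.clip((ys - shift[0] * weight).round().astype(int), 0, h - 1)
    src_x = np.clip((xs - shift[1] * weight).round().astype(int), 0, w - 1)
    return img[src_y, src_x]

# 8x8 test image with one bright "pupil" pixel; the warp slides the
# pupil down-right while pixels far from the eye stay put.
img = np.zeros((8, 8))
img[5, 5] = 1.0
out = gaze_correct(img, eye_center=(5, 5), radius=4, shift=(2, 2))
```

The same idea scales up: estimate where the eyes are, compute per-pixel displacements that rotate the apparent gaze toward the camera, and resample. The warping of the straight line in the clip above is exactly this displacement field made visible.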
I have surprisingly conflicted feelings about this. On one hand, it's always mildly annoyed me that you can't have the illusion of eye contact with someone over video chat because they're always looking just below the camera. So why not fix it with software?
On the other hand, the more I read about the subtle ways modern digital cameras are manipulating images (by default), the more disconcerted I become. I expect photography to create a good-faith facsimile of the real world, not a deliberate uncanny valley approximation of it. At the very least, I'd like to see these beautifying / smile-improving / attention-correcting "enhancements" all locked behind discrete settings that are not enabled out of the box.
I think this is just a sign of the times. This attention correction feature is likely a limited-scope "beta" for what will certainly become a tier-one feature at some point, like adjusting everyone's eyes in group photos to look at the camera.
I get the sort of philosophy that cameras should be a facsimile of real life, but tools like this will improve photos overall. A feature like this does no harm to anyone, and only improves outcomes (unless it breaks).
I want a feature like this to be prolific across platforms sooner rather than later. After a few years of bad glasses (think nearly "coke bottle" glasses, even with today's high-index lenses) coupled with rapidly degrading eyesight, my left eye now drifts a bit after prolonged strain, causing semi-frequent cross-eyed photos. (Recent Lasik surgery has undone a ton of the damage, but it's not gone.) Unless I take a photo from a few angles that don't trigger it, I can't stand to look at any photo of myself. Realizing that people's phones will have features like this on by default will make me less camera shy.
This is unquestionably a net positive, no matter how "uncanny valley" this may feel from newness.
All that being said, I'm not the biggest fan of other software photography features, like fake depth of field. It's extremely obvious when it's applied in software, and skill plus real lenses still trumps it.
It used to be that you hated how you look in photos because it looks so different from how you see yourself in mirrors. Soon, it will be the other way round!
You can always use a third-party camera app that does none of that, lets you shoot RAW, gives you control over shutter speed and ISO, focus point, etc.
Given what unreliable narrators we are in our own lives, I suppose the next step is to make us question ourselves even more. It's a slippery slope to throw what is "real" out the window, since we naturally adapt to believe what we think is true.
At the same time, given how fluid reality already is, I can't think of any good reason why we shouldn't, other than that it makes me uncomfortable.
I'm not too knowledgeable on this, but doesn't this kind of start a slippery slope?
It seems useful, but aren't cameras for taking "snapshots" of the real world, not for capturing one, tweaking it so it looks nicer, and passing that off as just how things are? There's already a lot of other stuff, like beautification, baked in; doesn't this widen the gap between the way a person actually is and the way they're perceived online?
It's not the beginning of the slippery slope at all. We pushed the rock down that hill a long time ago with smartphone cameras and Instagram. Not that this justifies it, mind you. This is just another shitty step towards a world where reality does not match content. Deepfakes, fake news, photoshop, etc.
This seems like a very un-Apple like move, and I don't appreciate it.
It's not that photos have always been a 1:1 imitation of reality either. You could argue this started with our first ability to capture photos, even.
A pro photographer, with good gear, knows they're not capturing reality; they're only capturing an imitation of it. Passing light through lenses to capture onto a film or digital sensor introduces a menagerie of things that distort reality: barrel distortion, grain/noise, chromatic aberration, you name it. They'll even manipulate things outside of the camera, like setting up extremely advantageous lighting scenarios to modify the outcome of their photos. If you flip through any magazine of beautiful homes, food, or people, they've all been modified to be "more than real", even in magazines dating to before the days of Photoshop. (People even thirst for some of this, like tasteful lens flares and film grain.)
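As an aside, the barrel distortion mentioned above even has a standard mathematical description: the radial terms of the Brown-Conrady lens model, where points are scaled by a polynomial in their squared distance from the optical center. A minimal sketch (the coefficients here are made up for illustration; real lenses are calibrated to find theirs):

```python
def radial_distort(x, y, k1, k2=0.0):
    """Radial lens distortion (Brown-Conrady radial terms).
    Coordinates are normalized so the image edge sits near r = 1.
    k1 < 0 pulls points toward the center (barrel distortion);
    k1 > 0 pushes them outward (pincushion distortion)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the corner moves noticeably; one near the center barely does.
xd, yd = radial_distort(0.8, 0.8, k1=-0.2)    # barrel: pulled inward
xc, yc = radial_distort(0.01, 0.01, k1=-0.2)  # near center: almost unchanged
```

That's the sense in which straight lines bow outward at the edges of a wide-angle shot: the lens itself is already "editing" the geometry before any software touches it.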
In a way, it's not great that software is reducing the skill required to take truly mind-blowing photos (we shouldn't accept a cheap software blur as a substitute for a fast lens), but I don't exactly mind having software around to help in the lowest common denominator of scenarios, either.
There's a sliding scale between acceptable and unacceptable, in my mind; and while most of the examples you mention seem fine to me, this new feature to me crosses a threshold of reality distortion I'm not comfortable with.
I didn't say it's unacceptable, I said I'm not comfortable with it. It reminds me of a number of dystopic futures where the humanity is essentially removed from humans. What makes us ourselves is concealed.
It will be like in the movie Surrogates, where people only interact with others through perfect-looking robot bodies.
There is something eerily dystopian about it. A thing as common as looking into someone's eyes suddenly becomes "improved" by technology. In the pretend-world you're looking into people's eyes more, but in reality you don't see the eyes at all; you see an animation that looks like your partner's eyes. This desire to make the world look more refined without actually trying to refine it bothers me greatly.