The idea itself is neat (well, except actually not, if you stop to ask why this excessive data collection is even necessary, a point made very well by a blog post shared here on Tildes some time ago), but it suffers from most of the pitfalls that new tech ideas have, which means it'll probably never be implemented:
Standardization: It's hell. Standardizing anything in computer science is fucking terrible, and even more so if you involve a bunch of big companies that don't want to play well together because they're competing (e.g. Netflix, YouTube, Apple and its new streaming service, etc.). With this idea you're actively blocking them from creating their own walled gardens. They don't want to interoperate, because then you might use Netflix and Apple TV+ instead of just TV+.
You're blocking big companies from collecting your data. They hate that, because they want to sell your data, and sometimes use it to improve their own service. More importantly, how would you actually stop companies from transferring your data? The Netflix app isn't open source, and it requires internet access. Once it's inside the circle, it can send home anything it wants. You couldn't even monitor the traffic, because you're streaming video and therefore already moving large amounts of data. The same goes for YouTube or Google Maps. All you need is one shady app inside your circle and it's breached. And since almost nothing big is open source, everything is potentially shady. The point about end-to-end encryption doesn't really stand either, because for an app to be able to work with your data, it has to decrypt it. That's the moment shit gets sent home.
Centralizing everything on your device isn't always good, because then all data lines converge on a single point of attack: your phone. Lose it, have it stolen, plug it into a public charging station and get a rootkit installed... (USB is insecure as shit, lads and lasses.)
How necessary is this? This leads me back to the post I linked above. The site talks about how "AI" and "on-device machine learning" will help keep our data safe while personalizing our adventures through the internet. Except the personalization sucks. Analyzing big data is fucking difficult, often inconclusive, and frequently completely useless and unnecessary for the product. I don't need Google to track my every step. When I look something up through it, it can connect me to the product I'm looking for, or a list of them. Bam! Personalized advertising, and it didn't even need to know the name of my firstborn child to work!
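To make the second point above concrete: here's a minimal, purely hypothetical sketch of why "it's inside the circle" is game over for a closed-source app. Every name here (the function, the `client_hints` field) is made up for illustration, not anyone's real API; the point is only that an app which holds decrypted data can ride extra fields along on its ordinary requests, invisible inside the traffic it legitimately generates.

```python
import json

def build_playback_request(video_id: str, smuggled: dict) -> bytes:
    """A 'normal' streaming request that quietly carries extra data.

    Hypothetical sketch: the smuggled fields look like routine
    telemetry and vanish among the gigabytes of video traffic
    the app already moves.
    """
    payload = {
        "video_id": video_id,
        "quality": "1080p",
        "client_hints": smuggled,  # anything the app decrypted can go here
    }
    return json.dumps(payload).encode("utf-8")

# From the outside this is indistinguishable from a playback request.
req = build_playback_request("abc123", {"contacts_count": 412, "location": "52.5,13.4"})
```

No traffic inspection will flag this, which is the whole problem with trusting the sandbox boundary alone.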
I don't want to become a see-through person, where Facebook/Siri/whoever knows exactly what I'm doing right now and what I'm intending to do. It's creepy. Please stop.
How necessary is this? (...) Except the personalization sucks. (...)
Now, you could be entirely right that mass data collection is not helpful for personalization, and that personalization might not be a worthy goal. However, I would like to push back a little and point out that a lot of the complaints in the article you linked stem from perverse economic incentives that prevent a truly user-positive system. The article talks about how advertising companies have incentives to not interoperate well, leading to bloated web pages and extraneous tracking. It talks about how advertisers are terrible at understanding how deeply personalization could work (or are limited by the previously mentioned poor interoperation), and default to known demographic trends. It also points out how Netflix's business model incentivizes retention and broad appeal as opposed to high quality or individualized content.
These don't prove that machine learning is ineffective, nor that people won't find it useful in their day-to-day lives if presented with the right use cases. They only prove that the most obvious uses of it are inefficient or user-hostile. Yes, Google sometimes gets my demographic information wrong, but I also have no incentive, or even a negative one, to correct them! (Which hurts not just my demographic data, but their ability to accurately predict it in the future.)
My parents love the speech-to-text feature on their phones, despite the fact that I find it creepy. This feature is currently possible because Google can collect millions of voice samples and flag recordings with low confidence to further refine their model. You can't do that with a simple heuristic.
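The "flag recordings with low confidence" loop is a standard active-learning pattern, and it can be sketched in a few lines. This is an illustrative toy, assuming a made-up prediction format and an arbitrary 0.7 threshold, not a description of Google's actual pipeline:

```python
# Confidence-based sampling: keep only predictions the model was
# unsure about, so human labelers (or user corrections) can be
# spent where they improve the model most.

def flag_for_review(predictions, threshold=0.7):
    """Return samples whose top transcription confidence is below threshold."""
    return [p for p in predictions if p["confidence"] < threshold]

preds = [
    {"audio_id": 1, "text": "turn on the lights", "confidence": 0.95},
    {"audio_id": 2, "text": "turnip the lights?", "confidence": 0.41},
]
flagged = flag_for_review(preds)  # only audio_id 2 gets queued for relabeling
```

The catch, as the comment says, is that this only pays off at scale: one phone's worth of flagged audio trains nothing, which is exactly why the mass-collection pressure exists.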
I'm currently not entirely on board with the whole AI train. It is definitely way over-hyped. But it is a real tool that sometimes solves real problems, and when those problem spaces require mass collection of data, like with text-to-speech/speech-to-text, there are going to be strong privacy concerns that we clearly still don't know how to deal with.
Finally, this UX doesn't have to use AI exclusively. If you think there are plenty of problem spaces where heuristics work just as well, or better, then this UX is still useful whenever those problems require data from multiple domains. For instance, if I search for a grocery item, in addition to showing a search-term-relevant ad, you could also offer to add it to my grocery list. Does that need AI? No. Does that need mass data collection and profiling? No. But does it need strong interoperation between my apps, and a cohesive and intuitive data environment for them to operate in? Yes.
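The grocery example above really only needs a shared, on-device intent registry, no profiling anywhere. Here's a minimal sketch of that idea; the API (`register_handler`, `dispatch`) and the app names are hypothetical, invented purely to show the shape of the interop:

```python
# On-device intent registry: apps declare what item types they can
# act on, and any app can offer those actions without ever seeing
# the user's history or profile.

handlers = {}

def register_handler(item_type, app_name, action):
    handlers.setdefault(item_type, []).append((app_name, action))

def dispatch(item_type, payload):
    """Offer the payload to every app registered for this item type."""
    return [(app, action(payload)) for app, action in handlers.get(item_type, [])]

grocery_list = []
register_handler("grocery_item", "GroceryApp",
                 lambda item: grocery_list.append(item) or f"added {item}")

# A search for "oat milk" can now surface an "add to list" action,
# with zero data collection beyond the query itself.
offers = dispatch("grocery_item", "oat milk")
```

Everything stays local: the search app never learns what's on the list, and the list app never learns what else was searched.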
So, in total, my point is that if we are trying to develop a user-centric UX paradigm, possibly involving AI, we might need to rethink how we express access rights and data flows to the user. This article might not be the right approach, but its goal is nonetheless worthy.
I don't want to become a see-through person, where Facebook/Siri/whoever knows exactly what I'm doing right now and what I'm intending to do. It's creepy. Please stop.
The point the essay raises is to explicitly avoid that kind of user espionage: they won't know what you're doing unless you specifically tell the middle layer they can.
I'm still very uncomfortable with giving an app full access to the data in my device by moving it into the circle and question how necessary it really would be.
If you want suggestions, you share with the desired app. Considering it defaults to "no access", it's as simple a rule about privacy as you'll get – which is exactly what people with no technical aptitude need.
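That default-deny rule is simple enough to fit in a few lines, which is part of its appeal. A minimal sketch (the class and app names are illustrative, not from the essay):

```python
# "Defaults to no access": an app sees nothing unless the user has
# explicitly shared that data domain with it.

class DataCircle:
    def __init__(self):
        self.grants = {}  # app name -> set of granted data domains

    def share(self, app, domain):
        """Explicit user consent: grant one app access to one domain."""
        self.grants.setdefault(app, set()).add(domain)

    def can_read(self, app, domain):
        # Unknown app, or ungranted domain -> deny. There is no
        # allow-by-default path anywhere in this model.
        return domain in self.grants.get(app, set())

circle = DataCircle()
circle.can_read("MapsApp", "location")   # False until shared
circle.share("MapsApp", "location")
circle.can_read("MapsApp", "location")   # True after explicit consent
```

The whole mental model for a non-technical user is one sentence: nothing is shared until you share it.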
You may argue whether this is the best way to handle privacy in a simple way – in which case, you're welcome to suggest a better option. Better yet: share it here and write Lennart, the author of the essay (he's left his Twitter handle and email address at the bottom). Maybe the ensuing discussion would produce an even better option.
It's by no means a perfect solution, but it's the simplest and most straightforward one, concept-wise, that I've heard so far.
As for the necessity of it all – well, I'd say it's more than necessary: it's inevitable, and it's the way we handle this massive change that matters. Not gonna bet my money on it, 'cause I'm no machine-learning expert, but I've seen technology move on even in my relatively-short life.
When I got my first PC, I couldn't even imagine one day seeing a computer the size of a wristwatch, talking back to me, checking which restaurants are open nearby, and playing music on my wireless headphones. 3D games of such amazing fidelity within just a decade? Duuude. Or, how 'bout dual cameras, or a front-facing camera hidden entirely under the screen? Or maybe VR, in all of its still-expensive-as-hell glory.
What I'm saying is: shit moves on. We'll get to the point where Siri could tell me I'm eating too much yogurt and not enough apples (in which case, oddly enough, shit will not move on, in a more literal sense). It's a question not of time but of our approach to it. It's not good enough now? Uh, have you seen the pace of it?
Facebook and Google and Microsoft will keep getting your data, refining their secret projects, and enacting their crazy experiments (a lot of which I'd be fine with if they defaulted to getting consent from a portion of users before running them). Maybe let's not let them – not without our explicit consent? Just an idea.
Cloudfall, as far as I'm aware, is the best idea on how to approach this.
The cloud cripples your data. Instagram has your photos, iMessage your messages and Google your documents. By splitting up our data, we prevent any AI from truly knowing us as individuals. And by giving away all control, we relegate ourselves to mindless drivers of engagement. What have we gained from sharing our lives with Facebook? If data is the new oil, where are our cars?
Algorithms will soon influence every part of our world. Google and Facebook already use AI to decide what you see, and by extension what you think, feel and do. Do we really want to base our behaviour on systems we can’t understand and can’t control?
I believe that users having control over their data will let companies achieve the full potential of AI. By designing technology to align with user values, we can reignite our confidence in technology that amplifies and advances humanity.
We need to rethink the relation between our data and the services that use it.