TikTok might be a Chinese Cambridge Analytica-scale privacy threat
Link information
- Title: Is TikTok a Chinese Cambridge Analytica data bomb waiting to explode?
- Published: May 7, 2019
- Word count: 2,659 words
Honestly, this article feels a little empty to me. Like, it rehashes the Cambridge Analytica scandal, talks about the Great Firewall of China and China's human rights record, then documents an email exchange that honestly goes better than any time I try to contact AT&T. Near the end, it has this quote:
(context provided in brackets)
which seems to nullify the thesis it has been implying the whole time. So the article is forced to end with a hypothetical about how that conversation could have gone and why we should be afraid of China and TikTok anyway. It all feels... like nothing happened? Like there wasn't much journalism going on here. There is nothing in this article that is more true of TikTok than of any Silicon Valley-backed company, and calling out TikTok as a national security issue feels a little xenophobic when the only evidence the article could present was hypothetical.
Sure, this is an attack vector we should be aware of, but the title implies there's evidence that this attack might be ongoing, which just isn't true. I think @Defluo's interpretation of TikTok as a soft power play fits the evidence much better at this point.
[EDIT: Please ignore my hasty conclusion about them doing a terrible job of data collection; the replies below have made me reconsider. I still think it's more about soft power projection than collecting data, though.]
If they're collecting data, they're doing a terrible job of it. If I open my profile, I can see my real name, a username they generated for me, and empty fields for a profile pic, profile video, bio, Instagram link, and YouTube link. That doesn't really seem like enough data to target ads with, nor particularly useful to a government. When I run Facebook ads, the targeting is so granular you can narrow things down to tiny niche groups (American women between 18 and 19 who like anime, drag car races, and watching children ride emus). Why would I want to run an ad on TikTok, besides to build a base of teens with no money?
I suppose they could be taking a device fingerprint and location data or something to track you, but that doesn't seem very useful either. They could build a heatmap of places with free Wi-Fi where teens like to congregate, I guess? Or a map of cellular coverage.
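Just to make "device fingerprint" concrete, something like this would be enough to start, using standard Android APIs (a rough sketch only; I have no idea what their client actually collects):

```kotlin
// Illustrative sketch only -- standard Android APIs, nothing TikTok-specific.
import android.content.Context
import android.os.Build
import android.provider.Settings

fun roughFingerprint(context: Context): Map<String, String> = mapOf(
    "manufacturer" to Build.MANUFACTURER,
    "model" to Build.MODEL,
    "os_version" to Build.VERSION.RELEASE,
    // ANDROID_ID is a per-device identifier (scoped per app signing key on newer versions).
    "android_id" to Settings.Secure.getString(
        context.contentResolver,
        Settings.Secure.ANDROID_ID
    )
)
```

Pair that with coarse location and timestamps and you'd have the raw material for the heatmap idea, but like I said, it's not obvious what that actually buys them.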
If anything, it's more of a soft power projection platform. I get a lot of videos from China, Japan, Korea, and India in my feed, and it honestly makes me feel more connected to those countries and people, because I can see what their memes are like and often find them enjoyable.
Well, if I wanna put my tinfoil hat on for a moment and wildly speculate with absolutely zero evidence: they could be building large facial and voice recognition datasets. That would give them a huge corpus sourced from outside their borders, and might provide value apart from advertising money. (Sure, the set would skew young, but they have enough data that they could selectively de-bias their samples, or maybe they just don't care.) Additionally (super conspiracy time), there could be code that only activates for certain targets. Like, if they can identify the child of an important figure, they can turn on all the nasties.
Honestly, I highly doubt either of these is happening, especially the second (way too risky for a nation to try). I think you're right that soft power is probably a major aspect of this, but I did want to point out that just because you can't see much individual metadata on yourself doesn't mean a dedicated government can't derive value from your data.
EDIT: typo
You're right, I didn't consider the videos themselves! They could create giant datasets for machine learning; they'd have videos of people, places, animals, cooking, and a whole variety of things. I don't think their videos would be very useful for voice recognition, though. Most of the videos share sound/music and just have people acting out the memes to them ("I smell pennies", someone in a costume does a weird walk and runs at the camera, "Ahhhh").
Well, if you look at their Android client, that's where the fun stuff happens. From Exodus' analysis here, you have Facebook, Umeng, Google, and others jumping in on the action, each of which can collect more details (according to their respective policies).
And then if you look at the permissions, I'm sure some are genuine, but I think it's fair to say there are a lot more than needed. Some are most likely not required for the application to function and can be disabled on Android, but some are not user-settable, from my understanding.
Thanks, I never knew about Exodus. The 13 trackers do concern me; any clue what they're actually sending back? Permission-wise, I can honestly see a valid reason for the majority of them; if you use the app, it's pretty obvious what they're for. "GET_ACCOUNTS" worries me, though. I haven't developed anything for Android in a decade; how much information is available with that permission?
Exodus actually lets you check what each tracker does. As for the "GET_ACCOUNTS" permission, I'm not sure, but I think it would only expose the accounts registered through Android's account system (and not all applications use that system); I'd have to check exactly what it returns.
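My rough understanding is that it gates AccountManager, so an app holding it could list whatever accounts are registered on the device, something like this (a sketch from memory, not tested against current Android):

```kotlin
// Rough sketch of what GET_ACCOUNTS gates: on older Android versions,
// AccountManager returns the accounts registered on the device -- often the user's
// Gmail address plus accounts added by any app that uses the system account store.
import android.accounts.AccountManager
import android.content.Context

fun listRegisteredAccounts(context: Context) {
    val accounts = AccountManager.get(context).accounts  // needs GET_ACCOUNTS
    for (account in accounts) {
        // account.name is usually an email address or username;
        // account.type identifies the provider, e.g. "com.google".
        println("${account.type}: ${account.name}")
    }
}
```

As far as I remember, newer Android versions restrict this so an app mostly only sees accounts whose authenticators explicitly grant it visibility, but it used to be a fairly broad view.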