Sophisticated exploits used to breach fully-patched iPhones of journalists, activists, as detailed by Amnesty International's Security Lab
Link information:
- Title: Forensic Methodology Report: How to catch NSO Group's Pegasus
- Published: Jul 18 2021
- Word count: 7,008 words
I've spent a lot of time expressing my distaste for Amazon here, but I have to say that they 100% did the right thing today by shutting down NSO's use of AWS.
Relevant articles from the Washington Post: Despite the hype, iPhone security no match for NSO spyware
Private Israeli spyware used to hack cellphones of journalists, activists worldwide
Few things in tech and security surprise, much less outrage, me these days, but this really got me going. The lack of discretion and accountability shown by NSO Group is frankly shocking. By selling this software suite to any government that asks, they are effectively lowering the barrier to entry for a high-level state-sponsored campaign against an individual. Whereas the number of countries with the resources and skills to conduct such a campaign was previously somewhat small, the range broadens considerably thanks to NSO. I'm not referring to the texted links, where a target must click the link like normal spam. That, by and large, is on the user. Stop clicking random links FFS! I'm specifically referring to the zero-click, 0-day exploitation chains enacted against fully-patched iPhones.
Moreover, the "crime-fighting" smokescreen that NSO pushes is hardly good enough to justify the means. It is obvious that this software suite is being abused.
In light of this news, it is exceedingly frustrating that we have to put our faith in companies like Apple to handle security and privacy in a closed-source ecosystem. It's always frustrated me, and they almost had me with their recent marketing campaigns about privacy and security.
Apple was never promising extremely high-level security that would protect users against coordinated state-level attacks. They're offering consumer-grade privacy, a focus on security, and basic protection, because most people, left to their own devices, basically have none.
I agree with the sentiment, but the targeted users are journalists, lawyers, and others. They're just civilians. They're not in the military where that level of security is "normal". Maybe it will become normal after this reporting.
But the question should be why civilians were targeted that way, not why Apple isn't providing a much higher level of security than should be necessary for civilians.
Anyone who believes they need higher degrees of security shouldn't be offloading it entirely to automated solutions in the first place, especially if they are known and accepted targets for state-sponsored security breaches.
Your average journalist or politician isn't very tech-savvy. They will need to outsource their security to someone. If Apple can't do it for them, who can?
I don't see any alternatives that are obviously more secure other than to stop using smartphones altogether. Maybe Precursor would work for secure messaging, when it ships.
Their parent company (for journalists). Their staff (who can work with their government's digital security staff).
Politicians who haven't been elected yet usually don't really have IT departments; they're running temporary organizations mostly of volunteers and maybe a few professionals if they're lucky. Perhaps some tech-savvy person might volunteer.
Once they're elected, government IT is often far behind the times, to the point where government officials will work around them. (Remember Hillary Clinton's email?)
The largest news organizations have IT staff but many news organizations are small and losing money and can't afford to pay journalists let alone IT tech salaries. So we aren't talking about the best and brightest, necessarily.
Sure, but that's their responsibility to fix. The answer to "who should these people outsource their IT security to?" is: the security staff that works with them.
That such staff are, in the current state of things, often pretty bad at their jobs is a problem with the staff, not something within a consumer electronics manufacturer's role. Security for a person of interest will always be different, and make different trade-offs, than security for us random schmucks.
I’m wondering what you expect from IT? Most IT organizations are small, and aren’t in the business of writing their own mobile OS or maintaining a Linux distro. Typical IT organizations, in the US at least, buy software from Microsoft and hardware from Dell. They might configure Android or iPhones to be more secure, based on the manufacturer’s recommendations.
I would expect them to know the field, know what to buy and what to do with it. For phones, for instance, they would purchase the hardened Samsung Galaxy series (the "Tactical Edition"), which actually does come with POI level security assurances, unlike a bog standard consumer Galaxy or iPhone.
I don't expect them to write software from the source up.
Is this “Tactical Edition” real or is it just hype? I see skeptical takes from Ars Technica and Verge but it sounds like they reviewed the press release, not the phone. Maybe there is a more serious review somewhere?
I’m biased but I don’t really trust Samsung on software; I’d rather get a phone directly from Google than give Samsung a chance to screw it up.
Also, it looks like ordinary people can’t buy it.
It's real. Samsung has had a deal with the US government for hardened phones for a couple years now.
Indeed, and if you're a politician, you're not a normal person, and if you're a journalist, your organization is not a normal person, or a person at all, and can contact the Samsung sales rep for details.
Basically, the point is, I expect the staffers and parent organizations of people of interest to have the know-how and the connections to purchase devices and use software made for the situation, with contractual obligations in the case that they are broken into. Security unfortunately is not something you get for free; it comes with compromise, and the level of compromise for consumers is not, and will not be, the same as for a person of interest.
Instead of "real" I should have asked if it has better security in a way that would be relevant to journalists or politicians, and how would we know?
Conforming to government regulations is something government contractors have to do, even if it doesn't make sense and actually reduces security, for example by specifying older encryption algorithms. Also, the resulting tech might not be so relevant for civilians. There seem to be features more relevant for soldiers, like ruggedness and a way to store and transmit classified documents.
But journalists and politicians have their own needs that these devices aren't designed for. There don't seem to be products specifically for them and it's quite possible that products designed for consumers and businesses are a better place to start from when securing a device.
I don't know about Apple, but Google is a high-profile target for hackers (including nation-state actors) and they are in part designing for their own IT department and employees (and for other businesses that need secure devices). They also have some of the best security engineers in the business. This doesn't mean there are no zero-days, but they're hard to beat.
(This is similar to the logic behind why IT departments use cloud infrastructure instead of trying to maintain a secure data center themselves.)
So I'm not going to take for granted that any particular "hardened" phone is better. I don't think we know one way or the other. It would have to be proven, which is something we're unlikely to get to the bottom of in a casual conversation.
Adapting existing tech to your own security needs is typically a messy compromise however you go about it.
Their IT departments.
Politicians aren't handling their own IT security, especially elected ones. After all, they're not civilians by IT security standards; they're high-priority government targets. They're given specifically configured devices by government IT, and are subject to those usage policies. Personal IT breaches at that level usually only happen when they circumvent government IT standards or practices. The unelected ones usually have an IT department from their party. I can't really speak to the quality of that, as it differs greatly by region and party, but there is generally something in place.
I can't speak to the journalism industry, but ideally there should be something in place with their IT departments that would help them with their IT security in a similar fashion. They are less regulated than government when it comes to IT process and equipment, so there's probably more personal maintenance required, but at that point we should start to talk about the prerequisite skills to be in such a field at all.
It's just not Apple's responsibility to make sure absolutely everyone has ironclad protection. It shouldn't be, anyway; true full protection isn't a one-size-fits-all initiative. If you're in any position to be targeted by state-sponsored actions, it would be an incredibly bad idea to rely on a free, automated, one-stop-shop security solution meant to help out people who would otherwise have none.
Based on the following article, I don't think the IT staff of election campaigns is up to the job. They aren't going to be able to supply their organization with secure custom phones. They're lucky if they can configure standard phones to be more secure:
What I Learned Trying To Secure Congressional Campaigns.
Then the campaign staff should prioritize their IT more because clearly they aren't doing it enough there.
This isn't an exclusive phenomenon. There's poor IT everywhere you look. You can (and will) find stories like this about IT in every single industry in the world, but it will never mean that it's somehow Apple's fault for not offering enough security in their universal security offering to protect against targeted and concentrated cyberattacks.
I don't know about "fault" but I think Apple aspires to provide that level of security? They are a high-profile target and sell to businesses that are also high-profile targets.
Typically there isn't special hardware, but there are OS settings that enterprises can use to lock things down.
I don't think Apple could even pretend to offer that level of security with a mass-market solution without exposing themselves legally. There's really no way to do it without personalized solutions and more restricted devices than stock configurations, which Apple can't really offer at scale, out of the box. If anything, this would have to be a very premium white-glove service of sorts.
At the very least, I can't find anything in Apple's security pages detailing what levels of threat they can really protect you from, only how they design for security in general.
Both iOS and Android have enterprise enrollment. This either separates enterprise from personal apps, or with consent gives the enterprise full control over the phone.
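For concreteness: on iOS this lockdown happens through configuration profiles, which an MDM server pushes to enrolled devices. A minimal sketch of a restrictions profile might look roughly like the following (the identifiers and UUIDs are placeholders I made up, and which restriction keys actually take effect depends on the enrollment and supervision mode):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Outer wrapper describing the profile itself -->
  <key>PayloadType</key><string>Configuration</string>
  <key>PayloadVersion</key><integer>1</integer>
  <key>PayloadIdentifier</key><string>com.example.hardening</string>
  <key>PayloadUUID</key><string>11111111-2222-3333-4444-555555555555</string>
  <key>PayloadDisplayName</key><string>Example hardening profile</string>
  <key>PayloadContent</key>
  <array>
    <dict>
      <!-- Apple's "Restrictions" payload; keys below are real restriction
           keys, but availability varies by supervision/enrollment mode -->
      <key>PayloadType</key><string>com.apple.applicationaccess</string>
      <key>PayloadVersion</key><integer>1</integer>
      <key>PayloadIdentifier</key><string>com.example.hardening.restrictions</string>
      <key>PayloadUUID</key><string>66666666-7777-8888-9999-000000000000</string>
      <key>allowAppInstallation</key><false/>
      <key>allowCamera</key><false/>
    </dict>
  </array>
</dict>
</plist>
```

The point being: the knobs exist and are documented, but someone (an MDM admin, not Apple) has to decide which ones to turn.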
I'm not sure what you have in mind but don't assume it can't be done without looking.
Enterprise enrollment is not a mass market solution, and it is not set up by Apple (or Google/Samsung on Androids).
Again, if Apple actually was trying to offer a level of security that would protect against multimillion-dollar, individually targeted cyberattacks, they would be offering a whole other specialized service. I can't see why anyone would think Apple is promising this level of protection with their current mass-market offerings.
There’s no reason not to fix security bugs that are exploited by hackers for everyone. For example, it doesn’t make sense to maintain two versions of iMessage, a “secure” one and an “insecure” one.
More likely they will adopt architectural changes to make it more secure for everyone.
Would an open source phone be more secure? It seems there’s just too much attack surface on a modern device for us to properly secure it.
It depends on who does the security auditing. And, I suppose, whether users actually update their devices. The user is generally the weakest link. But, there are potentially a lot of eyes on open-source code. Right now, there's a lot of money to be made selling exploits. There's zero incentive for a criminal to tell a company like Apple they found a major vulnerability. In an ideal situation, where a piece of software is open source, more people can review it, and hopefully find the exploit first.
Open sourcing isn't a magic potion, and I'm under no impression that it makes all vulnerabilities just disappear. But, perhaps they could be found sooner and announced publicly.
Who actually sits out there and gives open source projects free security audits? I can’t imagine there are many contributors with the skill to find anything but the most basic of security holes.
Probably no-one? Maybe the maintainers, as in the case of the recent Univ. of Minnesota drama? Maybe academics, as in the case of the Signal protocol? There's no reason that a security audit should be cost-free. I certainly don't mean to conflate open-source with "costs nothing". Open-source should still cost money, but it's about improving transparency and trust, and having additional eyes on the code/reducing the barriers to access.
I agree. I just don't think there's anything special about the level of robustness to code you can read vs. code you can't.
Isn’t that what Apple’s bounty program is for? (I’m not arguing that a bounty model is perfect, but just pointing out its existence.)
Yes, and those payouts have the potential to be substantial, depending on type. But, they exclude:
That list includes Russia, China, Iran, etc.
So for many developers, sure, Apple's bounty program may be fair-market value. Other individuals trying to exploit Apple's systems couldn't sell their findings to Apple if they wanted to. In the case of NSO, WaPo reported their licensing price at $1 million. You can sell a zero-click exploit to Apple for $1 million, or you can license it to dozens of countries. I'm intentionally blurring the lines between criminal activity and NSO here.