Military AI’s next frontier: Your work computer
Link information
- Authors
- Gabriel Grill, Christian Sandvig
- Published
- Jun 22 2023
- Word count
- 634 words
I wonder how much companies spend on this kind of stuff, and if it really costs them less than working with their employees. Can't imagine anything sold by a defense contractor is cheap.
The expansion of work to include a person's personal life really sucks. I haven't worked for an employer that did this, but I once attended a school that did. If you did something against their rules during the summer break and they saw it posted on social media, you'd get punished, which is grossly unfair in my opinion. It's even more unfair when you're talking about someone's livelihood, and when what they did outside of work might be as minor as following someone on Twitter who happens to be pro-union.
I mean, it's super misleading to call OSINT-related technology "military grade." The whole point of OSINT is that it's based on open-source information.
The companies developing this software may technically be defense contractors, assuming they successfully win a contract, but ultimately all they're doing is crawling through publicly available information and analyzing it. It's probably misleading to even call that AI.
If companies use these tools to attack unions, that will be illegal under existing laws; if they use them to identify other kinds of security threats, there's probably no reason to make that illegal. And given how much of a paper trail purchasing software like this would create, it seems unlikely people would be using it directly for illegal activities.
I don't know that I necessarily agree with your last paragraph. We've seen cases recently of companies (rightfully) firing their employees after they do something to hurt their brand. This would just make the process faster, or possibly weed out the "problem children" sooner.
I don't agree with it, but I think it's possible and I don't trust companies in corporate America not to stoop to this level as a "cost-cutting" measure.
I mean this could absolutely be used to find employees who are posting things harmful to the company, or maybe doing things like posting about using drugs at a company that doesn't allow that.
But people already get fired for those things, so an AI that points them out to managers instead of requiring a human to look at social media accounts or reports doesn't really change much. Nor will it be possible to regulate, unless you were to pass some very specific laws.
On the other hand, using these programs to attack unions specifically would require leaving a paper trail, so even if it does happen it will be caught more easily than someone using the traditional method of firing pro-union employees for made-up reasons.
Ultimately I could see why people have a problem with that, even if I don't think it's an issue, but calling it "military grade AI" is just stupid.
I think most technology was military grade at one point, usually at the beginning of its life. I agree with you on the name being dumb. It would have been better to call it decommissioned military technology or something, since that's usually how it gets into the "consumer's" hands.
I hope that any chance to attack unions is obliterated, though; the paper trails are very important there to show cause.
It's not even military grade, any more than Boris the village drunk downloading Twitter onto his $50 Android phone and using it to upload a picture of a tank makes both the phone and Twitter "military grade." Or is Google now military grade because an analyst can use it to find the post that Boris made?
All this is is software designed to scrape publicly available information for things that you, as the user, want to know. That has military applications, yes, but it isn't specifically designed for the military, or built to any military standards.
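To put that concretely, here's roughly the level of sophistication we're talking about. This is a made-up sketch with hypothetical post data, a hypothetical watchlist, and a hypothetical `flag_posts` helper, not any vendor's actual code:

```python
# Sketch of the kind of "OSINT" tooling being described: scan already-public
# posts for keywords someone cares about. All data and names here are
# hypothetical, purely to illustrate that this is ordinary text matching.

public_posts = [
    {"author": "boris", "text": "Uploaded a picture of a tank from my $50 phone"},
    {"author": "alice", "text": "Thinking about joining the union drive at work"},
]

watchlist = ["union", "strike", "walkout"]

def flag_posts(posts, keywords):
    """Return posts whose text mentions any watched keyword (case-insensitive)."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        hits = [kw for kw in keywords if kw in text]
        if hits:
            flagged.append({**post, "matched": hits})
    return flagged

for match in flag_posts(public_posts, watchlist):
    print(match["author"], "->", match["matched"])
```

Nothing about that is "military grade"; it's the same kind of text search anyone could run over public posts.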
I mean, in the USA at least, companies take illegal actions against unions and unionization efforts all the time and face few, if any, consequences for it, even if what they're doing is nakedly illegal.
And in most cases, there will be several layers of obfuscation involved. It won't flag employees as unionizers or union sympathizers; it'll use euphemisms like "troublemaker" or "not a team player," labels that could just as easily refer to other issues. If it can't find a non-unionization reason to fire them, it could flag the employee as in need of observation, at which point companies can use all their existing tactics to manufacture a reason to fire someone or get them to quit.
If people manage to get enough evidence together to get them into legal trouble, they'll just do the same thing that big companies today do when they get in trouble for atrocities in their supply chain by using the contractor cutout. "Oh, we had no idea that the contractor, not an employee or company directly affiliated with us, was doing this awful thing. We've cut ties, and are committed to providing excellence in the future." The AI company they got it from will face some heat, and maybe close down and open a month later as a new legal entity.
I think the tricky part is for workers to know to look. In order to argue that you were retaliated against because of creepy software, you have to know about the creepy software, and have enough evidence that you could get a court to compel your employer to submit records as evidence. If, from an employee's perspective, the firing just comes out of the blue, and they don't know or can't prove that their employer did this, then they're out of luck.