I don't necessarily agree with the viewpoint or the conclusions, but I find it to be an interesting discussion.
I fundamentally disagree with this section, though:
Yet in some sense, this view is in fact the ultimate apologetics for computational tyranny disguised as woke criticism. It implicitly maintains that algorithms, aside from the faults of their human programmers and impurities in their training data, are in principle value-free. In this reckoning, computer science itself is non-ideological: it merely seeks to improve and automate that which already exists.
It seems to mark algorithms as the harbingers of a neoliberal capitalist society, one that will cement and worsen everything wrong with today. It feels as if someone in the humanities wrote an article about algorithms without an understanding of what they actually mean or represent. It makes good points about the reach of modern computational methods, the bias within them, and how they'll shape the future, but it makes the mistake of jumping ahead to a dystopian ending, without regard for the extensive measures most companies are putting in place to make these systems as equitable and efficient as possible.
It feels like the author is almost railing against progress. The whole article is so communist it's almost overbearing; take this line: "By this she means not the naive rejection of high technology, but the transformation of the industry into one funded, owned, and controlled by workers and the broader society—a people’s technology sector."
I thought it was thought provoking and a different perspective from the usual one we take here.
I wish I'd seen this before reading the article:
Commune is a popular magazine for a new era of revolution
It would have made the pieces leading to the conclusion a lot easier to pick up and put together.
I'm really torn on this article. On one hand, I agree with the author that we're obligated to consider the implications of the data collection and manipulation we participate in. And I think it's always worthwhile to reconsider the social and economic systems we unconsciously perpetuate and reinforce.
On the other hand, I'm unimpressed with the tone, writing style, structure, and conclusion. For instance, the whole section on the history of linear programming seems completely unnecessary — I felt like the author was trying to impress me with facts to distract from the argument.
Yet in positioning itself as tech’s moral compass, academic computer science belies the fact that its own intellectual tools are the source of the technology industry’s dangerous power.
What does that mean? That university computer science programs are ivory towers of ethics and morality? I don't know; maybe the culture in Silicon Valley is different. But in my mind they're more like vocational programs.
I just think it's weird to treat supply chain logistics, mapping freely available data, formal artificial neural networks, and racial bias in image recognition as the same kind of thing. They are different tools for different kinds of data, with different implications if they draw the wrong conclusions, and they perpetuate different systems if they draw the "right" ones.
Also, I can't reconcile the Eightmaps story with the author's point. Are the website owners supposed to do something to prevent people from being harassed? Then surely it's not a problem with ethics in academia, since academics are already talking about it. Or are they blameless because the data is already publicly available? Then the fault must not lie with "algorithms" at all, but rather with the fact that the data is public in the first place.