6 votes

The Welfare State Is Committing Suicide by Artificial Intelligence: Denmark is using algorithms to deliver benefits to citizens—and undermining its own democracy in the process.

4 comments

  1. cfabbro

    This article is pretty sparse on details and citations, yet draws some rather bold conclusions regarding the programs it's commenting on. Does anyone have any better sources of information on the programs mentioned or claims in this article?

    The only things I could find in English were either rather vague [pdf] or related to healthcare, not welfare.

    8 votes
  2. clerical_terrors

    This article reads like a confused mish-mash of reporting, speculation, and hand-wringing about privacy laws. As others have pointed out, the real case is much less sexy and grave than we're being led to believe.
    I'm usually loath to use a quote-by-quote refutation style, but I feel in this case it's important to break down the article's constituent parts, since it has very little coherent throughline.

    Yet the idea of legal constraint is increasingly difficult to reconcile with the revolution promised by artificial intelligence and machine learning—specifically, those technologies’ promises of vast social benefits in exchange for unconstrained access to data and lack of adequate regulation on what can be done with it.

    This is just a very limited understanding of the possibilities of artificial intelligence. It may surprise you to know that not all AI models rely on having a huge dataset; condemning the field as a whole based on this limited understanding is just reductive, in my opinion.

    And the opacity of the algorithms’ power means that it isn’t easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of the powers that be. This will inevitably take a toll on privacy, family life, and free speech, as individuals will be unsure when their personal actions may come under the radar of the government.

    I feel as though the author here wants to write about China's AI-powered social control systems (first google result, sorry about that) but somehow refrains from doing so? It just adds to the article's confusion, if I'm being honest: it neither talks adequately about the larger dangers of AI nor about the actual project it's supposed to cover.

    Overall I'd say this is a pretty sensationalist and poorly researched article, which does not justify its own far-reaching conclusions. AI use in social welfare systems is a sensitive topic prone to many errors, and nobody is more aware of the issue than the people working on it. That is why the controlled study this article supposedly reports on is being conducted: to test the limits and pitfalls of its application. To discard the very notion based on one arguable failure rather than a recurring pattern borders on Luddite scaremongering, as far as I'm concerned.

    4 votes
  3. Sahasrahla

    Aside from issues of privacy or potential for abuse, what worries me most about a system like this is how false positives will be handled. A person could be completely innocent of any wrong-doing, but if they're a bit outside the norm they could be flagged by an opaque algorithm and subjected to constant harassment as they try to live their life; the base-rate arithmetic sketched below shows how quickly false positives can dominate. I'm also not confident that enforcement officers and other bureaucrats have the technical understanding to grasp the limitations of the system; how many of them will treat it like some omniscient 'Minority Report'-style oracle, and will they understand that it can make mistakes?
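
    A back-of-the-envelope illustration of that worry (all numbers here are hypothetical, not from the article): when actual fraud is rare, even a fairly accurate flagging system produces mostly false positives.

    ```python
    # Hypothetical base-rate arithmetic: rare fraud plus imperfect flagging
    # means most flagged people are innocent.
    fraud_rate = 0.01            # assume 1% of cases involve actual fraud
    sensitivity = 0.95           # the system flags 95% of real fraud
    false_positive_rate = 0.05   # and wrongly flags 5% of honest cases

    population = 100_000
    true_positives = population * fraud_rate * sensitivity                  # 950
    false_positives = population * (1 - fraud_rate) * false_positive_rate   # 4,950

    precision = true_positives / (true_positives + false_positives)
    print(f"Flagged cases that are actually fraud: {precision:.1%}")  # ~16.1%
    ```

    Under those assumptions, more than five out of six flagged people did nothing wrong, which is exactly the harassment scenario above.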

    1 vote
    1. clerical_terrors

      This is not a problem inherent to AI. It already exists: laws and practices regulating welfare benefits are already deeply confusing and error-prone, not least because of human error. The true danger of AI lies in the possibility that such systems may be uncritically programmed to reproduce these errors rather than fix them, as the sketch below illustrates.
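
      A minimal sketch of how that reproduction happens, using entirely made-up data and a stand-in scikit-learn model (nothing here reflects the Danish system): a model fit to past caseworker decisions inherits whatever systematic skew those decisions contained.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 10_000
      income = rng.normal(30_000, 8_000, n)   # hypothetical feature
      group = rng.random(n) < 0.2             # hypothetical group attribute

      # Historical labels: low income was flagged, but members of one group
      # were also over-flagged regardless of anything they actually did.
      flagged = (income < 22_000) | (group & (rng.random(n) < 0.3))

      # Scale income to thousands so the fit is numerically stable.
      X = np.column_stack([income / 1_000, group])
      model = LogisticRegression().fit(X, flagged)

      # The learned weight on `group` comes out strongly positive: the model
      # has reproduced the historical over-flagging instead of correcting it.
      print(dict(zip(["income_k", "group"], model.coef_[0])))
      ```

      The point isn't this particular model; it's that nothing in the training procedure distinguishes a genuine risk signal from an inherited human error.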

      5 votes