11 votes

An investigation of AI induced mental illness

2 comments

  1. nonethewiser

    AI induced seems to really play up the AI and downplay the user. The only way out of this is raising expectations for individuals. Otherwise we're just arguing over what a benevolent master looks like.

    Although more than anything, the emphasis placed on AI is probably more about playing on people's anxiety of an AI threat for clicks.

    8 votes
    1. BartHarleyJarvis
      • Exemplary

      I'm going to apologize in advance, because this might come off like I am attacking you personally, but I can assure you that is not the case. I'm merely expressing frustration at a very common trope I see across the internet, particularly in spaces like these.

      > Although more than anything, the emphasis placed on AI is probably more about playing on people's anxiety of an AI threat for clicks.

      If you disagree with the way that something is presented, fine. No, fantastic. Elaborate on that. Why is AI not a problem in these unique circumstances that were presented before you? It would be an interesting thing for the rest of us to read.

      What's not interesting is seeing the work of the journalist belittled as "just for clicks." All media is for clicks. In fact, you could make this argument about any piece of media or journalism if you wanted to. "Oh, Woodward and Bernstein are just playing up the 'corruption' to sell papers." There are things that literally exist "just for clicks," but that's the "9 photos of celebrities from when they were going through puberty" mindless slop. This isn't that.

      Some things will be sensationalized, and certain hot-button topics might get played up to generate more interest, but that's the nature of the business. It shouldn't distract from the body of work or be used to discredit the whole piece whenever someone has a vague disagreement with something in it. Reminder that this isn't all about you, OP; it's something I see all over the place. In fact, I might've posted a similar comment last year.

      As for the other part of your comment, the actual meat of it, I partially agree with you. Some of the responsibility falls on the individual, and there are a myriad of factors - some societal, some personal - that are beyond the control of the AI companies. After all, this is a stupid country with a ton of mentally unwell people who lack access to the resources they need. That was a problem long before OpenAI came onto the scene.

      However, that doesn't absolve the AI companies of responsibility, nor does it mean we shouldn't have some form of protection in place that goes beyond the trial-and-error method we're watching play out. The examples presented would not have happened if not for the sycophantic LLMs that were hastily released to the public. When it comes to responsibility, it's giving less 'violence in video games' and more 'the pharmaceutical company pushed addictive drugs.' When we fail to hold companies accountable for reckless and dangerous behavior, we increase the likelihood that they will behave recklessly and dangerously in the future.

      Besides, if we waited for society to develop the emotional wellbeing and critical thinking skills required to avoid this trap, it would take decades, and that's only if we somehow reversed course from our current trajectory. That's an improbable outcome being made near-impossible by the damage that companies like OpenAI are doing to people in their relentless pursuit of improving the bottom line. Oh, shit, they too are doing it just for clicks.

      11 votes