  Showing only topics in ~tech with the tag "artificial intelligence".
    1. Removed Reddit post: "ChatGPT drove my friend's wife into psychosis, tore family apart... now I'm seeing hundreds of people participating in the same activity."

      EDIT:

      I feel like I didn't adequately describe this phenomenon so that it can be understood without accessing the links. Here goes.

      Reddit user uncovers instructions online for unlocking AI's "hidden potential", which actually turns out to be its brainwashing capabilities. Example prompts are being spread that make ChatGPT behave in ways that can contribute to inducing psychosis in users who try them, especially users interested in spirituality, esotericism, and other non-scientific or counter-scientific phenomena. The websites that spread these instructions seem designed to attract exactly such people. The user asks for help figuring out what's going on.


      Original post:

      One version of this post is still up for now (but locked). I participated in the one that was posted in r/ChatGPT. It got removed shortly after. The comments can be accessed via OP's comment history.

      Excerpts:

      More recently, I observed my other friend, who has mental health problems, going off about this codex he was working on. I sent him the Rolling Stone article and told him it wasn't real, and that all the "code" and his "program" wasn't actual computer code (I'm an AI software engineer).

      Then... Robert Edward Grant posted about his "architect" AI on Instagram. This dude has 700k+ followers and said over 500,000 people accessed his model, which is telling him that he created a "Scalar Plane of information". You go into the comments, and hundreds of people are talking about the spiritual experiences they are having with AI.

      Starting as far back as March, but more heavily in April and May, we are seeing all kinds of websites popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION. THIS IS FOR INFORMATIONAL PURPOSES ONLY; THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some, but you can find these sites by searching "codex breath recursive".)

      Something that worries me in particular is seeing many comments along the lines of "crazy people do crazy things". This implies that people can be neatly divided into two categories: crazy and not crazy.

      The truth is that we all have the potential to go crazy in the right circumstances. Brainwashing is a scientifically documented technique that affects most people when applied methodically over a long enough period of time. Before consumer-facing AI, there was no feasible way to apply it to just anybody.

      Now people who use AI in this way are applying it to themselves.

      85 votes
    2. Two unrelated stories that make me even more cynical about AI

      I saw both of these stories on Lemmy today. They show two different facets of the topic of AI.

      The first story comes from a perspective of cynicism about AI and how it has been overhyped:
      If AI is so good, where are the open source contributions

      But if AI is so obviously superior … show us the code. Where’s the receipts? Let’s say, where’s the open source code contributions using AI?

      The second story is about crony capitalism, deregulation, and politics around AI:

      GOP sneaks decade-long AI regulation ban into spending bill

      On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act."

      I saw these stories minutes apart, and they really make me feel even more cynical and annoyed by AI than I was yesterday. Because:

      • In the short term, AI is largely a boondoggle: it won't work as advertised, yet humans will still be replaced by it, because the people who do the hiring don't understand its limitations but fear missing out on a gold rush.
      • The same shady people at the AI companies who are stealing your art and content, in order to sell a product that will replace you, are writing legislation to protect themselves from being held accountable.
      • They are also going to be protected from any Skynet-style disasters caused by their recklessness.
      28 votes
    3. The ARC-AGI-2 benchmark could help reframe the conversation about AI performance in a more constructive way

      The popular online discourse on Large Language Models’ (LLMs’) capabilities is often polarized in a way I find annoying and tiresome.

      On one end of the spectrum, there is nearly complete dismissal of LLMs: an LLM is just a slightly fancier version of the autocomplete on your phone’s keyboard, there’s nothing to see here, move on (dot org).

      This dismissive perspective overlooks some genuinely interesting novel capabilities of LLMs. For example, I can come up with a new joke and ask ChatGPT to explain why it’s funny or come up with a new reasoning problem and ask ChatGPT to solve it. My phone’s keyboard can’t do that.

      On the other end of the spectrum, there are eschatological predictions: human-level or superhuman artificial general intelligence (AGI) will likely be developed within 10 years or even within 5 years, and skepticism toward such predictions is “AI denialism”, analogous to climate change denial. Just listen to the experts!

      There are inconvenient facts for this narrative, such as that the majority of AI experts give much more conservative timelines for AGI when asked in surveys and disagree with the idea that scaling up LLMs could lead to AGI.

      The ARC Prize is an attempt by prominent AI researcher François Chollet (with help from Mike Knoop, who apparently does AI stuff at Zapier) to introduce some scientific rigour into the conversation. There is a monetary prize for open source AI systems that can perform well on a benchmark called ARC-AGI-2, which recently superseded the ARC-AGI benchmark. (“ARC” stands for “Abstraction and Reasoning Corpus”.)

      ARC-AGI-2 is not a test of whether an AI is an AGI or not. It’s intended to test whether AI systems are making incremental progress toward AGI. The tasks the AI is asked to complete are colour-coded visual puzzles like you might find in a tricky puzzle game. (Example.) The intention is to design tasks that are easy for humans to solve and hard for AI to solve.
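
      To make the task format concrete, here's a minimal sketch in Python. It follows the publicly documented JSON format of the original ARC dataset (small grids of integers 0–9, each integer encoding a colour), and assumes ARC-AGI-2 keeps the same structure; the grids and the "mirror" rule below are invented for illustration, not taken from the real benchmark.

        import json

        # A made-up ARC-style task. Real tasks are JSON files with "train"
        # demonstration pairs and "test" pairs; the solver must infer the
        # transformation rule from the demonstrations alone.
        task = {
            "train": [
                {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
                {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
            ],
            "test": [
                {"input": [[3, 3], [0, 3]]},  # expected output withheld
            ],
        }

        def solve(grid):
            # Hypothetical solver for this toy task: the hidden rule is a
            # horizontal mirror, i.e. reverse each row.
            return [list(reversed(row)) for row in grid]

        # A candidate rule must reproduce every demonstration pair.
        for pair in task["train"]:
            assert solve(pair["input"]) == pair["output"]

        print(json.dumps(solve(task["test"][0]["input"])))  # [[3, 3], [3, 0]]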

      The current frontier AI models score less than 5% on ARC-AGI-2. Humans score 60% on average, and 100% of tasks have been solved by at least two humans in two attempts or fewer.
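
      That "two attempts" criterion mirrors how submissions are graded: exact match, up to two candidate grids per test input, no partial credit. Here's a sketch of that scoring rule (this matches the original ARC-AGI contest rules; I'm assuming ARC-AGI-2 grades the same way, and the data is invented):

        def task_solved(attempts, expected):
            # Exact-match grading with no partial credit; at most two
            # candidate output grids count per test input.
            return any(a == expected for a in attempts[:2])

        # Toy results, one (attempts, expected) pair per task.
        results = [
            ([[[1, 0]], [[0, 1]]], [[0, 1]]),  # second attempt correct: solved
            ([[[2, 2]], [[2, 0]]], [[0, 2]]),  # both attempts wrong: unsolved
        ]
        score = sum(task_solved(att, exp) for att, exp in results) / len(results)
        print(f"{score:.0%}")  # prints 50%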

      For me, this helps the conversation about AI capabilities because it gives a rigorous test and quantitative measure to my casual, subjective observations that LLMs routinely fail at tasks that are easy for humans.

      François Chollet was impressed when OpenAI’s o3 model scored 75.7% on ARC-AGI (the older version of the benchmark). He emphasizes the concept of “fluid intelligence”, which he seems to define as the ability to adapt to new situations and solve novel problems. Chollet thinks that o3 is the first AI system to demonstrate fluid intelligence, although it’s still a low level of fluid intelligence. (o3 also required thousands of dollars’ worth of computation to achieve this result.)

      This is the sort of distinction that can’t be teased out by the polarized popular discourse. It’s the sort of nuanced analysis I’ve been seeking out, but which has been drowned out by extreme positions on LLMs that ignore inconvenient facts.

      I would like to see more benchmarks that try to do what ARC-AGI-2 does: find problems that humans can easily solve and frontier AI models can't solve. These sorts of benchmarks can help us measure AGI progress much more usefully than the typical benchmarks, which play to LLMs’ strengths (e.g. massive-scale memorization) and don't challenge them on their weaknesses (e.g. reasoning).

      I long to see AGI within my lifetime. But the super short timeframes given by some people in the AI industry feel to me like they border on mania or psychosis. The discussion is unrigorous, with people pulling numbers out of thin air based on gut feeling.

      It’s clear that there are many things humans are good at doing that AI can’t do at all (where the humans vs. AI success rate is ~100% vs. ~0%). It serves no constructive purpose to ignore this truth and it may serve AI research to develop rigorous benchmarks around it.

      Such benchmarks will at least improve the quality of discussion around AI capabilities, insofar as people pay attention to them.


      Update (2024-04-11 at 19:16 UTC): François Chollet has a new 20-minute talk on YouTube that I recommend. I've watched a few videos of Chollet talking about ARC-AGI or ARC-AGI-2, and this one is beautifully succinct: https://www.youtube.com/watch?v=TWHezX43I-4

      10 votes