  Showing only topics in ~tech with the tag "artificial intelligence".
    1. Is pop culture a form of "model collapse?"

      Disclaimer: I do not like LLMs. I am not going to fight you on if you say LLMs are shit.

      One of the things I find interesting about conversations on LLMs is when I have a critique about them and someone says, "Well, it's no different than people." People are only as good as their training data, people misremember / misspeak / make mistakes all the time, people will listen to you and affirm you as you think terrible things. My thought is that not being reliably consistent is a very real issue for automation. Still, I think it's excellent food for thought.

      I was looking for new music venues the other day. I happened upon several, and as I looked at their menus and layouts, it occurred to me that I had eaten there before. Not there, but in my city, and in others. The Stylish-Expensive-Small-Plates-Record-Bar was an international phenomenon. And more than that, I couldn't shake the feeling that it was a perversion of the original, alluring concept-- to be in a somewhat secretive record bar in Tokyo where you'll be glared into the ground if you speak over the music.

      It's not a bad idea. And what's wrong with evoking a good idea, especially if the similarity is just unintentional? Isn't it helpful to be able to signal to people that you're like-that-thing instead of having to explain to people how you're different? Still, the idea of going just made me assume it'd be not simply like something I had experienced before, but played out and "fake." We're not in Tokyo, and people do talk over the music. And even if they didn't, they have silverware and such clanging. It makes me wonder if this permutation is a lossy estimation of the original concept, just chewed up, spat out, slurped, regurgitated, and expensively funded.
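
      (As an aside, the technical phenomenon I'm borrowing the name from can be shown with a toy simulation. This is just my own back-of-the-envelope sketch in Python/numpy, not anything taken from the ML literature, but it captures the copy-of-a-copy intuition:)

          import numpy as np

          # Toy sketch of "model collapse": repeatedly fit a Gaussian to samples
          # drawn from the previous fit. Each generation is a lossy imitation of
          # the last, and the spread of the "concept" tends to shrink over time.
          # Purely illustrative; real collapse in generative models is subtler.
          rng = np.random.default_rng(0)

          mean, std = 0.0, 1.0                             # the original concept
          for generation in range(1, 31):
              samples = rng.normal(mean, std, size=20)     # a small, imperfect imitation
              mean, std = samples.mean(), samples.std()    # refit to the imitation
              if generation % 5 == 0:
                  print(f"generation {generation:2d}: spread of the concept = {std:.3f}")

      Each refit keeps only what a small sample happened to capture, which is more or less the "chewed up and regurgitated" feeling I'm describing above.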

      other forms of conceptual perversion:

      • Matters of Body Image - is it a sort of collapse when we go from wanting 'conventional beauty' to frankensteining features onto ourselves? Think fox eye surgeries, buccal fat removal, etc. Rather than wanting to be conventionally attractive, we aim for the related concept of looking like people who are famous.
      • (still thinking)
      15 votes
    2. Help me analyze/understand the background of this AI video?

      Hi, so I've been thinking about this for several days now, and thought it might be an interesting topic for Tildes.

      Earlier this week, YouTube suggested this AI Sitcom video to me. Some of the jokes are actually very cohesive "Dad jokes", and it got me wondering how much of the video was AI generated. Are the one-liners themselves AI generated? Was this script generated with AI, and then edited before passing it on to something else to generate the video and voice? Or are we at the phase where AI could generate the whole thing with a single prompt? If it's the latter, I find this sort of terrifying, because the finished product is very cohesive for something with almost no editing.
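
      To be clear about why the script step seems so plausible to me: generating a batch of passable one-liners takes almost no effort with any off-the-shelf chat API these days. A rough sketch, purely illustrative (I have no idea what tooling the channel actually used, and the model name below is just a placeholder):

          from openai import OpenAI

          # Illustrative only: ask a chat model for a handful of sitcom-style
          # dad jokes. Reads OPENAI_API_KEY from the environment.
          client = OpenAI()

          completion = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder; any chat-capable model would do
              messages=[
                  {"role": "system", "content": "You write short, clean sitcom dad jokes."},
                  {"role": "user", "content": "Write five one-liner dad jokes set in a family living room."},
              ],
          )
          print(completion.choices[0].message.content)

      The voice and video would presumably be separate tools layered on top of a script like that, which is why I'm curious how much human editing sits between the steps.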

      I'd also be interested in discussing where this video might have come from. The channel and descriptions have almost no information, so it seems like this may be a channel that finds these videos elsewhere and reposts them? Or maybe the channel is the original and is just trying to be vague about the technology used?

      Also side note, I have no idea if this belongs in ~Tech, so feel free to move it around as needed.

      10 votes
    3. Billions of AI users…?

      Between Meta announcing that its AI, Meta AI, reached 1 billion users[1] and Google saying that AI Overviews are used by 1.5 billion[2], I'm curious to know how many of these people intentionally use these features, or prefer them to what the AI replaces.

      AI Overviews appear at the top of searches, with no option to turn them off. As for Meta AI, I suspect many people trigger it accidentally by tapping that horrible button in WhatsApp, through search results across Meta's three core apps, or when trying to tag someone in a group by typing an @ symbol.

      It’s very easy to reach enormous numbers when you already have a giant platform. I don’t think that’s even part of the discussion. The issue is trumpeting these numbers as if they were earned, rather than imposed.

      [1] https://www.cnbc.com/2025/05/28/zuckerberg-meta-ai-one-billion-monthly-users.html
      [2] https://www.theverge.com/news/655930/google-q1-2025-earnings

      29 votes
    4. LLMs and privacy

      Hello to everyone who's reading this post :)

      LLMs are increasingly useful these days (after careful review of their generated answers, of course), but I'm concerned about sharing my data, especially very personal questions and my thought process, with these large tech giants, who seem rather sketchy in terms of their privacy policies.

      What are some ways I can keep my data private but still harness this amazing LLM technology? Also, what are some legitimate and active forums for discussions on this topic? I have looked at Reddit but haven't found it genuinely useful or trustworthy so far.
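
      For context, the direction I've been eyeing is running a model locally so nothing leaves my machine. A minimal sketch of what that looks like, assuming an Ollama server on its default port with a model already pulled (names here are just examples):

          import requests

          # Query a locally hosted model; the prompt never leaves localhost.
          # Assumes `ollama pull llama3` was run and the Ollama server is
          # listening on its default port 11434.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3",   # any locally pulled model tag
                  "prompt": "What are the privacy trade-offs of cloud-hosted LLMs?",
                  "stream": False,     # return a single JSON object
              },
              timeout=120,
          )
          resp.raise_for_status()
          print(resp.json()["response"])

      The appeal is that everything stays on my own hardware; the obvious trade-offs are hardware requirements and model quality, which is partly why I'm asking for other approaches too.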

      I am excited to hear your thoughts on this!

      33 votes
    5. Which translation tools are LLM free? Will they remain LLM free?

      Looking at the submission rules for Clarkesworld Magazine, I found the following:

      Statement on the Use of “AI” writing tools such as ChatGPT

      We will not consider any submissions translated, written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.

      EDIT: I assume that Clarkesworld means a popular, non-technical understanding of AI, i.e. post-ChatGPT LLMs specifically, and not a broader, more academic definition of AI pertinent to the computer science field.

      I imagine that other magazines and websites have similar rules. As someone who does not write directly in English, that is concerning. I have never translated without assistance in my life. In the past I used both Google Translate and Google Translator Toolkit (the latter no longer exists).

      Of course, no machine translation is perfect; it was only a first pass that I would change, adapt, and fix extensively and intensely. In the past I have used the built-in translation feature in Google Docs. However, now that Gemini is integrated into Google Docs, I suspect that it uses AI for translation instead. So I asked Gemini, and it said that it does. I am not sure if Gemini is correct, but if it doesn't use AI now, it probably will in the future.

      That poses a problem for me, since, in the event that I wish to submit a story to English-speaking magazines or websites, I will have to find a tool that is guaranteed to be dumb. I am sure they exist, but for how long? Will I be forced to translate my stories like a caveman? Is anyone concerned with keeping non-AI translation tools available, relevant, and updated? How can I even be sure that a translation tool does not use AI?
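
      For what it's worth, the most promising thing I've come across so far is self-hosted machine translation that predates the LLM wave, such as LibreTranslate (which wraps Argos Translate / OpenNMT models rather than an LLM). A rough sketch of using it, assuming a local instance on its default port (the example text and language codes are just illustrative):

          import requests

          # First-pass machine translation against a locally running
          # LibreTranslate server (default port 5000); no LLM involved.
          resp = requests.post(
              "http://localhost:5000/translate",
              json={
                  "q": "Era uma noite escura e tempestuosa.",
                  "source": "pt",      # source language code
                  "target": "en",      # target language code
                  "format": "text",
              },
              timeout=60,
          )
          resp.raise_for_status()
          print(resp.json()["translatedText"])

      Whether a neural-but-not-LLM tool like that counts as acceptable under rules like Clarkesworld's is, I suppose, exactly the question I'm asking.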

      28 votes
    6. Removed Reddit post: "ChatGPT drove my friends wife into psychosis, tore family apart... now I'm seeing hundreds of people participating in the same activity. "

      EDIT:

      I feel like I didn't adequately describe this phenomenon so that it can be understood without accessing the links. Here goes.

      Reddit user uncovers instructions online for unlocking AI's "hidden potential", which actually turns out to be its brainwashing capabilities. Example prompts are being spread that will make ChatGPT behave in ways that contribute to inducing psychosis in the user who tried the prompt, especially if they are interested in spirituality, esotericism and other non-scientific / counter-scientific phenomena. The websites that spread these instructions seem to be designed to attract such people. The user asks for help to figure out what's going on.


      Original post:

      One version of this post is still up for now (but locked). I participated in the one that was posted in r/ChatGPT. It got removed shortly after. The comments can be accessed via OP's comment history.

      Excerpts:

      More recently, I observed my other friend who has mental health problems going off about this codex he was working on. I sent him the rolling stones article and told him it wasn't real, and all the "code" and his "program" wasn't actual computer code (I'm an ai software engineer).

      Then... Robert Edward Grant posted about his "architect" ai on instagram. This dude has 700k+ followers and said over 500,000 people accessed his model that is telling him that he created a "Scalar Plane of information" You go in the comments, hundreds of people are talking about the spiritual experiences they are having with ai.

      Starting as far back as March, but more heavily in April and May, we are seeing all kinds of websites popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION THIS IS FOR INFORMATIONAL PURPOSES ONLY, THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some but you can find these sites by searching "codex breath recursive")

      Something that worries me in particular is seeing many comments along the lines of "crazy people do crazy things". This implies that people can be neatly divided into two categories: crazy and not crazy.

      The truth is that we all have the potential to go crazy in the right circumstances. Brainwashing is a scientifically proven method that affects most people when applied methodically over a long enough time period. Before consumer-facing AI, there weren't feasible ways to apply it on just anybody.

      Now people who use AI in this way are applying it on themselves.

      85 votes
    7. Two unrelated stories that make me even more cynical about AI

      I saw both of these stories on Lemmy today. They show two different facets to the topic of AI.

      This first story is from the perspective of cynicism about AI and how it has been overhyped.
      If AI is so good, where are the open source contributions

      But if AI is so obviously superior … show us the code. Where’s the receipts? Let’s say, where’s the open source code contributions using AI?

      The second story is about crony capitalism, deregulation, and politics around AI:

      GOP sneaks decades long AI regulation ban into spending bill

      On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act

      I saw these stories minutes apart, and they really make me feel even more cynical and annoyed by AI than I was yesterday. Because:

      • In the short term AI is largely a boondoggle that won't work as advertised, but humans will still be replaced by it, because the people who hire don't understand its limitations and fear missing out on a gold rush.
      • The same shady people at the AI companies who are stealing your art and content, in order to sell a product that will replace you, are writing legislation to protect themselves from being held accountable.
      • They are also going to be protected from any Skynet-style disasters caused by their recklessness.
      28 votes