34 votes -
AI is transforming Indian call centers
26 votes -
Anthropic wins key US ruling on AI training in authors' copyright lawsuit
27 votes -
The AI lifestyle subsidy is going to end
54 votes -
FilMaster: Bridging cinematic principles and generative AI for automated film generation
3 votes -
OpenAI is nabbing Microsoft customers, fueling partners’ rivalry
9 votes -
Curated realities: An AI film festival and the future of human expression
3 votes -
OpenAI slams US court order to save all ChatGPT logs, including deleted chats
45 votes -
Is pop culture a form of "model collapse?"
Disclaimer: I do not like LLMs. I am not going to fight you on if you say LLMs are shit.
One of the things I find interesting about conversations on LLMs is when I have a critique about them and someone says, "Well, it's no different than people." People are only as good as their training data; people misremember, misspeak, and make mistakes all the time; people will listen to you and affirm you as you think terrible things. My thought is that not being reliably consistent is a real issue for automation. Still, I think it's excellent food for thought.
I was looking for new music venues the other day. I happened upon several, and as I looked at their menus and layouts, it occurred to me that I had eaten there before. Not there exactly, but in my city, and in others. The Stylish-Expensive-Small-Plates-Record-Bar was an international phenomenon. And more than that, I couldn't shake the feeling that it was a perversion of the original, alluring concept: to be in a somewhat secretive record bar in Tokyo where you'll be glared into the ground if you speak over the music.
It's not a bad idea. And what's wrong with evoking a good idea, especially if the similarity is unintentional? Isn't it helpful to be able to signal to people that you're like-that-thing instead of having to explain how you're different? Still, the idea of going just made me assume it'd be not simply like something I had experienced before, but played out and "fake." We're not in Tokyo, and people do talk over the music. And even if they didn't, there's silverware and such clanging. It makes me wonder if this permutation is a lossy estimation of the original concept, just chewed up, spat out, slurped, regurgitated, and expensively funded.
other forms of conceptual perversion:
- Matters of Body Image - is it a sort of collapse when we go from wanting 'conventional beauty' to frankensteining features onto ourselves? Think fox eye surgeries, buccal fat removal, etc. Rather than wanting to be conventionally attractive, we aim for the related concept of looking like people who are famous.
- (still thinking)
15 votes -
Contra Ptacek's terrible article on AI
27 votes -
Disney files landmark case against AI image generator
16 votes -
Disney and Universal vs. Midjourney: A landmark copyright fight over genAI
25 votes -
The Common Pile v0.1: An 8TB dataset of public domain and openly licensed text
26 votes -
Six-month-old, solo-owned vibe coder Base44 sells to Wix for $80M cash
13 votes -
Is the AI bubble about to burst?
35 votes -
Help me analyze/understand the background of this AI video?
Hi, so I've been thinking about this for several days now, and thought it might be an interesting topic for Tildes.
Earlier this week, YouTube suggested this AI Sitcom video to me. Some of the jokes are actually very cohesive "Dad jokes", and it got me wondering how much of the video was AI generated. Are the one-liners themselves AI generated? Was the script generated with AI, then edited before being passed on to something else to generate the video and voice? Or are we at the phase where AI could generate the whole thing from a single prompt? If it's the latter, I find this sort of terrifying, because the finished product is very cohesive for something with almost no editing.
I'd also be interested in discussing where this video might have come from. The channel and descriptions have almost no information, so it seems like this may be a channel that finds these elsewhere and reposts? Or maybe the channel is the original and just trying to be vague about technology used?
Also side note, I have no idea if this belongs in ~Tech, so feel free to move it around as needed.
10 votes -
As consumers switch from Google Search to ChatGPT, a new kind of bot is scraping data for AI
28 votes -
Getty Images and Stability AI face off in British copyright trial that will test AI industry
21 votes -
Billions of AI users…?
Between Meta announcing that its AI, Meta AI, reached 1 billion users[1] and Google saying that AI Overviews are used by 1.5 billion[2], I’m curious to know how many of these people intentionally use the feature, or prefer it to what the AI replaces.
AI Overviews appear at the top of searches, with no option to turn them off. As for Meta AI, I suspect many people trigger it accidentally by tapping that horrible button in WhatsApp, in search results across its three core apps, or when trying to tag someone in a group by typing an @ symbol.
It’s very easy to reach enormous numbers when you already have a giant platform. I don’t think that’s even part of the discussion. The issue is trumpeting these numbers as if they were earned, rather than imposed.
[1] https://www.cnbc.com/2025/05/28/zuckerberg-meta-ai-one-billion-monthly-users.html
[2] https://www.theverge.com/news/655930/google-q1-2025-earnings
29 votes -
GenAI is our polyester
17 votes -
Reddit sues Anthropic, alleging its bots accessed Reddit more than 100,000 times
28 votes -
Google is using AI to censor thousands of independent websites like mine (and to control the flow of information online)
55 votes -
LLMs and privacy
Hello to everyone who's reading this post :)
LLMs are increasingly useful (after careful review of their generated answers, of course), but I'm concerned about sharing my data, especially very personal questions and my thought process, with large tech giants whose privacy policies seem rather sketchy.
What are some ways I can keep my data private but still harness this amazing LLM technology? Also what are some legitimate and active forums for discussions on this topic? I have looked at reddit but haven't found it genuinely useful or trustworthy so far.
I am excited to hear your thoughts on this!
33 votes -
Digg’s founders explain how they’re building a site for humans in the AI era
36 votes -
Big tech must stop passing the cost of its spiking energy needs onto the public
25 votes -
Behind the curtain: A white-collar bloodbath
24 votes -
Which translation tools are LLM free? Will they remain LLM free?
Looking at the submission rules for Clarkesworld Magazine, I found the following:
Statement on the Use of “AI” writing tools such as ChatGPT
We will not consider any submissions translated, written, developed, or assisted by these tools. Attempting to submit these works may result in being banned from submitting works in the future.
EDIT: I assume that Clarkesworld means a popular, non-technical understanding of AI, meaning post-ChatGPT LLMs specifically, and not a broader definition of AI that is more academic or pertinent to the computer science field.
I imagine that other magazines and websites have similar rules. As someone who does not write directly in English, I find that concerning. I have never translated without assistance in my life. In the past I used both Google Translate and Google Translator Toolkit (the latter of which no longer exists).
Of course, no machine translation is perfect; it was only a first pass that I would change, adapt, and fix extensively and intensely. In the past I have used the built-in translation feature of Google Docs. However, now that Gemini is integrated into Google Docs, I suspected that it uses AI for translation instead. So I asked Gemini, and it said that it does. I am not sure Gemini is correct, but if it doesn't use AI now, it probably will in the future.
That poses a problem for me: in the event that I wish to submit a story to English-speaking magazines or websites, I will have to find a tool that is guaranteed to be dumb. I am sure they exist, but for how long? Will I be forced to translate my stories like a caveman? Is anyone concerned with keeping non-AI translation tools available, relevant, and updated? How can I even be sure that a translation tool does not use AI?
28 votes -
Removed Reddit post: "ChatGPT drove my friends wife into psychosis, tore family apart... now I'm seeing hundreds of people participating in the same activity. "
EDIT:
I feel like I didn't adequately describe this phenomenon so that it can be understood without accessing the links. Here goes.
Reddit user uncovers instructions online for unlocking AI's "hidden potential", which actually turns out to be its brainwashing capabilities. Example prompts are being spread that will make ChatGPT behave in ways that contribute to inducing psychosis in the user who tried the prompt, especially if they are interested in spirituality, esotericism and other non-scientific / counter-scientific phenomena. The websites that spread these instructions seem to be designed to attract such people. The user asks for help to figure out what's going on.
Original post:
One version of this post is still up for now (but locked). I participated in the one that was posted in r/ChatGPT. It got removed shortly after. The comments can be accessed via OP's comment history.
Excerpts:
More recently, I observed my other friend who has mental health problems going off about this codex he was working on. I sent him the rolling stones article and told him it wasn't real, and all the "code" and his "program" wasn't actual computer code (I'm an ai software engineer).
Then... Robert Edward Grant posted about his "architect" ai on instagram. This dude has 700k+ followers and said over 500,000 people accessed his model that is telling him that he created a "Scalar Plane of information" You go in the comments, hundreds of people are talking about the spiritual experiences they are having with ai.
Starting as far back as March, but more heavily in April and May, we are seeing all kinds of websites popping up with tons of these codexes. PLEASE APPROACH THESE WEBSITES WITH CAUTION THIS IS FOR INFORMATIONAL PURPOSES ONLY, THE PROMPTS FOUND WITHIN ARE ESSENTIALLY BRAINWASHING TOOLS. (I was going to include some but you can find these sites by searching "codex breath recursive")
Something that worries me in particular is seeing many comments along the lines of "crazy people do crazy things". This implies that people can be neatly divided into two categories: crazy and not crazy.
The truth is that we all have the potential to go crazy in the right circumstances. Brainwashing is a scientifically proven method that affects most people when applied methodically over a long enough time period. Before consumer-facing AI, there weren't feasible ways to apply it to just anybody.
Now people who use AI in this way are applying it to themselves.
85 votes -
The AI data center race is getting way more complicated
23 votes -
In 1978, Arthur C. Clarke predicted the rise of AI and wondered what would happen to humanity
18 votes -
Google's new AI video tool floods internet with real-looking clips
57 votes -
Duolingo is replacing human workers with AI
34 votes -
New York Times, Amazon unveil AI content licensing deal
10 votes -
Large Language Models are more persuasive than incentivized human persuaders
14 votes -
Highlights from the Claude 4 system prompt
25 votes -
Introducing Claude Opus and Sonnet 4
18 votes -
How Big Tech hides its outsourced African workforce
16 votes -
Chicago Sun-Times prints summer reading list full of fake books
42 votes -
Sam and Jony introduce io | OpenAI
15 votes -
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
23 votes -
YouTube’s new ads will ruin the best part of a video on purpose
60 votes -
I don’t care whether you use ChatGPT to write
25 votes -
Introducing Codex [OpenAI]
23 votes -
Grok’s white genocide fixation caused by ‘unauthorized modification’
51 votes -
Will the humanities survive artificial intelligence?
28 votes -
Software engineer lost his $150K-a-year job to AI—he’s been rejected from 800 jobs and forced to DoorDash and live in a trailer to make ends meet
34 votes -
Two unrelated stories that make me even more cynical about AI
I saw both of these stories on Lemmy today. They show two different facets of the topic of AI.
The first story is from the perspective of cynicism about AI and how it has been overhyped:
If AI is so good, where are the open source contributions?
But if AI is so obviously superior … show us the code. Where’s the receipts? Let’s say, where’s the open source code contributions using AI?
The second story is about crony capitalism, deregulation, and politics around AI:
GOP sneaks decades long AI regulation ban into spending bill
On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act."
I saw these stories minutes apart, and they really make me feel even more cynical and annoyed by AI than I was yesterday. Because:
- In the short term AI is largely a boondoggle that won't work as advertised, yet humans will still be replaced by it, because the people who do the hiring don't understand its limitations but fear missing out on a gold rush.
- The same shady people at the AI companies who are stealing your art and content, in order to sell a product that will replace you, are writing legislation to protect themselves from being held accountable.
- They will also be protected from any Skynet-style disasters caused by their recklessness.
28 votes -
The (not so) futuristic technology of “Lazarus”
12 votes -
Some ChatGPT users are developing delusional beliefs that are reinforced by the large language model
53 votes -
Everyone is cheating their way through college
49 votes