-
Exploring the dangers of AI in mental health care
15 votes -
Sam Altman says Meta offered OpenAI staff $100 million bonuses, as Mark Zuckerberg ramps up AI poaching efforts
37 votes -
China hosts first fully autonomous AI robot football (soccer) match
7 votes -
Cats confuse reasoning LLM: Query-agnostic adversarial triggers for reasoning models
24 votes -
You're going to use Gemini on Android whether you like it or not
48 votes -
'Positive review only': Researchers hide AI prompts in papers to influence automated review
29 votes -
TikTok is being flooded with racist AI videos generated by Google’s Veo 3
35 votes -
Content Independence Day: No AI crawl without compensation!
14 votes -
Can AI-generated photos be art?
24 votes -
AI’s ability to read and summarize is making it a useful tool for scholarship
18 votes -
An industry group representing almost all of Denmark's media outlets, including broadcasters and newspapers, has said it's suing ChatGPT's parent company OpenAI for using its content
13 votes -
Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task
54 votes -
Meta poaches three OpenAI researchers: Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai
13 votes -
US Federal judge sides with Meta in lawsuit over training AI models on copyrighted books
22 votes -
User-friendly and privacy-friendly LLM experience?
I've been thinking perhaps I'll need to get one of the desktop LLM UI. I've been out of touch with the state of the art of end user LLM as I've been exclusively using it via API, but tech-y people (who are not developers) mostly talk about the end-user products that I lack the knowledge of.
Ethical problems aside, the problem with non-API usage is that, even if you pay, I can't find one that has a better privacy policy than the API. And the problem with the API route is that it is not as good as the polished apps unless you want to reinvent the wheel. The apps may also include ads in the future, while the API technically cannot, as that would break some downstream use cases.
| Provider | Data Retention (API) | Data Retention (Consumer) | UI-only features |
|---|---|---|---|
| ChatGPT Plus | 30 days, no training | Training opt-out, 30 days for temp. chat, unknown retention otherwise | Voice, Canvas, image generation in chat, screensharing, mobile app |
| Google AI Pro | 0 | 72 hours if you disable history, or up to 3 years and trained upon otherwise | Android assistant, Canvas, AI in Google Drive/Docs, RAG (NotebookLM), podcast generation, browser use (Mariner), coding (Gemini CLI), screensharing |
| Gemini in Google Workspace | See above | 0-18 months, but no human review/training | See above |
| Claude Pro | 30 days | Up to 2 years (no training without opt-in) | Coding, Artifacts, desktop app, RAG, MCP |

Since these are dual-use technologies, the table doesn't include the extra retention period that applies if the provider detects abuse. Additionally, if you click thumbs up/down, the conversation may be recorded for the provider's employees to review.
I don't think OpenWebUI, self-hosted models, etc. would suffice, as they are not built to the same quality as the first-party products. I know I'm probably asking for something that doesn't exist here, but at least I hope this brings to people's attention that even if you're paying for the product, you might not get the same privacy protection as API users.
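If you want to compare these policies programmatically rather than by eyeballing the table, the figures can be encoded as plain data. A minimal sketch, using only the numbers quoted in this post (which may be outdated; always check each provider's current policy):

```python
# Retention periods in days, as quoted in the post above.
# None = unknown or varies; these figures may be out of date.
RETENTION = {
    "ChatGPT Plus":  {"api": 30, "consumer": None},     # consumer: unknown retention
    "Google AI Pro": {"api": 0,  "consumer": 3 * 365},  # up to 3 years with history on
    "Claude Pro":    {"api": 30, "consumer": 2 * 365},  # up to 2 years, opt-in training
}

def shortest_api_retention(table):
    """Return the provider(s) claiming the shortest API-side retention."""
    best = min(v["api"] for v in table.values())
    return sorted(name for name, v in table.items() if v["api"] == best)

print(shortest_api_retention(RETENTION))  # Google AI Pro claims 0-day API retention
```

The point the comparison makes concrete: every API column beats or matches its consumer column, which is exactly the asymmetry the post complains about.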
15 votes -
Echo Chamber: A context-poisoning jailbreak that bypasses LLM guardrails
34 votes -
AI is transforming Indian call centers
26 votes -
The boss of mobile gaming giant Supercell says the industry needs to take bigger risks to compete
7 votes -
Anthropic wins key US ruling on AI training in authors' copyright lawsuit
27 votes -
The AI lifestyle subsidy is going to end
54 votes -
FilMaster: Bridging cinematic principles and generative AI for automated film generation
3 votes -
OpenAI is nabbing Microsoft customers, fueling partners’ rivalry
9 votes -
Curated realities: An AI film festival and the future of human expression
3 votes -
OpenAI slams US court order to save all ChatGPT logs, including deleted chats
45 votes -
Is pop culture a form of "model collapse?"
Disclaimer: I do not like LLMs. I am not going to fight you on if you say LLMs are shit.
One of the things I find interesting about conversations on LLMs is when I have a critique about them and someone says, "Well, it's no different than people." People are only as good as their training data, people misremember / misspeak / make mistakes all the time, people will listen to you and affirm you as you think terrible things. My thought is that not being reliably consistent is a real issue for automation. Still, I think it's excellent food for thought.
I was looking for new music venues the other day. I happened upon several, and as I looked at their menus and layouts, it occurred to me that I had eaten there before. Not there exactly, but in my city, and in others. The Stylish-Expensive-Small-Plates-Record-Bar was an international phenomenon. And more than that, I couldn't shake the feeling that it was a perversion of the original, alluring concept: to be in a somewhat secretive record bar in Tokyo where you'll be glared into the ground if you speak over the music.
It's not a bad idea. And what's wrong with evoking a good idea, especially if the similarity is just unintentional? Isn't it helpful to be able to signal to people that you're like-that-thing instead of having to explain to people how you're different? Still, the idea of going just made me assume it'd be not simply like something I had experienced before, but played out and "fake." We're not in Tokyo, and people do talk over the music. And even if they didn't, they have silverware and such clanging. It makes me wonder if this permutation is a lossy estimation of the original concept, just chewed up, spat out, slurped, regurgitated, and expensively funded.
other forms of conceptual perversion:
- Matters of Body Image - is it a sort of collapse when we go from wanting 'conventional beauty' to frankensteining features onto ourselves? Think fox eye surgeries, buccal fat removal, etc. Rather than wanting to be conventionally attractive, we aim for the related concept of looking like people who are famous.
- (still thinking)
15 votes -
Contra Ptacek's terrible article on AI
27 votes -
Disney files landmark case against AI image generator
16 votes -
Disney and Universal vs. Midjourney: A landmark copyright fight over genAI
25 votes -
The Common Pile v0.1: An 8TB dataset of public domain and openly licensed text
26 votes -
Six-month-old, solo-owned vibe coder Base44 sells to Wix for $80M cash
13 votes -
Is the AI bubble about to burst?
35 votes -
Check out my ongoing project where I try to find out how accurately a LLM can predict sports outcomes
5 votes -
After eighteen years of infertility, an AI tool let a couple conceive
22 votes -
Help me analyze/understand the background of this AI video?
Hi, so I've been thinking about this for several days now, and thought it might be an interesting topic for Tildes.
Earlier this week, YouTube suggested this AI Sitcom video to me. Some of the jokes are actually very cohesive "Dad jokes", and it got me wondering how much of the video was AI generated. Are the one-liners themselves AI generated? Was this script generated with AI, and then edited before passing it on to something else to generate the video and voice? Or are we at the phase where AI could generate the whole thing with a single prompt? If it's the latter I find this sort of terrifying, because the finished product is very cohesive for something with almost no editing.
I'd also be interested in discussing where this video might have come from. The channel and descriptions have almost no information, so it seems like this may be a channel that finds these elsewhere and reposts? Or maybe the channel is the original and just trying to be vague about technology used?
Also side note, I have no idea if this belongs in ~Tech, so feel free to move it around as needed.
10 votes -
Everyone in Hollywood is already using AI (and hiding it)
29 votes -
Self-driving company Waymo’s market share in San Francisco exceeds Lyft’s
27 votes -
As consumers switch from Google Search to ChatGPT, a new kind of bot is scraping data for AI
28 votes -
Journalists are adding extra checks to keep ahead of the fake experts
15 votes -
Getty Images and Stability AI face off in British copyright trial that will test AI industry
21 votes -
Billions of AI users…?
Between Meta announcing that its AI, Meta AI, reached 1 billion users[1] and Google saying that AI Overviews are used by 1.5 billion[2], I’m curious to know how many of these people intentionally use the feature, or prefer it to what the AI replaces.
AI Overviews appear at the top of searches, with no option to turn them off. As for Meta AI, I suspect many people trigger it accidentally: by tapping that horrible button in WhatsApp, through search results across Meta's three core apps, or when trying to tag someone in a group by typing an @ symbol.
It’s very easy to reach enormous numbers when you already have a giant platform. I don’t think that’s even part of the discussion. The issue is trumpeting these numbers as if they were earned, rather than imposed.
[1] https://www.cnbc.com/2025/05/28/zuckerberg-meta-ai-one-billion-monthly-users.html
[2] https://www.theverge.com/news/655930/google-q1-2025-earnings
29 votes -
I think I’m done thinking about genAI for now
37 votes -
Waymos are getting more assertive. Why the driverless taxis are learning to drive like humans.
45 votes -
GenAI is our polyester
17 votes -
Reddit sues Anthropic, alleging its bots accessed Reddit more than 100,000 times
28 votes -
Google is using AI to censor thousands of independent websites like mine (and to control the flow of information online)
55 votes -
LLMs and privacy
Hello to everyone who's reading this post :)
LLMs are increasingly useful (after careful review of their generated answers, of course), but I'm concerned about sharing my data, especially very personal questions and my thought process, with these large tech giants, whose privacy policies seem rather sketchy.
What are some ways I can keep my data private but still harness this amazing LLM technology? Also what are some legitimate and active forums for discussions on this topic? I have looked at reddit but haven't found it genuinely useful or trustworthy so far.
I am excited to hear your thoughts on this!
33 votes -
Digg’s founders explain how they’re building a site for humans in the AI era
36 votes -
Big tech must stop passing the cost of its spiking energy needs onto the public
25 votes -
Behind the curtain: A white-collar bloodbath
24 votes