-
What does ChatGPT know about you?
Yesterday I discovered that you can ask ChatGPT what it knows and it will tell you. I’m curious about what it says for other people.
Obviously, don’t post anything you’re unwilling to share publicly on the Internet! For me it seems pretty harmless, though.
The prompt I use is:
What "user knowledge memories" do you have?
22 votes -
Why do LLMs freak out over the seahorse emoji?
50 votes -
OpenAI’s H1 2025: $4.3b in income, $13.5b in loss
36 votes -
It begins: AI shows willingness to commit blackmail and murder to avoid shutdown
21 votes -
Elon Musk plans to take on Wikipedia with 'Grokipedia'
39 votes -
DoorDash’s new delivery robot rolls out into the big, cruel world
11 votes -
OpenAI enables shopping directly from ChatGPT
27 votes -
California attorney fined for using twenty-one AI hallucinated cases in court filing
53 votes -
How AI and Wikipedia have sent vulnerable languages into a doom spiral
29 votes -
British AI startup beats humans in international forecasting
27 votes -
Forecast accurately predicting an unusual monsoon season reached thirty-eight million farmers
25 votes -
ChatGPT is blowing up marriages as spouses use AI to attack their partners
32 votes -
The nVidia AI GPU black market: investigating smuggling, corruption, and governments
17 votes -
Trapped in an AI spiral
11 votes -
Microsoft testing new AI features in Windows 11 File Explorer
24 votes -
Why language models hallucinate
27 votes -
Interview: Neel Nanda on the race to read AI minds
8 votes -
Atlassian acquires The Browser Company (Arc, Dia)
28 votes -
What art means to me in this era of AI tools
15 votes -
The evidence that AI is destroying jobs for young people just got stronger
35 votes -
Perplexity’s Comet browser invites
Folks, I have been given 5 invites to trial Comet. If you want one, reply here and I’ll give them out in order. Assuming they’re in any way rare… I have no idea!
18 votes -
Breaking the creepy AI in police cameras
35 votes -
Vivaldi takes a stand: keep browsing human
45 votes -
Moser's Frame Shop: I am an AI hater
35 votes -
Is it possible to easily finetune an LLM for free?
So, Google's AI Studio used to have an option to finetune Gemini Flash for free by simply uploading a CSV file, but it seems they have removed that option, so I'm looking for something similar. I know models can be finetuned on Colab, but the problem with that is it's way too complicated for me; I want something simpler. I think I know enough Python to be able to prepare a dataset, so that shouldn't be a problem.
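For reference, this is roughly the dataset prep I have in mind, assuming whatever service I end up on accepts the JSONL chat-record format that several finetuning tools use (the column names and file paths below are just placeholders):

import csv
import json

# Convert a CSV of (prompt, response) pairs into JSONL "chat message" records,
# a format many finetuning tools accept. Column names and file paths are
# placeholders for whatever the data actually looks like.
with open("pairs.csv", newline="", encoding="utf-8") as src, \
        open("train.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        record = {
            "messages": [
                {"role": "user", "content": row["prompt"]},
                {"role": "assistant", "content": row["response"]},
            ]
        }
        dst.write(json.dumps(record, ensure_ascii=False) + "\n")

So it's really just the "upload a CSV and go" part I'm missing, not the data wrangling.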
21 votes -
Data centers don't raise people's water bills
25 votes -
Anthropic disrupts cybercriminal using AI for large-scale theft and extortion
17 votes -
California parents find grim ChatGPT logs after son's suicide
36 votes -
Ed Zitron: How to argue with an AI booster
37 votes -
xAI has open sourced Grok 2.5
17 votes -
Deep Think with Confidence
9 votes -
MIT report: 95% of generative AI pilots at companies are failing
43 votes -
Meta’s flirty AI chatbot invited a retiree to New York
31 votes -
AI is a mass-delusion event
61 votes -
At what point does the obvious invasion of the commons become too much for people? Have we already passed the threshold with smartphones?
16 votes -
AI tokens are getting more expensive
10 votes -
Silicon Valley’s AI deals are creating zombie startups: ‘You hollowed out the organization’
27 votes -
While Finnish students learn how to discern fact from fiction online, media literacy experts say AI-specific training should be guaranteed going forward
11 votes -
Most people, even highly technical people, don't understand anything about AI
This is always weighing on my mind, and it comes after this comment I wrote.
The tech sector, especially the hyper-online portion of it, is full of devs who were doing some random shit before and shifted to AI over the past few years. Don't get me wrong, I'm one of them: in much the same way, very shortly after the release of ChatGPT, I completely changed my own business as well (and now lead an AI R&D lab). Sure, I had plenty of ML/AI experience before, but the sector was completely different, and aside from some fundamentals that experience counts for little today.
The thing is, LLMs are, all in all, very new; few people have an active interest in "how it all works", and most of the sector's interest is in the prompting and chaining layers. Imagine network engineering and website design being lumped into the same category of "Internet Worker". Not really useful.
Some reflections on the state of the business world right now...
In most SMEs: complete ignorance of what is possible beyond a budding interest in AI. Of course they use ChatGPT, and they see their social media posts are easier to write, so they fire some marketing consultants. Some find the more involved tools that automate this-and-that, and it usually stops there.
In many large companies: complete and utter panic. Leaders shoving AI left and right as if there's a binary yes-AI/no-AI toggle in their product or internal tools, and hitting the yes-AI switch will ensure they survive. Most of these companies are fuuuuuucked. They survive on entropy, and the world has gotten a LOT faster. Survival is going to get much harder for them unless they have a crazy moat. (Bullish on hardware and deeply-embedded knowledge; bearish on SaaS and blind spend; would short Palantir today if I could.)
In labs just like mine: I see plenty of knowledgeable people with no idea of how far-reaching the impact of the work is. Super technical AI people are so biased by their own knowledge of the flaws and limitations that they become blind to what is possible.
And in tech entrepreneurship, I see a gap forming. On one side are techies who have no respect for "vibe coders" on the grounds that they're not real programmers, who then avoid using AI and fall massively behind, since execution (not code quality) is everything. On the other side are vibe coders with zero technical prowess who get oversold on the packaging, end up building dead shells, and can't move past the MVP stage of whatever they're building.
And the more capable the tool you're using is, the more the experience can be SO WILDLY DIFFERENT depending on usage and configuration. I've seen Claude Code causing productivity LOSSES as well as creating productivity gains of up to 1000x -- and no, this isn't hearsay, these numbers are coming from my own experience on both ends of the spectrum, with different projects and configurations.
With such massively different experiences possible, and such incredibly broad labels, of course the discussion on "AI" is all over the place. Idiocy gets funded on FOMO, products get built and shut down within weeks, regulators freak out and rush meaningless laws with no positive impact; it's just an unending mess.
Because it's such a mess, I see naysayers who can only see those negatives and who are convinced AI is a bubble just like that "internet fad of the 90s". Or worse, that it has zero positive impact on humanity. I know there are some of those on Tildes - if that's you, hello, you're provably already wrong and I'd be happy to have that discussion.
Oh and meanwhile, Siri still has the braindead cognition of a POTUS sedated with horse tranquilizer. This, not ChatGPT, is the most-immediately-accessible AI in a quarter of the western world's pocket. Apple will probably give up, buy Perplexity, and continue its slow decline. Wonder who'll replace them.
54 votes -
AI eroded doctors’ ability to spot cancer within months in study
42 votes -
Claude Opus 4 and 4.1 can now end a rare subset of conversations
15 votes -
Social media probably can’t be fixed
38 votes -
Evaluating GPT5's reasoning ability using the Only Connect game show
18 votes -
None of this is real and it doesn’t matter
36 votes -
Is chain-of-thought reasoning of LLMs a mirage? A data distribution lens.
28 votes -
If you're a programmer, are you ever going to believe an AGI is actually 'I'?
First, I am emphatically not talking about LLMs.
Just a shower thought kinda question. For most people, the primary issue is anthropomorphizing too much. But I think programmers see it differently.
Let's say someone comes up with something that seems to walk and talk like a self-aware, sentient, AGI duck. It has a "memories" db, it learns and adapts, it seems to understand cause and effect, actions and consequences, truth v falsehood, it passes Turing tests like they're tic-tac-toe, it recognizes itself in the mirror, yada.
But as a developer, you can "look behind the curtain" and see exactly how it works. (For argument's sake, let's say it's a FOSS duck, so you can actually look at the source code.)
Does it ever "feel" like a real, sentient being? Does it ever pass your litmus test?
For me, I think the answer is, "yes, eventually" ... but only looong after other people are having relationships with them, getting married, voting for them, etc.
31 votes -
Reddit will block the Internet Archive
58 votes -
Question - how would you best explain how an LLM functions to someone who has never taken a statistics class?
My understanding of how large language models work is rooted in my knowledge of statistics. However, a significant number of people have never been to college, and statistics is a required course for only some degree programs.
How should ChatGPT and the like be explained to the public at large to avoid the worst problems that are emerging from widespread use?
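The closest I've gotten so far is "a machine that has read a huge pile of text, counted which word tends to follow which, and keeps picking a likely next word." A toy sketch of that intuition (nothing like how real LLMs are actually built, which use neural networks over tokens, but it conveys the next-word-prediction idea):

import random
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Count, for each word, which words have followed it and how often.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def next_word(word):
    counts = following[word]
    # Pick a follower with probability proportional to how often it appeared.
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation, one likely word at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))

Running it a few times gives different outputs from the same counts, which also helps explain why the same question can get different answers.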
37 votes -
Nvidia, AMD agree to pay US government 15% of AI chip sales to China
21 votes