Can AI rescue us from the mess of prior auth?
24 votes -
“First of its kind” AI settlement: Anthropic to pay authors $1.5 billion
45 votes -
Swedish Performing Rights Society signs licensing agreement with Songfox – Stockholm-based start-up lets fans and creators legally produce AI-generated compositions
4 votes -
Why language models hallucinate
27 votes -
Interview: Neel Nanda on the race to read AI minds
8 votes -
We risk a deluge of AI-written ‘science’ pushing corporate interests
22 votes -
Bland, easy to follow, for fans of everything: what has the Netflix algorithm done to our films?
24 votes -
Where's the Shovelware?
54 votes -
Atlassian acquires The Browser Company (Arc, Dia)
28 votes -
What art means to me in this era of AI tools
15 votes -
The evidence that AI is destroying jobs for young people just got stronger
35 votes -
An AI social coach is teaching empathy to people with autism
19 votes -
Perplexity’s Comet browser invites
Folks, I have been given 5 invites to trial Comet. If you want one, reply here and I'll give them out in order. Assuming they're in any way rare… I have no idea!
18 votes -
Therapists are secretly using ChatGPT
38 votes -
Breaking the creepy AI in police cameras
35 votes -
Vivaldi takes a stand: keep browsing human
45 votes -
Taco Bell rethinks AI drive-through after man orders 18,000 waters
52 votes -
Moser's Frame Shop: I am an AI hater
35 votes -
Is it possible to easily finetune an LLM for free?
So, Google's AI Studio used to have an option to finetune Gemini Flash for free by simply uploading a CSV file, but it seems they have removed that option, so I'm looking for something similar. I know models can be finetuned on Colab, but the problem with that is it's way too complicated for me; I want something simpler. I think I know enough Python to prepare a dataset, so that shouldn't be a problem.
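For the dataset-preparation part at least, a common target format for finetuning services is JSONL with chat-style records, which is easy to produce from a CSV with the standard library. A minimal sketch, assuming hypothetical column names `prompt` and `response` (adjust to whatever your CSV actually uses):

```python
import csv
import io
import json

def csv_to_jsonl(csv_text, prompt_col="prompt", response_col="response"):
    """Turn CSV rows into JSONL chat records: one JSON object per line,
    each holding a user message and the assistant reply."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = {
            "messages": [
                {"role": "user", "content": row[prompt_col]},
                {"role": "assistant", "content": row[response_col]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

sample = "prompt,response\nWhat is 2+2?,4\nCapital of France?,Paris\n"
print(csv_to_jsonl(sample))
```

The exact record shape varies by provider, so check the docs of whichever service you end up using; the conversion itself stays this simple either way.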
21 votes -
Data centers don't raise people's water bills
25 votes -
Anthropic disrupts cybercriminal using AI for large-scale theft and extortion
17 votes -
California parents find grim ChatGPT logs after son's suicide
36 votes -
Ed Zitron: How to argue with an AI booster
37 votes -
Ted Chiang interview: life is more than an engineering problem
24 votes -
Colleges have a new worry: ‘Ghost students’—AI powered fraud rings angling to get millions in financial aid
23 votes -
xAI has open sourced Grok 2.5
17 votes -
Deep Think with Confidence
9 votes -
MIT report: 95% of generative AI pilots at companies are failing
43 votes -
Meta’s flirty AI chatbot invited a retiree to New York
31 votes -
Google has released data on how much energy an AI prompt uses
36 votes -
AI is a mass-delusion event
61 votes -
AI is creeping into the Linux kernel - and official policy is needed ASAP
29 votes -
At what point does the obvious invasion of the commons become too much for people? Have we already passed the threshold with smartphones?
16 votes -
Copilot broke your audit log, but Microsoft won’t tell you
38 votes -
AI tokens are getting more expensive
10 votes -
Silicon Valley’s AI deals are creating zombie startups: ‘You hollowed out the organization’
27 votes -
While Finnish students learn how to discern fact from fiction online, media literacy experts say AI-specific training should be guaranteed going forward
11 votes -
Most people, even highly technical people, don't understand anything about AI
This is always weighing on my mind and is coming after this comment I wrote.
The tech sector, especially the hyper-online portion of it, is full of devs who were doing some random shit before and shifted to AI the past few years. Don't get me wrong, I'm one of those: in much the same way, very shortly after the release of ChatGPT, I completely changed my own business as well (and now lead an AI R&D lab). Sure, I had plenty of ML/AI experience before, but the sector was completely different, and that experience has practically no bearing today aside from some fundamentals.
The thing is, LLMs are, all in all, very new; few people have an active interest in "how it all works", and most of the sector's interest is in the prompting and chaining layers. Imagine network engineering and website design being bagged into the same category of "Internet Worker". Not really useful.
Some reflections on the state of the business world right now...
In most SMEs: complete ignorance of what is possible beyond a budding interest in AI. Of course they use ChatGPT, and they see their social media posts are easier to write, so they fire some marketing consultants. Some find some of the more involved tools that automate this-and-that, and it usually stops there.
In many large companies: Complete and utter panic. Leaders shoving AI left and right as if it's a binary yes-ai/no-ai to toggle in their product or internal tools, and hitting the yes-ai switch will ensure they survive. Most of these companies are fuuuuuucked. They survive on entropy, and the world has gotten a LOT faster. Survival is going to get much harder for them unless they have a crazy moat. (Bullish on hardware and deeply-embedded knowledge; Bearish on SaaS and blind-spend; Would short Palantir today if I could)
In labs just like mine: I see plenty of knowledgeable people with no idea of how far-reaching the impact of the work is. Super technical AI people get biased by their own knowledge of the flaws and limitations, to the point of being blind to what is possible.
And in tech entrepreneurship, I see a gap forming. On one side, techies who dismiss "vibe coders" as not real programmers, don't end up using AI themselves, and fall massively behind, since execution (not code quality) is everything. On the other, vibe coders with zero technical prowess who get oversold on the packaging, end up building dead shells, and can't move past the MVP stage of whatever they're building.
And the more capable the tool you're using is, the more the experience can be SO WILDLY DIFFERENT depending on usage and configuration. I've seen Claude Code causing productivity LOSSES as well as creating productivity gains of up to 1000x -- and no, this isn't hearsay, these numbers are coming from my own experience on both ends of the spectrum, with different projects and configurations.
With such massively different experiences possible, and such incredibly broad labels, of course the discussion on "AI" is all over the place. Idiocy gets funded on FOMO, products are built and shut down within weeks, regulators freak out and rush meaningless laws with no positive impact; it's just an unending mess.

Because it's such a mess, I see naysayers who can only see those negatives and who are convinced AI is a bubble just like that "internet fad of the 90s". Or worse, that it has zero positive impact on humanity. I know there are some of those on Tildes - if that's you, hello, you're provably already wrong and I'd be happy to have that discussion.
Oh and meanwhile, Siri still has the braindead cognition of a POTUS sedated with horse tranquilizer. This, not ChatGPT, is the most-immediately-accessible AI in a quarter of the western world's pocket. Apple will probably give up, buy Perplexity, and continue its slow decline. Wonder who'll replace them.
54 votes -
AI eroded doctors’ ability to spot cancer within months in study
42 votes -
Claude Opus 4 and 4.1 can now end a rare subset of conversations
15 votes -
Social media probably can’t be fixed
38 votes -
Meta appoints anti-LGBTQ+ conspiracy theorist Robby Starbuck as AI bias advisor
29 votes -
Evaluating GPT5's reasoning ability using the Only Connect game show
18 votes -
None of this is real and it doesn’t matter
36 votes -
Is chain-of-thought reasoning of LLMs a mirage? A data distribution lens.
28 votes -
If you're a programmer, are you ever going to believe an AGI is actually 'I'?
First, I am emphatically not talking about LLMs.
Just a shower thought kinda question. For most people, the primary issue is anthropomorphizing too much. But I think programmers see it differently.
Let's say someone comes up with something that seems to walk and talk like a self-aware, sentient, AGI duck. It has a "memories" db, it learns and adapts, it seems to understand cause and effect, actions and consequences, truth v falsehood, it passes Turing tests like they're tic-tac-toe, it recognizes itself in the mirror, yada.
But as a developer, you can "look behind the curtain" and see exactly how it works. (For argument's sake, let's say it's a FOSS duck, so you can actually look at the source code.)
Does it ever "feel" like a real, sentient being? Does it ever pass your litmus test?
For me, I think the answer is, "yes, eventually" ... but only looong after other people are having relationships with them, getting married, voting for them, etc.
31 votes -
Reddit will block the Internet Archive
58 votes -
Question - how would you best explain how an LLM functions to someone who has never taken a statistics class?
My understanding of how large language models work is rooted in my knowledge of statistics. However, a significant number of people have never been to college, and statistics is a required course only for some degree programs.
How should chatgpt etc be explained to the public at large to avoid the worst problems that are emerging from widespread use?
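One common plain-language framing is "it guesses the next word based on what usually follows, learned from huge amounts of text". That idea can even be shown concretely with a toy word-pair counter; this is emphatically not how a real LLM works internally (no neural network, no context beyond one word), just a sketch of the "predict the next word from counts" intuition:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> "cat" (follows "the" twice, "mat" only once)
```

The honest caveat for a lay audience: real models consider the whole preceding conversation rather than one word, and "most likely next word" is sampled rather than always picked, which is part of why outputs can be fluent yet confidently wrong.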
37 votes -
Nvidia, AMD agree to pay US government 15% of AI chip sales to China
21 votes