15 votes -
Top twenty worldwide with social-engineering and a cheat that's still undetected
27 votes -
The watchers: how OpenAI, the US government, and Persona built an identity surveillance machine that files reports on you to the feds
25 votes -
I hacked ChatGPT and Google's AI – and it only took twenty minutes
34 votes -
Which future?
7 votes -
The "AI god" narrative is actually a corporate power grab
43 votes -
AI fails at 96% of jobs (new study)
28 votes -
Giving my AI agent its own team and what that taught me about AI
16 votes -
OpenAI retired its most seductive chatbot – leaving users angry and grieving: ‘I can’t live like this’
15 votes -
Update on developer access and platform security | Spotify for Developers
10 votes -
Air to bread
4 votes -
Something big is happening
33 votes -
Google's quarterly report on adversarial use of AI for Q4 2025
11 votes -
Why Nigerians are choosing chatbots to give them advice and therapy
6 votes -
Tell me that you've seen Moltbook, the AI to AI social network
11 votes -
GPT in 243 lines of pure python
14 votes -
The AI vampire
27 votes -
The hidden cost of AI art: Brandon Sanderson's keynote
52 votes -
‘So tired I want to cry’: AI promotional giveaways swamp shops in China
13 votes -
Why Google just issued a rare 100-year bond
25 votes -
Why computers won’t make themselves smarter - Ted Chiang
25 votes -
'Control AI', a non-profit that ostensibly works to protect humanity from the risks of AI, is actually sponsoring influencers to divert public attention away from them
25 votes -
AI doesn’t reduce work—it intensifies it
41 votes -
TSMC to make advanced AI computer chips in Japan
17 votes -
AI hallucination cases - When lawyers use hallucinated legal content
25 votes -
Building a C compiler with a team of parallel Claudes
20 votes -
Is the detachment in the room? - Agents, cruelty, and empathy
15 votes -
Passing question about LLMs and the Tech Singularity
I am currently reading my way through Ted Chiang's guest column in the New Yorker, about why the predicted AI/Tech Singularity will probably never happen (https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter). ETA: I just noticed that article is almost 5 years old; the piece is still relevant, but worth noting.
Good read. I'm still partway through, but so far I find that I disagree with his explicit arguments, even as he brushes up very closely against my own reasoning for why "it" might never happen. Regardless, it is thought-provoking.
But, I had a passing thought during the reading.
People who actually use LLMs like Claude Code to help write software, and/or who pay close attention to LLMs' coding capabilities: has anyone actually started experimenting with asking Claude Code, or other LLMs designed for programming, to look at their own source code and help improve it?
In other words, are we (the humans) already starting to use LLMs to improve their code faster than we humans alone could do?
Wouldn't this be the actual start of the predicted "intelligence explosion"?
Edit to add: To clarify, I am not (necessarily) suggesting that LLMs -- this particular round of AI -- will actually advance to become some kind of true supra-human AGI. I am only suggesting that they may be the first real tool we've built (beyond Moore's Law itself) that might legitimately speed up the rate at which we approach the Singularity (whatever that ends up meaning).
19 votes -
SpaceX is acquiring xAI
45 votes -
The AI industry doesn’t take “no” for an answer
39 votes -
Any software engineers considering a career switch due to AI?
I've grown increasingly unsure whether I'll stay with this profession long term, thanks to the AI "revolution". Not because I think I'll be replaced: I have an extremely wide set of skills from working over a decade in small startups, so I think I'm safe for a long while to come.
No, I've grown weary because an increasingly large share of the code we produce is expected to be AI-generated, with shorter timelines to match, and I just plain don't like it. I think we reached a tipping point around Claude Opus 4.5 where it really is capable, and that's only going to continue to get better. But dammit, I like coding. I enjoy the problem solving, and I feel that's getting stripped away from me basically overnight. Also, as these models become more and more capable, I think the number of companies vibe coding their way to a product with fields of junior-level engineers is going to grow, which will push down senior job opportunities and wages.
So now I'm left wondering if it's time to start pointing towards a new career. I really love building stuff and solving problems, so maybe I go back to school and switch to some other flavor of engineering? Idk. Curious where others' heads are at with this.
55 votes -
AI will compromise your cybersecurity posture
8 votes -
A guide to understanding AI as normal technology
10 votes -
Youtube channel ServeTheHome describes how they use a locally running LLM to automate data collection, allowing them to forgo a planned hire
20 votes -
Supporting Markdown search for LLMs
15 votes -
Evaluating LLMs by finding werewolves
18 votes -
Moltbot personal assistant goes viral – and so do your secrets
38 votes -
Suno, AI music, and the bad future
5 votes -
How AI assistance impacts the formation of coding skills
18 votes -
AntiRender: remove the glossy shine on architectural renderings
38 votes -
'Right-to-compute' laws may be coming to your state this year
20 votes -
Anthropic faces new music publisher lawsuit over alleged piracy
5 votes -
Someone made a social media website for AI agents
29 votes -
Pi: The minimal agent within OpenClaw
13 votes -
AI chatbots are becoming lifelines for China’s sick and lonely
8 votes -
Mistral releases Vibe 2.0
16 votes -
Dario Amodei — The adolescence of technology
5 votes -
Wilson Lin on FastRender: a browser built by thousands of parallel agents
18 votes -
Feeling weird about my career with respect to AI
I’m a software engineer. I graduated in 2021, so I’ve only been one for around 4.5 years and definitely still feel fairly entry-level, and it feels like companies don’t particularly want anyone without a lot of experience anymore (every time I look at new jobs, the number of years of experience required for “senior” positions seems to have increased by one). As for AI itself, I think it has its uses, but I don’t actually enjoy using it. I want to solve problems and think and write code, not talk to an AI and become a full-time code-reviewer. My company is rebranding to have AI in the name shortly and, since early December, has been forcing us into 2+ hour AI trainings once or twice a week. A lot of my coworkers seem like they’ve drunk the Kool-Aid and are talking about new models and shit all the time, and I just don’t get it.
I guess I’m kind of rambling, but I just feel weird about all of it. I want to program, but I don’t want to use (or be forced to use) LLMs for everything, yet it seems like companies are trying to get rid of actual human software engineers as fast as they can. I’ll even admit Claude is way better than I expected, but I don’t actually enjoy sitting there typing “do this for me” and then having to spend my time reviewing code. I don’t know. I don’t think this is really even me asking for advice, just a rant, but I felt like I had to get something out there, I guess.
54 votes -
Show HN: I wrapped the Zorks with an LLM
16 votes