-
Dox with Grok
38 votes -
The future of AI
15 votes -
GNU and the AI reimplementations
23 votes -
A "Real BMO" local AI Agent with a Raspberry Pi and Ollama
17 votes -
Hardening Firefox with Anthropic’s red team
37 votes -
Eval awareness in Claude Opus 4.6’s BrowseComp performance
14 votes -
An AI agent published a hit piece on me
49 votes -
LLMs can unmask pseudonymous users at scale with surprising accuracy
44 votes -
Looking for vibe-coding guides (best practices, etc.)
Decided I wanted to try vibe-coding some stuff. It's been a very long time since I coded anything, and it was all very amateurish, but as the tooling has become better I wanted to give a shot at some silly ideas. Got tired of writing about random teaching and AI related stuff, decided I wanted to build some more stuff to get more acquainted with agentic tooling.
I have gathered some sparse links here and there, but I was hoping the community here may know of some more "definitive" guides. My plan is to use Claude Code, but if people want to share guides for other coding agents (Codex, etc.) please feel free.
Very interested in iOS app development if that helps, but I feel that best practices can likely look very similar across platforms and tools.
25 votes -
Inside Anthropic’s killer-robot dispute with the US Pentagon (gifted link)
24 votes -
My personal AI assistant project
Let me start off by saying that I'm exhausted by AI hype. Being interested in LLM agent technology (AI agent hereafter, for brevity) means skimming over a lot of hype for one or two useful, semi-reality-based bits of information. Maybe the part I find most frustrating is how effective the hype is. I don't know if there's ever been a hype cycle like this one. A big part of the reason is probably that the internet has already proven, within living memory for most people, that technological revolutions really can change everything. Or mess everything up. Either way, they generate a lot of economic activity.
So this post is not that. I'm not going to tell you that AI agents are the second coming of Christ. I'm not selling anything.
Fairly early into learning about AI agents, I wanted a way to connect to the agent remotely without hosting it somewhere or exposing ports to the internet. I settled on Tailscale and a remote terminal and moved on, but I rarely used it. Somehow the tiny friction of "turn on Tailscale, open terminal app, connect, run agent" was enough to make it not feel worth it.
I know I'm far from the only person who had the same "I want it remote" thought; the best evidence is OpenClaw. It's just one of those things that everyone naturally converges on.
If you're not familiar with OpenClaw, the TL;DR is: a former founder with more money than he'll ever need vibecodes a bridge between instant messenger apps and LLM APIs. Nothing about it is technically challenging or requires solving any particularly hard problems. It almost immediately becomes the fastest-growing GitHub repo of all time and currently sits at number 14 by star count. It blew up the (tech) internet like very few things ever have. Within months, he was hired by OpenAI.
OpenClaw now does more than just connect messaging and agents, but I believe that one piece is the killer feature. My Tailscale terminal solution, combined with a scheduled task or a cron job and some context files, could already do everything OpenClaw can do, and countless people had already implemented similar solutions. But I think the tiny bit of friction OpenClaw removed was responsible for a lot of its popularity.
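For reference, the "scheduled task or cron job plus context files" setup can be as small as a single crontab entry. Everything here is illustrative: the paths, the prompt file, and the working directory are my assumptions, though `claude -p` is Claude Code's real non-interactive (print) mode.

```shell
# Hypothetical crontab entry: at 08:00 each day, run Claude Code once,
# non-interactively, from a directory whose context files (e.g. CLAUDE.md)
# persist across runs. All paths are placeholders.
0 8 * * * cd /home/me/agent && claude -p "$(cat morning_prompt.txt)" >> agent.log 2>&1
```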
I thought that was interesting but I have no interest in the security nightmare that is OpenClaw, or the "sentience" vibe for that matter, so I built my own tool.
Essentially it's just a light secondary harness combined with a bridge between Signal and Claude Code. It does some other things too, things I wished existing harnesses did: some memory and guidelines, automated prompts and reminders to wake the agent up and have it do stuff, some context to give the agent a level of persistence, to make it less LLM-y, less annoying. None of that is particularly interesting, though.
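The core of a bridge like this is small. Here's a minimal sketch of one round-trip, assuming signal-cli is installed and registered and the `claude` CLI is on PATH; the account and recipient numbers are placeholders, and the author's actual harness surely does much more (memory, scheduling, guardrails):

```python
# Hypothetical Signal <-> Claude Code bridge: one message in, one reply out.
# signal-cli's send subcommand and Claude Code's -p (print) mode are real;
# the phone numbers and overall structure are illustrative assumptions.
import subprocess

ACCOUNT = "+15550000000"    # placeholder: the bridge's Signal account
RECIPIENT = "+15551111111"  # placeholder: the human on the other end

def agent_command(prompt: str) -> list[str]:
    """Build the non-interactive Claude Code invocation for one message."""
    return ["claude", "-p", prompt]

def send_command(reply: str) -> list[str]:
    """Build the signal-cli invocation that sends the agent's reply back."""
    return ["signal-cli", "-a", ACCOUNT, "send", "-m", reply, RECIPIENT]

def bridge_once(incoming: str) -> None:
    """One round-trip: run the agent on an incoming message, send the reply."""
    result = subprocess.run(agent_command(incoming), capture_output=True, text=True)
    subprocess.run(send_command(result.stdout.strip()))
```

Wrap `bridge_once` in a receive loop (e.g. polling `signal-cli receive`) and you have the MVP; the friction-removal insight is that all of this hides behind a chat window the user already has open.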
Once I got it working (MVP took less than a day) and started playing with it, the OpenClaw phenomenon made a lot more sense. Somehow having the agent in a chat interface, with almost zero friction (just open the chat and send something) was cooler than it had any reason to be.
I can't explain it any better than that at the moment. Not only was it kinda fun, it lent itself to a whole range of "what ifs". What if it could do X? What if I wrote a tool that gave it Y capability? I've been experiencing that for some time, but somehow agent in your pocket has a different feeling.
Here's an example of a "what if". What if it could do our grocery shopping? I definitely want that. I already had a custom browser tool that I built for agent coding assistance, so I was most of the way there. It was just a matter of teaching the agent to log in and navigate a website, something they're already trained to do. Some hand-holding, a few helper scripts, and an evening's worth of hours later, I had it working. The agent can respond to a shopping request by building a shopping list based on our most recent orders, presenting it to us for approval and edits in a Signal group chat, doing searches for any additional product requests, and adding the finalized order to the cart. It could also check out the order and schedule the delivery time, but I'm doing the last two clicks manually for the time being. It's an idiot savant; it seems like a bad idea to give it access to my credit card. Maybe eventually.
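The approval-in-group-chat step is the interesting safety valve in that flow: nothing is ordered until a human signs off. A minimal sketch of that gate, as pure logic (the command names "approve"/"add"/"remove" are my invention, not the author's actual protocol; messaging and browser automation are out of scope):

```python
# Hypothetical human-approval gate for a proposed shopping cart.
# The agent proposes items; human replies edit the cart or approve it.
def review_cart(cart: list[str], reply: str) -> tuple[list[str], bool]:
    """Apply one human reply to a proposed cart. Returns (cart, approved)."""
    text = reply.strip().lower()
    if text == "approve":
        return cart, True                      # human signed off: safe to order
    if text.startswith("remove "):
        item = reply.strip()[len("remove "):]
        return [x for x in cart if x.lower() != item.lower()], False
    if text.startswith("add "):
        return cart + [reply.strip()[len("add "):]], False
    return cart, False                         # anything else: keep waiting
```

Keeping checkout behind a gate like this (or, as the author does, behind two manual clicks) is what makes an "idiot savant" agent tolerable with real money involved.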
The fact that I can handle shopping with a couple of Signal messages feels effortless in a way that handling it by remotely connecting to my PC's terminal over Tailscale wouldn't have. Especially when I can include people in the loop who have no interest in tailscaling anywhere. Everyone can use messaging apps.
I imagine before long solutions like this will be built in, either in the grocery websites and apps, or into the frontier harnesses themselves. There will probably be agents everywhere, for better or worse. Probably I'll wish that the agents would all fuck off. In the meantime it's exciting how easy it is to get these tools to do useful things.
33 votes -
Google’s AI overviews can scam you. Here’s how to stay safe.
25 votes -
Palantir sues Swiss magazine for accurately reporting that the Swiss government didn’t want Palantir
38 votes -
AI’s memorization crisis (gifted link)
24 votes -
Anthropic rejects latest US Pentagon offer: ‘We cannot in good conscience accede to their request’
61 votes -
I hacked ChatGPT and Google's AI - and it only took twenty minutes
2 votes -
Leaked email suggests Ring plans to expand ‘search party’ surveillance beyond dogs
64 votes -
The first fully general computer action model
12 votes -
New accounts on Hacker News ten times more likely to use em-dashes
54 votes -
Who’s liable when your AI agent burns down production? How Amazon’s Kiro took down AWS for thirteen hours and why the ‘human error’ label tells you everything wrong about the agentic AI era.
45 votes -
Anthropic drops flagship safety pledge
52 votes -
The Claude C Compiler: what it reveals about the future of software
16 votes -
Why doesn’t Anthropic use Claude to make a good Claude desktop app?
27 votes -
A Japanese toilet maker and seasoning giant are unlikely winners of the AI boom
11 votes -
The AI disruption has arrived, and it sure is fun (gifted link)
29 votes -
Hold on to your hardware
46 votes -
Palantir was allegedly hacked, exposing CIA collusion and deep-rooted global surveillance/meddling
46 votes -
US data centers are getting off-grid power plants
15 votes -
The watchers: how OpenAI, the US government, and Persona built an identity surveillance machine that files reports on you to the feds
25 votes -
I hacked ChatGPT and Google's AI – and it only took twenty minutes
34 votes -
Which future?
7 votes -
The "AI god" narrative is actually a corporate power grab
43 votes -
AI fails at 96% of jobs (new study)
28 votes -
Giving my AI agent its own team and what that taught me about AI
16 votes -
OpenAI retired its most seductive chatbot – leaving users angry and grieving: ‘I can’t live like this’
15 votes -
Update on developer access and platform security | Spotify for Developers
10 votes -
Something big is happening
33 votes -
Google's quarterly report on adversarial use of AI for Q4 2025
11 votes -
Tell me that you've seen Moltbook, the AI to AI social network
11 votes -
GPT in 243 lines of pure python
14 votes -
The AI vampire
27 votes -
The hidden cost of AI art: Brandon Sanderson's keynote
52 votes -
‘So tired I want to cry’: AI promotional giveaways swamp shops in China
13 votes -
Why computers won’t make themselves smarter - Ted Chiang
25 votes -
'Control AI', a non-profit that ostensibly works to protect humanity from the risks of AI, is actually sponsoring influencers to divert public attention away from them
25 votes -
AI doesn’t reduce work—it intensifies it
41 votes -
TSMC to make advanced AI computer chips in Japan
17 votes -
AI hallucination cases - When lawyers use hallucinated legal content
25 votes -
Building a C compiler with a team of parallel Claudes
20 votes