As mentioned in the HN comments, what "from-scratch" (per the CEO) means here is that it actually uses the following dependencies:
Servo's HTML parser
Servo's CSS parser
QuickJS for JS
selectors for CSS selector matching
resvg for SVG rendering
egui, wgpu (v0.17, from June 2023), and tiny-skia for rendering
tungstenite for WebSocket support
So it's "just" Servo plagiarized, deconstructed and then made significantly worse.
Add to this that when people tried to compile and run it shortly after the release, they couldn't get it to compile, and neither would the majority of previous versions. When it finally did start working, there were some unusual commits shortly beforehand that some speculated were actual humans trying to duct-tape it together. Disclaimer: that last bit is entirely rumor, as I didn't look at the code or try to compile it myself.
The reason I didn't look, aside from lack of interest, is that I know what GPT and Claude output under the best of circumstances, and it's not something you can mash together into a working browser from scratch. It's not even close.
But with $80,000 in tokens (their estimate), you can get it to pull together a bunch of libraries to do the real work and end up with a demo that "works" in the sense that it can sort of display a web page, but fails to be actually useful for any practical application. With a bar that low, a handful of humans could do better in less time.
Willison posts great stuff and I enjoy his blog, but a puff piece is the wrong angle here. This was a publicity stunt for Cursor, relying on the AI-crazed tech media not asking too many questions. Simon is an engineer; he could have told a much better story about what Cursor "achieved".
It is a really interesting proof of concept for agents orchestrating themselves, but what it also proves is that even with a blank check and a server farm, agents can't make usable, sophisticated software on their own.
Another missing part of the story: Cursor's user base is increasingly vibe coders. Engineers have been switching to better options in droves for at least the last six months, a shift that accelerated with the release of Sonnet 4.5 and then Opus 4.5, and sped up again when they scrapped their unlimited auto loss leader. So a "demo" like this appeals directly to their target audience: people who can't read the code, and therefore don't know it's slop.
Here is Simon's pushback to someone complaining about this article on HN.
It's too bad to see him doubling down. I finally got around to watching the video interview, or most of it, and in the CNN website part (the only part that wasn't cherry-picked by the Cursor dev, likely with pre-cached elements), Simon (or whoever was controlling the cursor at that point) starts to scroll down and quickly stops when it becomes apparent that there's just blank space below the fold. Simon communicated more about his intent by pretending not to notice that than by anything he wrote in his post.
AI code in a nutshell. It's copyright laundering.
It's using some powerful building blocks, but does have to draw the rest of the owl.
Someone else gave it a try. Here's Simon's blog post: https://simonwillison.net/2026/Jan/27/one-human-one-agent-one-browser/
The result is one-agent-one-browser and it's really impressive. Over three days they drove a single Codex CLI agent to build 20,000 lines of Rust that successfully renders HTML+CSS with no Rust crate dependencies at all - though it does (reasonably) use Windows, macOS and Linux system frameworks for image and text rendering.
So maybe I'm naive here but... isn't "rendering HTML+CSS" kinda... the easy part? Like, that's not actually useful on its own, and it's all the edge cases and extra functionality that make a browser an actually usable piece of software. No?
Apparently it's not so hard to get started these days. But it seems like CSS has become quite a large language over the years? I only understand small pieces of it.
In any case, getting this far in three days is impressive.
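For what it's worth, the "easy part" intuition holds for the happy path: a naive block-layout pass really is just stacking boxes, as in this toy sketch (my own, unrelated to FastRender's internals). Everything it ignores (margin collapsing, floats, inline and text layout, tables, flexbox, grid, writing modes) is where the real engineering lives.

```rust
// A toy vertical block-layout pass: stack boxes top to bottom inside a
// containing width. Real engines spend their complexity budget on what
// this ignores: margin collapsing, floats, inline/text layout, tables,
// flexbox, grid, and so on.
#[derive(Debug)]
struct BlockBox {
    margin: f32,
    padding: f32,
    content_height: f32,
}

#[derive(Debug)]
struct LayoutRect {
    x: f32,
    y: f32,
    width: f32,
    height: f32,
}

fn layout_blocks(boxes: &[BlockBox], containing_width: f32) -> Vec<LayoutRect> {
    let mut cursor_y = 0.0;
    let mut out = Vec::new();
    for b in boxes {
        cursor_y += b.margin; // no margin collapsing here
        let height = b.content_height + 2.0 * b.padding;
        out.push(LayoutRect {
            x: b.margin,
            y: cursor_y,
            width: containing_width - 2.0 * b.margin,
            height,
        });
        cursor_y += height + b.margin;
    }
    out
}

fn main() {
    let boxes = [
        BlockBox { margin: 8.0, padding: 4.0, content_height: 40.0 },
        BlockBox { margin: 8.0, padding: 4.0, content_height: 120.0 },
    ];
    for rect in layout_blocks(&boxes, 800.0) {
        println!("{:?}", rect);
    }
}
```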
From the article:
Last week Cursor published Scaling long-running autonomous coding, an article describing their research efforts into coordinating large numbers of autonomous coding agents. One of the projects mentioned in the article was FastRender, a web browser they built from scratch using their agent swarms. I wanted to learn more so I asked Wilson Lin, the engineer behind FastRender, if we could record a conversation about the project. That 47 minute video is now available on YouTube. I’ve included some of the highlights below.
[...]
Wilson started what became FastRender as a personal side-project to explore the capabilities of the latest generation of frontier models—Claude Opus 4.5, GPT-5.1, and GPT-5.2. 00:56
[...]
A browser rendering engine was the ideal choice for this, because it’s both extremely ambitious and complex but also well specified. And you can visually see how well it’s working! 01:57
[...]
Once it became clear that this was an opportunity to try multiple agents working together it graduated to an official Cursor research project, and available resources were amplified.
[...]
The great thing about a browser is that it has such a large scope that it can keep serving experiments in this space for many years to come. JavaScript, then WebAssembly, then WebGPU... it could take many years to run out of new challenges for the agents to tackle.
[...]
We talked about the Cargo.toml dependencies that the project had accumulated, almost all of which had been selected by the agents themselves.
Some of these, like Skia for 2D graphics rendering or HarfBuzz for text shaping, were obvious choices. Others such as Taffy felt like they might go against the from-scratch goals of the project, since that library implements CSS flexbox and grid layout algorithms directly. This was not an intended outcome. 27:53
[...]
The thing I find most interesting about FastRender is how it demonstrates the extreme edge of what a single engineer can achieve in early 2026 with the assistance of a swarm of agents.
FastRender may not be a production-ready browser, but it represents over a million lines of Rust code, written in a few weeks, that can already render real web pages to a usable degree.
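On the Taffy point above: the reason it cuts against the from-scratch framing is that flexbox is a substantial algorithm in its own right. Here is a deliberately simplified, pure-Rust sketch (my own illustration, not Taffy's code) of just one step of it, distributing free main-axis space by flex-grow weight; Taffy implements the full spec versions of flexbox and grid, which is exactly the work being outsourced.

```rust
// A heavily simplified piece of flexbox: distribute leftover main-axis
// space among items in proportion to their flex-grow factors. The real
// algorithm (which Taffy implements per spec) also handles shrinking,
// basis resolution, min/max clamping, wrapping, and alignment.
#[derive(Debug)]
struct FlexItem {
    basis: f32, // starting main size (a resolved flex-basis)
    grow: f32,  // flex-grow factor
}

fn distribute_row(items: &[FlexItem], container_main: f32) -> Vec<f32> {
    let used: f32 = items.iter().map(|i| i.basis).sum();
    let free = (container_main - used).max(0.0);
    let total_grow: f32 = items.iter().map(|i| i.grow).sum();
    items
        .iter()
        .map(|i| {
            if total_grow > 0.0 {
                i.basis + free * (i.grow / total_grow)
            } else {
                i.basis
            }
        })
        .collect()
}

fn main() {
    let items = [
        FlexItem { basis: 100.0, grow: 1.0 },
        FlexItem { basis: 100.0, grow: 2.0 },
    ];
    // In a 400px row, 200px of free space splits 1:2 between the items.
    println!("{:?}", distribute_row(&items, 400.0));
}
```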