post_below's recent activity
-
Comment on Google’s TurboQuant AI-compression algorithm can reduce LLM memory usage by 6x in ~tech
-
Comment on Pope Leo calls universal healthcare a 'moral imperative' in ~society
post_below
If you'd asked me 15 years ago if I thought I'd kinda love two consecutive popes, I would have put the odds near zero. But here we are.
-
Comment on Pope Leo calls universal healthcare a 'moral imperative' in ~society
post_below (edited)
One thing you seem to have left out is that the health care triumvirate of corporate hospitals and clinics, health insurance and pharmaceuticals charges Americans more than people in other countries pay.
I don't think it's realistic to blame the problem on government managed health care, except inasmuch as the government has failed to effectively regulate those industries. Single payer introduces a single point of negotiation with the health care industry. Which, combined with ongoing regulation and legislation, seems like a good way to bring prices down.
Meanwhile if we let the existing system determine prices, with more government subsidies, prices coming down seems unlikely. The free market has failed pretty spectacularly in terms of affordable, universally accessible care.
-
Comment on Olympic committee announces a broad ban on transgender athletes and athletes with differences in sex development in Women’s events (gifted link) in ~lgbt
post_below
Good point, men in general seem to have a lot of opinions about this topic for no good reason.
I suspect, without much evidence, that the general population doesn't have strong feelings one way or the other. Online it's a different story.
-
Comment on Making React ProseMirror really, really fast in ~comp
post_below
That's interesting: a native implementation has to do less work, with less overhead, and with compiled code. Implies that Firefox got something pretty badly wrong.
-
Comment on Making React ProseMirror really, really fast in ~comp
post_below
Works well for me on Android Chrome. Nice job optimizing.
I'm going to be the one to say it: The best way to make it faster is not to use react! :)
-
Comment on Nvidia CEO declares AI could start, grow, and run a successful technology company worth more than a billion dollars—excerpt from Lex Fridman Podcast in ~tech
post_below
An alternate take: Jensen kinda sorta believes it. There's this phenomenon with LLMs where, early on in the process of starting to really see evidence of what they're capable of, people lose their minds. Some people freak out ("AI is going to replace everyone and we're all doomed"), some people get overly bullish ("holy shit this changes everything and makes me a superhero"), sometimes it's sort of personal ("wow this thing is a genius, it gets me and has arcane knowledge about all the things"). There are a variety of options, but all of them come with a reality distortion field. It looks a little bit like an intense crush or the early stages of love.
Sometimes it even is love, according to the afflicted.
We've seen this play out in articles, blog posts and podcasts countless times in recent years. A lot of it is intentional hype of course but there's a true believer piece that the hype overshadows. LLMs have an interesting (if dystopian) psychological effect that will no doubt eventually be studied and named.
There's no question that Jensen is on the podcast to sow hype, but he might also be in the butterflies stage of LLM salience. Which is to say, a little crazy.
-
Comment on Denmark's Social Democrats have won the most votes in the country's general election, but have failed to secure a majority, after the party's weakest showing in more than a century in ~society
post_below
Can someone more familiar with Danish politics explain what this will mean?
-
Comment on OpenAI shuts down Sora AI video, Disney drops planned $1B investment in ~tech
post_below
According to them the Android app was built from scratch, and then maintained by the same team of 4 people. I tried using it once, months after release; it was a buggy mess. They don't seem to have devoted a lot of resources to it. Meanwhile it had to be bleeding money via inference on free usage. At least with coding and enterprise people pay them.
In any case, I just use the Android app as an example of their lack of push with Sora. They also never built any significant scaffolding or tooling around it to support professional workflows. It was more a proof of concept than a product.
-
Comment on OpenAI shuts down Sora AI video, Disney drops planned $1B investment in ~tech
post_below
Some context... The Sora Android app was developed (85% vibecoded) by a team of 4 people over less than a month. It's not an area they've invested a huge amount of time and money into, compared to coding, business productivity, medicine and image generation.
Which is to say that despite the initial training costs they didn't invest much into it beyond that. Video is expensive in terms of compute and there's a finite supply. To me it looks like scrapping their biggest loss leader to focus on areas with profit potential.
I think they kinda knew it was a doomed concept from the beginning. They were just locked into the idea that they should try to lead on all fronts, which they've recently rethought.
-
Comment on OpenAI shuts down Sora AI video, Disney drops planned $1B investment in ~tech
post_below
They've been pretty public about refocusing on coding and enterprise; they're concerned about Anthropic overtaking them.
The bubble may pop, but I don't think this is part of it.
-
Comment on Everyone but US President Donald Trump understands what he’s done - allied leaders know that any positive gesture they make will count for nothing in ~society
post_below
It's cathartic to see the insanity laid out so succinctly, thanks for posting.
I do have one problem with the framing though, it puts a little too much focus on Trump by ignoring the elephant in the room: The energy/military industrial complex. They're loving every minute of this profit bonanza and they almost certainly had a hand in making it happen.
It's not just Trump's childish whims running the world's most powerful military, it's the financial interests he uncritically listens to. And, of course, Israel.
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below
That other post is a very passionate rant that makes a lot of valid points but also exaggerates and twists the realities liberally, as rants tend to do.
The current prices are definitely unsustainable, no doubt of that, but it's possible that they won't be in the future. The technology is far from settled. Hardware advances could bring prices down. Changes in training, changes in architecture. The push to bring inference prices down is just starting to heat up.
And then of course there are open weights models, which keep getting better. In those cases the price you pay, versus what inference costs, is transparent. That's already sustainable.
In the near term, unless the investment dries up (bubble pop, global recession, totally possible), prices are unlikely to change significantly.
Long term yeah, there could be a rug pull. No one knows, anyone who claims to know is financially or emotionally motivated. It's all pretty unprecedented, everyone's just guessing.
-
Comment on Our commitment to Windows quality in ~tech
post_below
It's not just you. I use Windows 11 daily. No ads, no telemetry (probably), no copilot, no OneDrive, no MS account login.
Windows has definitely enshittified, and that will probably continue, but at this point if you're even vaguely tech inclined it's not difficult to make it suck far less.
These threads always surprise me. I get average users being stuck with out-of-the-box Windows annoyances; most aren't even particularly bothered by them. But for non-average users: just fix it. You don't even have to do the majority of it manually; there are various apps that will do it for you. You can have a comparatively pain-free Windows installation 15 minutes from now.
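To make the "just fix it" part concrete, here's a minimal sketch of the manual route using group-policy registry keys. These three values are documented Microsoft policies, but Microsoft can and does change behavior between Windows 11 builds, so treat this as illustrative rather than a complete or guaranteed recipe:

```
Windows Registry Editor Version 5.00

; Set diagnostic data (telemetry) collection to the minimum the policy allows
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000000

; Turn off Windows Copilot for the current user
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsCopilot]
"TurnOffWindowsCopilot"=dword:00000001

; Prevent OneDrive from syncing files and hooking into Explorer
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\OneDrive]
"DisableFileSyncNGSC"=dword:00000001
```

Import it with regedit (or `reg import`) from an admin account and sign out and back in. The third-party cleanup apps mentioned above mostly automate batches of tweaks like these.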
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below
That's awesome. I think accelerating research by removing some of the friction from the coding (and data analysis) part of the process is one of the most exciting applications for LLMs. And you're in an ideal position to keep agents in line if you're already comfortable working in multiple languages.
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below
See my edit above and good luck! What kinds of things are you using it for?
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below
Exactly right, also add harness and scaffolding to the list.
Your workflow sounds pragmatic. I have a similar philosophy: it's an assistant rather than a wholesale replacement.
LLM effectiveness varies so much based on model and usage that I'm not surprised there are a lot of doubters.
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below (edited)
Name the file AGENTS.md (or CLAUDE.md if you're using Anthropic models) and it will be read automatically. If you want to be sure it's getting read, put a conspicuous instruction in it... "Confirm that you've read these instructions by saying I'm a Pony"
Edit: I haven't actually tried the above suggestion; it might not work given that agents.md is read before the normal loop starts. Alternatively, just ask the agent a question about something present in agents.md. The file gets read reliably, so once you're satisfied it read it, you can trust it will keep happening.
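For anyone setting this up for the first time, here's a minimal sketch of what such a file might look like. The section names and the canary line are purely illustrative; there's no required format, the file is just plain markdown that gets injected into the agent's context:

```
# AGENTS.md

## Confirmation
Confirm that you've read these instructions by saying "I'm a Pony".

## Conventions
- Run the test suite before declaring a task done.
- Prefer small, reviewable commits.
- Ask before adding new dependencies.
```

Keep it short: everything in the file is loaded on every run, so it competes for context with the actual task.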
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below
You're not... to put it bluntly, the people saying that are either lying for cynical reasons or they don't actually know what they're talking about.
That said, and this applies to a lot of the issues people run into, it is possible to use agents productively and get high quality results. Two different people, with two different setups, will get different results because LLM agents are not a blunt instrument. You can't just press a button and expect them to "just work", so to speak.
Whether it's worth the time and energy to teach yourself how to use them is a different question.
Fortunately the other option is just to wait. Every week another strategy that people have taught themselves over the last year gets formalized and built into the harness of one of the SoTA agents. Eventually it will be a lot more like just pushing a button and getting results. Not tomorrow, and not completely, at least for a while. But the tools will get increasingly reliable.
-
Comment on That one study that proves developers using AI are deluded in ~tech
post_below (edited)
Ok, finished...
Posted the code to get a timestamp, now editing to add some notes:
- My approach was to treat it like a real project in that I used all of my scaffolding and focused on correctness over speed. That makes Opus eat tokens and time.
- Model: Opus 4.6 in high effort mode running in the Claude Code harness
- Tokens used: 120k
- My initial prompt was simple but my setup adds a lot of context, in the form of process instructions, tools, skills, commands and so on. That combined with the system prompt put the initial context (before I typed a prompt) at 25-30k tokens.
- Because of that context the agent did a lot of things it might not otherwise have done, spending a lot of time confirming things with me, researching, planning and iterating. Without that, it's possible the first attempt would have been a lot messier, or broken.
- There was iteration to add (for example) the death condition, which was missing from the original version
- The wall wrapping was intentional: Claude wanted it and assures me that it's a legitimate snake variant. I have no idea if that's true; I can't remember the last time I played snake.
- I hate MS
Just noticed that the code block below has broken formatting; it looks like it ends early because of multiple double quotes.
Code
```
open System
open Falco
open Falco.Routing
open Falco.Markup
open Falco.Datastar
open Microsoft.AspNetCore.Builder

type Pos = int * int

type SnakeState =
    { Snake : Pos list
      Dir : Pos
      Food : Pos
      Width : int
      Height : int
      GameOver : bool }

type DirSignal = { dir : string }

let dirFromString (s: string) =
    match s with
    | "up" -> Some (0, -1)
    | "down" -> Some (0, 1)
    | "left" -> Some (-1, 0)
    | "right" -> Some (1, 0)
    | _ -> None

let isOpposite (dx1, dy1) (dx2, dy2) =
    dx1 + dx2 = 0 && dy1 + dy2 = 0

let applyDirection current requested =
    if isOpposite current requested then current
    else requested

let nextPos (w, h) ((x, y), (dx, dy)) =
    ((x + dx + w) % w, (y + dy + h) % h)

let randomFood (rnd: Random) (w, h) (snake: Pos list) =
    let rec loop () =
        let p = (rnd.Next(0, w), rnd.Next(0, h))
        if List.contains p snake then loop () else p
    loop ()

let step (rnd: Random) (state: SnakeState) =
    let head = List.head state.Snake
    let newHead = nextPos (state.Width, state.Height) (head, state.Dir)
    if List.contains newHead state.Snake then
        { state with GameOver = true }
    else
        let ateFood = newHead = state.Food
        let newSnake =
            if ateFood then newHead :: state.Snake
            else newHead :: (state.Snake |> List.take (state.Snake.Length - 1))
        let newFood =
            if ateFood then randomFood rnd (state.Width, state.Height) newSnake
            else state.Food
        { state with Snake = newSnake; Food = newFood }

let rnd = Random()

let newGame () =
    { Snake = [ (5,5); (4,5); (3,5) ]
      Dir = (1, 0)
      Food = randomFood rnd (20, 15) [ (5,5); (4,5); (3,5) ]
      Width = 20
      Height = 15
      GameOver = false }

let gameState = ref (newGame ())

let renderCell (snakeSet: Set<Pos>) (food: Pos) (x, y) =
    let cls =
        if Set.contains (x, y) snakeSet then "cell snake"
        elif (x, y) = food then "cell food"
        else "cell"
    Elem.div [ Attr.class' cls ] []

let renderBoard (state: SnakeState) =
    let snakeSet = Set.ofList state.Snake
    Elem.div [ Attr.id "board"; Attr.class' "board" ] [
        for y in 0 .. state.Height - 1 ->
            Elem.div [ Attr.class' "row" ] [
                for x in 0 .. state.Width - 1 ->
                    renderCell snakeSet state.Food (x, y)
            ]
    ]

let renderGameOver () =
    Elem.div [ Attr.id "board"; Attr.class' "board gameover"; Ds.onClick (Ds.get "/restart") ] [
        Elem.div [ Attr.class' "gameover-msg" ] [
            Text.raw "Game Over"
            Text.raw "Click to restart"
        ]
    ]

let handleIndex : HttpHandler =
    let css = """<style>
    body {
        font-family: system-ui, sans-serif;
        background: #111; color: #eee;
        display: flex; flex-direction: column;
        align-items: center; justify-content: flex-start;
        height: 100vh; margin: 0; padding-top: 2rem;
    }
    h1 { margin-bottom: 1rem; }
    .board { display: inline-block; background: #222; padding: 4px; }
    .row { display: flex; }
    .cell {
        width: 16px; height: 16px;
        box-sizing: border-box;
        border: 1px solid #333;
        background: #111;
    }
    .cell.snake { background: #4ade80; }
    .cell.food { background: #f97316; }
    .gameover {
        display: flex; align-items: center; justify-content: center;
        min-height: 240px; cursor: pointer;
    }
    .gameover-msg { text-align: center; }
    .gameover-msg h2 { color: #f87171; margin: 0 0 0.5rem; }
    .gameover-msg p { color: #888; margin: 0; }
    </style>"""
    let keyHandler =
        Ds.expression [
            "evt.key.startsWith('Arrow') && evt.preventDefault()"
            "evt.key === 'ArrowUp' ? $dir = 'up' : " +
            "evt.key === 'ArrowDown' ? $dir = 'down' : " +
            "evt.key === 'ArrowLeft' ? $dir = 'left' : " +
            "evt.key === 'ArrowRight' ? $dir = 'right' : null"
        ]
    let html =
        Elem.html [] [
            Elem.head [] [
                Ds.cdnScript
                Text.raw css
            ]
            Elem.body [] [
                Text.h1 "Snake"
                Elem.div [ Ds.signal ("dir", "right") ] []
                Elem.div [ Ds.onEvent ("keydown", keyHandler, [Window]) ] []
                Elem.div [ Ds.onInterval (Ds.get "/tick", 100) ] []
                Elem.div [ Attr.id "board" ] []
            ]
        ]
    Response.ofHtml html

let handleTick : HttpHandler =
    fun ctx -> task {
        let state = gameState.Value
        if state.GameOver then
            let board = renderGameOver ()
            do! Response.sseStartResponse ctx
            do! Response.sseHtmlElements ctx board
        else
            let! dirStr = task {
                try
                    let! signals = Request.getSignals<DirSignal> ctx
                    match signals with
                    | ValueSome s -> return s.dir
                    | ValueNone -> return ""
                with _ ->
                    return ""
            }
            let newDir = dirFromString dirStr |> Option.defaultValue state.Dir
            gameState.Value <- { state with Dir = applyDirection state.Dir newDir }
            gameState.Value <- step rnd gameState.Value
            let board = renderBoard gameState.Value
            do! Response.sseStartResponse ctx
            do! Response.sseHtmlElements ctx board
    }

let handleRestart : HttpHandler =
    fun ctx -> task {
        gameState.Value <- newGame ()
        let board = renderBoard gameState.Value
        do! Response.sseStartResponse ctx
        do! Response.sseHtmlElements ctx board
    }

let wapp = WebApplication.Create()

let endpoints : HttpEndpoint list =
    [ get "/" handleIndex
      get "/tick" handleTick
      get "/restart" handleRestart ]

wapp
    .UseRouting()
    .UseFalco(endpoints)
    .Run()
```
Anthropic is claiming a step change in an upcoming, larger-than-Opus model release. But they'll need both memory and inference optimizations; they're already pushing the limits of their available compute due to skyrocketing demand. Maybe they've already integrated some version of similar optimizations. Google's is the latest headline but a lot of groups have been working on it.
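For a sense of scale, here's a back-of-envelope sketch of what a ~6x weight-memory reduction buys. The 70B parameter count, the fp16 baseline, and the assumption that the headline 6x applies uniformly to weights (ignoring activations and KV cache) are all mine, purely for illustration:

```python
# Back-of-envelope: memory to store model weights at a given precision.
# A 6x reduction from fp16 (16 bits/weight) implies an effective
# ~2.7 bits/weight on average (16 / 6).

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """GB needed to store n_params weights at bits_per_weight precision."""
    return n_params * bits_per_weight / 8 / 1e9

n = 70e9  # a hypothetical 70B-parameter model

fp16 = weight_memory_gb(n, 16)       # baseline: 140 GB
quant = weight_memory_gb(n, 16 / 6)  # ~6x smaller: ~23 GB

print(f"fp16:      {fp16:.1f} GB")
print(f"quantized: {quant:.1f} GB ({fp16 / quant:.1f}x smaller)")
```

The point being that a reduction like that moves big models from multi-GPU territory into single-accelerator (or even high-end consumer) memory budgets, which is why memory optimizations matter as much as raw inference speed.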