post_below's recent activity
-
Comment on Five browser extensions to make every website more useful in ~tech
I just posted elsewhere in the thread about security issues with extensions, following up to say: thanks for the list! The replies so far make it sound like Tildes is angry at browser extensions. I use some, Dark Reader among them, for uncooperative websites when my light-mode-allergic partner is looking over my shoulder, squinting.
-
Comment on Five browser extensions to make every website more useful in ~tech
Replying for visibility... The following scenario happens often enough to make it worth removing any extension you're not actively using, and avoiding extensions from even slightly questionable publishers: bad actors buy an aging, popular extension that hordes of people don't necessarily even remember they have installed. Instant access to countless people's browsers.
"Bad actors" here doesn't necessarily mean hackers after financial info; it's often shady ad companies who want telemetry. Sometimes they build the extensions themselves rather than buy them. In one recently uncovered case, a family of extensions that offered "protection" turned out to be actively exfiltrating complete transcripts of users' interactions with LLM chats. Some of the extensions were "certified" by the extension store.
-
Comment on How Wall Street ruined the Roomba and then blamed Lina Khan in ~tech
> In some sense, I think that even critiquing their “opinions” is not very helpful because of how shallow and fake those opinions obviously are to any informed person.
Great point, it's true that treating them as actual opinions rather than cynical strategies is a contradiction in terms. I thought the author ultimately did a fine job of making that clear... and of reminding us that, despite sounding like a dream come true in the current political reality, the Obama administration was nearly as co-opted as the Bush admin.
I think the national security concerns are legitimate. It's safe to assume that any data that goes into China goes to the government, and Roombas are collecting a lot of sensitive data: not just physically mapping homes, but also network information and camera footage. It's a mobile surveillance device in tens of millions of households. At the very least such a sale deserves significant scrutiny.
-
Comment on JustHTML is a fascinating example of vibe engineering in action in ~comp
> I think it’s notable that there was a preexisting comprehensive test suite. Both that and the coverage tests served as extremely effective feedback loops for the coding agents.
I agree, this is the key part. With the right boundaries and feedback loops, frontier models can get there.
Effective is a relative term here, though. Even without looking, I feel like I have a pretty good idea what the codebase in question looks like under the hood. It's going to have wildly different coding conventions mixed together with no apparent logic; moving from one section of the code to another will be an adventure. Naming conventions will be all over the place. It won't be even a little bit DRY. There will be efficiency issues, unneeded defense in depth in some places, exploitable holes in others. There will be broken logic that was patched over to pass tests rather than refactored to make actual sense, and so on. In some places it will be beautifully elegant. In short, it will look like something built by an army of at once precocious and idiotic juniors with implacable persistence. Most of it will look nothing like production grade code.
Don't get me wrong, it's a really interesting proof of concept, it just doesn't imply all the things it seems to imply about what LLMs are currently capable of. They can do amazing things, but building nontrivial applications that are secure, efficient and maintainable without significant oversight and human feedback is not among them.
> Developing interesting and efficient algorithms is always what appealed to me, but it seems like that part may get outsourced to the machine now
I suppose it depends on how you define interesting. If part of the definition is "hasn't been done lots of times before", humans are still required. The time could of course come when that's not true, but LLM technology as it exists now doesn't have a path to that reality.
They can generalize across languages. So if a pattern is well established in one language, but the problem hasn't frequently been solved in another, and there's enough of that second language in the training data, then the LLM can port the pattern (so to speak). But if it's something that isn't well represented in the training data, you're going to get deal-breaking hallucinations. You might be able to brute force it through iteration, but not in a way that's better than doing it yourself.
To answer your question though: No I don't think it makes software development less interesting. I think the opposite is true, LLMs make offloading the uninteresting parts possible. That's both interesting and exciting.
-
Comment on Other people might just not have your problems in ~life
> I wish there was a word for a lesson that is obvious, but when you realize it in its totality, then the lesson becomes incredibly profound.
They actually did come up with a word for that: wisdom.
All the cliches are true.
-
Comment on Bun is joining Anthropic in ~tech
I'd add to this that, from a dev perspective, Bun is a pretty low risk proposition. JavaScript is still JavaScript regardless of what happens with Bun. If things were to go in a direction you didn't like, you could fairly easily switch back to Node, or you could switch to whichever Bun fork you liked most, or fork it yourself.
This is open source working as intended.
It would be a bit different if Bun weren't already VC funded; to me that's the bigger yuck. Open source companies trying to figure out how to force monetization to please investors often leads to unfortunate outcomes and misaligned priorities. Compared to that road, it seems to me that this increases the chances of Bun being a quality product, for longer.
-
Comment on EU backs away from chat control in ~society
Agreed, governments trying to (effectively) outlaw encryption has been an evergreen issue for decades, so this is far from over. What we really need are laws proactively protecting encryption, or even better, enshrining basic principles of digital privacy into law.
But for now this is fantastic news!
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
That's a good point, they probably don't like to cache results for too long, especially in areas where information changes quickly, but of course they've gotta be doing caching. Newer LLMs are surprisingly token efficient for summaries too, which decreases the compute needed.
And still, the scale of Google search is mind boggling. They're burning a lot of compute and cash on those summaries.
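To make the caching point concrete, here's a minimal sketch of a summary cache with a short time-to-live, so volatile topics expire quickly while repeat queries skip the expensive inference call. To be clear, this is a hypothetical illustration, not Google's actual design; the class, names, and TTL values are all made up.
```python
import time

# Hypothetical sketch: cache LLM summaries per query with a TTL, so
# fast-changing topics get regenerated while popular stable queries
# avoid a fresh (expensive) model call. Illustrative only.
class SummaryCache:
    def __init__(self, ttl_seconds=900):
        self.ttl = ttl_seconds
        self.store = {}  # query -> (summary, timestamp)

    def get(self, query):
        entry = self.store.get(query)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]  # fresh hit: no model call needed
        return None  # miss or expired: caller regenerates

    def put(self, query, summary):
        self.store[query] = (summary, time.time())

cache = SummaryCache(ttl_seconds=900)  # short TTL for volatile topics
if (summary := cache.get("some query")) is None:
    summary = "..."  # expensive LLM inference would happen here
    cache.put("some query", summary)
```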
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
> I do think these days there's much less of a "champion" for the environment compared to when I was growing up. [...] I'm not sure if that same emphasis occurs today.
Yeah, Gore. Little did we know how right he was! He didn't seem like a hero to me at the time, just another politician, but in retrospect I have a lot of respect for his conviction and sacrifice.
It's a weird situation: there's more awareness around climate change, and a lot of conversation, but somehow the impact feels off despite that. Amongst climate scientists and people who work in environmental fields, a term was popular for a minute: climate salience. That's the point at which the reality really lands for a person or population, and the conversation changes as a result.
Having arrived at climate salience repeatedly, it's still hard for me a lot of the time to really conceptualize the magnitude of the mess we've made. I guess in part because life keeps happening, and rawdogging the reality of the Anthropocene in everyday life isn't going to help anything.
-
Comment on GPT-5 has come a long way in mathematics in ~tech
It's moving so fast that a mental model of LLM capabilities from even 6 months ago is completely out of date. I assume there will be a plateau at some point, but so far all the claims of impending plateaus have been proven wrong by the next generation of frontier models.
And it's not just math, or audio, or brainstorming, or video, or coding, or learning, or research, or (pseudo) reasoning; it's all of the above and a lot more besides. It looks more and more like it will prove to be the most impactful advancement in the digital revolution so far, with all the exciting and frightening upsides and downsides that go along with that.
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
> It's the 3rd comment in the thread.
This is gonna blow your mind, but when I posted it wasn't the 3rd comment in the thread.
> Well what's the other 98?
They mostly supported nonprofits that are taking care of people. They understandably care a lot about that. They like pets too, they're teenagers, they went for the low hanging fruit based on their life experience. Half of them were just waiting for the bell to ring. But if climate change and the environment were a little bigger part of the conversation, possibly they would have been more likely to make the connection between people and the planet.
-
Comment on AGI and Fermi's Paradox in ~science
All good points, it would run into all sorts of constraints, just like we have.
Still, iteration speed and self-editing are powerful tools. One constraint it wouldn't have is biological evolution. It wouldn't just be getting smarter in terms of knowledge about the universe, it would be evolving its structure and capacity. We gain knowledge and pass it on to successive generations who then improve on it, but only after spending 20+ years learning the basics from scratch, and meanwhile the human brain isn't changing significantly from one generation to the next. This imaginary AI mind would be able to edit its brain at will. It would be able to evolve very fast. We wouldn't be talking about 10x smarter, we'd be talking about an unimaginable factor of greater intelligence within a short time. It would likely require a new definition of intelligence. For example, it could run evolution tests in a massive array of parallel sandboxes, keeping the best results from each round and applying them to the next. But it would pretty quickly come up with a far better way of doing that than I can imagine.
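As a toy illustration of that keep-the-best-and-iterate loop (with a trivial stand-in objective I made up, nothing like what a real self-improving system would actually optimize):
```python
import random

def fitness(x):
    return -(x - 3.14) ** 2  # toy objective with its peak at 3.14

def evolve(population_size=100, rounds=50, keep=10):
    # evaluate many candidates "in parallel", keep the best results
    # from each round, and seed the next round with mutated copies
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(rounds):
        best = sorted(population, key=fitness, reverse=True)[:keep]
        population = [x + random.gauss(0, 0.1)
                      for x in best
                      for _ in range(population_size // keep)]
    return max(population, key=fitness)

print(evolve())  # lands very close to 3.14
```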
I'm not sure there's any way to predict or anticipate what a truly self aware AGI could achieve in terms of cognitive ability. It seems to me that it would find creative ways around many of the constraints that we struggle with.
And of course at this point it's pure science fiction. I think of it as two singularities: true self-aware AGI is the first, and full agency is the second. If the first ever happens, maybe we'll have come up with a way to mitigate the second, for a while anyway.
-
Comment on Global health series - ultra-processed foods and human health in ~food
It's great to see a comprehensive look at ultra-processed foods, and in The Lancet.
The cynical truth is that, as they pointed out, the international food conglomerates will fight hard every step of the way to protect their profits. In the US, where corporate political influence is particularly strong, we won't see meaningful changes when it comes to processed food until there's a huge political shift. If we're being hopeful, the 2030s maybe.
Fortunately there are countries where that's less true. The other problem, though, is that a lot of people are defensive about their food. People don't want to hear that the stuff they've been eating since childhood is now maybe bad for them. Food comes with all sorts of emotion. It's security, nostalgia, safety, survival. People have big reactions.
Refined sugar, and refined carbs in general, is a good example of this. The science has been there, undisputed, for decades. Not just a little science, A LOT of it, and that's despite research money being sparse for much of that time. But we're only now, sort of, arriving at a place where it's becoming commonly accepted that refined sugar is one of the most dangerous things in the food supply (at the volumes many people consume), and also the most dangerous part of a lot of ultra-processed food. Part of the reason is industry money, both political and marketing, but the other part is that it's a key ingredient in delicious things that make people happy.
-
Comment on Blue Origin reveals a super-heavy variant of its New Glenn rocket that is taller than a Saturn V in ~space
I can't help but translate space rocket news into "billionaire penis" terms.
Blue Origin reveals a super-heavy variant of its new Jeff Bezos space penis that is the biggest Jeff Bezos space penis yet
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
As this comment seems to be illustrating, sitting sadly down here at the bottom of the thread... When it comes to LLMs, people aren't particularly interested in talking about environmental impact. It's not as exciting as the tech, or the implications of the tech for society. And I get it, the tech is insane, for better and worse.
I personally believe the climate considerations are pretty important. Not just in the case of AI, it's just the biggest example, but with every advancement. Even now in 2025 the carbon footprint part of the conversation often gets crowded out. Climate is a big part of a lot of conversations these days but it's a long way from being a big enough part. As long as that's true, politicians will be less motivated to care.
Somewhat related... Where I live, each year a foundation allocates money to local high schools under the premise that students should decide which nonprofits/causes to support with it. This year, out of about 100 causes students chose to support, 2 were about the environment or climate change. It just doesn't seem to be in the zeitgeist as much as you'd hope.
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
I could easily be wrong about the degree of difference, but the compute needed to handle a request to a frontier model is definitely dramatically higher than that needed for a local model. The two things aren't even in the same ballpark, even accounting for better hardware on the datacenter end.
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
Surprisingly, I can't find any information about training vs inference compute from sources other than blogs. However, the bloggers that have something to say about it all seem to say the same thing; here's one.
Either way, however the comparison looks, the compute needed for inference is growing really fast and that's going to have a big carbon footprint.
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
> Serving (ie. inference) is only a tiny portion of compute in comparison to training. They could likely double serving capacity a dozen times before the cost was even comparable.
My understanding is that it's the other way around, training is massively expensive up front but over time the compute cost of inference is far larger. From a computer science perspective it's hard to imagine it could be otherwise.
One way to contextualize it is to think about running a local model. You need a high end GPU with a lot of memory to even consider it, and it will fully use all of the resources available for quite a long time for every request. That's more resource use than almost anything else you might do with your system, even most video editing/creation. Then think about what that looks like over the course of a day's work, with dozens or hundreds of prompts.
A frontier model being served to the public from the cloud is using orders of magnitude more compute than a local model, for every request from every user. For example, every Google search. Over time, training costs can't compete with that.
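A back-of-the-envelope sketch of why cumulative inference wins. Every number here is an illustrative assumption rather than a measured figure; the ~2 FLOPs per parameter per generated token rule of thumb is widely cited, and the rest I picked to be roughly search-scale:
```python
# All values are assumptions for illustration, not real figures.
TRAINING_FLOPS = 3e23         # assumed one-time training cost
PARAMS = 2e11                 # assumed parameter count
FLOPS_PER_TOKEN = 2 * PARAMS  # rule of thumb: ~2 FLOPs per parameter per token
TOKENS_PER_QUERY = 500        # assumed average response length
QUERIES_PER_DAY = 1e9         # assumed global query volume

daily_inference = FLOPS_PER_TOKEN * TOKENS_PER_QUERY * QUERIES_PER_DAY
print(f"daily inference: {daily_inference:.1e} FLOPs")                   # 2.0e+23
print(f"days to match training: {TRAINING_FLOPS / daily_inference:.1f}")  # 1.5
```
Under those assumptions, serving burns through the entire training budget in about a day and a half. Even if my numbers are off by a couple orders of magnitude, inference dominates within a year or two.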
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
LLM summaries, plus the corporate "force AI into everything and figure out the details later" hype, have gotta be well over half of the demand.
-
Comment on Google must double AI serving capacity every six months to meet demand in ~tech
In a recent thread I talked about how AI is becoming genuinely useful, and it is. But that doesn't negate its environmental impact, which I don't think we talk about enough.
Think about the math of doubling every six months. Then add in all of the other tech giants, who are shovelling just as much money and resources at the problem. That includes (to a somewhat lesser degree) the newer players that are focused only on AI like Anthropic. And then also consider that the doubling they're talking about is only for serving capacity. There's also massive and ever-growing compute needed for training.
It's mind blowing, and concerning.
From the article:
> At an all-hands meeting on Nov. 6, Amin Vahdat, a vice president at Google Cloud, gave a presentation, viewed by CNBC, titled “AI Infrastructure,” which included a slide on “AI compute demand.” The slide said, “Now we must double every 6 months.... the next 1000x in 4-5 years.”
> “The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat said at the meeting, where Alphabet CEO Sundar Pichai and CFO Anat Ashkenazi also took questions from employees.
> The presentation was delivered a week after Alphabet reported better-than-expected third-quarter results and raised its capital expenditures forecast for the second time this year, to a range of $91 billion to $93 billion, followed by a “significant increase” in 2026. Hyperscaler peers Microsoft, Amazon and Meta also boosted their capex guidance, and the four companies now expect to collectively spend more than $380 billion this year.
Clearly, and unsurprisingly, big tech isn't concerned about climate impact, they just don't want to be late to the gold rush. From Pichai:
> He then reiterated a point he’s made in the past about the risks of not investing aggressively enough, and highlighted Google’s cloud business, which just recorded 34% annual revenue growth to more than $15 billion in the quarter. Its backlog reached $155 billion.
> “I think it’s always difficult during these moments because the risk of underinvesting is pretty high,” Pichai said. “I actually think for how extraordinary the cloud numbers were, those numbers would have been much better if we had more compute.”
In other words: There is maybe no upper limit to what we should spend on increasing capacity so that we can capture every last ounce of value.
1000x in 4-5 years, potentially across all of the big tech companies. Even if the quoted VP was being hyperbolic, Pichai seems to think maybe the current goals aren't even enough, so it's probably not that hyperbolic.
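For what it's worth, the two figures in the slide are at least consistent with each other; doubling every six months compounds like this:
```python
# doubling every 6 months = 2 doublings per year = 2**(2 * years) growth
for years in (4, 5):
    print(f"{years} years -> {2 ** (2 * years)}x")
# 4 years -> 256x
# 5 years -> 1024x
```
So "the next 1000x in 4-5 years" is just the six-month doubling carried out to the five-year mark.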
Of course, there isn't anything any of us can do about the problem; the bigger part of the serving demand is driven by enterprise users, and it's a nation-state level problem to solve. But I think we should at least be aware of it, and talking about it.
In conversations about server and datacenter environmental impact, the question often comes up of whether it's a big enough problem to be worried about. I would say that at 1000x in 5 years it definitely is. The goal, after all, is to be decreasing our carbon footprint rather than multiplying it.