Boy, is it depressing that knowing how to build a desktop application is considered to be “low level” these days. Heck, the way the author talks about it, JavaScript’s direct DOM manipulation APIs are low level! It’s very tempting to push back against this framing, but it’s one of those things that just kind of feels innately true at this point in time. Nobody seems to care about desktop applications at all at this point.
To be a bit fair "the browser is the desktop" was the goal all the way back in the days of netscape, and just kinda....went off the rails.
It makes sense as it's the only cross platform protocol we've ever been forced to agree upon that has higher level functionality. It's one of the few convergence points where you can pass an instruction and KNOW how it will turn out on every device...or at least roughly. Browser/OS/Hardware/Chipset for once doesn't matter.
It's really that or containers, and of course it wouldn't be necessary if we'd been able to agree on standards (or I guess have an even more dominant or state-mandated monopoly), but so long as you've got about 1000x hardware/software combinations, having an agreed-upon norm ANYWHERE is going to attract attention.
I will forever be disappointed that the browser as a universal platform failed in favor of mobile apps and walled gardens. It happened mostly because the big tech companies wanted it that way. Apple most of all: to this day they refuse to support standard APIs for device functionality in the browser, not because it's hard (everyone else has managed it) but because they want to hamstring apps that aren't in their ecosystem.
Kiiiiind of, but when you look closely at most of those mobile apps you will still notice that developers have just inverted the model and embedded a web app inside a “native” app. Really, the browser is still the universal platform, we just make people ship millions of stub browsers for no real reason.
“Failed” seems too strong. Sure, mobile apps are popular but there are also plenty of web apps. Anyone starting a new platform would love to be as successful as the web.
The big thing that mobile platforms offer over the web, both back then and today, is a vastly deeper and more complete toolchest to work with. Case in point: it's a considerable effort to set up a virtualized list view (where the only rows allocated in memory are those which are visible, and they get recycled as the user scrolls) on the web, but that's been one of the baseline widgets in UIKit since 2007 (and existed in a different shape in its desktop ancestor AppKit since the 1980s) — no third party libraries or bespoke code required. That alone can make the difference between an app feeling smooth and buttery or being a laggy mess when displaying a substantial amount of data (especially on lower power devices, like low end Android phones).
I don't see web overtaking native entirely until that's changed and "bring your own everything" is no longer the dominating web development mindset.
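For anyone curious what that widget is actually doing under the hood, here's a rough TypeScript sketch of the windowing math; all names are invented, it's a toy, not UIKit's (or any library's) actual API:

```typescript
// Sketch of the core math behind a virtualized list (names invented).
// Only rows intersecting the viewport are materialized; the rest are
// represented by empty padding above and below the rendered slice.

interface VisibleWindow {
  startIndex: number;    // first row to render
  endIndex: number;      // one past the last row to render
  paddingTop: number;    // px standing in for the rows above
  paddingBottom: number; // px standing in for the rows below
}

function computeWindow(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 2 // extra rows on each side to hide flicker while scrolling
): VisibleWindow {
  const startIndex = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const endIndex = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return {
    startIndex,
    endIndex,
    paddingTop: startIndex * rowHeight,
    paddingBottom: (totalRows - endIndex) * rowHeight,
  };
}

// 100,000 rows, but only ~19 exist at any moment.
const w = computeWindow(50_000, 600, 40, 100_000);
// w.startIndex === 1248, w.endIndex === 1267
```

On scroll you recompute the window and recycle the handful of existing row elements into the new positions; the native widgets do all that bookkeeping for you out of the box.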
Right, however the reason that you need to use a native app to get access to those APIs isn't that there's some technical barrier to exposing them to a web app, it's that the platforms choose not to expose them in order to maintain their profits.
They of course claim it's for security, and that's not completely disingenuous, but it's not the dealbreaker they make it out to be. The key motivation is to make sure that web apps can never feel native and therefore can't compete with app/play store apps.
There are a few native APIs like that, but continuing with the virtualized list widget example, there aren’t actually blockers. It comes down to browser vendors electing to not add such a thing for web devs to use.
Nearly all of the most valuable bits of native UI frameworks could be made a stock part of the browser environment, but Google etc. have decided that frills like GPU and USB access are what should be prioritized, despite only being used by a tiny fraction of web apps.
Agreed, they could be made available. In case it's not already implied: The browser vendors and the platforms are largely the same companies. In the case of iOS that is still strictly enforced at the engine level everywhere except the EU. Meanwhile Chrome has around 70% of the mobile market share globally (higher among android users).
Yep, but if you look at where Apple has been focusing their energy in WebKit (Safari engine), it’s been more aligned with the “make web fundamentals more capable” angle. They’ve been doing a lot of CSS work in recent years for example.
So despite the engine restriction on iOS, if Google and/or Mozilla were to make a push to build a more complete set of standard UI widgets for the web, Apple would probably follow. This has yet to happen, however, as Google continues to chase features that are somewhat esoteric in comparison.
Years ago, Sun Microsystems released the VHDL source code for their UltraSPARC Niagara chips, which offered something we computer plebs could only dream of: a computer capable of running 64 threads at a time. They claimed they were doing it for the future of commodity computers; some day we would have this at home! Imagine all of the things we could make our computers do!
Turning the browser into the platform feels like a step back from that dream. We have processors that easily outperform that design, and instead of making massively parallel applications to take advantage of them, we make applications that take up so much RAM that our massively overpowered systems still can’t handle them all at once. And it feels like when something happens that could make things faster or more efficient, it’s a guessing game when or even if any given application will ever see it.
But there is no use in crying over the death of a future that didn’t happen.
Yeah we kinda stopped with Java didn't we?
imo that's always been a shame cause I HATE Java but I recognize the greatness of what it did: create a package for the code so that you can ship applications more easily. It was huge at the time.
But like, everything lives in the cloud now. Hell, I think with Win11 if you can't connect to the internet you can't even log in or some shit. How do you go back to low level software in that environment?
I suspect I will not be a fan of this, thoughts as I go:
We in the Handmade community often bemoan the state of the software industry. Modern software is slow and bloated beyond belief—our computers are literally ten times more powerful than a decade ago, yet they run worse than they used to, purely because the software is so bad. The actual user experience has steadily declined over the years despite the insane power at our fingertips. Worst of all, people’s expectations have hit rock bottom, and everyone thinks this is normal.
This is such a rose tinted view of things. I've had one blue screen in the last year. In 2005 alt-tab was still a coin flip between "quickly check something else" and "catastrophic failure". Yes, we 100% use memory inefficiently, probably because Moore's law was still kicking around so "make it efficient" hasn't mattered as much as "make it work", but these hyperboles get under my skin soooo much. Was I the only one playing family and friend tech support as people walked off cliffs on a daily basis, or got used to just saying "Fuck it, nuke the machine" as a fairly common recovery method from "shit just getting worse with time" caused by a pile of horrible practices, designs, and other nonsense?
I could maybe let it slide if it was just about power users, but no, the actual user experience has not "declined" by any meaningful metric related to software optimization. All sorts of shitty business practices are a problem, sure, but if we had perfectly optimized websites you're still not solving the fact that they want to harvest my data, force me to subscribe, and use AI (and hey at least popups and toolbars are MOSTLY dead and they finally put down flash and IE6....fuck HOW do you think things were better during IE6?!).
Maybe tbh. But laziness alone doesn’t tell the whole story. The real problem with New Reddit was the stack it was built on.
You know, the Truckla example was more accurate than I think they understood. The problem wasn't the stack (or at least, not just the stack), the problem WAS THE GOAL.
The goal of new reddit is NOT FUNCTIONALITY. The goal is engagement and time on page. These "problems" were acceptable costs to instead get the PILE of React + Redux developers that were (are?) floating around and make something that maximizes for advertisers and investors. They did not want to build an efficient car, they wanted to copy paste something that "worked" and change the color.
And I think this gets back to a bigger problem. Coding is an industry, not an art. Some engineer out there can make an absolutely perfect and elegant mechanism to solve a problem, but the vast majority of people in construction aren't engineers.
The industry isn't going to move towards higher skilled bespoke work. There will always be more McDonalds than family owned/5 star unique restaurants. Any solution has to start at something that's trivial to replicate because the vast majority of coders in the work force aren't actually that great at coding.
I get that; in essence, I basically agree with this person. I just dislike how the argument is always framed. I have HATED the idea of doing any front end in JS. I've been diving in on things like htmx/datastar/webassembly because it's such an ugly pointless overhead to use JS (and I think at least 2 of those are a wonderful refutation of his claim that people at the higher level can't make meaningful improvements).
But....that's me as "that one F# guy". If you learned JS...well it's probably "good enough" for 90% of what you're doing, and it's not a bad choice to learn. Like Python, it's extremely well supported, and unlike Python, it's also what you're going to code your frontend in anyways. Not having to switch tech stacks for the entire app is extremely appealing (if insanely frustrating when it's something as fundamentally flawed as JS, even with things like TypeScript finally emerging out of the radioactive wasteland).
This is simply not the case for the low-level space. If you’re lucky, you can maybe find an expensive book or course. But more likely, you’ll just get a gigantic manual that lists every property of the system in excruciating detail, which is totally worthless for learning and barely usable as reference. And that’s if you’re lucky—there’s a good chance that you’ll only get a wiki or a maze of man pages, which are impenetrable walls of jargon. In some cases the only documentation that exists is the Linux Kernel Mailing List, and you can only pray that the one guy who can answer your question hasn’t flamed out in the past decade.
Amen. It's not just low level programming or linux. There are a lot of genres of...things (games, media, skills, etc) that have HORRIFIC onboarding problems. Often reinforced by a community that tends to be proud of that fact and sees it as a useful filter rather than the most horrific pain in the ass. Doubly so because there's always some loud portion that are MASSIVE hypocrites. The number of people I've watched high five themselves after posting some snarky RTFM response despite absolutely only being where they are because they found meaningful tutors and help on their journey is disturbingly high.
Handmade hero stuff
I find this part very interesting because I stumbled across someone named randy years ago and would occasionally tune into his journey through a similar goal. I think it shows a lot of the pitfalls and sidequests one can easily wind up lost in, and a more realistic view of what will likely happen to someone following in handmade's footsteps.
There's a LOT to discuss there on all sides, but I think it's fair to point out that Randy didn't release jack and/or shit until he finally stopped fucking around with reinventing inverse kinematics and just used Unity.
I personally have found this to be true of so many “low-level” disciplines. “Low-level” programming is not impossible; in fact, in many cases, it’s simpler than the high-level web dev work I used to do! Today’s “high-level” frameworks and tools are so complicated and so poorly designed that they are harder to understand and work with than their low-level counterparts. But all the modern nonsense like Svelte, Symfony, Kubernetes—those tools have docs! They have dev tools! Because, for some reason, people are not afraid of them!
Low-level programming is artificially terrible. I really believe that. And I know that it doesn’t have to be this way.
Okay...so I have this issue with this entire article...where's the line on low level? We get a lot of flowery talk about how high level frameworks get all these nice tooling and documentation features, but the author doesn't even seem to mention an example of a low level language, or the things that have been going on in that space, or some of the VERY real problems that aren't tooling related depending on where you draw the line.
Again, I'm F# guy, and I love it. It's in a very nice middle ground that lets me do basically whatever I want. I have never had a problem where I've thought "gosh I really should be managing my memory manually" because I don't live in the kind of world where that performance matters. So am I "low level" because I'm not doing my apps in javascript and am actually writing exes, or am I still high level because I'm letting a garbage collector make sure I don't blow my foot off and wander into memory and security hell.
The very big problem low level has (which has always struck me as oddly defined, since my dad's version of low level was assembly and punch cards, you cowards) is that humans suck at memory management, and the classic low level langs of C and C++ don't really help. If we're talking about some app that has to be used by users and isn't just an internal only application, having memory errors isn't just "oh that's annoying", it's mostly what caused all the nightmares I mentioned from the past, and also the massive career ending security flaws that people have nightmares about.
I was sort of hoping we'd get their take on more modern low level languages like Rust, Zig, and Odin, which are actually trying to solve these problems, but they just stayed very vague about the whole thing. I've seen sooo much discussion and criticism of what they are or aren't doing right (most recently this, which, while I suspect it's way too purist and biased, does intrigue me with comments about https://www.idris-lang.org/ ), but it's hard to approach from the outside.
Sadly it doesn't seem that the rest of their blogs really dive into the issue either.
To be fair to the author, “ten years ago” would have been 2013, not 2005. I also think that you are probably thinking of a different kind of reliability from the author. No, our systems are not having faults, but our web applications are failing constantly in both big and small ways. Yesterday I logged into Hoopla and no images would load at all. Other times service interruptions cause an application to be unavailable to everyone on earth. But even in this context I think that things seem to be at least slightly more reliable today than they were a decade ago because most of those applications are now built on much more mature frameworks. I’m not entirely sure if that would have been the case in 2023 when this talk was given.
I honestly believe that one of the biggest problems in the programming meta when it comes to onboarding is the lack of having a singular authority on how things are done. The fact is that good tooling often does exist for low level programming! It’s just that it’s often hard to bring things together. For instance, I was interested in learning 6502 assembly a little while back and came across 8bitworkshop, a web based integrated suite of tools for writing software for retro computers that actively compiles code for you as you type and will let you poke into any piece of memory with extremely in-depth debugging tools. The hardest part was finding the actual documentation and lessons! It’s not as if they didn’t exist, it was that there wasn’t a comprehensive, authoritative source to look to. In the end I found a book from the 80s on archive.org that was supposed to teach Commodore 64 assembly to an audience of kids.
To be fair to the author, “ten years ago” would have been 2013, not 2005...
I’m not entirely sure if that would have been the case in 2023 when this talk was given.
2013 is 1 year into Windows 8, which yes was probably the FIRST time we really saw enterprise level stability start to matter.
IE end of life was 2010.
Windows 10, where they finally figured out they might need to maintain this thing, was 2015.
I will take the 2023 software and web world in a heartbeat over the "glorious" age of 2010-2015. I'd say that MAYBE by 2017 you're finally seeing diminishing returns on progress (again, from a coding performance perspective, not a shitty business practices one), but again I'm not willing to lay the blame of the "decline" of technology on people deciding that memory management is mostly a foot gun.
There's this belief that the people who made new reddit and decided to pull a framework that updates every single component on a message close would somehow have done better if they'd only used a lower level language.
It seems to ignore that those people were just as trendy/ill-trained and would've just copied whatever the current library or pattern was for C/C++, and lo and behold, we've got memory leaks/overflows and seg faults instead of a 2000ms update.
There are, and were, frameworks with a lot more reasonable control over your DOM, but there's also about 100x more people who know React than whatever one you're thinking of. Things are changing, coders are adopting better tools (again, raw JavaScript is basically a sin now), and we're seeing lots of talk about how, yeah, importing 38 gigs of framework for a CRUD app isn't really necessary.
But to me that's not "low level". That's just "please god let me control my application state", and some of that is coming from the fact that things like RAM are orders of magnitude higher than they were in low level days. If you want to do a fully functional immutable web app in WASM, go for it. It'll probably be more performant than some JS framework and 1000x easier to control and debug, but you're still not going to need to touch a pointer.
Another Randy fan here! It looks like his whole video archive is back (some of the older ones were inaccessible for a while), I'm glad that the entire journey is visible again. I found him through his pixel art video 6 years ago and I'm glad I did. Hopefully we'll get a version of Arcane (or whatever the name is at that point) eventually!
I think the concept of high level versus low level programming, aka "building", crosses different domains. The raising of what is considered "low level" seems to be just a consequence of progress.
For example, buying yeast in the store to make bread instead of making your own sourdough starter, buying flour in the store instead of milling your own. Building computers, building buildings, making clothes, etc.
A lot of things we build are built using components that are made of other components, even the building blocks of life and the universe.
Something we built with our own hands today, can be packaged up into a bundle in the future.
Is it better to know how the sausage is made? For some people, yes, knowing the nitty gritty of every "low level" component can allow fine tweaking and improvements in performance.
But sometimes, I just want to make a hot dog as quickly as possible. 🌭
Great write-up. Two thoughts:
There is a ""low level web framework"", it's HTMX. It fills in the gaps of HTML instead of paving over them. Imo, it can and should be used a lot more. Another thing is, HTML5 is pretty good today. You can make modals, popovers and such without (or not a lot of) JS.
Isn't the Rust community doing what they describe? You can make "low-level" applications using Rust desktop frameworks (and you can run most in the browser as well to boot).
I only skimmed the article and the comments, but it seems we professional programmers are in general agreement here. Setting arbitrary internal constraints on what you consider good and bad software is fun. I love quines.
The fact of the matter is Casey Muratori took something like 10 years to make an incomplete game with uninspired design. Jonathan Blow is another one of these dogmatic "if you don't write the compiler yourself you're an idiot" types. He bothers me in particular because the only forcing function that seems to work on getting to release literally anything ever is seeing the walls of his cash pile closing in as it dwindles. He's a genius puzzle designer and a religiously bigoted programmer.
When I play with code, I play with code. When I engineer a solution, I define as many constraints as possible, and deliver the best possible solution within those constraints.
It rings of the programmers that are paradoxically opposed to using LLMs to accelerate their work. We've been making newer and better tools since Turing. If you arbitrarily decide to not use every tool, you are intentionally producing sub standard solutions.
That said, I have a lot of empathy for those, inside and outside of industry, that are frustrated that Mr. Altman released a hell of a dangerous hammer and everyone seems to be looking for screws to nail in. Use the right tool for every job.
We in the Handmade community often bemoan the state of the software industry. Modern software is slow and bloated beyond belief—our computers are literally ten times more powerful than a decade ago, yet they run worse than they used to[...]
The Handmade crowd seems to think that low-level programming is the key to building better software. But this doesn’t really make sense on the surface. How is this practical for the average programmer? [...]
[...]
New Reddit exemplified this perfectly: collapsing a comment would dispatch a Redux action, which would update the global Redux store, which would cause all Redux-connected components on the page to update, which would cause all their children to update as well. In other words, collapsing one comment triggered an update for nearly every React component on the page. No amount of caching, DOM-diffing, or shouldComponentUpdate can save you from this amount of waste.
At the end of the day, I had to conclude that it is simply not possible to build a fast app on this stack. I have since encountered many web applications that suffer in exactly the same way. [...]
Thankfully, React+Redux is not the only possible software stack. We can choose alternatives at every point:
Together all these choices actually form a tree.[...]
[...]
So, to recap: the first reason we care about low-level is because low-level knowledge leads to better engineering choices. The second reason we care about low-level is because, in the long term, low-level knowledge is the path to better tools and better ways of programming—it is a requirement for building the platforms of the future.
But there is still one big problem with all of this: low-level programming today is absolutely terrible.
[...]
What then does this mean for “low-level”? The conclusion is inevitable: the reason we call things “low-level” is because they are terrible to use. They are “low-level” because we do not use them directly! Because we sweep them under the rug and build abstractions on top, they become this low level that we don’t want to touch anymore!
[...]
Low-level programming is not the goal unto itself. High-level programming—a new kind of high-level programming—is the goal, and low-level is how we get there.
Edit: I somehow managed to submit my post before I finished it. Also had another thought: Tildes is a really good example for this topic: It uses high level solutions, but little to no abstractions on top of them and as a result it's both high performance and (I imagine) easy to maintain.
I have never heard of Handmade, or its crowd, but I have a lot of sympathy for the author's sentiment. For decades the prevailing viewpoint has been something like "there is no such thing as too high level, and performance is irrelevant next to development velocity (outside of a few areas like gaming)". There's a comparison to the late stage capitalist mindset begging to be made here but I won't digress.
Whereas I've always been happy to waste time on performance, even when working at a high level. Which is a point I want to add: the most popular technologies were already very high level 20 years ago. Relative to that, going low level today kinda just means using proven high level technologies without a stack of frameworks. You don't have to write code in assembly, just don't pile frameworks on top of a mature high level scripting language where most of what you'd need the framework for is already pretty easy to accomplish. You don't need WASM (usually), just use HTML/CSS/JS without the frameworks. They are already, essentially, frameworks.
The Handmade crowd seems to think that low-level programming is the key to building better software. But this doesn’t really make sense on the surface. How is this practical for the average programmer? Do we really expect everyone to make their own UI frameworks and memory allocators from scratch? Do we really think you should never use libraries? Even if the average programmer could actually work that way, would anything actually improve, or would the world of software?
If that's true then the author's definition of an average programmer is very different from my own. Building your own UI "framework" just means building a UI. Having some of the decisions made for you in advance can speed up development a lot but average developers have been doing it themselves for decades.
Boy, is it depressing that knowing how to build a desktop application is considered to be “low level” these days. Heck, the way the author talks about it, JavaScript’s direct DOM manipulation APIs are low level! It’s very tempting to push back against this framing, but it’s one of those things that just kind of feels innately true at this point in time. Nobody seems to care about desktop applications at all at this point.
To be a bit fair "the browser is the desktop" was the goal all the way back in the days of Netscape, and just kinda....went off the rails.
It makes sense as it's the only cross platform protocol we've ever been forced to agree upon that has higher level functionality. It's one of the few convergence points where you can pass an instruction and KNOW how it will turn out on every device...or at least roughly. Browser/OS/Hardware/Chipset for once doesn't matter.
It's really that or containers, and of course wouldn't be necessary if we'd been able to agree on standards (or I guess have an even more dominant or state mandated monopoly), but so long as you've got about 1000x hardware/software combinations, having an agreed upon norm ANYWHERE is going to attract attention.
I will forever be disappointed that the browser as a universal platform failed in favor of mobile apps and walled gardens. It happened mostly because the big tech companies wanted it that way, Apple most of all: to this day they refuse to support standard APIs for device functionality in the browser, not because it's hard (everyone else has managed it) but because they want to hamstring apps that aren't in their ecosystem.
Kiiiiind of, but when you look closely at most of those mobile apps you will still notice that developers have just inverted the model and embedded a web app inside a “native” app. Really, the browser is still the universal platform, we just make people ship millions of stub browsers for no real reason.
“Failed” seems too strong. Sure, mobile apps are popular but there are also plenty of web apps. Anyone starting a new platform would love to be as successful as the web.
I would also say that there are a lot of applications I wish were available outside of web apps, such as the VIA configuration tool.
The big thing that mobile platforms offer over the web, both back then and today, is a vastly deeper and more complete toolchest to work with. Case in point, it's a considerable effort to set up a virtualized list view (where the only rows allocated to memory are those which are visible and get recycled as the user scrolls) on the web, but that's been one of the baseline widgets in UIKit since 2007 (and existed in a different shape in its desktop ancestor AppKit since the 1980s) — no third party libraries or bespoke code required. That alone can make the difference between an app feeling smooth and buttery or being a laggy mess when displaying a substantial amount of data (especially on lower power devices, like low end Android phones).
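For anyone curious what the "hard part" actually is, the core of a virtualized list is just windowing math: given the scroll position, figure out which handful of rows need real DOM nodes. A minimal sketch of that calculation (illustrative names, not any particular library's API):

```javascript
// Windowing math behind a virtualized list view. Only the rows
// intersecting the viewport get real DOM nodes; the rest is simulated
// with a single tall spacer element. "overscan" renders a couple of
// extra rows on each side so fast scrolling doesn't flash blank space.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last, count: last - first + 1 };
}

// 100k rows, 30px each, 600px viewport, scrolled to row 5000:
const r = visibleRange(5000 * 30, 600, 30, 100000);
// r.count is 25 regardless of totalRows — that's the whole trick.
```

Everything else (the spacer, absolutely positioned rows, node recycling) hangs off that calculation, which is why it's a little absurd that web devs have to build or import it themselves.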
I don't see web overtaking native entirely until that's changed and "bring your own everything" is no longer the dominating web development mindset.
Right, however the reason that you need to use a native app to get access to those APIs isn't that there's a technical reason those APIs can't be exposed to a web app, it's that the platforms choose not to expose them in order to maintain their profits.
They of course claim it's for security, and that's not completely disingenuous, but it's not the dealbreaker they make it out to be. The key motivation is to make sure that web apps can never feel native and therefore can't compete with app/play store apps.
There are a few native APIs like that, but continuing with the virtualized list widget example, there aren’t actually any blockers. It comes down to browser vendors electing not to add such a thing for web devs to use.
Nearly all of the most valuable bits of native UI frameworks could be made a stock part of the browser environment, but Google etc have decided that frills like GPU and USB access are what should be prioritized, despite only being used for a tiny fraction of web apps.
Agreed, they could be made available. In case it's not already implied: The browser vendors and the platforms are largely the same companies. In the case of iOS that is still strictly enforced at the engine level everywhere except the EU. Meanwhile Chrome has around 70% of the mobile market share globally (higher among android users).
Yep, but if you look at where Apple has been focusing their energy in WebKit (Safari engine), it’s been more aligned with the “make web fundamentals more capable” angle. They’ve been doing a lot of CSS work in recent years for example.
So despite the engine restriction on iOS, if Google and/or Mozilla were to make a push to build a more complete set of standard UI widgets for the web, Apple would probably follow. This has yet to happen however as Google continues to chase features that are somewhat esoteric in comparison.
Years ago, Sun Microsystems released the VHDL source code for their UltraSPARC Niagara chips, which were offering something us computer plebs could only dream of: a computer capable of running 64 threads at a time. They claimed they were doing it for the future of commodity computers; some day we would have this at home! Imagine all of the things we could make our computers do!
Turning the browser into the platform feels like a step back from that dream. We have processors that easily outperform that design, but instead of making massively parallel applications to take advantage of them, they take up so much RAM that our massively overpowered systems still can’t handle them all at once. And it feels like when something happens that could make things faster or more efficient, it’s a guessing game when or even if any given application will ever see it.
But there is no use in crying over the death of a future that didn’t happen.
In watching videos on Windows development history, it's crazy to see that Microsoft has been trying to use HTML/CSS for desktop UI since the 90s.
Yeah we kinda stopped with Java didn't we?
imo that's always been a shame cause I HATE Java but I recognize the greatness of what it did: create a package for the code so that you can ship applications easier. It was huge at the time.
But like, everything lives in the cloud now. Hell, I think with Win11 if you can't connect to the internet you can't even log in or some shit. How do you go back to low level software in that environment?
I suspect I will not be a fan of this, thoughts as I go:
This is such a rose tinted view of things. I've had one blue screen in the last year. In 2005 alt tab was still a coin flip between "quickly check something else" and "catastrophic failure". Yes, we 100% use memory inefficiently, probably because Moore's law was still kicking around so "make it efficient" hasn't mattered as much as "make it work", but these hyperboles get under my skin soooo much. Was I the only one playing family and friend tech support as people walked off cliffs on a daily basis, or got used to saying "Fuck it, nuke the machine" as a fairly common recovery method for "shit just getting worse with time" caused by a pile of horrible practices, designs, and other nonsense?
I could maybe let it slide if it was just about power users, but no, the actual user experience has not "declined" by any meaningful metric related to software optimization. All sorts of shitty business practices are a problem, sure, but if we had perfectly optimized websites you're still not solving the fact that they want to harvest my data, force me to subscribe, and use AI (and hey at least popups and toolbars are MOSTLY dead and they finally put down flash and IE6....fuck HOW do you think things were better during IE6?!).
You know, the truckla example was more accurate than I think they understood. The problem wasn't the stack (or maybe, just the stack) the problem WAS THE GOAL.
The goal of new reddit is NOT FUNCTIONALITY. The goal is engagement and time on page. These "problems" were acceptable costs to instead get the PILE of react + redux developers that were (are?) floating around and make something that maximizes for advertisers and investors. They did not want to build an efficient car, they wanted to copy paste something that "worked" and change the color.
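For what it's worth, "updates every single component" is a choice the stack makes easy, not something inherent to the DOM. The usual escape hatch is a shallow props comparison so components whose inputs didn't change bail out before rendering. A toy sketch of the idea (hypothetical names, not any real framework's API):

```javascript
// Shallow comparison: same keys, same values by reference/identity.
// This is the core idea behind memoized components.
function shallowEqual(a, b) {
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k) => a[k] === b[k]);
}

let renders = 0;
function renderComment(prevProps, nextProps) {
  // If nothing the component depends on changed, skip the work entirely.
  if (prevProps && shallowEqual(prevProps, nextProps)) return "skipped";
  renders++;
  return "rendered";
}

const before = { id: 1, text: "hello", collapsed: false };
renderComment(null, before); // initial render
const after = { id: 1, text: "hello", collapsed: false };
const result = renderComment(before, after); // closing some other comment
// result === "skipped", renders === 1
```

Closing one comment re-rendering two thousand others means nobody wired up this check (or the props churn identity on every update), which is a training problem, not a stack problem.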
And I think this gets back to a bigger problem. Coding is an industry, not an art. Some engineer out there can make an absolutely perfect and elegant mechanism to solve a problem, but the vast majority of people in construction aren't engineers.
The industry isn't going to move towards higher skilled bespoke work. There will always be more McDonalds than family owned/5 star unique restaurants. Any solution has to start at something that's trivial to replicate because the vast majority of coders in the work force aren't actually that great at coding.
I get that, in essence, I basically agree with this person. I just dislike how the argument is always framed. I have HATED the idea of doing any front end in JS. I've been diving in on things like htmx/datastar/webassembly because it's such an ugly pointless overhead to use JS (and I think at least 2 of those are a wonderful refutation to his claim that people at the higher level can't make meaningful improvements)
But....that's me as "that one F# guy". If you learned JS...well it's probably "good enough" for 90% of what you're doing, and it's not a bad choice to learn. Like python, it's extremely well supported, and unlike python, it's also what you're going to code your frontend in anyways. Not having to switch tech stacks for the entire app is extremely appealing (if insanely frustrating when it's something as fundamentally flawed as JS, even with things like typescript finally emerging out of the radioactive wasteland).
Amen. It's not just low level programming or linux. There's a lot of genres of...things (games, media, skills, etc) that have HORRIFIC onboarding problems. Often reinforced by a community that tends to be proud of that fact and see it as a useful filter rather than the most horrific pain in the ass. Doubly so because there's always some loud portion that are MASSIVE hypocrites. The number of people i've watched high five themselves after posting some snarky RTFM response despite absolutely only being where they are because they found meaningful tutors and help on their journey is disturbingly high.
I find this part very interesting because I stumbled across someone named randy years ago and would occasionally tune into his journey through a similar goal. I think it shows a lot of the pitfalls and sidequests one can easily wind up lost in, and a more realistic view of what will likely happen to someone following in handmade's footsteps.
There's a LOT to discuss there on all sides, but I think it's fair to point out that Randy didn't release jack and or shit until he finally stopped fucking around with reinventing inverse kinematics and just used unity.
Okay...so I have this issue with this entire article...where's the line on low level? We get a lot of flowery talk about how high level frameworks get all these nice tooling and documentation features, but the author doesn't even seem to mention an example of a low level language, or the things that have been going on in that space, or some of the VERY real problems that aren't tooling related depending on where you draw the line.
Again, i'm F# guy, and I love it. It's in a very nice middle ground that lets me do basically whatever I want. I have never had a problem where i've thought "gosh I really should be managing my memory manually" because I don't live in the kind of world where that performance matters. So am I "low level" because i'm not doing my apps in javascript and am actually writing exe's, or am I still high level because i'm letting a garbage collector make sure I don't blow my foot off and wander into memory and security hell.
The very big problem low level has (which has always struck me as oddly defined since my dad's version of low level was assembly and punch cards, you cowards) is humans suck at memory management, and the classic low level langs of C and C++ don't really help. If we're talking about some app that has to be used by users and isn't just an internal only application, having memory errors isn't just "oh that's annoying", it's mostly what caused all the nightmares I mentioned from the past, and also the massive career ending security flaws that people have nightmares about.
I was sort of hoping we'd get their take on more modern low level languages like rust, zig, and odin which are actually trying to solve these problems, but they just stayed very vague about the whole thing. I've seen sooo much discussion and criticism of what they are or aren't doing right (most recently this which while I suspect is way too purist and biased, does intrigue me with comments about https://www.idris-lang.org/ ), but it's hard to approach from the outside.
Sadly it doesn't seem that the rest of their blogs really dive into the issue either.
To be fair to the author, “ten years ago” would have been 2013, not 2005. I also think that you are probably thinking of a different kind of reliability from the author. No, our systems are not having faults, but our web applications are failing constantly in both big and small ways. Yesterday I logged into Hoopla and no images would load at all. Other times service interruptions cause an application to be unavailable to everyone on earth. But even in this context I think that things seem to be at least slightly more reliable today than they were a decade ago because most of those applications are now built on much more mature frameworks. I’m not entirely sure if that would have been the case in 2023 when this talk was given.
I honestly believe that one of the biggest problems in the programming meta when it comes to onboarding is the lack of having a singular authority on how things are done. The fact is that good tooling often does exist for low level programming! It’s just that it’s often hard to bring things together. For instance, I was interested in learning 6502 assembly a little while back and came across 8bitworkshop, a web based integrated suite of tools for writing software for retro computers that actively compiles code for you as you type and will let you poke into any piece of memory with extremely in-depth debugging tools. The hardest part was finding the actual documentation and lessons! It’s not as if they didn’t exist, it was that there wasn’t a comprehensive, authoritative source to look to. In the end I found a book from the 80s on archive.org that was supposed to teach Commodore 64 assembly to an audience of kids.
2013 is 1 year into windows 8, which yes was probably the FIRST time we really saw enterprise level stability start to matter.
IE end of life was 2010.
Windows 10, where they finally figured out they might need to maintain this thing, was 2015.
I will take the 2023 software and web world in a heartbeat over the "glorious" age of 2010-2015. I'd say that MAYBE by 2017 you're finally seeing diminishing returns on progress (again, from a coding performance perspective, not a shitty business practices one), but again I'm not willing to lay the blame of the "decline" of technology on people deciding that memory management is mostly a foot gun.
There's this belief that the people who made new reddit and decided to pull a framework that updates every single component on a message close would somehow have done better if they'd only used a lower level language.
It seems to ignore that those people were just as trendy/ill trained and would've just copied whatever the current library or pattern was for C/C++ and lo and behold we've got memory leaks/overflows and seg faults instead of a 2000ms update.
There are, and were, frameworks with a lot more reasonable control over your dom, but there's also about 100x more people who know react than whatever one you're thinking of. Things are changing, coders are adopting better tools (again javascript raw is basically a sin now), and we're seeing lots of talk about how yeah importing 38 gigs of framework for a CRUD app isn't really necessary.
But to me that's not "low level". That's just "please god let me control my application state" and some of that is coming from the fact that things like RAM are orders of magnitude higher than they were in low level days. If you want to do a fully functional immutable web app in WASM, go for it. It'll probably be more performant than some JS framework and 1000x easier to control and debug, but you're still not going to need to touch a pointer.
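And "please god let me control my application state" doesn't take much machinery. Here's a minimal sketch of the reducer pattern with one immutable state object and explicit subscribers, no framework assumed (names are illustrative):

```javascript
// One state object, one pure update function, explicit subscribers.
// The old state is never mutated, so time-travel debugging and
// "what changed?" questions stay trivial to answer.
function createStore(reducer, initial) {
  let state = initial;
  const listeners = [];
  return {
    getState: () => state,
    subscribe: (fn) => listeners.push(fn),
    dispatch: (action) => {
      state = reducer(state, action); // replace, don't mutate
      listeners.forEach((fn) => fn(state));
    },
  };
}

const store = createStore(
  (s, a) => (a.type === "inc" ? { ...s, count: s.count + 1 } : s),
  { count: 0 }
);
store.dispatch({ type: "inc" });
store.dispatch({ type: "inc" });
// store.getState().count === 2
```

That's the whole "state management" core that the heavyweight libraries wrap, and it runs identically whether the UI layer is a framework, raw DOM calls, or WASM.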
I’m gonna have to actually read & reply to this later, but I also watched Randy for a while! Relevant mention to this post for sure.
Another Randy fan here! It looks like his whole video archive is back (some of the older ones were inaccessible for a while), I'm glad that the entire journey is visible again. I found him through his pixel art video 6 years ago and I'm glad I did. Hopefully we'll get a version of Arcane (or whatever the name is at that point) eventually!
I think the concept of high level versus low level programming, aka "building", crosses different domains. The raising of what is considered "low level" seems to be just a consequence of progress.
For example, buying yeast in the store to make bread instead of making your own sourdough starter, buying flour in the store instead of milling your own. Building computers, building buildings, making clothes, etc.
A lot of things we build are built using components that are made of other components, even the building blocks of life and the universe.
Something we built with our own hands today, can be packaged up into a bundle in the future.
Is it better to know how the sausage is made? For some people, yes, knowing the nitty gritty of every "low level" component can allow fine tweaking and improvements in performance.
But sometimes, I just want to make a hot dog as quickly as possible. 🌭
Eeyup, as Tanya Reilly put it so beautifully, "everyone's backend is someone's front end"
Great write-up. Two thoughts:
I only skimmed the article and the comments but seems us professional programmers are in general agreement here. Setting arbitrary internal constraints on what you consider good and bad software is fun. I love quines.
The fact of the matter is Casey Muratori took something like 10 years to make an incomplete game with uninspired design. Jonathan Blow is another one of these dogmatic "if you don't write the compiler yourself you're an idiot" types. He bothers me in particular because the only forcing function that seems to work on getting to release literally anything ever is seeing the walls of his cash pile closing in as it dwindles. He's a genius puzzle designer and a religiously bigoted programmer.
When I play with code, I play with code. When I engineer a solution, I define as many constraints as possible, and deliver the best possible solution within those constraints.
It rings of the programmers that are paradoxically opposed to using LLMs to accelerate their work. We've been making newer and better tools since Turing. If you arbitrarily decide to not use every tool, you are intentionally producing sub standard solutions.
That said, I have a lot of empathy for those, inside and outside of industry, that are frustrated that Mr. Altman released a hell of a dangerous hammer and everyone seems to be looking for screws to nail in. Use the right tool for every job.
From the blog post:
[...]
[...]
[...]
[...]
Edit: I somehow managed to submit my post before I finished it. Also had another thought: Tildes is a really good example for this topic: It uses high level solutions, but little to no abstractions on top of them and as a result it's both high performance and (I imagine) easy to maintain.
I have never heard of Handmade, or its crowd, but I have a lot of sympathy for the author's sentiment. For decades the prevailing viewpoint has been something like "There is no such thing as too high level and performance is irrelevant next to development velocity (outside of a few areas like gaming)". There's a comparison to the late stage capitalist mindset begging to be made here but I won't digress.
Whereas I've always been happy to waste time on performance, even when working at a high level. Which is a point I want to add: The most popular technologies were already very high level 20 years ago. In the current climate, going low level kinda just means using proven high level technologies without a stack of frameworks. You don't have to write code in assembly, just don't pile frameworks on top of a mature high level scripting language where most of what you'd need the framework for is already pretty easy to accomplish. You don't need WASM (usually), just use HTML/CSS/JS without the frameworks. They are already, essentially, frameworks.
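As a concrete (if toy) example of the framework-free approach: for a lot of CRUD-style UI, plain template literals get you most of what a templating framework provides. A sketch, with hypothetical names; note the one bit of plumbing you genuinely have to write yourself is escaping, or you've reinvented XSS:

```javascript
// Escape the five HTML-special characters before interpolating
// untrusted strings into markup.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c])
  );
}

// "Templating framework" in one line per view: a function from data to markup.
function renderTodos(todos) {
  return `<ul>${todos.map((t) => `<li>${escapeHtml(t)}</li>`).join("")}</ul>`;
}

const html = renderTodos(["buy milk", "ship <app>"]);
// In the browser you'd then do:
//   document.querySelector("#todos").innerHTML = html;
```

Rebuilding a subtree's innerHTML on every change is less surgical than a virtual DOM diff, but for small views it's fast, debuggable, and dependency-free.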
If that's true then the author's definition of an average programmer is very different from my own. Building your own UI "framework" just means building a UI. Having some of the decisions made for you in advance can speed up development a lot but average developers have been doing it themselves for decades.
It's a fascinating change that has happened in software development, really pretty recently: What used to be called high level is now frighteningly opaque to people who learned only on frameworks stacked on top of the old high level. When people talk about which tech stack to use for an app, increasingly they're talking about which 3rd party frameworks and solutions to use, rather than which core programming languages and technologies to use. React/redux may be particularly bad, in terms of performance, but they're not singularly bad. To a large degree it's just a problem inherent to extreme levels of abstraction.
I think there are two core reasons why this has happened. The first is corporate software. Software as a massive revenue driver rather than software as a way to solve problems and do cool things. In that environment, developers are a business tool. You want them to be as easily replaceable as possible, you want quick onboarding. Frameworks are great for that, corporate app development loves frameworks, and so that's what the job market looks like and that's what people learn.
There's not actually anything hard about the underlying technologies, the post talked a lot about web app technologies and there, underneath the frameworks, you have very simple technologies like the aforementioned HTML, CSS and Javascript. They're not hard to understand for people with even a little bit of an engineering mindset. Underneath those you have scripting languages like Ruby, Python, PHP and Node (javascript again, or typescript). Those languages are designed to be approachable. They're very high level.
Which leads to the second reason: programming became a career you got into because there was high demand and you made a lot of money, rather than because you had a natural affinity for digital technology. It's possible that "average programmer" increasingly refers to someone that never had any natural skills for coding, just an appreciation for 6+ figure income. No shade to people who choose a career for the compensation, but I think it's part of why using built in functionality of already high level languages suddenly looks like crawling into the ancient, unfriendly depths of the digital underworld.
I completely agree that understanding the low levels is important, should even be considered vital. But we can do much of the real work at a relatively high level, using proven technologies, and still solve a lot of the software quality and performance problems.
And now, all of a sudden, we have these new tools in the form of AI Agents, that make working at lower levels even easier. It's never been easier to learn about the layer underneath the stack level you're comfortable with. And there's a really high chance that, after a little bit of learning curve, you'll find that in many ways it's easier than the higher abstractions were.