I'd argue the problem goes even deeper than this post suggests. Languages where you don't control memory (for example) might seem like an efficiency boost, but the drawback is you can't optimise for memory access patterns, which makes programs slow.
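To make that concrete, here's a rough Python sketch of what "optimising for memory access patterns" means (the Particle class and the timing harness are made up for illustration, not a benchmark): the same data can live as a list of boxed objects or as contiguous parallel arrays, and only a language that exposes layout lets you reliably pick the latter.

```python
import array
import time

N = 1_000_000

# "Array of structs": one heap-allocated Python object per element.
class Particle:
    __slots__ = ("x", "y", "vx", "vy")
    def __init__(self, i):
        self.x, self.y, self.vx, self.vy = float(i), 0.0, 1.0, 1.0

aos = [Particle(i) for i in range(N)]

# "Struct of arrays": one field stored as a contiguous C array of doubles.
soa_x = array.array("d", (float(i) for i in range(N)))

t0 = time.perf_counter()
total_aos = sum(p.x for p in aos)   # a pointer chase per element
t1 = time.perf_counter()
total_soa = sum(soa_x)              # a walk over contiguous doubles
t2 = time.perf_counter()

assert total_aos == total_soa       # same answer, different access pattern
print(f"list of objects: {t1 - t0:.3f}s  contiguous array: {t2 - t1:.3f}s")
```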
Operating systems have ballooned in complexity to the point where even drawing a pixel on the screen requires so much hoop-jumping, usually in the name of security, yet none of the mainstream operating systems were designed with security in mind. We just assume that security can be patched in when a vulnerability is found, which has been proven wrong over and over.
These two together incentivise more people to use libraries where the hoop-jumping is done for you. On the surface this seems like a good workaround, but it has the downside that the libraries aren't optimised for your use-case. They are general-purpose, so they are less efficient than a custom solution would be.
We fundamentally need to rethink our approach to software design. We need languages which make it easy to write efficient code. We need lighter operating systems with security built in. We need education structures which teach how to make these things in the first place, rather than conditioning people to think "That's too hard, it'll never happen".
Anecdotally, I'm a cybersecurity engineer who had to build a SIEM using Kubernetes because Wazuh (written in Python/C) couldn't keep up with the data ingestion. It was a disaster of a project. All of that work could have been avoided if Wazuh were more efficient. Instead we went from Linux running Wazuh, to Linux running Kubernetes running containerised Wazuh, hoping it would somehow be faster.
We can do so much better but the problem is so vast.
Liveness is a global property, and systems without pervasive automatic memory reclamation either develop ad-hoc schemes for managing fragmentation and liveness generically or else don't (the latter condition is worse!). Corollarily, pervasive automatic memory reclamation neither absolves the programmer of their responsibility to think about how they use memory nor prevents them from reasoning about it; it simply is a qualitatively different model with different characteristics. Which characteristics are not asymptotically worse than any other approach (indeed, they may be asymptotically optimal); the constant factors, as always, are a give and take; the benefits to modularity and reasonability are not to be dismissed.
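For a toy illustration of liveness being global, using CPython's hybrid scheme as the example: reference counts are purely local information, and they are exactly what fails on a dead cycle; reclaiming it requires a global traversal. A sketch:

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.peer, b.peer = b, a   # a reference cycle

del a, b                # the cycle is now dead, but each refcount is still 1:
                        # no amount of local information will reclaim it

found = gc.collect()    # the cycle collector traces globally from the roots
print(f"cycle collector found {found} unreachable objects")
```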
Improving matters necessarily entails disempowering programmers in order to enable users. The responsibility of managing memory is not one that programmers should be entrusted with. You keep mentioning security, yet systems that do not have unstratified, pervasive capability safety necessarily lead to monolithic, structurally insecure code, and some form of automatic memory management is necessary to realise that. In this respect, the browser is a marked improvement over classic unix. The cultural problems are, well, cultural—not technical. (Of course the browser has scads of technical problems too, but it's still a big improvement. Tug on the Arcan thread a bit for some treatment of the social problems.)
Abstraction is not inherently slow. Contrariwise, at the meta level, designing systems to pervasively express abstract, high-level interfaces forces the development of implementation strategies for managing the performance of those interfaces, and at the object level, abstract, high-level interfaces are far more optimisable than concrete, low-level ones, because they give the implementations of those interfaces more freedom.
A mediocre example is DBMSes (there are not any good examples, principally due to the perverse incentive structures underlying the development of optimising compilers). There are two salient points. First is, of course, that query planners in relational databases are capable of very sophisticated optimisations that traditional compilers tend not to do. (They are sometimes accused of having opaque and uninterpretable performance models. A reasonable complaint. It can be solved.)
Second, and more interesting: some aspects of the way user-initiated optimisation works. Query plan hints and indices are non-normative in that they do not affect the behaviour of the code; they are hints. The specification of the behaviour of the code is divorced from the specification of how it should be efficiently executed, and the two can be analysed independently. (Having to rewrite your query when you update your database and trying to guess how to get the fast query plan again, on the other hand, sucks. Again, mediocre example.) Having to mangle your semantics in order to get good performance is a deplorable state. The limiting case of this gets us to the utopia indicated by Fran Allen (inventor of optimising compilers), Don Knuth, Dan Bernstein, and other luminaries, in which optimisation is a collaborative process between the environment and the user.
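The index point can be shown with nothing but the stdlib. In the sketch below (sqlite3, in-memory, made-up table), the SELECT is the behavioural specification; adding an index changes the plan, not the query or its results.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER, val TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?)",
               ((i, f"v{i}") for i in range(10_000)))

query = "SELECT val FROM t WHERE id = 4242"

# Before: the planner has no choice but a full table scan.
print(db.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Non-normative change: the query text and its result set are untouched.
db.execute("CREATE INDEX idx_t_id ON t (id)")

# After: same query, same answer, different (faster) plan.
print(db.execute("EXPLAIN QUERY PLAN " + query).fetchall())
```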
In particular, this is dependent upon the correct framing of optimisation: it is a state-space search. (Which framing has been neglected by extant compilers, which is why they are all overly complicated and bad at optimising.) From among a set of programs (well, the 'program' framing is wrong, but leaving that aside) equivalent to some source program, which is the fastest? Then both the machine and the human can propose steps through the state space.
But there is an asymmetry. A single path through the state space will not be too long, and if one is proffered as a hint, it is not difficult to follow it and see where it leads. However, the state space, taken as a whole, is impossibly large (what is contained in the space is a design choice, but it should be chosen to be as large as possible—though not with redundant states, of course—because otherwise it is unlikely to contain what we seek!), and therefore searching it exhaustively is impossible. So we are eternally bound to search only what can fit in our resource budgets (time and space), and are eternally bound to search by imperfect heuristic.
The first-class representation of high levels of abstraction allows searching at the level of those abstraction layers. In other words, if you arrange to tell the optimiser, via a common vocabulary, what your domain concerns are, then it can optimise pursuant to those domain concerns. That common vocabulary? High-level abstractions.
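Here is a deliberately tiny sketch of that framing (rules, costs, and representation are all made up): states are expression trees, edges are semantics-preserving rewrites, and "optimisation" is a search for the cheapest reachable state.

```python
import heapq

# Expressions: ("mul", a, b), ("add", a, b), ("var", name), ("const", n)
COST = {"mul": 5, "add": 1, "var": 0, "const": 0}

def cost(e):
    return COST[e[0]] + sum(cost(c) for c in e[1:] if isinstance(c, tuple))

def rewrites(e):
    """Yield expressions equivalent to e (one rewrite step away)."""
    op = e[0]
    if op == "mul" and e[2] == ("const", 2):
        yield ("add", e[1], e[1])              # strength reduction: x*2 -> x+x
    if op in ("mul", "add"):
        yield (op, e[2], e[1])                 # commutativity
        for i in (1, 2):                       # rewrite inside subterms
            for sub in rewrites(e[i]):
                yield tuple(sub if j == i else c for j, c in enumerate(e))

def optimise(expr):
    """Dijkstra-flavoured search over the rewrite graph (exhaustive here;
    budgeted and heuristic in real life)."""
    best, seen = expr, {expr}
    frontier = [(cost(expr), expr)]
    while frontier:
        c, cur = heapq.heappop(frontier)
        if c < cost(best):
            best = cur
        for nxt in rewrites(cur):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost(nxt), nxt))
    return best

expr = ("mul", ("var", "x"), ("const", 2))
print(optimise(expr))   # -> ('add', ('var', 'x'), ('var', 'x'))
```

In real life the space is far too big to enumerate, hence budgets and heuristics, and hence the value of hints: a human-proposed path through the space is cheap to check.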
(Edit: given only low-level interfaces, a good optimiser or analyser will be forced to analyse both their implementations and their users in order to recover information about what aspects of the interface are actually necessary to express the essential behaviour of the code and which are incidental, and then strip out the latter. Whether it is a good idea to express those aspects of the interface formally or only informally—and if formally, whether semantically or not (cf. 'intrinsic/extrinsic' or 'Church/Curry' types)—is a language, program, and environment design question I will not treat here; but the point is that deliberately eschewing them in the name of performance is counterproductive unless you want to be stuck in a local optimum doing piddling optimisation work, like I did on this binary search routine in assembly that extant optimising compilers are as yet structurally incapable of recreating, regardless of how the source is massaged.)
Edit: I do agree with this:
We need education structures which teach how to make these things in the first place, rather than conditioning people to think "That's too hard, it'll never happen".
Absolutely, 100%. And where that starts is with the disempowerment of programmers to empower users that I mentioned above. Because everybody starts as a user. And users should have the understanding that, any code they can run, they can also change, or ask the system questions about. The browser is lovely for this—well, sort of. It got worse with minified JS and autogenerated HTML, and worse again with WASM. But in principle, I can open an inspector window and change the content of a page, and ask all sorts of questions about it. And I can write userscripts. I can do the same for code that runs outside the browser, but it's much harder and more prohibitive. Because code that runs in the browser is constrained to interact only according to the high-level interface exposed by the browser. (The bigger structural problem with the browser from this angle is stratification—notwithstanding Gary Bernhardt, the browser doesn't run inside of the browser, so the user is not empowered to change the browser itself.)
Operating systems have ballooned in complexity to the point where even drawing a pixel on the screen requires so much hoop-jumping, usually in the name of security, yet none of the mainstream operating systems were designed with security in mind. We just assume that security can be patched in when a vulnerability is found, which has been proven wrong over and over.
This happens at the hardware level too, where processor chips have internal operating systems that expose instruction sets that are different from their actual internal operations.
Abstractions are a good thing: they are necessary so that different components can evolve independently. Otherwise we are back to the era of software that only works on one computer model and uses all of its resources, or worse, an era where a single vendor controls the hardware, OS, user-space software, and programming languages.
All of that work could have been avoided if Wazuh were more efficient
Maybe, maybe not. You cannot optimize all use cases at the same time. A more efficient Wazuh might be missing features you needed.
I work building software, and I hate the culture of "ease of development is the most important metric, let's just throw more hardware at the problem" from some languages/frameworks, but ease of development is indeed another cost/risk you need to juggle along with performance, scalability, licensing, etc.
Abstractions are a good thing: they are necessary so that different components can evolve independently
I'd push back on this slightly. I'm not against abstractions, they absolutely have benefits, but abstractions also have downsides when extrapolated beyond their limits. It's the reason why game and embedded-systems programmers tend to avoid OOP, even though OOP leans so heavily on abstraction.
We can do so much better but the problem is so vast.
It is not just that the problem is so vast; everything is so interwoven that it has become impossible to move away from the existing ecosystem.
I played around with various minimal/retro-style/alternative systems, like phones, laptops and workstations. And while it can be fun, there is no way you could ever use something different from the mainstream options in your daily life, because otherwise your life instantly becomes incredibly hard.
You cannot work in isolation; you have to connect and interoperate with others. That means you have to import or wrap their mess for a part of your system. Since you are working with a niche system, you either can't do that, because no-one has written that software, or you have to settle for something of barely-tested hobby quality.
So no-one is seriously going to do that, because you'd just be inviting a bunch of problems into your life with no immediate benefit, and those around you don't have the patience to put up with your struggles.
The performance and resource consumption trade-off for automatic (or mostly automatic) memory management doesn’t have to be all that steep. For example, Swift is nearly fully automated with ARC and can be plenty performant even on smartphone hardware old enough to be on the same performance tier as late-00s laptops. From what I understand Rust can be written this way too, with similar or better performance in most circumstances.
But these sorts of languages aren’t the ones that we see used everywhere. Instead the languages that dominate are those that allow pushing to production as quickly and with as little friction as possible. All that matters is the ability to churn out new builds at maximum speed, regardless of the quality of said builds — pumping out garbage on a daily basis is preferred to delivering quality on a weekly or monthly basis.
Languages where you don't control memory (for example) might seem like an efficiency boost, but the drawback is you can't optimise for memory access patterns, which makes programs slow.
Can you elaborate? This doesn't seem true. I can write an ECS in any language.
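For what it's worth, here's what a minimal ECS core looks like in Python (names are made up; a sketch, not a library). The catch the parent comment is pointing at: you can write this shape in any language, but whether the component stores are actually contiguous, cache-friendly memory underneath is up to the runtime.

```python
import array

class World:
    def __init__(self):
        self.next_id = 0
        self.pos_x = array.array("d")   # position component, struct-of-arrays
        self.pos_y = array.array("d")
        self.vel_x = array.array("d")   # velocity component
        self.vel_y = array.array("d")

    def spawn(self, x, y, vx, vy):
        self.pos_x.append(x); self.pos_y.append(y)
        self.vel_x.append(vx); self.vel_y.append(vy)
        eid, self.next_id = self.next_id, self.next_id + 1
        return eid

def movement_system(w, dt):
    # Iterate the component arrays directly; no per-entity objects.
    for i in range(len(w.pos_x)):
        w.pos_x[i] += w.vel_x[i] * dt
        w.pos_y[i] += w.vel_y[i] * dt

w = World()
w.spawn(0.0, 0.0, 1.0, 2.0)
movement_system(w, 0.016)
print(w.pos_x[0], w.pos_y[0])   # 0.016 0.032
```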
Any Turing-complete language can write whatever you want, but how much do you have to "fight" the language?
If you're delicately manipulating memory in Python, for example, you're clearly going against the design of the language.
https://docs.python.org/3/c-api/memory.html
Edit: Found a better quote in the link:
It is important to understand that the management of the Python heap is performed by the interpreter itself and that the user has no control over it, even if they regularly manipulate object pointers to memory blocks inside that heap.
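A quick, CPython-specific illustration of that quote (in CPython, id() happens to be the object's address): even a plain list of floats is an array of pointers to boxed objects, placed wherever the interpreter's allocator chose.

```python
# A list stores pointers, not the floats themselves, and the boxes land
# wherever the allocator put them -- not a layout you chose.
values = []
for i in range(5):
    values.append(i * 1.5)
    _churn = [object() for _ in range(100)]   # disturb the allocator between boxes

addrs = [id(v) for v in values]               # id() is the address in CPython
print([hex(a) for a in addrs])
print("gaps:", [b - a for a, b in zip(addrs, addrs[1:])])
```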
Modern JIT compilers make it possible to write code in languages like JavaScript, which still looks high-level and doesn't directly manipulate memory, but gets compiled to something efficient.
The problem is that it's hard to know when said code is being compiled to something efficient, because JIT compilers are very complicated, and the smallest seemingly-irrelevant change can suddenly disable a lot of optimizations. For instance, in V8, calling a JavaScript function with more than 4 types of objects will prevent it from being optimized at all; there are a lot of these subtle things which cause the compiler to "give up", and it can be hard to figure out why your code suddenly runs slower (static languages do have their own "subtle things" which make your code slower, like branch misprediction and cache misses, but there are a lot fewer of them).
Another issue doesn't relate to the language itself but to its community: people who write in low-level languages are generally more aware of the performance characteristics of their code, while people who write in very-high-level languages are not. So the libraries you use in C and Rust may be more optimized, and explicitly document which functions are expensive and which are efficient, while some libraries in JavaScript over-use abstractions and design patterns which make it extra hard for the JIT compiler to optimize them. But as mentioned, this isn't intrinsic, just statistical: there are poorly-written, inefficient C libraries and highly-optimized JavaScript libraries.
Another factor is the robustness of a language’s standard library. Each feature a language comes standard with is one more dependency and chain of subdependencies that doesn’t need to get pulled in.
JavaScript for example is famously spartan which means you’re going to need a pile of libraries for just about any nontrivial project, whereas newer languages tend to be much better-equipped with facilities for most common tasks.
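To sketch the batteries-included point with Python's stdlib: an HTTP fetch plus JSON parsing with zero third-party dependencies (the URL is a placeholder).

```python
import json
import urllib.request

# Fetch and parse JSON using only the standard library.
with urllib.request.urlopen("https://example.com/data.json") as resp:
    payload = json.load(resp)
print(payload)
```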
Assuming there are other people who buy the author’s prescription here: What are your top picks for “suckless”-spirited software? (I.e. anything with some intersection of performant/hackable/composable/single-binary/portable design.)
My number one that comes to mind is Caddy, followed by the Cosmopolitan APE suite. Honorable mention to BusyBox and CyberChef. Navidrome too maybe, but not sure what the dependency chain looks like - it’s just portable/hackable/performant. Any others?
https://caddyserver.com/
https://cosmo.zip/
https://busybox.net/
https://github.com/gchq/CyberChef
https://github.com/navidrome/navidrome
I can't speak to the code itself, but purely as deployment complexity, compare the unofficial Vaultwarden with the official Bitwarden.
You can run Vaultwarden as a single process with a SQLite database, or connect it to an external database. Maybe a reverse proxy fronting it. 3 containers max.
The official Bitwarden deployment requires:
Database
Proxy
API
Admin
Web
Attachments
Identity
SSO
Icons
Notifications
Events
We spend too damn much time as a society re-implementing LDAP, Kerberos, and OpenSSH in dozens of different ways.