Explain Linux controversies to me
I'm one of those mythical Linux users who has been using it for years but has little to no idea what's going on behind the scenes or under the hood.
In my time using it, I've sort of passively gleaned that certain things are controversial, but I don't necessarily know why. It's also hard for me to know if these are just general intra-community drama/bikeshedding, or if these are actually big, meaningful issues.
If you're someone who's in the know, here's your chance to lay out a Linux controversy in a way that's understandable by someone like me, who can't tell you why people always make "GNU/Linux" jokes for some reason whenever people mention "Linux."
Here are some things that have pinged for me as controversial in my time using Linux:
- Unity
- Canonical
- Deepin
- systemd
- Arch
- GNOME
- Manjaro
- Kali
- Rust in the kernel
- elementaryOS
- Linus Torvalds
- Snaps
- Wayland
- Something about a university being banned from contributing to Linux
- NVIDIA drivers
- Package managers vs. Snaps/Flatpaks
There are certainly more -- these are just the ones I can remember off the top of my head.
Replies don't have to be limited to the above topics. I'm interested in getting the lay of the land about any Linux controversy.
IMPORTANT
This topic is intended for learning, not bickering.
- Please try to explain a controversy as fairly as you can.
- Please try to not re-ignite a flame war about a specific controversy.
It's fine to discuss these in good faith, but I do not want this topic to become yet another Linux battleground online. There are plenty of those already!
I'm gonna take the systemd one.
Systemd is, for those that are unaware, the system in most modern flavours of Linux that essentially wakes the system up when you turn your computer on. Systemd's job is to start services like the logging system (journald, which you query with journalctl), mounting the file system, starting the network manager, setting the current time and date, and essentially getting your computer to a finished state. Sounds great, right? Well, some people didn't think so.
See, before Lennart Poettering developed Systemd in 2010, Linux was started through a system they got from their UNIX grandmothers called System V (5) Init. System V Init doesn't actually do much - it just starts a lot of other services, essentially gives the computer a kickstart, starts other programs doing other things, and then sits there. It has the process number 1 (called PID 1) and if it ever dies, the computer hard crashes.
To Lennart, there were problems with this approach. For instance, Init didn't really have a concept of running things in parallel - something that would obviously make the boot process a lot faster. If it can do things simultaneously instead of having to wait for each process to finish, in theory, that makes booting both faster and more resilient - because with Init, each process kind of has to wait for the previous one to finish. What if one never does? What if there's a malfunction? You'd be none the wiser.
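To make that concrete, here's a minimal sketch of a systemd unit file (the service name and paths are made up for illustration). The idea is that systemd reads declared dependencies and starts every unit whose dependencies are satisfied simultaneously, instead of running one startup script after another:

```ini
# Hypothetical unit file: /etc/systemd/system/myapp.service
[Unit]
Description=Example daemon (illustrative only)
# Ordering/dependency declarations; units with no unmet
# dependencies are started in parallel.
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because the dependency graph is explicit, systemd can also notice when a service fails to come up, which is the resilience argument mentioned above.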
So, brilliant, right? Systemd was adopted by the major distros, everyone loved it, story done. Well, no, of course not. Critics regard it as too complex, too monolithic, and therefore violating the UNIX philosophy (essentially, do one thing and do it well). Some critics are of the unshakeable belief that all PID 1 should ever do is nothing at all, aside from starting the other processes and taking in any processes that lose their parent.
Course, this is all a matter of opinion. I personally do not believe in the UNIX philosophy as a hard unshakeable rule, and on GNU systems it's even more questionable (Did you know GNU stands for GNU's Not UNIX?). But, whatever. I can deal. Unfortunately, some people who did not feel that way decided this simply would not do, and started sending death threats to Lennart for presenting a new idea.
In the end, though, basically every system uses systemd now, and those that don't make a specific point not to do so for what's essentially culture war reasons.
TLDR: The init system in UNIX was brutally simple, Linux adopted it, a guy named Lennart Poettering thought he could do better and made systemd, some very vocal assholes thought he should die for this.
Note: A major component to the systemd hate is probably that it's in concept very similar to Apple's launchd, which is how Macs and iPhones start all their stuff. And we all know how Linux users can be when faced with good ideas coming from closed source software.
I believe much of the controversy stems from the fact that, as a rule, most (all?) Linux distributions require you to use their chosen init system in order to have a functioning machine. With other Linux components like your shell, login manager, desktop manager, window manager, compositor, full desktop environment, even deeper system-level stuff like what file system you format your hard drives with, X11 vs Wayland, graphics card device drivers--you can mix and match whatever you prefer out of the various options and they typically will all work together just fine. Having to live with whatever init system was forced upon you by the distro wasn't a huge deal because they were essentially just a bunch of scripts that ran during the boot process and then sort of got out of your way.
Systemd is different from earlier init systems because it grew to be much more invasive--it reaches its tendrils across your system in opinionated ways that are hard to ignore the way lighter-weight, single-purpose init systems are (for example, if you spend any time on the Arch Wiki you'll probably notice that an oddly disproportionate share of its pages, across all topics, mention various systemd processes in their troubleshooting sections). It means that if you do have a problem with some decision systemd makes for you, you can't just switch the way you can from Gnome to KDE, or from Nouveau to the proprietary Nvidia drivers. If you want to move even a single piece of functionality away from systemd, you now have to find a whole new Linux distribution. The distribution you may have grown accustomed to and learned the ins and outs of over decades may now be completely shut off from you, because they made the infuriating decision to adopt this annoying bloated thing called systemd that seems to be infecting the entire Linux world like some kind of virus. And to make it worse, the pool of remaining distributions that don't force systemd upon you is a vanishingly tiny list that includes none of the big names most casual Linux users are aware of.
Obviously none of that justifies threats of violence against anyone, but hopefully it shines a bit more light on where that frustration may stem from.
While what you wrote does give perspective to where those who are so angry at systemd are coming from, it doesn't delve into the opposing side: those so against systemd are effectively arguing that developers need to spend their resources maintaining support for all init systems. It's an expectation that, unfortunately, isn't even possible. In a Reddit flame war last week I posted that it's unreasonable to expect open source developers to support these old systems - after all, Valve is allowed to drop Windows Vista support from Steam and nobody gets upset with them. They have billions of dollars of resources, yet we hold them to a lower standard and are less argumentative with them than with open source developers. The person I was replying to just said my comparison was irrelevant.
At the end of the day, this is open source software. If sysvinit/openrc is wanted on any distro, then the community can fork it, change it, and implement their will. I think it speaks volumes that after 15 years this has largely not happened. I think there are very good, yet very technical, reasons for this. There are options - distros like antiX are actually very good in their particular use cases. With that said, antiX certainly doesn't support other init systems either (for just as good reason as Debian not supporting sysvinit/openrc). But why do people flame the developers who support systemd for not supporting other options, when the distros that support sysvinit/openrc get a pass?
There's a ton of cognitive dissonance, circular reasoning, and logical fallacies at play in every discussion I've ever seen on any of these topics. It's interesting, and I think speaks to the fact that most people who have an opinion they are expressing are speaking with their feelings instead of thinking rationally.
I don't think the expectation on the part of systemd opponents is necessarily that a distribution should support multiple init systems. The distaste is more derived from how systemd, in its full incarnation, seems much more than just an init system, and so by choosing it over one of the other lighter weight init systems, a distribution is making a lot more decisions for the users than just the init system. I think it all comes back down to the "unix philosophy" debate--the idea that a program should just do one thing and do it well--that an init system should just be an init system; an NTP client should just be an NTP client; a DNS resolver should just be a DNS resolver; a login manager should just be a login manager--I should be able to mix and match my choice of each of these things and they should all play nicely together without giving preferential treatment to or making assumptions about other pieces of the greater puzzle.
Whether it's actually true or not, the perception that systemd forcefully replaces all of these other subsystems with its own opinionated, fully integrated versions is what I think draws the most ire. In the eyes of its detractors it should be excluded as a legitimate choice for a Linux distribution's init system on principle alone--distributions should pick one of the options that does not violate the unix philosophy instead (not support multiple ones). But I do think what you describe is exactly the reason why systemd is so attractive to distributions--by only officially supporting systemd and all of its various daemons and other tools (effectively deprecating all the other tools that systemd haters would rather continue using), it removes a lot of burden around needing to test and support all of those different possible combinations of alternative tooling. Especially if the goal of the distribution is to reach as wide an audience as possible, it makes complete sense for them to embrace systemd.
To make a pithy, overly reductive version of your argument: everyone knows when you use Linux it's really GNU/Linux. Systemd is (arguably) so deeply entrenched that many distros should be thought of as GNU/Linux/systemd.
You are not required to use hostnamectl, systemd-networkd, systemd-timesyncd, systemd-boot, etc. They depend on systemd's init, but systemd's init does not depend on them.
It is certainly convenient to use a set of programs that follow the same conventions in configuration and documentation, though - which is why I imagine they have exploded in popularity recently.
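A sketch of what "not required" looks like in practice (assuming a typical systemd distro; swapping in chrony is just one example of an alternative NTP client):

```sh
# These daemons ride along with systemd but can be switched off
# without touching PID 1 itself.
systemctl status systemd-timesyncd.service       # see if it's even enabled
sudo systemctl disable --now systemd-timesyncd.service
# ...then install an alternative, e.g. chrony or openntpd,
# through your distro's package manager.
```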
Maybe relevant 15 years ago. But does this mean anything in a world with modern CPUs, RAM and NVMe drives?
I’m sure it could be better designed. I still don’t understand why you (and others) are so upset. I use Linux frequently. I occasionally configure and create systemd services. As far as I can tell it hasn’t interfered with my life or infected my computers.
Because it's absolute shit-tier software that gets in my way all of the time. Can anyone (without peeking at the man pages) say what combination of options is required such that a critical service will always restart if it crashes? If you think it's Restart=always, you are completely wrong. Did you know that if you run systemd and d-bus crashes, you can't actually issue shutdown? I do, because I've had to involve remote hands to rescue computers that can't be soft rebooted anymore because of d-bus crashes. I've been mysteriously logged out over and over again because a user service defined in my home directory clashed with a user service that started being shipped by the distro - and of course the sane behaviour on startup timeout is to just crash the session. It's a transactional job scheduler moonlighting as PID one. Why???????
Compared with something actually fit for purpose like S6 systemd is a cruel joke, and it drives me crazy that it has seen such wide adoption because it proves once again that people in charge of making important decisions have no sense of taste or smell.
Fucking thank you! As someone that has had to write systemd services, I find it absolutely bonkers that this development-hostile monstrosity has been accepted as the default on so many mainstream distributions. When I write a systemd service, everything I think should work is actually broken or extremely fragile, and every awkward workaround is actually the "ok good enough I guess" accepted way of doing things, even by official applications (seriously, look at some of the default Debian services). It has so many footguns and edge cases and useless convoluted config bloat bullshit and historical crufty extras for who even knows what reason anymore aaaaaaaaaaaAAARGHH KILL. IT. WITH. FIREEE
I can see how the idea of something "standard" is attractive to some maintainers, but every interaction with systemd beyond "copy paste config from troubleshooting wiki and pray" confirms my suspicions that those distro maintainers have something akin to stockholm(d) syndrome(d).
I run Void Linux on my machines, which uses runit for init, and it is such a breath of fresh air due to its simplicity. Before switching, I was on Debian. When I first had a look at Void, I thought "oh, but what if I need systemd functionality for something? I mean, so many packages depend on it, that must be for a reason...". I switched anyways, I've been running it for about 4 years now, doing all sorts of weird stuff, and not a single time has that situation come up.
I think it's understandable that a "default" init happened, but that default being a very opinionated systemd sucks in my opinion. I've noticed that for a normal user it would be completely fine to ditch it in favour of something simpler. And if an end user needs to interact with system services, simpler is better.
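For contrast, a complete runit service definition is sketched below: a directory containing one executable "run" script (paths hypothetical). runsv runs the script, and if the process dies, it simply runs it again:

```sh
#!/bin/sh
# Hypothetical /etc/sv/mydaemon/run -- the entire service definition.
# The daemon must stay in the foreground; runsv supervises it and
# restarts it whenever it exits.
exec /usr/local/bin/mydaemon 2>&1
```

The simplicity argument is essentially that this one script replaces a whole unit-file vocabulary.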
I think the default behaviour is fine for most people. Most of the time, if you've tried and failed to start a service X times, trying indefinitely isn't going to help you.
You should be consulting the man pages if you're deploying production code. OnFailure is perfectly suited for this kind of thing to either alert you or do whatever tasks required to get your service running again.
I have been using systemd for a long time at home but am stuck with the much simpler Android init system at work and let me tell you I much, much prefer systemd. And if you think you have d-bus issues let me tell you my binder stories...
Consider for a second what you are producing apologetics for: there is a property called Restart, and when the value is "always" it doesn't always restart. That is completely insane behaviour. There is no good reason for it. There are plenty of transient conditions that can cause a service to fail without needing manual intervention - unless you happen to run the one supervision suite that just gives up and goes home when it fancies it.
I agree that people should read the man pages. Imagine reading the section about Restart, and not reading about 5 other fields and being completely confused when your service is just left for dead.
You'd need to read another ~ 300 words to get to the nugget that lets you know that actually... we don't really mean always. I think you'd forgive a lot of people for thinking that words have meaning.
I have read all of systemd's man pages many times, yet I still couldn't tell you from memory what combination of RestartSec + burst + back-off I need to get reasonable supervision. Runit? Trivial. S6? Trivial. Systemd? Systemd PhD required.
It's the first and most basic task and it doesn't make it simple or easy to build resilient self-healing systems. The rest of it isn't better.
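For anyone following along at home, here's a sketch of the combination being argued about. By default systemd applies a start rate limit, so a crash-looping unit with Restart=always is eventually put into a failed state; disabling that limit is a separate knob (service name and path are hypothetical):

```ini
[Unit]
Description=Critical service that should truly always restart
# Without this, the default rate limit (roughly 5 starts within
# 10 seconds) eventually marks the unit as failed and stops retrying.
StartLimitIntervalSec=0

[Service]
ExecStart=/usr/local/bin/critical-daemon
Restart=always
RestartSec=5
```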
This is not really true. It does always restart, it just gives up after X times by default.
If you read the man page this is very clear.
I'm convinced that if people think this requires a PhD, people will complain about anything and everything.
It does always restart, apart from when it doesn't.
We can agree to disagree I guess.
As great as this summary is, part of me feels like you kind of had to be there; it was a world full of spinning drives, a time when "install readahead" was a standard response to the question of how to improve boot time, and if you had RTFM'd you knew how to squeeze out a bit more.
More than a decade later, I still occasionally miss sysvinit. Not because it "works" better than systemd, but because it was familiar. And that's largely because I need to do less meddling with systemd.
Here’s random threads, not necessarily the spiciest, from the Debian mailing lists if anyone wants to see primary material:
https://lists.debian.org/debian-devel/2011/07/msg00269.html
https://lists.debian.org/debian-devel/2013/10/msg00651.html
https://lists.debian.org/debian-user/2013/11/msg00089.html
Edit: This drama coincided with a lot of DE drama in the early 2010s. A lot was changing quickly back then.
KDE 4 was controversial with some considering it unusable and some considering it promising.
Mint still had ikey around as a contributor and there was a new version with Debian as a base, LMDE. That wasn’t unique, lots of distros did move or considered moving to Debian as a base from Ubuntu, e.g. Crunchbang.
Joli OS was an idea, briefly. Peppermint was fresh hotness for Netbooks.
I guess my point is the init drama and the DE dramas didn’t play out in isolation. They played out simultaneously with each impacting the other.
It was both. KDE 4 was borderline unusable when it came out due to bugs, shoddy UI design, and general sluggishness. KDE 5 is basically a fixed version of KDE 4, and it's probably the best DE for your average user.
I avoided “modern” KDE longer than I should’ve because of early KDE 4. And completely agree it’s probably the best for average users these days.
Relevant video about this history: https://www.youtube.com/watch?v=o_AIw9bGogo
The focus of their research was committing malicious code to the Linux kernel.
So you can imagine how that was received lol
Is this a good article on the subject? In the interest of education, I think it’s good to give sources.
Here is a breakdown from the university itself -
https://cse.umn.edu/cs/linux-incident
It's hard to distinguish between controversy and just preferences with Linux users (myself included!) but I'll explain what it sounds like to me:
Canonical -- Frequently uses their weight as developers of a popular distribution to eschew community standards in favor of their own solutions. Such as:
Unity -- Caused a lot of early confusion with Gnome compatibility, plugins, and bugs.
Snaps -- Continues to push snaps as default despite flatpaks becoming far more popular.
Mir -- Wayland had been in the works for some time when they decided to push Mir instead.
Deepin -- It was released as a Chinese distribution, and those suspicious of government involvement with it tell others not to use it.
systemd -- While it solved a lot of headaches dealing with distribution startup scripts it replaced a lot of core tools that users and admins already knew very well. Change is annoying.
Arch -- Users tend to be overly vocal about why they love it?
GNOME -- Has their own vision of what they want the Linux desktop to be and rarely listens to anyone else about their concerns or compatibility issues with it.
Manjaro -- Distro had some sloppy releases, slow updates and some forgotten certs.
Kali -- A great distro for pen testing that for some baffling reason people try to use as a desktop despite their own site telling people not to.
Rust in the kernel -- Developers that don't want to work in new languages complained that they might end up being told to maintain code they couldn't easily understand. They threatened to quit instead of trying to work something out.
Linus Torvalds -- Has a history of verbally assaulting kernel developers who screw up when they should know better.
Wayland -- Slow development and new issues arising from a tighter security system than Xorg.
Something about a university being banned from contributing to Linux -- University researchers thought it would make a good paper if they wrote about how they put malicious code in the kernel. It went about as well as you'd expect for them.
NVIDIA drivers -- Due to being closed source nobody can fix issues that come up except NVIDIA. Also any big kernel or system changes can break them until they decide to fix it.
Package managers vs. Snaps/Flatpaks -- Other than the usual Canonical issues some people find distro-within-a-distro to be wasteful of system resources and community cooperation.
I may be wrong on some of this, it's just what I was able to pull from memory over the years.
Mir had an actual API, which is something business customers wanted. The Wayland attitude has always seemed to be "why would you want to interface directly with Wayland? Just use a toolkit!".
There are plenty of things wrong with Wayland, and criticism is often suppressed with "Wayland is better than X!" which doesn't actually contradict the idea that Wayland is massively, unnecessarily flawed (and usually there's some security bullshit too). I'm hoping that Arcan will eventually replace Wayland once it reaches v1.0 and the author stops deliberately writing obtusely in an attempt to avoid attention and hype.
I've switched almost entirely to Wayland since at this point major desktop environments handle it well and it does feel smoother on my machine, but it is unfortunate it's the new API that stuck and has had all the effort put into supporting it. As a game developer I very much dislike how it handles DPI (though this is an issue on other platforms as well; many of them seem averse to just giving me a window with 1:1 pixel density, and insist on just lying to me about the resolution of the screen/window, when I want to scale my app myself).
As someone with an interest in writing my own WM, DE, etc at some point the “fuzziness” of building on Wayland that’s apparent whenever I start researching it really is bothersome. In comparison, X is a lot more straightforward — do you understand asynchronous programming? Use xcb, otherwise xlib. That’s it, have fun!
Wayland is much more messy here. While you don’t have to implement the protocol yourself and can use something like wlroots as a base, you need to understand the tradeoffs of the various projects of the sort and generally have a deeper knowledge of all the things a Wayland compositor is responsible for (which is a lot). It’s very unfriendly to noobish hacking in a way X isn’t.
It’s a massive missed opportunity. Wayland brings many benefits and good implementations feel better to use than X does, so it’s unfortunate that more thought wasn’t put into approachability to make it all-around better than X.
What does "interface directly with Wayland" mean when Wayland is just a bunch of XML files?
Interface directly with Wayland compositors. Write code to directly create a window etc.
The notion that "Wayland is just a bunch of XML files" is kind of a linguistic motte-and-bailey, in practice, because it's used alongside claiming e.g. Wayland is a replacement for X.
Can't you do that though? There's libwayland, which lets you do just that.
It's a replacement for X11, not Xorg
I'll expand on this. Nvidia drivers being proprietary is half the puzzle. The other half is the Linux kernel's lack of a stable ABI (application binary interface) for modules. This is why kernel updates tend to break proprietary drivers in the first place. Part of the Nvidia driver installation process is to compile a shim for your specific kernel version that provides the driver an interface it knows it can talk to.
Hardware vendors that aren't Nvidia had the good sense to recognize that open source drivers don't really hurt their business, and only make life easier for their customers. Nvidia spent 25 years being stubborn assholes and only recently started making real progress towards making open source drivers feasible.
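To illustrate the shim-rebuild dance (assuming a distro that packages the driver through DKMS, which is the common arrangement):

```sh
# DKMS keeps the vendor's out-of-tree module source around and
# recompiles it against each newly installed kernel's headers.
dkms status             # which modules are built for which kernels
sudo dkms autoinstall   # rebuild all registered modules for the running kernel
```

When this step fails (say, after a kernel change the proprietary code wasn't prepared for), that's the classic "update broke my Nvidia driver" experience.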
I may be mistaken but wasn’t one of the main reasons Nvidia insists on locking down their drivers product differentiation? Their consumer cards are technically capable of many of the same things their workstation and server models are, but Nvidia would prefer to force you to buy a Quadro or whatever by holding those features hostage by way of drivers.
Sure, like limited hash rate cards, GPU partitioning, etc.
There's more to it than that though. Software has been NVIDIA's moat for the longest time: CUDA, PhysX, application-specific driver tweaks/profiles. Even now, the "open source" driver pulls in a bunch of proprietary blobs. The GPGPU (general purpose computing on GPUs) space is non-standardized, and NVIDIA has invested a lot of time and money cultivating that moat. Open source hurts them in that it erodes this moat by standardizing the space, as well as undermining product/market segmentation.
I heard that NVIDIA now has truly open source drivers.
The new driver works by moving the proprietary parts into the firmware for the RISC-V coprocessor in the GPU itself ("GSP"), with the driver just making calls to the GSP. Of course, it only works for RTX GPUs, as earlier GPUs don't have the GSP.
Under the guise of research about how Linux responds to malicious actors, some people from this university knowingly submitted faulty/malicious patches to the Linux kernel, without notifying anyone. So they were (rightfully) treated as a threat, and doubly so because of their dubious behavior.
Linus wrote the original kernel.
Linus, in the 00's, used to write very bluntly and rudely on the listserv(s?) to people attempting, in seemingly good faith, to contribute to the kernel.
My impression is that, as many of us do as we become graybeards, he matured.
I'll try to explain Arch
Arch Linux is a distro which is very flexible and user-customisable, with the distro not even having any window manager/desktop environment preinstalled. This has made it a favourite among tinkerers and people who like to customise their experience heavily.
The only way to install Arch (until relatively recently) was a fairly manual, hands-on installation compared to your more "friendly" flavours like Ubuntu/Fedora/Mint. This ended up with Arch gaining a reputation as one of the "hardcore" distros, but it was more approachable than Gentoo/Linux From Scratch, so it got the large share of gatekeepy elitists. "I use Arch btw" then cropped up as a meme making fun of these people.
Nowadays, Arch is easier to install than ever with the archinstall scripts that come bundled in the installation image and the elitists either have matured out of it or are now even more of a minority.
I had been gravitating back to Debian for years, until a couple of years ago I tried out Arch, and now it has displaced Debian as the distro I keep returning to. Talking purely from a personal, perhaps emotional experience, I ascribe it to the IKEA effect:
The tinkering with Arch makes you place a disproportionately high value on the worth of the distribution :)
Is this actually true? AIUI Arch's whole schtick is being designed around convenience of Arch devs (this is not a criticism) and e.g. makes no pretense of supporting options other than bash (with bashisms mixed into Arch files ending with .sh). Arch is flexible because it's GNU/Linux (I'm including GNU/ because it's relevant to my point here) and GNU/Linux is modular, therefore flexible.
Arch Linux, by forsaking certain pretenses of flexibility, makes development of Arch easier and therefore more vigorous. Pacman is just a tarball, Your Stupid Non-systemd Setup isn't supported, and bashisms can be anywhere, because it would take work to be otherwise.
The proof is in the pudding, and by pudding I mean the AUR (which probably includes a pudding somehow).
Debian supports (or used to support) kFreeBSD, Arch never tried.
Edit: and packages are shipped unmodified from upstream (or the bare minimum from upstream necessary), because it's less work and lets Arch package faster and therefore be more bleeding edge. If users really care they can do the modification themselves with an AUR version of the package.
Also they don't support partial package updates, and they don't provide long-term support guarantees (and in fact are vague about what defines "long term"), because all of that would require unnecessary effort.
I've had a twenty-something-year love affair with Arch. In my time with it, I think the only thing that has stayed as a consistent through-line is what you mentioned: Arch is meant to be as vanilla an implementation of software as possible. Meaning they try to ship programs the way the developers who made them intended them to be used, instead of making changes based on their own design choices. I don't think it's been that way specifically because it's less work for the Arch developers, but because the original developers of Arch thought that was what was best for the open source community as a whole. The thinking was that the (for example) KDE developers have put their time and effort into making things the way they have, so it should be packaged that way. Then efforts for changes and bug fixes can be brought upstream to impact the project directly, instead of being worked around by the distro.
I don't think so. I think the key difference between it and other distros is that it takes the approach it does by default.
There's not a huge difference between a minimal install of Debian (used as an example) and a default Arch install. To clarify, what I mean by that is that you will have a functional system with no graphical environments and a few key services installed by default. The install will be small, and everything else will be up to you. However if you choose to use Debian, you can also do a number of different installs that install whole groups of packages.
They're also a rolling release distribution, which guarantees access to packages as soon as they're available - which you can get closer to with Debian if you wish by switching to Testing or Unstable. Using git as an example, we get the following:
Arch - 2.50.0-1
bookworm - 2.39.5-0
testing - 2.47.2-0
sid - 2.50.0-1
The total package counts for the distros are very different though, which is one of the reasons that AUR exists. Looking at sid, it has around 40K packages in the distro. Arch has almost 15K. AUR has near 100K.
Whether AUR has value to you or not is up to you though!
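(Version numbers like the ones above can be checked with each distro's own query tool; a sketch, assuming the standard package tooling is installed:)

```sh
# Arch:
pacman -Si git | grep Version
# Debian:
apt-cache policy git
```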
For folks considering Debian testing: (1) be sure to track testing not a specific code name and (2) don’t blindly update things.
I think Siduction is still around - since circa 2011 - for an example of a distro with Debian unstable as a base.
Back in the day there was an installer, the Arch Installation Framework, but it was removed in 2012. I still miss it, every time I go for an install I dread all the manual steps but the end result is worth it.
archinstall is a thing now.
GNU/Linux
Richard Stallman is famous for his years at the MIT AI Lab and for founding the GNU Project ("GNU's Not Unix", which is a recursive acronym) and the Free Software Foundation.
He essentially founded the Free and Open Source Software movement. He's also held iconoclastic perspectives on all manner of recent historic events. IIRC, he was also accused of sexual harassment, leading him to be coerced into taking a huge step back and out of the public view.
GNU contributed an enormous amount of software to what became known as Linux. Of particular note was gcc, the GNU C Compiler, which was used to compile the Linux kernel.
Stallman would insist that Linux should be referred to as GNU/Linux, as Linux would not exist without GNU. He famously would insist on this nomenclature frequently and publicly when people would call the OS "Linux".
Disclaimer: this is mostly from memory, so please correct anything I've remembered or heard wrongly.
That is a very generous portrayal of Stallman, his "iconoclastic" perspectives, and his behavior. And funny you should call it being "coerced" when Stallman resigned, considering that he himself is apparently totally clueless as to what coercion entails, as exemplified by his defense of Marvin Minsky in regards to one of Jeffrey Epstein's victims' allegations towards Minsky. And don't even get me started on Stallman's views on consent and pedophilia. But if anyone wants a summary of the insanity of those views, this article from Drew DeVault does a good job of breaking it all down with lots of direct quotes from Stallman: https://drewdevault.com/2023/11/25/2023-11-26-RMS-on-sex.html
Richard was famously irritated with Linus for creating the kernel after he'd already created practically everything else - damn near every fundamental Linux command you ever run still has plenty of his project's code in it, and gcc, which still builds most Linux software, is his masterpiece.
I'd wager that Richard still holds the title of worldwide code execution time leader - meaning that computers have spent more CPU time running his code than the next ten contenders combined. This really ticks off the people who love to hate on him for his atrocious views on many social issues.
Interesting thing about the Linux kernel - it was never supposed to exist. Rather than one large monolithic program written in one language (which is what Linus created), Richard's project was attempting a multiserver microkernel: many small servers that run independently on top of the Mach microkernel, communicate asynchronously by message passing, and can in principle be written in different languages. This design concept is still rare in mainstream operating systems, and it's called the GNU Hurd.
The problem with this approach is simple - this thing is impossibly difficult to debug. All of the pieces run independently, communicate with each other asynchronously, and can all be written in different languages. Just figuring out what order something happened in is quite a challenge when something goes wrong. This debugging complexity is ultimately what killed the project, because development of it could only continue at a snail's pace.
That wasn't good enough for Linus, who just wanted a working computer asap, so he whipped up the Linux kernel and left Richard grinding his teeth and complaining about GNU/Linux for the next three decades. I don't think he ever got over it, and it still grinds his gears that, in his view, Linus sabotaged the best kernel design ever conceived and stole all of the credit for the GNU ecosystem.
Just as an example - in theory, Linux's driver-support problems would have looked very different on the Hurd. Anyone could whip up a driver in any language they liked, open or proprietary, running as an ordinary user-space server that could crash and be restarted without taking the system down. It could hot-swap parts of itself, so even a kernel upgrade wouldn't need a reboot. (The stronger claims, like reusing Windows or Mac drivers wholesale, or never freezing or lagging short of hardware failure, are extrapolations from the design rather than anything the Hurd ever demonstrated.)
Richard aimed too high with this concept and took too long getting it off the ground. He also had to surrender a number of high-minded goals in the version that currently exists, because he couldn't get them to work either. Once the Linux kernel was available, everyone dropped the Hurd for the kernel that was functional rather than a fancy design project, and the rest is history.
Based on his Wikipedia page, there are significant errors here:
He worked as a graduate student and research assistant at MIT, but didn’t get his PhD or become a professor. He did win a MacArthur Fellowship and a bunch of honorary doctorates, though.
It doesn't appear he was accused of sexual harassment? The controversy appears to be about what he wrote about pedophilia and child pornography in response to the Jeffrey Epstein scandal, and earlier writing revealed at the time.
I do recall some harassment accusations. Some of which I would classify as "product of the time," some of which I'd classify as "an autistic person making jokes in bad taste." As best I know, none were sexual assault.
I do think that it's somewhat unfair to judge people retroactively against things that were normalized when they happened. They were part of the problem to be sure, but so was a huge majority of the population. But that also means being willing to embrace progress, rather than fight it.
Stallman, despite many of his virtuous and reasonable stances, continues to double down on his most problematic ones (as @cfabbro noted). And that, at best, means he shouldn't be the name and face of a larger movement. And I say that as someone who previously defended him quite fervently.
Do you have any specific examples with regards to Stallman specifically? I tried looking for a statement related to that in cfabbro's link & Wikipedia but I couldn't find anything. I never bothered to research why Stallman is considered problematic, even though I knew he is, & I'm not sure what you're referring to in this context.
I don't think Stallman was ever directly accused of doing anything particularly egregious in terms of his actions towards anyone. Most of the complaints I've read about his behavior are related to him generally being incredibly gross/inappropriate (like when he ate something he picked off his foot in the middle of a talk), and contributing to a culture of toxicity (by berating/insulting people) and sexism/misogyny at MIT. That's where @vord's comment about him being a "product of the time" comes into it. But if you're interested in reading more, this WIRED article touches on a few more of Stallman's inappropriate/toxic/sexist moments: Richard Stallman and the Fall of the Clueless Nerd
I think this medium article was the most notable source, but it has been taken down (probably because of abusive assholes), hence archive link.
Honestly I've put it in the "I was wrong, but I don't really care about this topic anymore and am moving on" mental bucket. So there's probably more I'm forgetting.
Deepin is a distro popularized by its own desktop environment, Deepin DE. Deepin DE can be used in any distro, not just Deepin. Maintainers for other distros have had longstanding issues with unresolved security vulnerabilities in Deepin DE. (my source was going to be the arch wiki but they’ve updated their citation to the next point below).
Last month, maintainers for the openSUSE distribution announced they would be removing the Deepin DE packages from their repositories because Deepin tried to sidestep packaging policy. They also cited Deepin's long history of not addressing security vulnerabilities. The TL;DR is section #4 here: https://security.opensuse.org/2025/05/07/deepin-desktop-removal.html
I read something about the "whys" of Gnome 3 a little while ago that was pretty interesting. I'm not 100% sure this is true, but it all makes some sense...
The basic idea of this "controversy" is that Gnome was a pretty typical, sort of windows-y desktop interface in versions 1 and 2, but version 3 (released in 2011, with development starting in 2008ish) was a pretty significant departure. A lot of UI elements that users expected were gone (no list of open windows, no minimize/maximize buttons, no start menu type thing, no system tray, etc), which just made it all feel... weird. There were some pretty cool ideas going on, but the changes were drastic and turned a lot of people off. Over time, people generally got used to this stuff and Gnome is the default for a lot of distros -- it definitely still has its detractors, but it's pretty well liked by now.
As for the (supposed) reasoning: Gnome is largely developed/supported by Red Hat, who ship it as part of their enterprise distribution. At the time, Red Hat's main competitor was Novell, who shipped KDE by default on their competing distro. Early in the process of development on Gnome 3, Microsoft and Novell made a deal to cooperate on patents. Microsoft, of course, holds lots of patents on foundational desktop UI concepts. The story goes that Red Hat was concerned about legal action over Gnome from Novell (now that they could use MS patents), so they encouraged a more radical design that wouldn't risk infringing.
I'm not sure how true this is, and the big shifts in design could easily be explained by a desire to rethink UI ideas that had been pretty stagnant. It makes a lot of sense that designers seeing new mobile phone interfaces, new touchscreen PCs, etc might want to do something new. I think this is the usual story you see, but the patent thing easily could have been an influence. After all, those discussions probably would have stayed within Red Hat and wouldn't really be public info today...
Gnome 3 came out back when lots of UI designers were trying to create a unified UI that would work both for desktop and mobile.
To my knowledge, no one has yet managed to do this in a satisfying way: basically all attempts so far have just ended up creating something only half-usable in both paradigms.
Gnome 3 is one of the better attempts. It works pretty well on desktop—if you like its defaults and have no desire to change anything. Because the "meta-controversy" with Gnome 3—and, IMO, the real source of the controversy—is that, compared to every other Linux DE/WM/compositor/whatever, it offers essentially no customizability. In addition to just being kind of frustrating, the lack of customizability goes against a general OSS cultural norm of making highly flexible and configurable software. In some ways, Gnome 3 feels more like a corporate product than an OSS project—people compare it to Apple, except Apple has more thought put into their stuff (or, well, supposedly they did at the time; from what I gather, macOS hadn't become the mess that it is today).
Also, and related, Gnome 3 has taken a very "my way or the highway" approach. It doesn't play nicely with software outside of its ecosystem; again, this breaks OSS cultural norms.
For what it's worth, the counterargument from the Gnome folks is that customizability and "openness" carries a maintenance burden, so reducing that stuff ensures Gnome's quality.
Of course, only greybeards care about this stuff anymore, because what ended up happening is that the Web just kind of ate everything UI-related.
GNOME is so inflexible and pared down that even modern macOS bests it in configurability and conveniences for power users, which is kinda crazy.
As you said people like to compare GNOME to macOS, but the more relevant point of comparison is really iPadOS. There are of course things that are easy under GNOME that aren’t under iPadOS, but in terms of design philosophy the two are pretty closely matched. Well, except for where iPadOS is extensible, it has sets of stable APIs so extensions don’t break every release, unlike GNOME’s brittle monkeypatch-style extensions…
Gnome 3 is maybe just one part of a big bucket of drama: DEs at the start of the 2010s.
Unity was circa 2010.
Gnome 3 was circa 2011.
Cinnamon was circa 2011.
MATE was circa 2011.
And this big bucket of drama coincided with the sysvinit vs. systemd drama, which made that drama worse than it might have been otherwise.
Windows 8 Metro and iOS 7 had the same reception, so I don't think it was even Linux-specific.
OS X Lion, circa 2011, had mixed reviews too. The more meaningful criticism was about functionality not UI, but there was some criticism of UI changes too.
I remember when Gnome 3 came out. As you mentioned, it was a major departure from the previous version, but my main issue with it was how unintuitive it was to use. And the fact that the window control buttons (close, maximise, minimise) were on the left side.
I really like the design flow of GNOME, so I guess I feel a need to defend it in some way. I even just came back to GNOME after almost a year on KDE, because it's just simpler and stays out of my way more often. I really like the top left hot corner to show all windows. I really like the lack of desktop icons, I like the simple menu it has for running windows.
There are some things I don't like though: I always install a system tray extension, because having a music player always show on my screen just doesn't make sense to me. I always install Dash to Panel, because I don't like having to push a button or move my mouse to see what windows are open on my desktop, and a system tray is useless if you don't have a panel. I wish Workspaces worked across multiple monitors, but I understand there are technical reasons why making that happen is a large undertaking right now. I dislike that GNOME and some app developers are becoming openly hostile to theming lately, but at the same time I do understand where they are coming from. I have spent countless hours fighting with themes on KDE trying to get Arc to look perfect in every case, and always failing. God forbid you try to switch between GNOME and KDE with any regularity; themes will look off every time. You end up having to figure out completely random things to put in your environment.
Extensions have worked for me for years to close the few small gaps I have with GNOME. Anything outside of that which I still have issues with are honestly also problems on KDE, or XFCE, but nobody blames them because they "leave it up to the user".
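For the curious, the "completely random things to put in your environment" are usually Qt/GTK bridging variables. A minimal sketch (the variable names are real Qt/GTK environment variables, but the values are just examples; what actually helps depends on your setup and installed theme packages):

```shell
# Ask Qt apps to use a GTK-based platform theme plugin
# (needs the matching qt5/qt6 platform theme package installed)
export QT_QPA_PLATFORMTHEME=gtk2

# Force a specific theme for GTK apps (theme name is illustrative)
export GTK_THEME=Arc-Dark
```

Typically these go in ~/.profile or a similar startup file so every session picks them up.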
KDE indeed has its own problems. The one pertinent to most is how it has infinite knobs and sliders that can make getting everything dialed in an extended affair. It also has weird quirks not found elsewhere like file copy progress showing up in a notification bubble instead of a window.
The theming hostility on the GTK side really is disappointing, and most of it comes down to devs hardcoding colors, fonts, etc instead of parametrizing like they should be. Properly parametrized apps will always be usable short of the user installing a poorly designed theme (which is the user’s problem, not devs’).
Under KDE my theming issues mostly fall under two categories: themes breaking under fractional UI scaling, and themes being stitched together from separate pieces.
The first is really annoying because I have a laptop that runs at 1.5x UI scale and it breaks most available themes in some way or another. The second is annoying because it means themes are glued together in a way that leaves a lot of visible "seams". I'd kill for Linux theming that worked like Kaleidoscope schemes on Mac OS 9, msstyles on Windows XP/Vista/7, or themes on OS X 10.0-10.9, in which one file covered everything and left no gaps.
Yeah, I'm with you. I especially love using gnome on a laptop -- multitouch swiping around feels great. I'm a little less into it on a desktop, but after mapping the workspace switching and overview thing to my extra mouse buttons I don't really mind it.
The theming thing is a bit weird, but honestly custom themes really do break things all the time on every environment I've ever used. Throw in a few little annoyances around theming flatpak apps and I just stopped caring. Especially since many of the apps I use every day (firefox, VS code, discord, etc) aren't ever gonna match anyway...
I don’t have a link but Firefox actually has a pretty decent Adwaita theme if that’s of interest. If I’m not mistaken some distros even install it by default.
Your mention of custom themes specifically triggered a thought, and that’s that perhaps DEs should be including several first-party themes right out of the box, helping reduce the need for third party themes in the first place. If executed well, the overhead for maintaining the themes would be low and it’d make a lot of users happy.
In the interests of education, I think it's worth making sure everyone has seen Revolution OS. It is a documentary about Linux made up of interviews with all of the key players in the original open source computing movement. It's about twenty years old now, so it is blissfully devoid of any modern Linux malarkey - instead it gives you the foundational mindset that drove Linux's creation, from the people who created that mindset. Without knowing that mindset, it's a bit difficult for a layman to make any sense of the pedantic arguments and nitpicking that's so common in the open source world.
There's also the three part Triumph of the Nerds documentary (and its sequel) that covers the Apple / IBM / Microsoft era that preceded Linux crashing the party. If you are new to both of these, you might want to start with this one first and finish up with Revolution OS.
These are useful since they cut through all the modern opinionated (uninformed and mostly bs imo) takes on the world of computers and go directly to the original sources. If you contrast the mindsets in the people featured in these documentaries it's easy to see where the friction comes from, and the source of these various pedantic issues becomes clear. These are must watch material for anyone interested in computer history and they are damn interesting. If you're a masochist you might also enjoy a trip through The Computer Chronicles.
Disclosure: I'm a fan of Flatpaks, and I like Snaps too, if less enthusiastically. I even use brew for some things.
The skinny on Snaps:
Snaps are system-level package containers. The biggest complaint I've seen is that there can be performance issues with them, but in the limited time I've used them across laptops I haven't seen any. I don't like Canonical swapping debs out for snaps, but it is a viable solution to some packaging problems, like when Mozilla pushed for Ubuntu to use a Firefox snap (then went back on it and built their own repos).
"Snaps are a proprietary distribution format" gets quoted a lot. I believe I read Martin Wimpress (Ubuntu MATE lead and former Canonical dev) address that: Snaps only need an HTTPS server and a few basic files to distribute packages. Snapcraft.io is maintained by Canonical the way it is to provide an enterprise-friendly hub for packages. I can understand not wanting to use snaps on this basis, and only wish to clear the air.
Package Managers vs Snaps and Flatpaks
System packages are and will always be king. They are how you build your system.
I think the best priority order for packages is: system > Snap/Flatpak > third-party distro repo.
Snaps and Flatpaks are generally distributed from Snapcraft and Flathub, but can be published by devs directly as well. They can also provide updated software your distro has a snowflake's chance in hell of shipping itself (Debian getting the latest Firefox in its repo, for example). These formats are useful as supplements to distro packages, and even potentially useful as your entire package set overlaid on a relatively minimal system (see: Fedora Atomic, EndlessOS).
As an aside, I mostly use Flatpaks, except for VSCode and my music suite (Renoise, Bitwig, SuperCollider, and VSTs in an Arch distrobox), and think they're phenomenal. Most aren't from the developers themselves, but Ubuntu users have an advantage in that they can pick and choose easily between official snaps and Flatpaks, and get as much maintained by upstream as possible.
Nvidia drivers: always proprietary for 3D (only the basic 2D X driver was ever open). Nvidia didn't cooperate with the rest of the Linux graphics stack, tried to do its own graphics handling for Wayland (EGLStreams instead of the GBM path everyone else used), and eventually capitulated on most fronts, releasing open-source kernel modules in 2022. Easy story.
elementaryOS Controversy
I skimmed and did not see any comments explaining elementaryOS (eOS) controversy, so will explain that one.
What is eOS?
eOS is a distro based on Ubuntu. The biggest thing about eOS is that it tried to do a macOS feel with its own desktop environment, Pantheon.
Disclaimer
I am writing about these controversies from memory as they unfolded a few years ago. I fact-checked names, but did not go back and re-read the original blog posts from the people involved (I did read them as the situation was occurring).
What is the controversy?
There are probably two big ones, plus a few smaller ones.
The biggest one (and the one that I think has merit) is that they poorly handled declining finances. At the time, both Danielle Fore and Cassidy Blaede were working full-time on it and receiving a salary. Due to declining finances, Blaede stopped taking a salary and got a job elsewhere, but wanted to stay involved with eOS. Fore thought that since Blaede was no longer dedicating full-time hours, Blaede should not be as involved. Blaede was frustrated that his attempts to save the company by going from full-time employee to heavily involved volunteer resulted in him being pushed out, and so fully left.
Danielle Fore is trans, and came out after eOS started. Depending on your social circles, this may have come up as a "controversy". I am listing it here not as support of that framing, but to provide context.
Other smaller controversies might be about technical features, slow turnaround from an Ubuntu LTS release to eOS adopting it (sometimes a year or two), or other design philosophies that parts of the broader Linux community did not like.
My opinions
In my opinion, the first one is a worthy controversy. The second is stupid. The third, especially considering they were a small team, is understandable. Also, at the time that I was following eOS, they did get the newer versions out before the previous Ubuntu LTS was no longer supported.
It’s more on the minor side, but there was also the thing where on the elementary site, to download the installer ISO for free you had to ratchet down the contribution slider to $0. That ruffled some feathers at debut and continued to sit badly with some people over time.
I never quite understood why it caused an upset. Projects like distributions with bespoke desktop environments are expensive, and money doesn't just materialize out of thin air. Corporate sponsors don't become a factor until you've hit a certain critical mass, and so funding comes down to contributions from individuals, which are few and far between. That being the case, they needed to find a way to encourage people to contribute, and it's been shown that just putting a "donate" button next to the download button does next to nothing, so they tried the slider instead.
Supposedly something was posted on their blog around that time suggesting folks that didn’t donate were leeches or something. More bad PR / community relations than free beer vs free speech drama, as I recall anyway.
More "lol I'm donating to Debian, not this unfinished desktop environment that's a fork of an old Ubuntu LTS" and less "how dare a FOSS project ask for donations!"
Random aside: elementary OS and its Pantheon DE are also circa 2011.
Noise: Deepin 12.12 was the first version with Deepin DE instead of Gnome. It was also when they started emphasizing English version availability on DistroWatch announcements. 12.12 dropped in mid-2013. The elementaryOS payment drama was about 18 months later and comments pushing folks to Deepin instead of elementary weren’t uncommon.
That's how I recall it as well. It's not that the project was asking for financial support, it's that it was a little presumptuous to default to a $20 (IIRC?) donation on the download page for an aging Ubuntu fork with a custom desktop environment that wasn't even close to usable as a daily driver. Pantheon was somewhat promising when it launched in 2011 but still barely an MVP at that stage, and they were using dark patterns to make you pay retail software price for software that doesn't work.
I'm phrasing it this way because they did. It wasn't "please donate to help our developers get to the goal of a finished and fantastic software experience", it was "click here to buy ElementaryOS". It gave you to understand you were buying a retail product.
That rubbed a lot of people, including me, the wrong way. And here we are, nearly 15 years later, and I would argue eOS is still not what I'd consider ready for prime time, and they're still asking for money--but at least now it's a bit less dark-patterny.
I recognize most of these (though I don't know enough details to sum most of them up for you), but I'm not really sure what Linux-specific drama there is with Unity. The drama with the game engine Unity has been largely unrelated to Linux afaik? Unless there's some other thing called Unity in the Linux ecosystem that I haven't heard of that also has drama (which is a possibility, tbf).
Unity was the Canonical-developed desktop environment for Ubuntu before they switched back to GNOME. (It was open source, but tightly controlled by Canonical.)
Ah okay, I knew people didn't like GNOME but I didn't know the name of its predecessor
GNOME predates Unity, actually; Ubuntu went back to GNOME in 2017, after about seven years of Unity, partly because the reception sucked so much.
It’s unfortunate, because while Unity wasn’t perfect it did at least a couple things better than GNOME.
First, instead of trying to delete the menubar or sweep it into a messy hamburger menu like GNOME does, it kept the menubar around in a way that took up less screen real estate than it does in most Windows-style apps, which was great for small-screened devices.
Second, Unity’s HUD, which was like Spotlight or Alfred on macOS (fuzzy search launcher) except it also surfaced every item in the frontmost window’s menubar, making them all quickly keyboard-accessible even if they didn’t have hotkeys.
No well-maintained Linux DE has anything like either these days, and the closest thing you can find to Unity’s HUD is actually on macOS now (the system standard Help menu, which can be opened with a key shortcut, can search menu items there).
That does sound nice. I had to actually look up what Unity looks like because I had never seen it before, and it honestly looks pretty good. It's a shame it had issues, because I would have preferred it to GNOME (though I still love KDE too much for either).
Ugh, when it was first added I didn't realize it and completely borked my system by force-installing Unity via apt. I thought I was installing the engine, and only found out it was the desktop environment after nothing worked outside the terminal anymore.
While not exactly specific to Linux the Unity3D drama and the subsequent developers who switched engines because of it did draw a lot of attention to Godot which being FOSS and mostly developed on Linux was a nice side effect!
True, I do think it was generally a good thing for FOSS that more people are switching to Godot! But I wouldn't classify that as "Linux drama" probably lol
There's a pretty solid argument against Flatpaks hosted here. My main problem with Flatpaks and Snaps is that they are hard to work with on a low level. I only see the compatibility as a worthwhile tradeoff for large applications like web browsers. Having an Emacs flatpak, for example, is silly. Whenever possible, I strongly prefer AppImages over Flatpaks and Snaps, although the three have somewhat different use cases.
The controversies themselves have been explained pretty well, one thing I'd like to add is that for basically every single new thing in the Linux world there'll be a bunch of extremely vocal insane people who think that it's all a conspiracy from Microsoft to make Linux worse so that everyone uses Windows.