The first principle is compatibility. Compatibility — or, if you prefer, stability — is the idea that, in a program, the meaning of a name should not change over time. If a name meant one thing last year, it should mean the same thing this year and next year.
[...]
The second principle is repeatability for program builds. By repeatability I mean that when you are building a specific version of a package, the build should decide which dependency versions to use in a way that’s repeatable, that doesn’t change over time. My build today should match your build of my code tomorrow and any other programmer’s build next year. Most package management systems don’t make that guarantee.
[...]
The third principle is cooperation. We often talk about “the Go community” and “the Go open source ecosystem.” The words community and ecosystem emphasize that all our work is interconnected, that we’re building on—depending on—each other’s contributions. The goal is one unified system that works as a coherent whole. The opposite, what we want to avoid, is an ecosystem that is fragmented, split into groups of packages that can’t work with each other.
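As a concrete sketch of the first two principles (the module paths and versions below are made up, not taken from the article): a Go module's go.mod records exactly which dependency versions a build will use, and those choices don't change until the file itself is changed.

    module example.com/myapp

    go 1.21

    require (
        example.com/c v1.4.0 // stays at v1.4.0 until this file says otherwise
        example.com/d v1.5.2 // a later d v1.6.0 is NOT picked up automatically
    )

A companion go.sum file records checksums of the downloaded modules, so a repeated build also verifies it fetched byte-for-byte the same code.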
It's a nice article, but I have the same feelings I always do about Go - it feels like they're reinventing the wheel, sometimes arriving at the same answer we already had, and sometimes failing to address the reasons the wheel was built the way it is.
For example: point 1 in the article boils down to "put the major version number in the package name", which is what Linux distributions have been doing for years (see: libpng12 and libpng16 are packaged separately on Ubuntu, an example which I ran into recently).
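In Go modules the same idea shows up as "semantic import versioning": from v2 onwards the major version is spelled out in the module path, so two majors can live side by side in one build, much like libpng12 and libpng16 coexisting on one system (the paths below are hypothetical):

    // go.mod of the library's v2 line
    module example.com/libfoo/v2

    // a consumer can import both major versions at once
    import (
        foov1 "example.com/libfoo"
        foov2 "example.com/libfoo/v2"
    )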
Also:
The repeatable builds in Go modules mean that a buggy D 1.6 won’t be picked up until users explicitly ask to upgrade. That creates time for C’s author and D’s author to cooperate on a real solution.
This fails to note that, probably, C's author also won't notice that C is broken with D 1.6, because why would they? If they are not keeping perfectly up to date with their dependencies, they won't notice any bugs in them. For a "feature-complete" program, it seems unlikely that a maintainer will continue to release a new version every time a dependency updates.
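To be fair, the checking itself is cheap once somebody bothers to do it; with a hypothetical dependency example.com/d:

    go list -m -u all             # show dependencies that have newer versions available
    go get example.com/d@v1.6.0   # opt in to the new version
    go test ./...                 # and see whether anything broke

But whether C's author ever runs that is a social problem, not a technical one.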
Maybe this is the difference between "software development" and "software engineering", I suppose, though really it seems more like the difference between FOSS development and paid work.
I think the Go team often comes up with an approach that's a little better by taking some old ideas and standardizing them. Don't underestimate the benefits of standardization. For example, there are many code formatters out there, but Go pioneered standardizing on one code format, and now many languages do that.
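And because the formatter ships with the toolchain, for any Go codebase the whole formatting debate collapses into one command:

    gofmt -w .      # rewrite every .go file under the current directory in the canonical style
    go fmt ./...    # the same thing, invoked through the go tool

No config file, no style arguments.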
The problem with doing things by hand is that there's no consistency between projects. C doesn't even have a standard package distribution system, so often when creating a new project, you have to download source code for each library and add it to your project by hand. Java didn't start out with one either, though at least we could copy jars around.
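Compare that with Go, where pulling a library into a project is one command against a single shared namespace (the module path here is hypothetical):

    go get example.com/somelib    # fetches it and records it in go.mod and go.sum

and then import "example.com/somelib" in the source, with no copying of files between projects.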
OS-level distributions (like Linux distros) are convenient but lock you into an OS and often provide very old versions of libraries.
The comparisons in the article are to other package distribution systems that are pretty good but still have some problems.
From the article:
[...]
[...]
I'm always super impressed by the knowledge and written communication of the Golang core team when I come across it.