9 votes

Statically linking libraries is arguably better than dynamic linking

5 comments

  1. skybrian

    I'm happy to see Drew DeVault posting data and not just opinions. Static linking has worked well for Google and mostly-static linking works well for Go, with the caveat that you need to have the source and be able to rebuild the binary easily. Google is organized enough to do this but many other organizations aren't.

    I am not sure why it has to be one or the other, though. It might be that dynamic linking is currently overused, but it makes sense for certain heavily-used OS libraries that you want to be able to patch to fix security issues, while not making sense for the long tail.

    Although, an alternative to linking is to have some kind of network-like connection using a Unix pipe or shared-memory interface.
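
    To make that alternative concrete: instead of linking a library into your address space, you talk to it over a socket. A minimal sketch of the idea, with a toy request/response "service" standing in for a real library (the reversal protocol here is invented purely for illustration):

```python
import socket
import threading

def serve(conn: socket.socket) -> None:
    # Toy "library as a service": reverse whatever bytes arrive.
    data = conn.recv(1024)
    conn.sendall(data[::-1])
    conn.close()

# socketpair() stands in for a named Unix domain socket on the filesystem;
# a real setup would bind/connect an AF_UNIX address instead.
client, server = socket.socketpair()
t = threading.Thread(target=serve, args=(server,))
t.start()

client.sendall(b"static")
response = client.recv(1024)  # b"citats"
t.join()
client.close()
```

    The trade-off is the same one the article is about: the "library" can be patched and restarted independently of its callers, at the cost of serialization and a context switch per call.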

    5 votes
  2. Adys

    Excellent post from Drew. I was on the edge already but I think this post, with the data attached to it, standalone pushes me pretty firmly into the static linking camp.

    2 votes
  3. [3]
    whbboyd

    Most of the arguments in this article are wrong. This comment on lobste.rs rebuts it well. (I started researching and writing up virtually the same comment, but, well, it would have been virtually the same.) As an added bonus, Drew responds, in case you want to follow the discussion. I don't think he does a good job of rebutting the rebuttals, but you can judge for yourself.

    The one thing I'll add is, regarding performance: dynamic loading is significantly slower than static loading due to the linking time, but static loading is already fairly slow, so anyone trying to load either kind of binary in a performance-sensitive context will get bit either way.

    2 votes
    1. [2]
      Adys

      Are we reading the same thing? That link seems to say nothing about his arguments being wrong. At best, it claims two of them are misleading, and Drew's reply is pretty satisfying to me.

      1. whbboyd

        What?

        > It’s unclear to me that this supports a general conclusion that shared libraries aren’t.

        i.e., it is incorrect to conclude that shared libraries are "not really" shared when the median shared-ness is 4 and the mean is 50.
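
        That median/mean gap is exactly the signature of a heavy-tailed distribution. A sketch with made-up dependent counts (not Drew's data) shows how a median of 4 coexists with a mean of 50 when one libc-like outlier is linked by nearly everything:

```python
from statistics import mean, median

# Hypothetical dependents-per-shared-library counts (invented for
# illustration): most libraries have a handful of users, while one
# libc-like outlier is linked by hundreds of binaries.
dependents = [1, 2, 3, 4, 4, 5, 6, 8, 417]

assert median(dependents) == 4   # the "typical" library is barely shared
assert mean(dependents) == 50    # the outlier drags the mean way up
```

        So a low median does not show that sharing is rare overall; the libraries that matter most are shared the most.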

        > I don’t really want to get into measuring the sizes of these symbols, particularly since I can’t get them from the source, but that sounds like at least a few full copies of libc to me.

        i.e., symbol count is only vaguely correlated with code size, and so it is incorrect to conclude that static binaries are "not really" meaningfully larger than their dynamic counterparts on the basis of symbol counts.

        (printf is an example of a single symbol with a very outsize amount of code behind it. And hey, literally everyone uses that one!)

        > the bigger problem here is that it’ll be harder to find the dependents of statically linked libraries

        i.e., it is incorrect to conclude that security updates of statically-linked binaries are "not really" unmanageable—indeed, they're impossible in general! If you restrict yourself to applications distributed by your vendor, you've completely eliminated one of the major benefits to static linking, i.e. the ability to run software for which a matching version of a given library is not easily available.

        (Also, Drew's computation of update size is incorrect. 3.9GB would be the update if you left all your software vulnerable until an end-of-year wrap-up. If you update as fixes are released—something which you must do if you're taking any action on security at all—you'll download many of those binaries multiple times, as various fixes to their dependencies are released. Additionally, failing to compare to the size of shipping dynamic library updates also kills this conclusion. If, by my best guess/ass-pull, updating just the vulnerable libraries would take a few hundred megabytes, that difference is immense no matter the time period it's over.)
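
        The double-counting point can be made concrete with invented numbers (mine, not Drew's). With static linking, every dependency fix forces a re-download of each affected binary, so the per-fix total is much larger than a one-time year-end rebuild:

```python
# Hypothetical app sizes and fix counts, chosen only to illustrate the
# arithmetic: name -> (binary size in MB, dependency fixes this year).
binaries = {
    "browser": (150, 6),
    "editor": (40, 3),
    "shell": (5, 2),
}

# Year-end wrap-up: each affected binary downloaded once.
year_end_total = sum(size for size, _ in binaries.values())

# Updating as fixes are released: each binary re-downloaded per fix.
as_released_total = sum(size * fixes for size, fixes in binaries.values())

print(year_end_total)     # 195 MB
print(as_released_total)  # 1030 MB
```

        A dynamic-linking setup, by contrast, would ship only the patched libraries each time, which is why the comparison Drew omits matters.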

        4 votes