34 votes

Ditching Docker for Local Development

40 comments

  1. [2]
    LGUG2Z
    Link

    Some folks on the internet were interested in how I had managed to ditch Docker for local development. This is a slightly overdue write up on how I typically do things now with Nix, Overmind and Just.
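
    For readers who haven't clicked through: the rough shape of such a setup is a Nix shell that provides the project's tooling (Overmind to run the processes, Just for common tasks). A minimal sketch of that kind of shell, illustrative only and not the article's actual configuration (a real setup would pin an exact nixpkgs revision):

      # shell.nix - hypothetical minimal dev shell
      let
        pkgs = import <nixpkgs> { };
      in
      pkgs.mkShell {
        packages = [
          pkgs.overmind   # runs the processes declared in a Procfile
          pkgs.just       # task runner for the project's common commands
          pkgs.postgresql # example of a service dependency made available locally
        ];
      }

    With a pinned nixpkgs, nix-shell (or a direnv hook) gives every developer the same toolchain, and overmind start brings up the Procfile processes without any containers involved.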

    18 votes
    1. LGUG2Z
      Link Parent

      Thanks to everyone for all the discussion here. I really enjoyed this thread and ended up linking it at the bottom of the article.

      This article got way more attention (and hate, in places outside of Tildes 😅) than I had thought possible for a simple tutorial, so I'm gonna call it a day for replying to comments and go out and touch grass for the weekend!

      8 votes
  2. [3]
    geniusraunchyassman
    Link

    Something I haven't seen here that I may have missed:

    Using docker abstracts away the inner workings of applications and their foundations. There are many applications that just say "Oh, use the docker image".

    I am not so easily pleased. I don't want the app spoon-fed to me and then have no idea what is going on when it breaks.

    Lastly, adding to the point above, working in a troubleshooting field, there is a dwindling number of people who know how to (and this is just a small example with Django):

    - Set up a Linux environment at all
    - Set up NGINX
    - Set up gunicorn to run the app
    - Set up a reverse proxy in NGINX
    - Set up PostgreSQL with proper grants and credentials
    - Troubleshoot any issues that arise during or after this setup.

    I'm not trying to pull a "kids these days", but I believe pulling docker images should be a factor of convenience once you have an understanding of the underlying concepts and what it takes to "get there".

    11 votes
    1. underdog
      Link Parent

      I agree with you partially. It's indeed important to understand the foundations of whatever you are building. But, to some extent, being able to not care about what's going on under the hood is what keeps society evolving. Every generation builds on top of the previous generation's work. For example, you really don't have to know how to manage memory blocks or understand mnemonics in order to build software anymore. Obviously, the more complex the system and the more expertise you have, the more it makes sense to understand all the moving parts, but becoming an expert is not always the goal.

      That doesn't only apply to software engineering.

      5 votes
    2. skybrian
      Link Parent

      It's useful if you want to deploy the same software on multiple machines. You do the install once and you can be sure each machine gets exactly the same software installed. Also, it makes rollbacks easy.

      It's a similar benefit to building a binary once and installing it multiple times, instead of building a program from source on each machine.

      You're right to be wary of installing binary files that you don't know how to build. We do install binary files we can't build ourselves all the time, but the open source ideal is to be able to build everything from source. Ideally you should have the source code and a build system that's easy to understand, so you can make changes. Also, it's important for builds to be reproducible.

      But a standard binary format has benefits that are somewhat independent of how you build the binaries. Different people can do the builds and the installs.

      3 votes
  3. [27]
    unkz
    Link

    It is a little unclear as to why though. Aren't containers basically a good thing? Particularly if you use ECS or k8s for deployment.

    9 votes
    1. [20]
      LGUG2Z
      Link Parent

      Thanks for asking this question! I may write a more complete follow-up post on this another day.

      Some high level points on the "why":

      • Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it's nice not to have to worry about a docker build command in the on-boarding docs failing inexplicably (from the POV of the regular joe developer) from one day to the next

      • Cost: Docker licenses for even small companies now cost $9/user/month (minimum of 5 seats required) - this is very steep for something that doesn't guarantee reproducibility and has poor performance to boot (see below)

      • Performance: Docker performance on macOS (and Windows), especially storage mount performance remains abysmal; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default

      I think it's also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don't really apply to the latter.

      20 votes
      1. [6]
        unkz
        Link Parent

        How is this more reproducible than a docker build?

        Is it not the case that Docker is free for commercial use unless you have 250 employees or $10m in revenue?

        I guess I really only use docker on Linux, so maybe this last issue makes some sense.

        I don't know if I agree about separating dev and deployment though. Part of why I use docker is I get the same environment in dev as prod, which would not be the case if I used docker for deployment and nix for development.

        15 votes
        1. [2]
          xphade
          Link Parent

          Is it not the case that Docker is free for commercial use unless you have 250 employees or $10m in revenue?

          And I think this is only the case if you use Docker Desktop. Docker Engine (which is the only thing we're using in my team) is free as far as I know.

          11 votes
          1. buzziebee
            Link Parent

            Yeah I don't even use docker desktop. Compose and the cli are more than good enough for organising containers locally. It feels to me (though it may be an unfair assessment) like the objections to using docker are more of a personal preference thing and the case against it is being built with justifications based on that preference.

            In my mind, if you build and deploy images you should really be working with them locally. If they are flaky locally then they will be flaky in production and you should fix that. It looks like OP is advocating for deploying binaries and using systemd to run them so I guess it doesn't apply to them.

            If that setup works for them great and I'm glad they've shared the info. I haven't really experienced any of these downsides that would mean I would benefit from manually configuring environments and dependencies over using containers.

            2 votes
        2. takeda
          Link Parent

          Docker's solution to "it works on my computer" is to bundle the computer with the application, while Nix requires you to explicitly list all dependencies, and the build happens in a sandbox (so external stuff doesn't interfere).

          It is essentially the difference between a file describing how to build your application from scratch vs. bundling an image with it.
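
          As a rough illustration of what "explicitly list all dependencies" means in practice, a hypothetical derivation might look like this (the package, its inputs, and the build commands are all illustrative, not from the article):

            # default.nix - hypothetical package; every build-time dependency is named explicitly
            { pkgs ? import <nixpkgs> { } }:

            pkgs.stdenv.mkDerivation {
              pname = "hello-app";
              version = "0.1.0";
              src = ./.;

              # only what is listed here is visible inside the sandboxed build
              nativeBuildInputs = [ pkgs.pkg-config ];
              buildInputs = [ pkgs.openssl ];

              buildPhase = ''
                cc -o hello-app main.c $(pkg-config --cflags --libs openssl)
              '';

              installPhase = ''
                mkdir -p $out/bin
                cp hello-app $out/bin/
              '';
            }

          Anything not listed (stray system libraries, network access during the build) simply isn't there, which is what forces the description to be complete rather than "whatever happened to be on the machine".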

          10 votes
        3. [2]
          LGUG2Z
          Link Parent

          This is probably the best explanation I've seen of how Nix achieves reproducibility; it's worth the hour runtime. This talk also goes quite deep into the Docker model.

          Is it not the case that Docker is free for commercial use unless you have 250 employees or $10m in revenue?

          I can count on one hand the number of people I know employed in software development who work for a company that would fall under this free tier. 😅 Generally software companies that fall under this provision are aiming to not fall under this provision as soon as possible (or at least before their funding runs out).

          I guess I really only use docker on Linux, so maybe this last issue makes some sense.

          This reminds me of another point; the experience of sharing Docker-based development environments between Linux and macOS hosts is also quite painful, especially when dealing with dynamic languages (Node, Python, etc.) which are known for having popular dependencies that build native, platform and architecture-specific libraries. I never want to see another node-gyp error again!

          9 votes
          1. aphoenix
            Link Parent

            You can +1 to the number of people you know that fall under the free tier; my company is small, but we are not looking to cross the 250 employee mark (we would have to hire 240 people or so). I'd love to cross the $10M revenue though!

            12 votes
      2. [10]
        petrichor
        Link Parent

        Docker builds are not reproducible

        Can you elaborate on this? While it certainly is true that you can fuck up your Docker environment, hard, and then have weird networking issues or filesystem mount issues or what not, I find Docker to be very reasonably reproducible for everything within the container - and networking problems are significantly cut down by it. I'm curious if environment failures are what you mean, or something else.

        I am also curious if Nix (or extensions to Nix) can provide for cross-platform deployments.

        3 votes
        1. [9]
          LGUG2Z
          Link Parent
          • Exemplary

          I was trying to formulate a clear and concise example of this lack of reproducibility but struggled. 😅

          I think this comment from HN does a decent job:

          Dockerfiles which just pull packages from distribution repositories are not reproducible in the same way that Nix expressions are.

          Rebuilding the Dockerfile will give you different results if the packages in the distribution repositories change.

          A Nix expression specifies the entire tree of dependencies, and can be built from scratch anywhere at any time and get the same result.

          So basically, whenever you do an apt-get (or similar command) in a Dockerfile, reproducibility is lost.

          However, if you carefully craft a Dockerfile that doesn't make any calls to package managers, and doesn't inherit from any base images that make calls to package managers, you can have a reproducible container image. However, this is not how the Docker ecosystem works or what the Dockerfile format is set up to accommodate.

          On the other hand, if you use a Nix expression to create an OCI container image tarball (which can be loaded into Docker), this kind of container image is completely reproducible. This is a really good article on how building fully reproducible OCI containers with Nix works; I've linked to an anchor a fair bit down the page to the juiciest part, but the whole thing is worth a read if you find the topic interesting.
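
          For concreteness, a sketch of that approach using nixpkgs' dockerTools (names are illustrative; this is not the exact code from the linked article):

            # image.nix - hypothetical OCI image built with Nix instead of a Dockerfile
            { pkgs ? import <nixpkgs> { } }:

            pkgs.dockerTools.buildLayeredImage {
              name = "my-service"; # illustrative image name
              tag = "latest";
              contents = [ pkgs.redis ]; # everything inside comes from Nix packages
              config.Cmd = [ "${pkgs.redis}/bin/redis-server" ];
            }

          Building that produces a tarball you can hand to docker load; because every input is an exact Nix store path rather than "whatever apt had that day", rebuilding it later should give you the same image.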

          I am also curious if Nix (or extensions to Nix) can provide for cross-platform deployments.

          Can you elaborate on this a bit more? Does platform here mean the underlying OS platform, or the type of deployment abstraction running on top of an OS (eg. Kubernetes, Nomad, Systemd etc.)? I have experience deploying binaries built with Nix to Linux servers (running on bare metal and managed with systemd) and also building containers to deploy on top of Kubernetes.

          10 votes
          1. [6]
            Greg
            Link Parent

            However, if you carefully craft a Dockerfile that doesn't make any calls to package managers, and doesn't inherit from any base images that make calls to package managers, you can have a reproducible container image. However, this is not how the Docker ecosystem works or what the Dockerfile format is set up to accommodate.

            I may well be misunderstanding, but isn't this just a choice of where you draw the boundary between your images?

            I'd expect a versioned environment image that does all of the OS level stuff like apt-get - this build step doesn't need to be reproducible per se because the act of building the image is locking the dependencies. You worry about image version, not individual dependency versions, and when it's rebuilt it's a clear opt-in change.

            After that you pull the environment image for dev, load your application code into it to work on, and then when the PR is merged the build pipeline creates a finalised application image based on the same environment image and the updated code. I guess what I'm saying is I'm not sure I see why docker build needs to be reproducible if it's only run on the parts that change anyway?

            4 votes
            1. [2]
              spit-evil-olive-tips
              (edited)
              Link Parent

              this build step doesn't need to be reproducible per se because the act of building the image is locking the dependencies. You worry about image version, not individual dependency versions, and when it's rebuilt it's a clear opt-in change.

              not necessarily. for a concrete example:

              there's a Dockerfile for the foo-service that has RUN apt-get install pkg-a pkg-b pkg-c as one of its build steps.

              I'm a new hire on the team that maintains the foo-service and its Docker image. I get assigned a nice, easy ramp-up task that requires adding pkg-d to that list, as well as writing some code that will make use of that newly installed package to do something.

              the Docker build succeeds...but in the resulting image, some unrelated functionality is broken. and it turns out it's broken because my change to add D also inadvertently upgraded packages A, B, and C. and one of those upgrades turns out to have some incompatibility with our code.

              opting-in to a rebuild is not an all-or-nothing question. yes, I opted-in to rebuilding the image, but I didn't opt-in to each of those package upgrades. I just wanted to add D.

              (or, rather than adding pkg-d to the Docker image, this same thing could happen if all I did was modify a line in the Dockerfile before the apt-get install step, because that sort of change invalidates all subsequent cached layers)


              for another example of how this can break down:

              we have a CI server that builds these Docker images and pushes them to whatever registry we use.

              one day, the build server dies. say the EC2 instance it was running on crashed, or something like that that's outside of our control.

              no problem, our build tools team was smart enough to have the build server deployed through Terraform or some other IaC system. so spinning up a replacement build server is easy.

              however...the old build server had a nice big cache of Docker image layers. the new server doesn't, it starts off with an empty cache.

              ideally, this should only be a one-time performance problem - the first builds that run on the new server will take a bit longer than before.

              except, the dependencies in the Dockerfile weren't pinned. so when it goes to build the image, we have that same inadvertent upgrade of packages A, B and C.

              a cache invalidation has thus been magnified into a correctness problem.

              and that build tools team might be a separate team entirely from the team working on foo-service. when they rebuilt the CI server, they didn't think to give a heads-up to every team that uses the build server. that means as someone working on the foo-service team, I have no idea that the build cache invalidation took place.

              so, a nightly build or whatever gets deployed to a test environment, and it's buggy because of the inadvertent upgrade. if we do continuous deployment and our automated tests don't catch the bug, maybe it even gets deployed to production.

              and now I'm tracking down a bug that seems like it should be "impossible", because no one on my team has touched the relevant code recently.

              if you want to make this even worse, imagine there's two build servers, each with its own cache, and only one of them failed and needed to be rebuilt. if each build runs on a randomly-selected build server, you now flip a coin on every single CI build to determine if you get a working image or a buggy one.

              4 votes
              1. Greg
                (edited)
                Link Parent

                First example totally makes sense, and now that I'm looking at it as saving effort rather than increasing safety that's definitely clearer to me (edit: although to be fair less effort does also mean fewer things that might be missed, so I guess it actually does both!). My mindset was that adding package D could inherently be expected to interact with the installation state as a whole, but yeah, it's actually quite fair to say that life is easier when that's kept to the bare necessary minimum.

                The second one, part of me wants to say that letting the cache state have that impact is a misconfiguration in itself (for context, I was meaning that there would be an environment image and an output image: the Dockerfile for the actual application deploy would start with FROM foo-service-env:6fda48f or whatever, where all of the dependencies are provided as part of the environment image), but in reality I know for damn sure that it's unreasonable to expect every user of the tool to foresee and work around that kind of thing every time. I've sure as hell missed things a lot less subtle than that on plenty of occasions!

                1 vote
            2. [2]
              LGUG2Z
              (edited)
              Link Parent

              I'd expect a versioned environment image that does all of the OS level stuff like apt-get - this build step doesn't need to be reproducible per se because the act of building the image is locking the dependencies. You worry about image version, not individual dependency versions, and when it's rebuilt it's a clear opt-in change.

              You're definitely not misunderstanding, and this can and does work given the right number of people, but my experience has been that as the number of people grows (along with the scope of the product, breadth of features, etc.), this base layer needs to change more frequently. It feels like most npm installs that have a node-gyp step will require some obscure system dependency you hadn't thought of to be installed in order to compile the required native modules. 😅

              After that you pull the environment image for dev, load your application code into it to work on, and then when the PR is merged the build pipeline creates a finalised application image based on the same environment image and the updated code.

              Building this kind of image and caching is a great idea too to avoid local builds, but if it breaks on CI, it's still broken and it's still potentially blocking on-boarding of new joiners (though sometimes it may be alright for them to manually pull old revisions as a temporary measure). :/

              3 votes
              1. Greg
                Link Parent

                That makes a lot of sense - I'm used to working on smaller teams, but I can see that the overhead of changing the base image and working through any interconnected issues could stack up. That's actually a more compelling case for Nix than what I originally understood: it's not that there's a risk of things breaking unexpectedly or nondeterministically with (properly used) docker; both tools will achieve that exact same end goal. It's just that the workload of ensuring it can potentially be lower with Nix.

                Building this kind of image and caching is a great idea too to avoid local builds, but if it breaks on CI, it's still broken and it's still potentially blocking on-boarding of new joiners (though sometimes it may be alright for them to manually pull old revisions as a temporary measure). :/

                That one still strikes me as a tooling and process question to be honest. As I see it, everyone's work branch will have a known good environment image attached to it from main by definition - when a PR is opened the CI pipeline pulls the repo, merges the branch to main on the build server, and runs a build. If all tests on that image pass, it's eligible to become the new primary image after code review; if merging a given branch would create an image that doesn't pass CI, then it's blocked until the issue is fixed and never becomes the new main environment. There's no opportunity for a bad image to go anywhere beyond its own dev branch.

                This isn't to say I don't see potential value in Nix, now that I'm a little clearer; quite the opposite, I'm actually going to do a bit more reading and see if it has a useful place in our workflow. Just that I see its potential as solving problems more easily, rather than solving problems that previously weren't solved.

                3 votes
            3. flowerdance
              Link Parent

              I was quite confused by OP on some points as well. They mention that behaviour can change if the dependencies within the image have changed, but that's essentially what docker images are for. They provide a static base environment that you can always fall back on. They're called "base images" for a reason. There's also a stable version called a "golden image."

              2 votes
          2. [2]
            xk3
            Link Parent

            I've seen some Nix recipes before, and I wonder how much the reproducibility of both Docker and Nix has to do with "controlled vocabulary" / culture and the ability to be more opinionated, rather than with a specific limitation in either technology. I'm sure there is a way to create unreproducible builds within Nix by using "non-standard" or uncommon escape hatches, and that is kind of the point. The default way is more reproducible with Nix vs. the "unopinionated"-opinionated Docker.

            1 vote
      3. archevel
        Link Parent

        As an anecdote about reproducibility, at my current job we're using docker to build our .net application. We build the images and tag them with the git commit they are built from. These are then pushed to a repo in AWS ECR. The repo is set up with immutable tags, i.e. it won't accept an image tagged as XYZ if one already exists whose layers don't match. Now if we build the image multiple times from the same git hash we ought to get the same layers... and we do... but only if we build on the same day. If, on the other hand, we rebuild the image a day later, the images will differ!

        Now, I don't think this is really an issue with docker per se. It more likely stems from .net including something date-dependent in the build output. I don't think this would be solved by using Nix, unfortunately.

        Docker is a convenient way to package applications along with their dependencies so they can be deployed on some server. It CAN also be convenient when developing if you need multiple interacting applications to be up to test the systems in some way. However, for just development and in particular debugging it ends up being in the way rather than an aid. YMMV.

        3 votes
      4. [2]
        skybrian
        Link Parent

        I don't actually use Dockerfiles, but I had the general impression that it's more of an open standard now and people use a lot of other tools, many of them free? Is that not the case?

        Also, it seems like making a Dockerfile build reproducible would be a matter of picking the right build system to do the build? Sure, if your build doesn't pin versions, that's not going to be reproducible.

        1 vote
        1. LGUG2Z
          Link Parent

          I linked this article in response to another commenter which I think is worth the read if you're interested in how you could use a different build system to create a reproducible OCI container!

          1 vote
    2. [3]
      aphoenix
      Link Parent

      I agree. I feel like this addresses the easy question, "how?" and leaves out the complicated question of "why?". That said, it is a good writeup on "how".

      5 votes
      1. [2]
        CannibalisticApple
        Link Parent

        Sometimes you just need a good writeup on "how" with none of the "why". The number of times I've searched guides that have the first half dedicated to "why" and then barely go into the "how"...

        9 votes
        1. aphoenix
          (edited)
          Link Parent

          I think a good writeup on "how" can be super important, and as I said, this is a good one; it's well written and easy to follow. And sometimes people need a good writeup on "how" - if someone was looking to use overmind and just, then this is a great writeup on doing so.

          In some cases though - and I think this is one of those cases - the first inclination for many people is going to be "why would I do this thing?" when they encounter an article like this. So when we are given something on a link aggregator site, we need some context about why this might be of interest.

          @LGUG2Z did provide that in the comments, and I'm grateful.

          4 votes
    3. [3]
      Ganymede
      Link Parent

      As always it depends. It's an added layer of complexity that you should only choose if you understand the pros/cons and are intentionally making the choice for your specific use case.

      3 votes
      1. [2]
        stu2b50
        Link Parent

        Although if reducing complexity is the goal I'm not sure introducing Nix to the system is contributing to that.

        7 votes
        1. LGUG2Z
          Link Parent

          I'm at the point in my career where I've accepted that complexity can never really be reduced; its burden can be shifted from one place to another, but it is never really reduced.

          I've been thinking about your comment for a while, and I think you would do something like this in a company if your goal was to increase reliability.

          I think that shifting the complexity in this way is a reasonable trade-off for increasing reliability over an engineering org of 100s.

          8 votes
  4. [6]
    Beenrak
    Link

    I didn't really think anyone felt that docker was the best way to develop. It makes debugging far more complicated and introduces overhead.

    The reason docker is so heavily used is for deployment. It completely removes the complexity of making sure your dev environment and production environment are exactly the same. Plus it allows you to enforce strong and hard black box interfaces between different components.

    Do you feel you've lost any of this in your transition? What about the ease of setting up fully functional third party tools?

    5 votes
    1. [5]
      LGUG2Z
      Link Parent
      "Roughly" the same would be more accurate here; two Dockerfiles built days, weeks or months apart can produce subtly (or drastically) different results. Generally I eschew micro-services (if this...

      It completely removes the complexity of making sure your dev environment and production environment are exactly the same

      "Roughly" the same would be more accurate here; two Dockerfiles built days, weeks or months apart can produce subtly (or drastically) different results.

      Plus it allows you to enforce strong and hard black box interfaces between different components.

      Generally I eschew micro-services (if this is what is meant by "components") in favour of monolithic applications, so this doesn't really apply in my personal work. My day job however has (too) many microservices, and I don't think anything has been lost here in the transition.

      Previously, we were using dnsmasq and traefik in the context of Docker Compose to expose the public components under https://service.companyname.local local DNS endpoints, and letting those that would be internal-only communicate through the Docker network, taking their hostnames from environment variables (because they'd need to be configured separately for deployment on Kubernetes later).

      After migrating away from Docker Compose, it still looks conceptually the same; we have dnsmasq and caddy running locally (in a process, not a container this time), and all of the public-facing components are exposed locally on https://service.companyname.local endpoints. Services that will communicate on the private network inside the Kubernetes cluster in production environments communicate with each other over localhost in this local development setup instead of the previous Docker network.

      What about the ease of setting up fully functional third party tools?

      All of the usual suspects (mainly data stores, caches etc.) are just as easy to set up as with Docker in the context of a local development environment (sometimes even easier, because for custom stuff we can go directly to the documentation rather than having to reconcile the documentation with whatever additional hooks and shims are being inserted in the Dockerfile builds that are pushed to public registries).

      When it comes to setting up third party tools on a NixOS server, the experience is significantly easier and more ergonomic than working with Docker. You just look here, find the service you want and hit enable 99% of the time.
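
      For anyone unfamiliar with what "hit enable" means here, NixOS services are typically switched on with a line or two of configuration (a hypothetical fragment, not the author's actual config):

        # /etc/nixos/configuration.nix (fragment) - hypothetical example
        {
          services.postgresql.enable = true;
          services.caddy.enable = true;

          # further options are exposed the same way, e.g.:
          services.postgresql.enableTCPIP = true;
        }

      Running nixos-rebuild switch then installs the packages, writes the systemd units and starts the services.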

      4 votes
      1. [2]
        bolundxis
        Link Parent

        Why are you building your docker images so often? I'm no expert but my understanding is that you build the image once and use that after.

        If something changes in the config you build again; not everyone needs to build every time.

        1 vote
        1. LGUG2Z
          Link Parent

          It's not so much about building repeatedly but building over time.

          It sucks to have someone being onboarded who can't get started because something which worked a month ago, and that continues to work for others because they have locally cached layers, has broken just for them (and future new joiners).

          1 vote
      2. [2]
        Beenrak
        Link Parent

        So long as you are providing versions in your docker/compose file, what causes differences to arise in built images?

        1. LGUG2Z
          Link Parent

          Generally speaking, the changing state of upstream package managers called in the build steps is what causes differences to arise over time.

          This is less of a problem if you're just pulling a PostgreSQL container from DockerHub, but has the potential to be more pronounced if you are building your own development containers to execute your code within. Again, this second point varies with the complexity of your Dockerfiles as well.

  5. ButteredToast
    Link

    I'm something of a layman on the issue at this point since I haven't done serious web development in a decade (have been doing mobile development instead), so my thoughts may not count for much, but…

    On paper, Docker makes a lot of sense, but in my dabblings I've not liked how strongly married to Linux it is, primarily because of the performance issues and jank that this introduces when using it on macOS and Windows. It feels a bit like a half-baked solution, working well on servers (well, unless you want to run some flavor of BSD anyway) with the non-Linux desktop experience being passable at best.

    So it's interesting to read about attempts to seek out something better, even if I'm not likely to immediately use it. It feels good to see that things are still moving. Maybe by the time I get back into web dev in earnest something better will have supplanted Docker.

    2 votes
  6. BroiledBraniac
    Link

    Docker is great in prod, pain in the ass on local. Waste of your RAM. I also use Overmind.

    2 votes