Greetings folks, I wrote a tutorial on how to use asdf.vm to manage the dumpster fire that arises whenever one has to contribute to projects with very diverse stacks. It's been a highly debated topic, so I figured y'all might be interested :D
As usual, feel free to ask away!
Wow! I'm not really a dev (sysadmin/labber and infosec) so I don't normally need to contend with this, but man. I can't tell you how many times I've tried to install some tool from GitHub and tried installing modules and other components only for it to turn into a huge mess.
'asdf' looks like it will be super helpful; I'm absolutely adding it to my desktop Ansible script. Bonus for such a well written and visually supported tutorial!
> Wow! I'm not really a dev (sysadmin/labber and infosec)
Hey there! I'm a fellow infosec/automation/backend and product guy! Let me know if you have free cycles and want to collaborate on research; I've got some ideas :)
> so I don't normally need to contend with this, but man.
It's been crazy lately: I'm testing other people's environments and code, and without the setup described in the OP I'd have gone crazy by now.
> I can't tell you how many times I've tried to install some tool from GitHub and tried installing modules and other components only for it to turn into a huge mess.
An absolute shit show, I hear you.
On Reddit someone was complaining that asdf.vm environments cannot be deployed the way Docker containers can, and are therefore not useful. Well, if only every project actually came with a Dockerfile or docker-compose.yml...
> 'asdf' looks like it will be super helpful; I'm absolutely adding it to my desktop Ansible script.
Is your desktop Ansible script in a public repo somewhere? Curious to cross-check notes!
> Bonus for such a well written and visually supported tutorial
Thanks mate! I appreciate it; feel free to stay in the loop via the mailing list or socials. My resolution for this year is to be more active :D I love explaining stuff and teaching, so the good reception really motivates me, although the debate has been intense as hell.
This is interesting. I have to confess the idea of having so many versions of things installed gives me the heebie-jeebies. But if one must live in that environment, this seems like a useful tool.
I'm curious how often the tool dependencies (like library paths) get snarled up when switching environments. Does the tool only shim the binaries, or does it have a provision to set up environment variables as well?
One problem that I see is that the global config has the potential to hide dependencies. Have you considered the ability to add a "no-globals" option to the local config files, so that users must explicitly choose a version for that context? That way when the config gets committed, it carries all the version info explicitly. I can see this being the default behavior for configs inside version control, but maybe that is a bridge too far.
FWIW, I am all-in on containerized development, especially if the app itself will eventually be deployed in containers. For sure, there is a learning curve, but the ability to precisely control the environment seems worth the cost to me. For my dev setups, I like having `task` as the bootstrap dependency. From there, a `task setup-env` and a `task check-env` can check for or install run time dependencies (mainly things like docker, k3d, helm, etc). Then I usually have a `task build-dev-env` for the dev Dockerfile and `task run-dev-env` to launch a shell in it. The task file is a great resource for people on the team who don't understand the containerization to use it in a repeatable way. Debugging remote apps is one of the hardest parts of working this way, but debug servers running in the container are pretty well supported by things like VS Code.
I use this. I haven't encountered issues with libraries or runtime paths. It doesn't have a built-in provision for environment variables; everything goes through the shims. And I've never encountered issues with cleanup, since all the environment management goes through the shims and doesn't leak out into your shell.
There is the asdf direnv plugin, which allows you to set up environment vars, not using the shims, through an .envrc. I haven't encountered issues with cleanup here either, but this one seems more likely to cause issues. You can put arbitrary environment setup in the .envrc.
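To make the shim mechanism concrete, here is a simplified sketch of what an asdf-style shim does when you invoke a tool: walk up from the current directory until a .tool-versions pins the requested tool, and fall back to the global config otherwise. This is an illustration of the idea, not asdf's actual code.

```shell
#!/bin/sh
# simplified sketch of an asdf-style shim (illustrative, not asdf's real code)

# walk up from $PWD until a .tool-versions pins the requested tool
resolve_version() {
  tool="$1"
  dir="$PWD"
  while [ "$dir" != "/" ] && [ -n "$dir" ]; do
    if [ -f "$dir/.tool-versions" ]; then
      v=$(awk -v t="$tool" '$1 == t { print $2; exit }' "$dir/.tool-versions")
      if [ -n "$v" ]; then
        echo "$v"
        return 0
      fi
    fi
    dir=$(dirname "$dir")
  done
  # fall back to the global config, as asdf does
  awk -v t="$tool" '$1 == t { print $2; exit }' "$HOME/.tool-versions" 2>/dev/null
}
```

A real shim would then exec the matching binary from the version store; the point is that resolution happens per invocation, so nothing persists in (or leaks into) your shell session.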
> This is interesting.
I appreciate the feedback!
> I have to confess the idea of having so many versions of things installed gives me the heebie-jeebies.
You and me both, but it's the deck of cards I've been dealt :D
If that gives you the heebie-jeebies, check this comment on HackerNews out:
> Wish there were some CLI to speed up this process actually. Just cd-ing into a folder should pull everything down for you to run iex/irb/node/etc as if it was native but running through the container.
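Something like that wish can be approximated today with a small shell helper that generates per-command wrappers forwarding into a project container. Everything below is a hypothetical sketch: it assumes the repo ships a docker-compose.yml with a service named `app`, and that `./bin` is on your PATH (e.g. via direnv's `PATH_add bin`).

```shell
# hypothetical helper: generate a wrapper so `irb`, `iex`, `node`, etc. run
# through the project's container as if they were native binaries
containerize() {
  cmd="$1"
  mkdir -p ./bin
  cat > "./bin/$cmd" <<EOF
#!/bin/sh
# auto-generated: forward "$cmd" into the compose service "app"
exec docker compose run --rm app $cmd "\$@"
EOF
  chmod +x "./bin/$cmd"
}
```

After `containerize irb`, typing `irb` in that directory would start the REPL inside the container (assuming Docker and the compose file are in place).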
> But if one must live in that environment, this seems like a useful tool
I can't emphasize enough how many headaches I've solved or altogether prevented from arising by relying on this setup.
> I'm curious how often the tool dependencies (like library paths) get snarled up when switching environments.
To be honest, this has only ever happened to me a couple of times (two?) in five years, and it was due to brew and how opinionated and careless it is with its changes.
> Does the tool only shim the binaries, or does it have a provision to set up environment variables as well?
Each plugin is different, since each language runtime operates differently. For Java, for example, environment variables such as JAVA_HOME are set. I'll delve a bit deeper into asdf.vm internals in a future post.
> One problem that I see is that the global config has the potential to hide dependencies. Have you considered the ability to add a "no-globals" option to the local config files, so that users must explicitly choose a version for that context?
Wouldn't explicitly stating the desired version for the given local repo and pushing it suffice in this case? A no-globals option with known or customary expected values seems equivalent to a committed local .tool-versions, or am I missing something?
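For reference, a committed .tool-versions at the repo root pinning every tool explicitly might look like this (the versions and the Java naming are just illustrative examples):

```
# .tool-versions at the repo root -- versions are illustrative
python 3.13.1
nodejs 18.14.0
java temurin-17.0.5+8
```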
> I am all-in on containerized development, especially if the app itself will eventually be deployed in containers. For sure, there is a learning curve, but the ability to precisely control the environment seems worth the cost to me.
I certainly agree, with the big plus that by the time you have your environment setup properly sorted out you also have a Dockerfile ready for deployment. However, and this is a big issue, exploring the solution space of challenges and experimenting within Docker containers is a pain in the butt! For that alone I always want to set up a local, native development environment. Also because I feel that if I've set it up locally, in a reproducible manner, I truly understand it and can port it anywhere. Nevertheless, I do agree with you in principle.
> For my dev setups, I like having `task` as the bootstrap dependency.
Interesting, this is the second simpler alternative to make I've learned about today (the first being `just`).
> From there, a `task setup-env` and a `task check-env` can check for or install run time dependencies (mainly things like docker, k3d, helm, etc). Then I usually have a `task build-dev-env` for the dev Dockerfile and `task run-dev-env` to launch a shell in it. The task file is a great resource for people on the team who don't understand the containerization to use it in a repeatable way.
This sounds really cool, do you have one such specced-out task file public somewhere? I'd like to dive into it.
> Debugging remote apps is one of the hardest parts of working this way, but debug servers running in the container are pretty well supported by things like VS Code.
It wasn't always the case, but I do agree that they've come a long way. Nevertheless, nothing approaches the ease with which you can debug and continue down the dependency tree when your language's runtime environment is just yet another directory within ~/.asdf/language/version/package.
> Wouldn't explicitly stating the desired version for the given local repo and pushing it suffice in this case? A no-globals option with known or customary expected values seems equivalent to a committed local .tool-versions, or am I missing something?
The scenario I'm thinking of is:
1. A directory has its own asdf config set up for Java, so the local config captures the Java dependency. The config file gets pushed. (This is the correct normal flow, as you say.)
2. Someone adds a Python script that does some task, and it works correctly for them because they have Python 3.13 in their global config. They push the script but no local asdf config change.
3. You pull the change, but the script fails because your global config is on Python 3.8.
But if someone had set something like no-globals in the local config when setting things up, then in step 2 the person adding the script would have gotten an error message when trying to use Python without setting the version explicitly, and would have corrected it at the moment of the mistake.
Of course, failure to add the dependency is correctable, and doing it right the first time is something you can ask people to "always do". Maybe it's something you can catch in CI. And if it's a dependency that asdf is not yet shimming, this won't catch it. So maybe you just lump it in with all the weird corner cases, like calling python ../otherwise/whatever.py. (Can you tell I'm a stress tester by background?)
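A CI check along those lines could be as crude as grepping for tool invocations that .tool-versions doesn't pin. A hedged sketch (the file pattern and tool names are illustrative, and a real check would want smarter matching):

```shell
# sketch of a CI guard: fail when a tool is invoked somewhere in the repo
# but .tool-versions does not pin a version for it
check_pinned() {
  tool="$1"
  # naive detection: the tool name appears in any shell script in the repo
  if grep -rq --include='*.sh' "$tool" . \
     && ! grep -q "^$tool " .tool-versions 2>/dev/null; then
    echo "ERROR: $tool is used but not pinned in .tool-versions" >&2
    return 1
  fi
  return 0
}
```

In the Python scenario above, this would have failed the build the moment the script was pushed without a matching .tool-versions entry.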
> I certainly agree, with the big plus that by the time you have your environment setup properly sorted out you also have a Dockerfile ready for deployment. However, and this is a big issue, exploring the solution space of challenges and experimenting within Docker containers is a pain in the butt!
The way I usually do this is open up a new Dockerfile and a terminal, then do a docker run --rm -it debian:bookworm-slim /bin/bash. Then I just mess around installing things interactively, adding lines to my Dockerfile as I go to keep track of what I did. When I'm happy with the environment, I consolidate all the apt installs into a single Dockerfile command to reduce the image size and add any manually installed tools as steps.
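By the end, those notes typically consolidate into something like this (the package list here is illustrative, not from any particular project):

```dockerfile
# consolidated from the interactive session; package names are illustrative
FROM debian:bookworm-slim

# one apt layer instead of many, cleaned up, to keep the image small
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        build-essential \
        ca-certificates \
        curl \
        git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /work
CMD ["/bin/bash"]
```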
> This sounds really cool, do you have one such specced-out task file public somewhere?
I don't have a public example, but I can put a couple inline here.
First, a really simple one that just does the docker stuff. `run-env` gives you an interactive environment, and `automatic-task` can be used to invoke some executable, like `task automatic-task -- -some-args foo`. Note how in each command, some part of the local file system is mounted into the docker container so that things you do inside (like run a compiler) persist outside the container.
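A sketch of the shape of that simple taskfile (the image name and paths are illustrative, since the exact file didn't survive here; `{{.CLI_ARGS}}` is Task's built-in for arguments after `--`):

```yaml
version: '3'

vars:
  DEV_IMAGE: my-dev-env:latest

tasks:
  build-env:
    description: build the dev container image
    cmds:
      - docker build -t {{.DEV_IMAGE}} -f Dockerfile.dev .
  run-env:
    description: launch an interactive shell in the dev environment
    interactive: true
    cmds:
      # mount the working tree so compiler output persists outside the container
      - docker run --rm -it -v "$(pwd):/work" -w /work {{.DEV_IMAGE}} /bin/bash
  automatic-task:
    description: run a tool inside the dev environment, e.g. `task automatic-task -- -some-args foo`
    cmds:
      - docker run --rm -v "$(pwd):/work" -w /work {{.DEV_IMAGE}} {{.CLI_ARGS}}
```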
Here's the taskfile from my quote database project. It is a little more complicated because it's configuring a local k3d environment, but it has some of the bootstrapping stuff in it. Basically, if there's a command-line thing that you would do repeatedly, like to set the environment up, I like to put it in a task so that I don't have to remember how to do it.
Because there are so many commands, I have the names set up in least-specific-word to most-specific-word order so that you can (for example) start typing task docker then tab complete to get all the docker tasks. Task has the ability to import other task files, so some people might break these down into several task files and import them, which would basically put them in a namespace and achieve a similar effect, but with : instead of -.
Taskfile (long)
```yaml
# https://taskfile.dev
version: '3'

vars:
  # alphabetical list
  DOCKER_PREFIX: quotable
  HELM_DEPLOYMENT_NAME: quotable
  LOCAL_CLUSTER_NAME: quotable
  LOCAL_REGISTRY_NAME_SUFFIX: quotable.registry.localhost
  LOCAL_REGISTRY_NAME: k3d-{{.LOCAL_REGISTRY_NAME_SUFFIX}}
  LOCAL_REGISTRY_PORT: 34523
  MIGRATE_VERSION: v4.15.1
  MANUAL_BACKUP_JOB_NAME: manual-backup
  REMOTE_REGISTRY_NAME: raybetter/quotable
  GIT_HASH:
    sh: git log -n 1 --format=%H
  # POSTGRES_LOCAL_DOCKER_NAME: quotable-postgres

tasks:
  ######################################################################
  # cluster tasks
  registry-local-create:
    description: create the local registry -- do this before creating the local cluster
    cmds:
      - k3d registry create {{.LOCAL_REGISTRY_NAME_SUFFIX}} --port {{.LOCAL_REGISTRY_PORT}}
  registry-local-delete:
    description: delete the local registry
    cmds:
      - k3d registry delete {{.LOCAL_REGISTRY_NAME}}
  cluster-local-create:
    description: create the local cluster
    preconditions:
      # require the registry be created first
      - sh: '[ "$(k3d registry list --no-headers | grep -c "k3d-quotable.registry.localhost")" -eq 1 ]'
    cmds:
      - k3d cluster create {{.LOCAL_CLUSTER_NAME}} -p "8080:80@loadbalancer" --registry-use k3d-{{.LOCAL_REGISTRY_NAME_SUFFIX}}:{{.LOCAL_REGISTRY_PORT}}
      # let the cluster get going so kubectl will succeed
      - sleep 15
      # this fixes the traefik configuration for local deployment (ignore cert correctness) and turns on ingress logging
      - kubectl apply -f deployment/k3d/traefik-local-config.yaml
  cluster-local-delete:
    description: delete the local cluster
    cmds:
      - k3d cluster delete {{.LOCAL_CLUSTER_NAME}}
  cluster-remote-setup:
    description: set up the terraform for the remote cluster
    dir: deployment/remote/terraform
    cmds:
      - terraform init --upgrade
      - terraform validate
  cluster-remote-create:
    description: create a cloud instance, install kubernetes
    dir: deployment/remote/terraform
    interactive: true
    cmds:
      - ssh-agent ./run_terraform.sh
      - chmod 600 quotable_kubeconfig.yaml
      - KUBECONFIG=quotable_kubeconfig.yaml kubectl apply -f ../k8s/letsencrypt.yaml
      - KUBECONFIG=quotable_kubeconfig.yaml kubectl apply -f ../k8s/traefik-cert-fix.yaml
  cluster-remote-list-servers:
    description: list the servers running under the API token
    preconditions:
      - sh: '[ -e "deployment/remote/secrets/hetzner_api_key" ]'
    cmds:
      - HCLOUD_TOKEN=$(cat deployment/remote/secrets/hetzner_api_key) hcloud server list
  cluster-expose-traefik-dashboard:
    description: open the traefik dashboard on port 9000; run this command then navigate to `http://localhost:9000/dashboard/`
    interactive: true
    vars:
      PODNAME:
        sh: kubectl get pods -n kube-system -l app.kubernetes.io/name=traefik -o=jsonpath='{.items[0].metadata.name}'
    cmds:
      - kubectl port-forward -n kube-system {{.PODNAME}} 9000

  ######################################################################
  # deployment tasks
  deploy-secrets-create:
    description: create secrets in the cluster
    cmds:
      - bash deployment/scripts/create_secrets.sh "deployment/secrets/"
  deploy-secrets-delete:
    description: delete secrets in the cluster
    cmds:
      - bash deployment/scripts/delete_secrets.sh
  deploy-local-install:
    description: deploy the helm chart locally
    cmds:
      - helm upgrade --install -f deployment/charts/quotable/values.yaml {{.HELM_DEPLOYMENT_NAME}} deployment/charts/quotable/
  deploy-local-uninstall:
    description: uninstall the local helm chart
    cmds:
      - helm uninstall {{.HELM_DEPLOYMENT_NAME}}
  deploy-remote-regsecret:
    description: create the docker hub regsecret
    cmds:
      - bash deployment/remote/secrets/create_dockerhub_secret.sh
  deploy-remote-install:
    description: deploy the helm chart remotely
    cmds:
      - helm upgrade --install -f deployment/remote/k8s/remote-values.hetzner.yaml {{.HELM_DEPLOYMENT_NAME}} deployment/charts/quotable/
  deploy-remote-uninstall:
    description: uninstall the remote helm chart
    cmds:
      - helm uninstall {{.HELM_DEPLOYMENT_NAME}}
  deploy-get-db-password:
    description: get the database password from the k8s secret
    cmds:
      - kubectl get secret postgres-secrets -o=jsonpath='{.data.password}' | base64 -d
  deploy-expose-backend:
    description: expose the backend service on localhost at 3030
    interactive: true
    cmds:
      - kubectl port-forward service/backend 3030:3000
  deploy-expose-frontend:
    description: expose the frontend service on localhost at 8082
    interactive: true
    cmds:
      - kubectl port-forward service/frontend 8082:8080
  deploy-expose-db:
    description: expose the db service on localhost at 5432
    interactive: true
    cmds:
      - kubectl port-forward service/quotable-db 5432:5432
  deploy-manual-dbbackup-create:
    description: manually trigger a backup job
    preconditions:
      - sh: '[ "$(kubectl get jobs | grep -c "{{.MANUAL_BACKUP_JOB_NAME}}")" -eq 0 ]'
    cmds:
      - kubectl create job --from=cronjob/quotable-dbbackup {{.MANUAL_BACKUP_JOB_NAME}}
  deploy-manual-dbbackup-cleanup:
    description: clean up the manually triggered job
    preconditions:
      - sh: '[ "$(kubectl get jobs | grep -c "{{.MANUAL_BACKUP_JOB_NAME}}")" -eq 1 ]'
    cmds:
      - kubectl delete job {{.MANUAL_BACKUP_JOB_NAME}}

  ######################################################################
  # docker tasks
  # local
  docker-local-build-and-push-all:
    description: build and push all the local docker images
    cmds:
      - task: docker-local-build-all
      - task: docker-local-push-all
  docker-local-build-all:
    description: build all the docker images with the local tags
    cmds:
      - task: docker-build-frontend
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.LOCAL_REGISTRY_NAME}}:{{.LOCAL_REGISTRY_PORT}}/{{.DOCKER_PREFIX}}/frontend"
      - task: docker-build-backend
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.LOCAL_REGISTRY_NAME}}:{{.LOCAL_REGISTRY_PORT}}/{{.DOCKER_PREFIX}}/backend"
      - task: docker-build-dbbackup
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.LOCAL_REGISTRY_NAME}}:{{.LOCAL_REGISTRY_PORT}}/{{.DOCKER_PREFIX}}/dbbackup"
  docker-local-push-all:
    description: push all the docker images to the local cluster registry
    cmds:
      - task: docker-push
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.LOCAL_REGISTRY_NAME}}:{{.LOCAL_REGISTRY_PORT}}/{{.DOCKER_PREFIX}}/frontend"
      - task: docker-push
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.LOCAL_REGISTRY_NAME}}:{{.LOCAL_REGISTRY_PORT}}/{{.DOCKER_PREFIX}}/backend"
      - task: docker-push
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.LOCAL_REGISTRY_NAME}}:{{.LOCAL_REGISTRY_PORT}}/{{.DOCKER_PREFIX}}/dbbackup"
  # remote
  docker-remote-build-and-push-all:
    description: build and push all the remote docker images
    cmds:
      - task: docker-remote-build-all
      - task: docker-remote-push-all
  docker-remote-build-all:
    description: build all the docker images with the remote tags
    cmds:
      - task: docker-build-frontend
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.REMOTE_REGISTRY_NAME}}:frontend-{{.GIT_HASH}}"
      - task: docker-build-backend
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.REMOTE_REGISTRY_NAME}}:backend-{{.GIT_HASH}}"
      - task: docker-build-dbbackup
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.REMOTE_REGISTRY_NAME}}:dbbackup-{{.GIT_HASH}}"
  docker-remote-push-all:
    description: push all the docker images to the remote cluster registry
    cmds:
      - task: docker-push
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.REMOTE_REGISTRY_NAME}}:frontend-{{.GIT_HASH}}"
      - task: docker-push
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.REMOTE_REGISTRY_NAME}}:backend-{{.GIT_HASH}}"
      - task: docker-push
        vars:
          DOCKER_IMAGE_FULL_NAME: "{{.REMOTE_REGISTRY_NAME}}:dbbackup-{{.GIT_HASH}}"
  # helpers
  docker-build-frontend:
    description: build the frontend docker image
    preconditions:
      - sh: '[ -n "{{.DOCKER_IMAGE_FULL_NAME}}" ]'
    dir: frontend
    cmds:
      - docker build -t "{{.DOCKER_IMAGE_FULL_NAME}}" .
  docker-build-backend:
    description: build the backend docker image
    preconditions:
      - sh: '[ -n "{{.DOCKER_IMAGE_FULL_NAME}}" ]'
    dir: backend
    cmds:
      - docker build -t "{{.DOCKER_IMAGE_FULL_NAME}}" .
  docker-build-dbbackup:
    description: build the dbbackup docker image
    preconditions:
      - sh: '[ -n "{{.DOCKER_IMAGE_FULL_NAME}}" ]'
    dir: dbbackup
    cmds:
      - docker build -t "{{.DOCKER_IMAGE_FULL_NAME}}" .
  docker-push:
    description: push a docker image
    preconditions:
      - sh: '[ -n "{{.DOCKER_IMAGE_FULL_NAME}}" ]'
    dir: frontend
    cmds:
      - docker push "{{.DOCKER_IMAGE_FULL_NAME}}"

  ######################################################################
  # development server tasks
  skaffold-start:
    description: build and deploy with skaffold and watch the build for changes
    cmds:
      - skaffold dev --trigger polling
  skaffold-delete:
    description: clean up a skaffold deployment
    cmds:
      - skaffold delete
  # deprecated for now because we have removed the .env in favor of k8s secrets
  # dev-server:
  #   description: run the Go dev server
  #   dir: backend
  #   interactive: true
  #   cmds:
  #     - "(set -a; source .env; set +a; go run . )"
  # frontend-watch:
  #   description: run the npm config to watch and rebuild frontend changes
  #   dir: frontend
  #   interactive: true
  #   cmds:
  #     - npm run watch

  ######################################################################
  # Database tasks
  # db-server-start:
  #   description: run the local dev postgres server
  #   preconditions:
  #     - sh: '[ -n "$QUOTABLE_DEV_PG_PASSWORD" ]'
  #   cmds:
  #     # note the use of /tmp/dbdata -- if we try a local directory, it doesn't work in WSL
  #     - |
  #       docker run -d \
  #         -p 5432:5432 \
  #         --name {{.POSTGRES_LOCAL_DOCKER_NAME}} \
  #         -e POSTGRES_PASSWORD="${QUOTABLE_DEV_PG_PASSWORD}" \
  #         -e PGDATA=/var/lib/postgresql/data/pgdata \
  #         -v /tmp/dbdata:/var/lib/postgresql/data \
  #         postgres:14.2
  # db-server-stop:
  #   description: stop the local dev postgres server
  #   cmds:
  #     - docker stop {{.POSTGRES_LOCAL_DOCKER_NAME}}
  #     - docker rm {{.POSTGRES_LOCAL_DOCKER_NAME}}
  db-shell:
    description: run psql shell in the dev postgres server
    interactive: true
    vars:
      PGUSER: quotable
      PODNAME: quotable-db-0
    cmds:
      - |
        kubectl exec -it pod/{{.PODNAME}} -- /bin/bash -c 'PGPASSWORD=$POSTGRES_PASSWORD psql -U {{.PGUSER}} postgres'
  db-new-migration:
    description: define MIGRATION=name to create a new migration with the given name
    preconditions:
      - sh: '[ -n "{{.MIGRATION}}" ]'
    cmds:
      - migrate create -ext sql -dir backend/migrations/ {{.MIGRATION}}
  db-migrations-up:
    description: run the up migrations
    vars:
      PGUSER: quotable
      PGPASSWORD:
        sh: kubectl get secret postgres-secrets -o=jsonpath='{.data.password}' | base64 -d
    cmds:
      - PGPASSWORD={{.PGPASSWORD}} migrate -database "postgres://{{.PGUSER}}:$PGPASSWORD@localhost:5432/postgres?sslmode=disable" -path backend/migrations up
  db-migrations-down:
    description: run the down migrations
    vars:
      PGUSER: quotable
      PGPASSWORD:
        sh: kubectl get secret postgres-secrets -o=jsonpath='{.data.password}' | base64 -d
      PODNAME: quotable-db-0
    cmds:
      - PGPASSWORD={{.PGPASSWORD}} migrate -database "postgres://{{.PGUSER}}:$PGPASSWORD@localhost:5432/postgres?sslmode=disable" -path backend/migrations down
  db-dump:
    description: dump the database
    preconditions:
      - sh: '[ -n "$TARGETFILE" ] && [ ! -e "$TARGETFILE" ]'
    vars:
      PGUSER: quotable
      PODNAME: quotable-db-0
    cmds:
      - kubectl exec pod/{{.PODNAME}} -- /bin/bash -c 'PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U quotable postgres --column-inserts --data-only' > {{.TARGETFILE}}
  db-restore:
    description: load the database
    preconditions:
      - sh: '[ -n "$TARGETFILE" ] && [ -e "$TARGETFILE" ]'
    vars:
      PGUSER: quotable
      PGPASSWORD:
        sh: kubectl get secret postgres-secrets -o=jsonpath='{.data.password}' | base64 -d
      PODNAME: quotable-db-0
    cmds:
      - kubectl cp {{.TARGETFILE}} quotable-db-0:/tmp/in.sql
      - kubectl exec pod/{{.PODNAME}} -- /bin/bash -c 'cat /tmp/in.sql | PGPASSWORD=$POSTGRES_PASSWORD psql -U {{.PGUSER}} postgres'
      - kubectl exec pod/{{.PODNAME}} -- /bin/bash -c 'rm /tmp/in.sql'

  ######################################################################
  # API Tasks
  api-generate-backend:
    description: generate the backend API handling Go code
    vars:
      API_FILE: apis/quote_api.yml
      CONFIG_SERVER: apis/server-gen-config.yml
    preconditions:
      - sh: '[ -e "{{.API_FILE}}" ]'
      - sh: '[ -e "{{.CONFIG_SERVER}}" ]'
    cmds:
      - oapi-codegen -config {{.CONFIG_SERVER}} {{.API_FILE}}

  ######################################################################
  # Bootstrap tasks
  bootstrap:
    description: install all development dependencies
    cmds:
      - task: bootstrap-install-homebrew
      - task: bootstrap-install-js-tools
      - task: bootstrap-install-go-tools
      - task: bootstrap-install-migrate
      - task: bootstrap-install-oapi-codegen
      - task: bootstrap-install-kubectl
      - task: bootstrap-install-k3d
      - task: bootstrap-install-helm
      - task: bootstrap-install-git-secret
      - task: bootstrap-install-aws-cli
      - task: bootstrap-set-environment-variables
      - task: bootstrap-install-remote-tools
  bootstrap-install-go-tools:
    description: install go tools
    cmds:
      - go install honnef.co/go/tools/cmd/staticcheck@latest
  bootstrap-install-js-tools:
    description: install js tools
    cmds:
      - wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
      - nvm install --lts
      - npm install -g @vue/cli
  bootstrap-install-kubectl:
    description: install kubectl
    cmds:
      - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
      - sudo mv kubectl /usr/local/bin/kubectl
      - grep -qxF 'source <(kubectl completion bash)' ~/.bashrc || echo 'source <(kubectl completion bash)' >>~/.bashrc
  bootstrap-install-kubectl-aliases:
    description: install kubectl aliases
    cmds:
      - grep -qF 'alias kc=' ~/.bashrc || echo 'alias kc=kubectl; complete -F __start_kubectl kc;' >>~/.bashrc
      - grep -qF 'alias kcw=' ~/.bashrc || echo 'alias kcw="watch kubectl"' >>~/.bashrc
      - |
        grep -qF 'function kcl() {' ~/.bashrc || echo 'function kcl() { kubectl logs -f -l="app.kubernetes.io/service=$1"; }' >>~/.bashrc
  bootstrap-install-k3d:
    description: install k3d
    cmds:
      - wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
      - grep -qxF 'source <(k3d completion bash)' ~/.bashrc || echo 'source <(k3d completion bash)' >>~/.bashrc
  bootstrap-install-helm:
    description: install helm
    cmds:
      - wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
      - tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
      - sudo mv linux-amd64/helm /usr/local/bin/helm
      - rm -r helm-v3.9.0-linux-amd64.tar.gz linux-amd64/
      - helm repo add bitnami https://charts.bitnami.com/bitnami
      - grep -qxF 'source <(helm completion bash)' ~/.bashrc || echo 'source <(helm completion bash)' >>~/.bashrc
  bootstrap-install-skaffold:
    description: install skaffold
    cmds:
      - curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && sudo install skaffold /usr/local/bin/ && rm ./skaffold
      - grep -qxF 'source <(skaffold completion bash)' ~/.bashrc || echo 'source <(skaffold completion bash)' >>~/.bashrc
  bootstrap-install-migrate:
    # helper task to install the migrate CLI
    cmds:
      - go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@{{.MIGRATE_VERSION}}
  bootstrap-install-oapi-codegen:
    # helper task to install the OpenAPI codegen CLI
    cmds:
      - go install github.com/deepmap/oapi-codegen/cmd/oapi-codegen@v1.11.0
  bootstrap-install-remote-tools:
    # helper task to install the remote deployment tools
    cmds:
      - brew install terraform
      - brew install hcloud
  bootstrap-install-git-secret:
    # helper task to install the git-secret cli
    cmds:
      - brew install git-secret
  bootstrap-install-homebrew:
    # helper task to install homebrew
    cmds:
      - /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
      - grep -qF '/home/linuxbrew/.linuxbrew/bin/brew' ~/.bashrc || echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.bashrc
  bootstrap-install-aws-cli:
    cmds:
      - |
        cd /tmp &&
        curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" &&
        unzip awscliv2.zip &&
        sudo ./aws/install &&
        rm -r aws/ awscliv2.zip
  bootstrap-set-environment-variables:
    # helper task does:
    # - add $HOME/go/bin to the path in bashrc if it is not already there
    # - add QUOTABLE_DEV_PG_PASSWORD environment variables to the bashrc if it is not already there
    cmds:
      - grep -qxF 'PATH="${PATH}:$HOME/go/bin"' $HOME/.bashrc || echo 'PATH="${PATH}:$HOME/go/bin"' >> $HOME/.bashrc
```
Greetings folks, I wrote a tutorial on how to manage the dumpster fire that arises whenever one has to contribute to projects with very diverse stacks using asdf.vm. It's been a highly debated topic, so I figured y'all might be interested :D
As usual, feel free to ask away!
Wow! I'm not really a dev (sysadmin/labber and infosec) so I don't normally need to contend with this, but man. I can't tell you how many times I've tried install some tool from GitHub and tried installing modules and other components only for it to turn into a huge mess.
'asdf' looks like it will be super helpful absolutely adding it to my Desktop anisble script. Bonus for such a well written and visually supported tutorial!
Hey there ! I'm a fellow infosec/automation/backend and product guy! Let me know in case you have free cycles and want to collaborate on research, I've got some ideas :)
It's been crazy lately because I'm testing other people's environment and code and without the setup described in the OP I'd had gone crazy by now.
An absolute shit show, I hear you.
On Reddit someone was complaining that asdf.vm environments cannot be deployed and therefore are not useful, not as Docker containers, well if every project just came with a Dockerfile or docker-compose.yml
Is your desktop ansible script on a public repo somewhere? Curious to cross-checks notes!
Thanks mate! I appreciate it, feel free to stay in the loop via the mailing list or socials, my resolution for this year is to be more active :D I love explaining stuff and teaching, so it really motivates me how good the reception has been; although the debate has been intense as hell.
This is interesting. I have to confess the idea of having so many versions of things installed gives me the heebeejeebies. But if one must live in that environment, this seems like a useful tool.
I'm curious how often the tool dependencies (like library paths) get snarled up when switching environments. Does the tool only shim the binaries, or does it have a provision to set up environment variables as well?
One problem that I see is that the global config has the potential to hide dependencies. Have you considered the ability to add a "no-globals" option to the local config files, so that users must explicitly choose a version for that context? That way when the config gets committed, it carries all the version info explicitly. I can see this being the default behavior for configs inside version control, but maybe that is a bridge too far.
FWIW, I am all-in on containerized development, especially if the app itself will eventually be deployed in containers. For sure, there is a learning curve, but the ability to precisely control the environment seems worth the cost to me. For my dev setups, I like having task as the bootstrap dependency. From there, a
task setup-env
and atask check-env
can check for or install run time dependencies (mainly things like docker, k3d, helm, etc). Then I usually have atask build-dev-env
for the dev docker file andtask run-dev-env
to launch a shell in it. The task file is a great resource for people on the team who don't understand the containerization to use it in a repeatable way. Debugging remote apps is one of the hardest part of working this way, but debug servers running in the container are pretty well supported by things like vscode.I use this. I haven't encountered issues with libraries or runtime paths. It doesn't have a built in provision for environment variables, everything goes through the shims. And I've never encountered issues with cleanup, since all the environment management goes through the shims and don't leak out into your shell.
There is the asdf direnv plugin, which allows you to set up environment vars, not using the shims, through an envrc. I haven't encountered issues with cleanup here either, but this one seems more likely to cause issues. You can put arbitrary environment setup in the envrc.
I appreciate the feedback!
You and me both, but it's the deck of cards I've been dealt :D
If that gives you the heebie-jeebies, check out this comment on HackerNews:
To be honest, this has only ever happened to me a couple of times (twice?) in five years, and it was due to brew and how opinionated and careless it is with its changes.
Each plugin is different, since each language runtime operates differently. For example, for Java, environment variables such as `JAVA_HOME` are set. I'll delve a bit deeper into asdf.vm internals in a future post.
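As a concrete case, the asdf Java plugin ships a shell hook you can source from your rc file so that `JAVA_HOME` tracks whichever version is currently selected. A sketch of the usual setup (path as documented by the asdf-java plugin; verify against your install):

```shell
# in ~/.bashrc (there is a .zsh variant for zsh users):
. ~/.asdf/plugins/java/set-java-home.bash
```

After that, switching the pinned Java version in `.tool-versions` also repoints `JAVA_HOME` for tools like Maven and Gradle that read it.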
Wouldn't explicitly stating the desired version for the given local repo and pushing it suffice in this case? A `no-globals` setting with known or customary expected values seems equivalent to a set-and-pushed local `.tool-versions`, or am I missing something?
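Concretely, committing a `.tool-versions` at the repo root pins every tool for anyone who clones it, with no reliance on the cloner's global config. The format is one tool and version per line (versions below are illustrative):

```
python 3.11.4
nodejs 18.16.0
terraform 1.5.7
```

Any shimmed tool invoked inside the repo then resolves to exactly these versions, which is the same guarantee the proposed `no-globals` flag is after.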
I certainly agree, with the big plus that by the time you have your environment properly sorted out, you also have a Dockerfile ready for deployment. However, and this is a big issue, exploring the solution space of challenges and experimenting within Docker containers is a pain in the butt! For that alone I always want to set up a local, native development environment. Also because I feel that if I've set it up locally, in a reproducible manner, I truly understand it and can port it anywhere. Nevertheless, I do agree with you in principle.
Interesting, this is the second simpler alternative to make I've learned about today (the first being `just`).
This sounds really cool, do you have one such spec'd-out task file public somewhere? I'd like to dive into it.
It wasn't always the case, but I do agree that they've come a long way. Nevertheless, nothing approaches the ease with which you can debug and walk down the dependency tree when your language's runtime environment is just yet another directory within `~/.asdf/language/version/package`.
The scenario I'm thinking of is:
But if someone had set something like `no-globals` in the local config when setting things up, then in step 2 the person adding the script would have gotten an error message when trying to use python without setting the version explicitly, and could have corrected it at the moment of the mistake.

Of course, failure to add the dependency is correctable, and doing it right the first time is something you can ask people to "always do". Maybe it's something you can catch in CI. And if it's a dependency that asdf is not yet shimming, this won't catch it. So maybe you have to lump it in with all the weird corner cases, like calling `python ../otherwise/whatever.py`. (Can you tell I'm a stress tester by background?)

The way I usually do this is to open up a new Dockerfile and a terminal, then do a `docker run --rm -it debian:bookworm-slim /bin/bash`. Then I just mess around installing things interactively, adding lines to my Dockerfile as I go to keep track of what I did. When I'm happy with the environment, I consolidate all the apt installs into a single Dockerfile command to reduce the image size, and add any manually installed tools as steps.

I don't have a public example, but I can put a couple inline here.
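A sketch of what that interactive session tends to consolidate into; the package names and the tool URL below are purely illustrative, not from any real project:

```dockerfile
# Result of an interactive debian:bookworm-slim session, consolidated.
FROM debian:bookworm-slim

# Everything discovered via interactive poking, squashed into one layer
# (cleaning the apt lists keeps the image small):
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        curl \
        git \
    && rm -rf /var/lib/apt/lists/*

# A manually installed tool gets its own explicit, reviewable step
# (URL is a placeholder):
RUN curl -fsSL https://example.com/some-tool.tar.gz | tar -xz -C /usr/local/bin
```

Keeping the manual installs as separate steps makes it obvious later which parts came from apt and which need their own upgrade story.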
First, a really simple one that just does the docker stuff. `task run-env` gives you an interactive environment, or `automatic-task` can be used to invoke some executable like `task automatic-task -- -some-args foo`. Note how in each command, some part of the local file system is mounted into the docker container so that things you do inside (like running a compiler) persist outside the container.

Taskfile (short)
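The short taskfile itself isn't reproduced here, so the following is a minimal sketch of what the described tasks might look like; the image name, mount path, and `./some-tool` executable are assumptions, while `run-env`, `automatic-task`, and `{{.CLI_ARGS}}` (Task's variable for arguments passed after `--`) match what the comment describes:

```yaml
version: '3'

tasks:
  build-env:
    desc: Build the dev image from the local Dockerfile (task name assumed)
    cmds:
      - docker build -t dev-env .

  run-env:
    desc: Interactive shell with the working tree mounted into the container
    cmds:
      - docker run --rm -it -v "$(pwd):/work" -w /work dev-env /bin/bash

  automatic-task:
    desc: Run an executable inside the container; pass its args after --
    cmds:
      - docker run --rm -v "$(pwd):/work" -w /work dev-env ./some-tool {{.CLI_ARGS}}
```

The `-v "$(pwd):/work"` mount is what makes compiler output and other artifacts produced inside the container land in the host working tree.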
Here's the taskfile from my quote database project. It is a little more complicated because it configures a local k3d environment, but it has some of the bootstrapping stuff in it. Basically, if there's a command-line thing that you would do repeatedly, like setting the environment up, I like to put it in a task so that I don't have to remember how to do it.
Because there are so many commands, I have the names set up in least-specific-word to most-specific-word order so that you can (for example) start typing `task docker` and then tab-complete to get all the docker tasks. Task has the ability to import other task files, so some people might break these down into several task files and import them, which would basically put them in a namespace and achieve a similar effect, but with `:` instead of `-`.

Taskfile (long)
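The long taskfile isn't reproduced here, but the flat-naming versus includes trade-off can be sketched; task and image names below are illustrative:

```yaml
# Flat style, one Taskfile.yml: "task docker<TAB>" completes both tasks.
version: '3'
tasks:
  docker-build:
    cmds:
      - docker build -t app .
  docker-run:
    cmds:
      - docker run --rm -it app

# The includes style instead moves these into docker.yml (as plain "build"
# and "run") and the root Taskfile.yml maps in the namespace:
#
#   version: '3'
#   includes:
#     docker: ./docker.yml
#
# giving "task docker:build" / "task docker:run" -- same grouping, ":" not "-".
```

Either way the grouping survives tab completion; the flat style just keeps everything in a single file.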