15 votes

Topic deleted by author

13 comments

  1. [9]
    microbug
    Link

    If you want to define as much infrastructure as possible with code, I'd be looking at Ansible and Docker.

    Ansible can configure machines remotely by SSH (so it's agentless; you don't have to install something on the remote machine to use it). It can install packages and configure almost anything via downloadable modules or custom shell scripts (the latter are best avoided, since sticking to modules keeps the YAML files easier to read).
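
    To give a flavour of what that looks like, here's a minimal playbook sketch; the host name and package names below are placeholders rather than anything from your setup:

        # site.yml -- minimal sketch; "homeserver" and the package names are placeholders
        - hosts: homeserver
          become: true
          tasks:
            - name: Install base packages
              package:
                name: [docker.io, git, tmux]
                state: present
            - name: Make sure Docker is running and starts at boot
              service:
                name: docker
                state: started
                enabled: true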

    Docker is a container technology that makes it easy to separate services. It would be excellent for running your GitLab CE instance and other services. Using Docker Compose (and Swarm if you want to host services across multiple machines) you can define all your services in a human-readable YAML file. You can install and configure Docker with Ansible.
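
    As a rough sketch of what such a Compose file could look like (the ports and host paths here are invented for illustration):

        # docker-compose.yml -- sketch only; ports and volume paths are placeholders
        version: "3"
        services:
          gitlab:
            image: gitlab/gitlab-ce:latest
            restart: unless-stopped
            ports:
              - "8080:80"
              - "2222:22"
            volumes:
              - /srv/gitlab/config:/etc/gitlab
              - /srv/gitlab/data:/var/opt/gitlab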

    Docker tutorial.

    Edit: You may be interested in r/homelab on Reddit; they do lots of this stuff, and there may be posts there that help you. They certainly take things to the extreme!

    8 votes
    1. [3]
      AReluctantTilder
      Link Parent

      I’m getting a mystery server shipped to me from someone on homelab right now

      4 votes
    2. [2]
      Luca
      Link Parent

      This right here. I work in devops, and run a home server that's running about a dozen Docker containers, and the whole thing is provisioned via Ansible. If I ever need to recreate it or anything, I can just run the playbook from my desktop, and it will return to the exact state I want it in.
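
      For what it's worth, that re-run is usually just a one-liner from the desktop (the inventory and playbook file names here are hypothetical):

          # Re-provision the home server over SSH; file names are placeholders
          ansible-playbook -i inventory.ini site.yml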

      1 vote
      1. acr
        Link Parent

        Yes, I agree with this right here. Ansible is amazing, though I don't really use it much myself. Docker is the way to go at home because you're going to have a lot running on one machine, which is a big deal when money is an issue.

        My server has Fedora as the base, with Docker running on top of that. I have a FreeIPA server set up on the actual host, and each Docker container runs the FreeIPA client. That way I can have a domain and handle everything from a central location.

        The great thing about Docker is you can just back up the containers and, if something happens, put them back, which ties back into the whole budget issue. You can push the backups somewhere else for redundancy, and you can script spinning up containers pretty easily.
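
        As a sketch of what that backup scripting can look like with named volumes (the volume and backup paths below are placeholders):

            # Tar a named Docker volume via a throwaway container; names and paths are placeholders
            docker run --rm \
              -v mydata:/data:ro \
              -v /mnt/backups:/backup \
              alpine tar czf /backup/mydata-$(date +%F).tar.gz -C /data .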

        I have my volumes on a NAS box. My Docker containers do things like a DNS server, a couple of Postgres servers, light web hosting, ZoneMinder, and some other network resource apps.
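
        If the NAS exports NFS, one way to wire that up is an NFS-backed volume; the server address and export path here are made up:

            # NFS-backed Docker volume; server address and export path are placeholders
            docker volume create \
              --driver local \
              --opt type=nfs \
              --opt o=addr=192.168.1.50,rw \
              --opt device=:/export/docker/mydata \
              mydata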

        2 votes
    3. [3]
      jgb
      Link Parent

      Honest question: what are all these homelab people doing with all their compute power and storage space? Surely once you've got a webserver or two, perhaps an email server and an IRC daemon, and a data backup / archive system, you start to run out of ideas?

      1. maple
        Link Parent

        I'm not one of the homelab people, but I'm a homelab person (if you know what I mean). I have a rack in my garage running vSphere 6, a homebrewed NAS, pfSense etc. I don't really do containerisation; having vSphere means that most of my workloads are actual VMs. I run:

        • Plex (we're heavily into cordcutting)
        • Nextcloud
        • Bitwarden
        • Unifi controller
        • Unifi NVR (for security cameras)
        • Puppet server
        • A 6-server Microsoft lab for work-related sandboxing

        Backups are a mixture: Plex media is striped across physical disks that I rotate into the office, and everything else is encrypted and then uploaded to an Azure blob container.

        1 vote
      2. microbug
        Link Parent

        It’s a mixture of massive overkill, hosting services for friends and family, and learning new skills for fun and profit. There’s a long list of software used in homelabs that gives more specific examples.

        1 vote
  2. jgb
    Link

    One idea would be to have a local machine hosting a Fossil (or Git) server which has all your files / configs / programs in appropriate repos, plus a deployment script to install (apt-get, pacman, or whatever) all the software you want, clone all the necessary repos from the server, and deploy them as appropriate. On the suggestion of another user here I have begun using GNU Stow for managing config files and it works reasonably well, so you could consider using that. I'm sure there are more robust solutions than this, but they may well be overkill for home use.
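
    A bare-bones sketch of that kind of deployment script, assuming apt and a dotfiles repo laid out for Stow (the host, repo, and package names are made up):

        #!/bin/sh
        # Sketch of a deploy script; host, repo, and package names are placeholders
        sudo apt-get update
        sudo apt-get install -y git stow vim tmux
        git clone git://homeserver/dotfiles.git "$HOME/dotfiles"
        cd "$HOME/dotfiles"
        stow vim tmux   # symlinks each package's files into $HOME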

    1 vote
  3. PerkiPanda
    Link

    Howdy!

    I've not dug into NixOS too much, but what I use personally and at work is SaltStack. While there are certainly benefits to using exclusively one OS, we often need various out-of-the-box functionality that isn't all found in just one.

    Using SaltStack, you can build a server (VM or bare metal) that will act as the "master". You can use this machine to define configurations, templates, and more. When you then install an OS or VM, you simply need to install the "minion" package and use the "master" to push changes to the device, including settings, package installs, etc.

    This way, you can use whichever OS you deem fit for a specific application, and still easily tie it into a manageable ecosystem. SaltStack is also open-source, has a great API, and is very extensible using Python.
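
    As a small illustration of what lives on the master, a state file pushed to minions might look like this (the file path and package/service names are placeholders):

        # /srv/salt/docker.sls -- sketch; package and service names are placeholders
        docker:
          pkg.installed: []
          service.running:
            - enable: True
            - require:
              - pkg: docker

    Applying it from the master is then something like salt '*' state.apply docker.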

    Good luck whichever route you decide to take!

    1 vote
  4. bme
    Link

    All the Ansible stuff is on point. The last and final hardest-of-the-hardcore step might also be to set up dnsmasq with PXE, so you can network boot a server. I have a friend who has the most extreme attitude when it comes to this: he nukes all his machines once a month, PXE boots them, and restores backups + runs Ansible to provision. I don't; I just have a similar setup to most here: CentOS base + a btrfs RAID 1 pool with Docker volumes running off it, with provisioning handled by Ansible.
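
    For reference, the dnsmasq side of PXE booting can be quite small; the interface, address range, and TFTP root below are placeholders:

        # /etc/dnsmasq.d/pxe.conf -- sketch; interface, range, and paths are placeholders
        interface=eth0
        dhcp-range=192.168.1.100,192.168.1.200,12h
        enable-tftp
        tftp-root=/srv/tftp
        dhcp-boot=pxelinux.0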

  5. rabidfurby
    Link

    I do this, using NixOS plus syncthing to replicate a git repo containing the NixOS config files.

    At some point, I should do a full write-up on this, but the gist is that /etc/nixos is a git repo, and the remote is "git@localhost:nixos.git". That is, a folder called nixos.git under /home/git.

    That nixos.git directory is mirrored using Syncthing, which is a nifty, decentralized way of having a self-hosted git repo, with no need for a local GitLab or Gitea install. This setup wouldn't work if multiple users were committing to the repo, since you'd end up with Syncthing sync conflicts, but for this limited use case it works perfectly. The repo never exists anywhere but my own machines, so I feel fairly safe checking secrets in (such as wifi passwords), which you obviously wouldn't do with a "normal" git repo.
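
    The local "remote" part of that boils down to a bare repo plus an SSH-style remote pointing at localhost (the paths follow the description above; the Syncthing share for nixos.git is set up separately):

        # One-time setup; the bare repo lives in the git user's home, as described above
        sudo -u git git init --bare /home/git/nixos.git
        cd /etc/nixos
        git remote add origin git@localhost:nixos.git
        git push -u origin master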

    The structure of /etc/nixos itself looks something like:

    hosts/
      home-laptop/
        configuration.nix
        hardware-configuration.nix
      home-desktop/
      home-server/
      cloud-server/
    common/
      core.nix
      desktop.nix
    

    NixOS requires the specific paths /etc/nixos/configuration.nix and hardware-configuration.nix, so those are symlinks pointing at the files in the appropriate hosts/ directory, and those two paths are in .gitignore because the symlinks vary from one machine to another.
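
    Concretely, on a given machine that amounts to something like this, using the home-laptop host from the tree above as the example:

        # Point the two required paths at this machine's files under hosts/
        # (both symlinks are in .gitignore since they differ per machine)
        ln -s hosts/home-laptop/configuration.nix /etc/nixos/configuration.nix
        ln -s hosts/home-laptop/hardware-configuration.nix /etc/nixos/hardware-configuration.nix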