  Showing only topics in ~comp with the tag "cloud".
    1. Cloud hosting in EU

      Hi!

      I've decided to move some of my self-hosted things from on-prem (at home ;)) to the cloud, and at the same time I'd like to try and run this in the EU, or at least Europe. I'd like to get started fairly quickly, as this was prompted by one of my home servers halfway dying on me.

      The features I'm most interested in are approximately:

      • Virtual machines.
      • Storage: cheap long-term storage for backups (similar to S3 Glacier).
      • Managed DB, most likely PostgreSQL.
      • Serverless jobs (similar to AWS Lambda).
      • IaC (I've got a bit of experience with Terraform, but it doesn't have to be that).
      • Built-in monitoring.
      • Git hosting. It's likely that I'll just go with GitHub/GitLab here, but if there's a nice alternative I'm up for it.
      • Automated sending of email. I'm using AWS SES at the moment, and I'm very happy with it.

      Some other things:

      • I intend to run a combination of services written by others (e.g. Nextcloud) and software I've written myself.
      • I'll most likely be running Linux only, but I prefer to select my own flavour where it makes sense.
      • I much prefer managing permissions and users in GCP over AWS, as I find AWS way too complicated for my needs while GCP mostly just makes sense.
      • I'd prefer a platform that's actively developed and improving over time, with big potential for the future.
      • This is a hobby project, and some of these requirements may seem a bit contradictory or non-optimal, but that's ok.
      • I have some experience running Kubernetes (self-hosted), and I'm not a huge fan of the complexity and YAML files; at the same time, OpenStack is getting kinda old, and I don't know if I think it's a platform for the future. But from what I see, most of the options seem to be built on top of one of those.
      • Cheaper is of course better; I don't have a company-sized budget, this is all coming out of my "hobby pocket".
      • I live in Sweden, so datacenters that are geographically close are a plus.

      Right now I'm looking at European alternatives to Amazon Web Services (AWS), and Scaleway is looking the most promising, but I'm really only skimming the surface when it comes to info at the moment.
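
      For the backup requirement specifically, what I'm picturing with Scaleway is roughly the sketch below, since their Object Storage speaks the S3 API. It's untested on my side; the bucket name and paths are made up, and I'm assuming the fr-par endpoint and their GLACIER storage class, so the details need checking against their docs.

      # Untested sketch: push a backup archive to S3-compatible cold storage.
      # Assumes credentials are already exported as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.
      tar -czf "backup-$(date +%F).tar.gz" /srv/data
      aws s3 cp "backup-$(date +%F).tar.gz" s3://my-backup-bucket/ \
          --endpoint-url https://s3.fr-par.scw.cloud \
          --storage-class GLACIER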

      Hope that makes sense =) I'm interested in all kinds of feedback.

      13 votes
    2. Server admins, PHP/Symfony experts: I need your guidance

      I've been the sole developer for my company's website for over a decade now. It's gone through a bunch of evolutions throughout the years, but I've been sidetracked lately and have let things stagnate as far as maintenance goes. Now, I'm looking to do some upgrades for security purposes and I'm trying to wrap my head around everything.

      Some facts:

      • PHP 8.0.12
      • MySQL 5.7
      • Symfony 5.4
      • Web server is currently Apache, only because that's what I've always used. I'm open to nginx or other options.
      • Running on a Google Cloud VPS with Ubuntu 20.04
      • I also use Google Cloud Storage to host thousands of images

      My first thought was to take baby steps and start by upgrading Symfony as much as possible. However, the next major version (6.0) requires PHP 8.0.2. Symfony 6.1 requires PHP 8.1. Symfony 7.2 (the current release) requires PHP 8.2. So, then it just makes sense to upgrade PHP to the latest version.
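
      Before touching the server, I figure Composer itself can tell me what's actually blocking each jump. Just a rough sketch of what I mean, assuming a standard Composer setup where the app depends on the usual symfony/* packages:

      # See which Symfony packages are behind, and what blocks newer PHP / Symfony.
      composer outdated "symfony/*"
      composer why-not php 8.3
      composer why-not symfony/framework-bundle 7.2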

      However, I am terrified of upgrading PHP in the current (outdated) Ubuntu environment. So I might as well upgrade the distro while I'm at it.

      And then, MySQL 5.7 is no longer supported, so I might as well bring that up to date too (8.0, I believe).

      There will be no baby steps. I'm gonna have to just upgrade everything all at once. Which then leads me to my next question: should I stick with the self-managed VPS, or is it time to look at something like Google App Engine or Fly.io that is a little bit more managed and "locked down" than what I'm doing right now? Should I look into just going with Docker instead?
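
      One low-risk check I'm considering before deciding on hosting: running the existing codebase against a newer PHP in a throwaway container on my own machine first. Untested sketch, assuming Docker locally and a standard Symfony layout:

      # Try the current code on PHP 8.3 without touching the server.
      docker run --rm -v "$PWD":/app -w /app php:8.3-cli php -v
      docker run --rm -v "$PWD":/app composer:2 install --ignore-platform-reqs
      docker run --rm -v "$PWD":/app -w /app php:8.3-cli php bin/console about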

      Put another way, if I'm going to start from ~scratch, what's the modern best practice to host all of this, given that I'm going to have to upgrade a bunch of different things all at once? (Turns out the "baby step" of upgrading Symfony will actually have to come last since I need to hit these prerequisites first).

      Please let me know if I've left anything out. PS, security is a pretty big concern for us because we manage user auth, so I'm all for anything the cloud providers can do to take some of that responsibility away from me.

      9 votes
    3. Cloud Servers for the Broke

      Just wanted to put this out there as a little PSA in case it's helpful: if you want a cloud server but don't wanna pay anything, Oracle's Free Tier is a life saver. Discovered it a year ago and couldn't be happier I did, since I'd never pay for cloud computing otherwise 😭.

      Quick Specs:

      For free you get:

      • 24/7 uptime
      • 200 GB of storage space
      • 24 GB of RAM
      • 4 OCPUs
      • 4 Gbps bandwidth

      That's been more than enough for me and honestly feels too good to be true. Some things I've done with this:

      If anyone has any other ideas for cool projects I could self-host, please do tell! I'm curious what else I could do :)

      48 votes
    4. In which a foolish developer tries DevOps: critique my VPS provisioning script!

      I'm attempting to provision two mirrored staging and production environments for a future SaaS application that we're close to launching as a company, and I'd like to get some feedback on the provisioning script I've created. It takes a default VPS from our hosting provider, DigitalOcean, and readies it to be a secure hosting environment for our application instance (which runs inside Docker, and persists data to an unrelated managed database).

      I'm sticking with a simple infrastructure architecture at the moment: a single VPS which runs both nginx and the application instance inside a containerised Docker service, as mentioned earlier. There are no load balancers or server duplication at this point. @Emerald_Knight very kindly provided me in the Tildes Discord with some overall guidance about what to aim for when configuring a server (limit damage as best as possible, limit access when an attack occurs)—so I've tried to be thoughtful and integrate that paradigm where possible (disabling root login, etc.).

      I’m not a DevOps or sysadmin-oriented person by trade—I stick to programming most of the time—but this role falls to me as the technical person in this business, so the last few days have been a lot of reading and readying. I’ll run through the provisioning flow step by step. Oh, and for reference: Ubuntu 20.04 LTS.

      First step is mostly self-explanatory: update the package index and install nginx.

      #!/bin/sh
      
      # Name of the user to create and grant privileges to.
      USERNAME_OF_ACCOUNT=
      
      sudo apt-get -qq update
      sudo apt-get install -qq --yes nginx
      sudo systemctl restart nginx
      

      Next, create my sudo user, add them to the groups needed, require a password change on first login, then copy across any provided authorised keys from the root user which you can configure to be seeded to the VPS in the DigitalOcean management console.

      useradd --create-home --shell "/bin/bash" --groups sudo,www-data "${USERNAME_OF_ACCOUNT}"
      passwd --delete "${USERNAME_OF_ACCOUNT}"
      chage --lastday 0 "${USERNAME_OF_ACCOUNT}"

      HOME_DIR="$(eval echo ~${USERNAME_OF_ACCOUNT})"
      mkdir --parents "${HOME_DIR}/.ssh"
      cp /root/.ssh/authorized_keys "${HOME_DIR}/.ssh"

      # Lock down the new user's .ssh directory and keys.
      chmod 700 "${HOME_DIR}/.ssh"
      chmod 600 "${HOME_DIR}/.ssh/authorized_keys"
      chown --recursive "${USERNAME_OF_ACCOUNT}":"${USERNAME_OF_ACCOUNT}" "${HOME_DIR}/.ssh"

      sudo chmod -R 775 /var/www
      sudo chown -R "${USERNAME_OF_ACCOUNT}" /var/www
      rm -rf /var/www/html
      

      Install Docker and run it as a service, ensuring the created user is added to the docker group.

      sudo apt-get install -qq --yes \
          apt-transport-https \
          ca-certificates \
          curl \
          gnupg-agent \
          software-properties-common
      
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      sudo apt-key fingerprint 0EBFCD88
      
      sudo add-apt-repository --yes \
         "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
         $(lsb_release -cs) \
         stable"
      
      sudo apt-get -qq update
      sudo apt-get install -qq --yes docker-ce docker-ce-cli containerd.io
      
      # Only add a group if it does not exist
      sudo getent group docker || sudo groupadd docker
      sudo usermod -aG docker $USERNAME_OF_ACCOUNT
      
      # Enable docker
      sudo systemctl enable docker
      
      sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      sudo chmod +x /usr/local/bin/docker-compose
      sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
      docker-compose --version
      

      Disable root logins and any form of password-based authentication by altering sshd_config.

      # Stock sshd_config ships these directives commented out, so match that form too.
      sed -i -E 's/^#?[[:space:]]*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
      sed -i -E 's/^#?[[:space:]]*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
      sed -i -E 's/^#?[[:space:]]*ChallengeResponseAuthentication.*/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
      

      Configure the firewall and fail2ban.

      sudo ufw default deny incoming
      sudo ufw default allow outgoing
      sudo ufw allow ssh
      sudo ufw allow http
      sudo ufw allow https
      sudo ufw reload
      sudo ufw --force enable && sudo ufw status verbose
      
      sudo apt-get -qq install --yes fail2ban
      sudo systemctl enable fail2ban
      sudo systemctl start fail2ban
      

      Swapfiles.

      sudo fallocate -l 1G /swapfile && ls -lh /swapfile
      sudo chmod 0600 /swapfile && ls -lh /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile && sudo swapon --show
      echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
      

      Unattended upgrades, and restart the SSH daemon so the sshd_config changes take effect.

      sudo apt-get install -qq --yes unattended-upgrades
      sudo systemctl restart ssh
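
      I'm not 100% sure installing the package alone actually switches the periodic upgrades on, so I'm planning to also drop in the standard APT::Periodic flags. This bit is an untested addition on my side:

      # Make sure the periodic unattended-upgrade job is actually enabled.
      echo 'APT::Periodic::Update-Package-Lists "1";' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
      echo 'APT::Periodic::Unattended-Upgrade "1";' | sudo tee -a /etc/apt/apt.conf.d/20auto-upgrades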
      

      Some questions

      You can assume these questions are cost-benefit focused, i.e. is it worth my time to investigate this, versus something else that may have better gains given my limited time.

      1. Obviously, any critiques of the above provisioning process are appreciated—both on the micro level of criticising particular lines, or zooming out and saying “well, why don’t you do this instead…”. I can’t know what I don’t know.

      2. Is it worth investigating tools such as ss or lynis (https://github.com/CISOfy/lynis) to perform server auditing? I don’t have to meet any compliance requirements at this point. (I’ve put a rough sketch of what I mean just after this list.)

      3. Do I get any meaningful increase in security by implementing 2FA on login here using Google Authenticator? As far as I can see, as long as I'm using best practices to actually ssh into our boxes, the likeliest route for unwanted access probably isn’t the authentication mechanism I use personally to access my servers. (There’s a sketch of what this would involve after the list, too.)

      4. Am I missing anything here? Beyond the provisioning script itself, I adhere to best practices around storing and generating passwords and ssh keys.
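
      To make question 2 concrete, the kind of thing I have in mind is just this (the lynis package in the Ubuntu repos may be a bit dated, but it's enough for a first pass):

      # Quick look at what's actually listening on the box.
      sudo ss -tulpn

      # One-off host audit; lynis writes its findings to /var/log/lynis.log.
      sudo apt-get install -qq --yes lynis
      sudo lynis audit system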
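
      And for question 3, this is my understanding of what the Google Authenticator route would involve, pieced together from the libpam-google-authenticator docs, so treat it as an untested sketch rather than something I've run:

      # TOTP on top of key-based SSH logins (not part of the script yet).
      sudo apt-get install -qq --yes libpam-google-authenticator

      # Run as the login user to generate ~/.google_authenticator and the QR code.
      google-authenticator

      # Require the verification code in the sshd PAM stack...
      echo 'auth required pam_google_authenticator.so' | sudo tee -a /etc/pam.d/sshd
      # (In practice the '@include common-auth' line in /etc/pam.d/sshd also needs
      # dropping so it doesn't fall back to asking for a password.)

      # ...and make sshd ask for it on top of the public key.
      sudo sed -i -E 's/^#?[[:space:]]*ChallengeResponseAuthentication.*/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
      echo 'AuthenticationMethods publickey,keyboard-interactive' | sudo tee -a /etc/ssh/sshd_config
      sudo systemctl restart ssh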

      Some notes and comments

      1. Eventually I'll use the hosting provider's API to spin up and spin down VPSs on the fly via a custom management application, which gives me an opportunity to programmatically execute the provisioning script above and run some other pre- and post-provisioning things, like deployment of the application and so forth. (Rough sketch of what I mean below this list.)

      2. Usage alerts and monitoring are configured within DigitalOcean's console, and alerts are sent to our business' Slack for me to action as needed. Currently, I’m settling on the following alerts:
        1. Server CPU utilisation greater than 80% for 5 minutes.
        2. Server memory usage greater than 80% for 5 minutes.
        3. I’m also looking at setting up daily fail2ban status alerts if needed (sketch below as well).
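
      On the first note, the route I'm picturing is basically doctl (or the raw API) feeding this script in as user data. The droplet name, size, region and key fingerprint below are placeholders:

      # Sketch: create a droplet and let cloud-init run the provisioning script.
      doctl compute droplet create staging-01 \
          --region fra1 \
          --size s-1vcpu-2gb \
          --image ubuntu-20-04-x64 \
          --ssh-keys "$SSH_KEY_FINGERPRINT" \
          --user-data-file ./provision.sh \
          --wait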
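
      And for the daily fail2ban status alerts, the low-tech version I'm considering is a small script in /etc/cron.daily posting the sshd jail summary to a Slack incoming webhook (the webhook URL is a placeholder):

      #!/bin/sh
      # Sketch for /etc/cron.daily/fail2ban-report: post the sshd jail summary to Slack.
      BANNED="$(fail2ban-client status sshd | grep 'Currently banned' | tr -s '[:space:]' ' ')"
      curl -s -X POST -H 'Content-type: application/json' \
          --data "{\"text\": \"$(hostname): ${BANNED}\"}" \
          https://hooks.slack.com/services/XXX/YYY/ZZZ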
      9 votes