31 votes -
Social media platform Parler is back online with new hosting
10 votes -
PeerTube v3 : it’s a live, a liiiiive !
23 votes -
RIAA obtains DMCA subpoenas against Cloudflare and Namecheap targeting forty-one domains for YouTube-ripping platforms and pirate sites
29 votes -
In which a foolish developer tries DevOps: critique my VPS provisioning script!
I'm attempting to provision two mirrored staging and production environments for a future SaaS application that we're close to launching as a company, and I'd like some feedback on the provisioning script I've created. It takes a default VPS from our hosting provider, DigitalOcean, and readies it to serve as a secure hosting environment for our application instance (which runs inside Docker and persists data to an unrelated managed database).
I'm sticking with a simple infrastructure architecture at the moment: a single VPS which runs both nginx and the application instance inside a containerised Docker service, as mentioned earlier. There are no load balancers or server duplication at this point. @Emerald_Knight very kindly provided me with some overall guidance in the Tildes Discord about what to aim for when configuring a server (limit damage as best as possible, and limit access when an attack occurs), so I've tried to be thoughtful and integrate that paradigm where possible (disabling root login, etc.).
I'm not a DevOps or sysadmin-oriented person by trade (I stick to programming most of the time), but this role falls to me as the technical person in this business, so the last few days have been a lot of reading and readying. I'll run through the provisioning flow step by step. Oh, and for reference: Ubuntu 20.04 LTS.
The first step is self-explanatory.
```
#!/bin/sh

# Name of the user to create and grant privileges to.
USERNAME_OF_ACCOUNT=

sudo apt-get -qq update
sudo apt install -qq --yes nginx
sudo systemctl restart nginx
```
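For context: the script installs nginx but no site configuration appears later in the post, so here's a hypothetical sketch of the reverse-proxy config this architecture implies. The upstream port (3000) and server name are my placeholder assumptions, not details from the post.

```
# Hypothetical sketch only: a minimal nginx site proxying to the Docker-hosted
# app. The upstream port (3000) and server_name are placeholder assumptions.
sudo tee /etc/nginx/sites-available/app >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/app
sudo nginx -t && sudo systemctl reload nginx
```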
Next, create my sudo user, add them to the groups needed, require a password change on first login, then copy across any authorised keys from the root user (you can configure these to be seeded to the VPS in the DigitalOcean management console).
```
useradd --create-home --shell "/bin/bash" --groups sudo,www-data "${USERNAME_OF_ACCOUNT}"
passwd --delete "${USERNAME_OF_ACCOUNT}"
chage --lastday 0 "${USERNAME_OF_ACCOUNT}"

HOME_DIR="$(eval echo ~${USERNAME_OF_ACCOUNT})"

mkdir --parents "${HOME_DIR}/.ssh"
cp /root/.ssh/authorized_keys "${HOME_DIR}/.ssh"

# Tighten permissions on the new user's .ssh directory (note: the original
# used ~/.ssh here, which resolves to root's home when run as root).
chmod 700 "${HOME_DIR}/.ssh"
chmod 600 "${HOME_DIR}/.ssh/authorized_keys"
chown --recursive "${USERNAME_OF_ACCOUNT}":"${USERNAME_OF_ACCOUNT}" "${HOME_DIR}/.ssh"

sudo chmod 775 -R /var/www
sudo chown -R "${USERNAME_OF_ACCOUNT}" /var/www
rm -rf /var/www/html
```
Install Docker and run it as a service, ensuring the created user is added to the docker group.
```
sudo apt-get install -qq --yes \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository --yes \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

sudo apt-get -qq update
sudo apt install -qq --yes docker-ce docker-ce-cli containerd.io

# Only add a group if it does not exist
sudo getent group docker || sudo groupadd docker
sudo usermod -aG docker "${USERNAME_OF_ACCOUNT}"

# Enable docker
sudo systemctl enable docker

sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
```
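Not part of the original script, but a cheap smoke test at this point would confirm the daemon is healthy and the repository setup worked:

```
# Optional sanity check (assumes outbound network access): pulls and runs the
# hello-world image, confirming the daemon and registry access both work.
sudo systemctl start docker
sudo docker run --rm hello-world
```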
Disable root logins and any form of password-based authentication by altering `sshd_config`.

```
# Match each directive whether or not it is commented out, and force it to
# "no". (The stock Ubuntu sshd_config ships these lines commented out, so a
# plain /^PermitRootLogin/ match would silently change nothing.)
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?ChallengeResponseAuthentication.*/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
```
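One addition worth considering here (my suggestion, not part of the original script): validate the edited config before the daemon gets restarted at the end of the flow, since a malformed `sshd_config` can lock you out of the box.

```
# Exits non-zero and prints the offending line if the config is invalid.
sudo sshd -t && echo "sshd_config OK"
```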
Configure the firewall and fail2ban.
```
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw reload
sudo ufw --force enable && sudo ufw status verbose

sudo apt-get -qq install --yes fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
```
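fail2ban ships with an sshd jail enabled by default on Ubuntu, so the above works as-is. If you'd rather pin the ban policy explicitly instead of inheriting distro defaults, a minimal override might look like this (the threshold values are illustrative, not from the post):

```
# Hypothetical jail.local: makes the sshd jail and its thresholds explicit.
cat <<'EOF' | sudo tee /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h
EOF
sudo systemctl restart fail2ban
```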
Swapfiles.
```
sudo fallocate -l 1G /swapfile && ls -lh /swapfile
sudo chmod 0600 /swapfile && ls -lh /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile && sudo swapon --show
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
Unattended updates, and restart the ssh daemon.
```
sudo apt install -qq unattended-upgrades
sudo systemctl restart ssh
```
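Depending on the base image, installing the package doesn't always mean the periodic timers are switched on. A non-interactive way to make sure (my addition, equivalent to what `dpkg-reconfigure unattended-upgrades` writes) is:

```
# Enables the daily package-list refresh and the unattended upgrade run.
cat <<'EOF' | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```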
Some questions
You can assume these questions are cost-benefit focused, i.e. whether it's worth my time to investigate something versus something else that may have better gains, given my limited time.
- Obviously, any critiques of the above provisioning process are appreciated, whether at the micro level of criticising particular lines or zooming out and saying "well, why don't you do this instead...". I can't know what I don't know.
- Is it worth investigating tools such as `ss` or `lynis` (https://github.com/CISOfy/lynis) to perform server auditing? I don't have to meet any compliance requirements at this point. (See the sketch after this list.)
- Do I get any meaningful increase in security by implementing 2FA on login here using Google Authenticator? As far as I can see, as long as I'm using best practices to actually `ssh` into our boxes, the likeliest risk profile for unwanted access probably isn't the authentication mechanism I use personally to access my servers.
- Am I missing anything here? Beyond the provisioning script itself, I adhere to best practices around storing and generating passwords and ssh keys.
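For what it's worth, trialling lynis is cheap enough that the cost-benefit question mostly answers itself; a one-off audit needs no compliance setup:

```
# Run lynis once from the upstream repo; findings are logged to
# /var/log/lynis.log and a report to /var/log/lynis-report.dat.
git clone https://github.com/CISOfy/lynis
cd lynis
sudo ./lynis audit system
```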
Some notes and comments
- Eventually I'll use the hosting provider's API to spin VPSs up and down on the fly via a custom management application, which gives me an opportunity to programmatically execute the provisioning script above and run some other pre- and post-provisioning tasks, like deployment of the application and so forth. (See the sketch after these notes.)
- Usage alerts and monitoring are configured within DigitalOcean's console, and alerts are sent to our business's Slack for me to action as needed. Currently, I'm settling on the following alerts:
- Server CPU utilisation greater than 80% for 5 minutes.
- Server memory usage greater than 80% for 5 minutes.
- I’m also looking at setting up daily fail2ban status alerts if needed.
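A rough sketch of what that programmatic provisioning could look like against DigitalOcean's v2 API, passing the script above as cloud-init user data so it runs on first boot. The name, region, size, and `provision.sh` filename are placeholders, and this assumes `jq` is available to JSON-encode the script:

```
# Hypothetical sketch: create a droplet and seed it with the provisioning
# script via user_data (cloud-init executes it on first boot).
curl -X POST "https://api.digitalocean.com/v2/droplets" \
  -H "Authorization: Bearer ${DIGITALOCEAN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{
        \"name\": \"staging-01\",
        \"region\": \"lon1\",
        \"size\": \"s-1vcpu-1gb\",
        \"image\": \"ubuntu-20-04-x64\",
        \"ssh_keys\": [\"${SSH_KEY_FINGERPRINT}\"],
        \"user_data\": $(jq -Rs . < provision.sh)
      }"
```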
9 votes -
CyberBunker: The "bulletproof web hosting" company based in a German Cold War bunker that became a dark-web empire
10 votes -
A month-and-a-half of self-hosted email
10 votes -
Hosting email server
6 votes -
Plans for PeerTube v3 : global index, progressive fundraising, live streaming
16 votes -
SMTP: A Conversation
9 votes -
Facebook files lawsuit against Namecheap
9 votes -
DigitalOcean is laying off staff, sources say thirty to fifty affected
10 votes -
Peertube 2.0 is out
35 votes -
What online services do you use to host and share photos?
Services like Facebook, Instagram, Google Photos, iCloud Photo Sharing, Flickr, SmugMug, 500px, and more are available for hosting and sharing photos online. I'm curious what service, or set of services, you use and how you decided. Do you cross-post between them, and if so, what strategy do you use?
I'm currently spread a bit and without much cross-posting between Facebook, Instagram, and Flickr. Not a strong conscious decision, though I've been wanting to do a bit more photography and would like to figure out a better strategy.
I think some key points to consider are cost (free vs pro, ease of upgrade/downgrade), storage space and restrictions (total space, max individual size, filetypes, enforced resizing/compression), ease and control of sharing publicly or privately, network audience and reach, and creative rights (who owns what rights on the content).
This Terms of Service; Didn't Read site can be helpful for at least determining, with some broad judgement, what the general rights on these services are.
14 votes -
The dirty business of hosting hate online
11 votes -
Imgur has raised $20M from Coil, a micropayment tool for creators that Imgur has agreed to build into its service
14 votes -
Is having a business line worth it?
Does anyone have a business subscriber Internet connection? Is it worth it?
I just spoke with my ISP, and for an extra $40/mo I can get a static IP address with 100 Mbps that I can host my own website on. I have a virtualization server, and I've been thinking about hosting my own hobby-scale website for a while. I haven't had any luck finding rack hosting space that I'd feel comfortable using, so I'm thinking about just going rogue and operating solo. If I had a static IP address with a pipe that would allow me to host, then all I'd need to do is stand up a server, register a domain, and point it at my IP address.
Other than the typical security risks, what do I need to worry about? Would the experience be worth it?
11 votes -
YouTube vs PeerTube: Thoughts on PeerTube as a competitor to YouTube
9 votes -
Flickr will soon start deleting photos — and massive chunks of internet history
27 votes -
The Cloud Is Just Someone Else's Computer
10 votes -
I tried to block Amazon from my life. It was impossible
13 votes -
Bomb threat, sextortion spammers abused weakness at GoDaddy.com
7 votes -
GoDaddy is sneakily injecting JavaScript into your website and how to stop it
44 votes -
The community network manual: How to build the Internet yourself
13 votes -
Flickr's free accounts will be limited to 1,000 photos and videos starting January 8, 2019
30 votes -
PeerTube reaches its first stable 1.0 release
23 votes -
Personal Wikis
I have been looking for some software where I can brain dump all the things I need to remember on a constant basis so I can easily find it again in the future. A personal wiki basically. I am wondering what any of you tilderians are using?
The things I am looking for:
Absolute requirements:
- Open Source: I want to be in control of the data myself, and I want to be able to hack on it myself as the need arises.
- Self-hostable: Goes hand-in-hand with open-sourceness; I want the data to live on the server in my apartment, under my own control.
- An API of some sort so I can programmatically add/read/modify data.
Nice to haves:
- Revision history of some sort.
- Common/simple data format for easy backup and longevity.
- Web interface, with mobile compatibility.
- Lightweight as possible, so I can run it on a low powered server.
Does anyone know of anything like that?
Options I have heard of:
25 votes -
Microsoft threatened to terminate Gab's cloud hosting if it didn't remove two posts by a neo-Nazi
24 votes -
Teknik.io registration is open for a few more hours!
EDIT: signups are now closed.
teknik.io is a website that provides services like email, [encrypted] file uploads, Git repos, blogs, URL shortening, and more. I've used them for a few years and they're wonderful. It's all open source and privacy-conscious; maybe some Tildes users would like it?
Registration is usually invite-only, but it's open for a few hours.
Thanks @duckoverflow for mentioning this.
Edit: also their privacy policy is short, simple, and easy-to-read if anyone is interested in that. I'd consider it a great example of what a privacy policy should be.
24 votes -
Microsoft sinks data centre off Orkney
8 votes -
Feedback on a federated decentralized git hosting solution
I have an idea, it's not particularly new. I think git code sharing could integrate very nicely with blockchains.
I think it could be done elegantly without modifying the git protocol at all, just as an optional superset (like Github) to provide forks, PR and discussion.
Something like:
- smart contract based system
- something like the Lightning Network for off-master-chain pushes
- local node hosting all obtained versions of code, something like pnpm meets ZeroNet
- cloning/pushing over DHT with WebTorrent
- client key pairs for collaboration and authentication
Do you guys think it could be done? Thoughts? Ideas? Criticisms?
Would anyone be interested in working on something like this? I'd like all the help I can get and any input people have.
6 votes -
Imgur adds videos
19 votes -
Tildes Technical Map
Having just joined recently and made my way through the [technical goals documentation](https://docs.tildes.net/technical-goals), I am interested in the lower-level stuff: how scaling is being considered, off-loading static content to CDNs, fault tolerance, etc., as well as code testing, deployments, and so on.
I guess this will be a bit clearer when Tildes goes Open, but I think a discussion on it could also be helpful for roadmapping and growth if possible.
5 votes