  Showing only topics with the tag "development".
    1. Trying to become a junior developer in Brazil is an uphill battle

      They ask for years of experience and skills that no Jr would know since, well, it's a Jr, and the process to apply for jobs is surreal: thousands of tests, interviews that go nowhere, and lots of ghosting. And the pay is not that good. No wonder a lot of developers start working for companies outside of Brazil after 2 or 3 years of experience.

      The last one to contact me sent me a test to complete in one week. I went above and beyond and learned a lot of things. Before this, I had some small projects in Go and Python. Now I needed to learn Docker, tests, GitHub Actions, PostgreSQL and other things. Not everything was mandatory, but I did my best and did it all. I finished in 5 days, since I have a day job.

      Here is the result: https://github.com/crdpa/conservice

      Showing the data in the browser was not necessary, but I think it was a nice touch and well made. If this doesn't land me a job as a junior developer, I don't know what else could.

      I'm glad I already have a job in another area, but my SO and I are separated by a 4-hour drive and I'm tired. I want to work from home to be near her and our dog. Paying rent in two places is becoming a burden.

      I would be happy if you guys could test the application I made. It only needs Docker.

      And do you guys have any tips for what to do from here?

      7 votes
    2. Good web dev communities?

      Hey folks.

      Can someone recommend a good web dev community for quality discussions?

      Right now I'm using Vue for a project and I'm wrestling with architectural decisions. I'd love a place where I can discuss the trade-offs and merits of different approaches.

      Many thanks. :)

      11 votes
    3. Where/how should I acquire a .com domain for three years in advance?

      So I want to purchase a domain for my personal website (just a WordPress thing), and I want to pay for three years in advance (I have my reasons). Which domain registrars are reasonably priced, trustworthy, and likely to assist a less technical, non-developer user like myself?

      Thanks!

      13 votes
    4. A few easy linux commands, and a real-world example on how to use them in a pinch

      Below is a summary of a real-world performance investigation I recently went through. The tools I used are installed on all Linux systems, but I know some people don't know them and would jump straight to heavyweight log analysis services and whatnot, or to writing their own solution.

      Let's say you have sampled request logs in a bunch of log files that contain lines like these:

      127.0.0.1 [2021-05-27 23:28:34.460] "GET /static/images/flags/2/54@3x.webp HTTP/2" 200 1806 TLSv1.3 HIT-CLUSTER SessionID:(null) Cache:max-age=31536000
      127.0.0.1 [2021-05-27 23:51:22.019] "GET /pl/player/123456/changelog/ HTTP/1.1" 200 16524 TLSv1.2 MISS-CLUSTER SessionID:(null) Cache:

      You might recognize Fastly logs there (IP anonymized). Now, there's a lot you might care about in this log file, but in my case, I wanted to get a breakdown of hits vs misses by URL.

      So, first step, let's concatenate all the log files with cat *.log > all.txt, so we can work off a single file.

      Then, let's split the file in two: hits and misses. There are a few different values for that field, but the majority are covered by either HIT-CLUSTER or MISS-CLUSTER. We can split them out by just grepping for them like so:

      grep HIT-CLUSTER all.txt > hits.txt; grep MISS-CLUSTER all.txt > misses.txt
      

      However, we only care about the URL and whether it's a hit or a miss, so let's clean up those hits and misses with cut. The way cut works, it takes a delimiter (-d) and cuts the input based on that; you then give it a range of "fields" (-f) that you want.

      In our case, if we cut based on spaces, our example line splits into the fields 127.0.0.1, [2021-05-27, 23:28:34.460], "GET, /static/images/flags/2/54@3x.webp, HTTP/2", 200, and so on.

      We care about the 5th value only. So let's do: cut -d" " -f5 to get that. We will also sort the result, because future operations will require us to work on a sorted list of values.

      cut -d" " -f5 hits.txt | sort > hits-sorted.txt; cut -d" " -f5 misses.txt | sort > misses-sorted.txt
      

      Now we can start doing some neat stuff. wc (word count) is an awesome utility: it lets you count characters, words, or lines very easily. wc -l counts lines in its input; since we're operating with one value per line, we can easily count our hits and misses already:

      $ wc -l hits-sorted.txt misses-sorted.txt
        132523 hits-sorted.txt
        220779 misses-sorted.txt
        353302 total
      

      220779 / 132523 is a 1:1.66 ratio of hits to misses. That's not great…

      Alright, now I'm also interested in how many unique URLs are hit versus missed. The uniq tool deduplicates immediate repetitions, so the input has to be sorted for the whole file to be deduplicated. We already did that. We can now count our URLs with uniq < hits-sorted.txt | wc -l; uniq < misses-sorted.txt | wc -l. We get 49778 and 201178, respectively. It's to be expected that most of our cache misses would be on "rarer" URLs; this gives us roughly a 1:4 ratio of cached to uncached URLs.
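
      Spelled out the same way as the other commands in this post (the counts are the ones quoted above):

      $ uniq < hits-sorted.txt | wc -l
      49778
      $ uniq < misses-sorted.txt | wc -l
      201178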

      Let's say we want to dig down further into which URLs most often hit the cache, specifically. We can add -c to uniq in order to get a duplicate count in front of each URL. To get the top ones first, we can then use sort again, this time in reverse (-r) and numeric (-n) mode. head lets us get the top 10.

      $ uniq -c < hits-sorted.txt | sort -nr | head
          815 /static/app/webfonts/fa-solid-900.woff2?d720146f1999
          793 /static/app/images/1.png
          786 /static/app/fonts/nunito-v9-latin-ext_latin-regular.woff2?d720146f1999
          760 /static/CACHE/js/output.cee5c4089626.js
          758 /static/images/crest/3/light/notfound.png
          757 /static/CACHE/css/output.4f2b59394c83.css
          756 /static/app/webfonts/fa-regular-400.woff2?d720146f1999
          754 /static/app/css/images/loading.gif?d720146f1999
          750 /static/app/css/images/prev.png?d720146f1999
          745 /static/app/css/images/next.png?d720146f1999
      

      And same for misses:

      $ uniq -c < misses-sorted.txt | sort -nr | head
           56 /
           14 /player/237678/
           13 /players/
           12 /teams/
           11 /players/top/
      <snip>
      

      So far this tells us static files are most often hit, and for misses it also tells us… something, but we can't quite track it down yet (and we won't, not in this post). We're not adjusting for how often each page is hit as a whole; this is still just high-level analysis.

      One last thing I want to show you! Let's take everything we learned and analyze those URLs by prefix instead. We can cut our URLs again by slash with cut -d"/". If we want the first prefix, we can do -f1-2, or -f1-3 for the first two prefixes. Let's look!

      cut -d'/' -f1-2 < hits-sorted.txt | uniq -c | sort -nr | head
       100189 /static
         5948 /es
         3069 /player
         2480 /fr
         2476 /es-mx
         2295 /pt-br
         2094 /tr
         1939 /it
         1692 /ru
         1626 /de
      
      cut -d'/' -f1-2 < misses-sorted.txt | uniq -c | sort -nr | head
        66132 /static
        18578 /es
        17448 /player
        17064 /tr
        11379 /fr
         9624 /pt-br
         8730 /es-mx
         7993 /ru
         7689 /zh-hant
         7441 /it
      

      This gives us hit-miss ratios by prefix. Neat, huh?
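
      If you wanted the actual hit percentage per prefix in a single table rather than eyeballing the two lists, one way is a small awk join over the two per-prefix counts. This is only a rough sketch: the intermediate file names are placeholders, and prefixes that never miss get skipped.

      cut -d'/' -f1-2 < hits-sorted.txt   | uniq -c > hits-by-prefix.txt
      cut -d'/' -f1-2 < misses-sorted.txt | uniq -c > misses-by-prefix.txt

      # First file: remember hit counts per prefix.
      # Second file: print prefix, hits, misses, and hit percentage.
      awk 'NR==FNR { hits[$2] = $1; next }
           { h = hits[$2] + 0; m = $1; printf "%-12s %8d %8d  %5.1f%% hit\n", $2, h, m, 100 * h / (h + m) }' \
          hits-by-prefix.txt misses-by-prefix.txt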

      13 votes
    5. Rate my homepage!

      Inspired by this post on lobste.rs, I thought it'd be fun for us all to post our homepages and talk about them. I'm posting this in ~creative because I think of a homepage as a creative endeavor, but feel free to move this to ~design or ~tech or wherever, mods.

      Just post your homepage as a top-level comment, and we'll workshop in replies!

      42 votes
    6. In which a foolish developer tries DevOps: critique my VPS provisioning script!

      I'm attempting to provision two mirrored staging and production environments for a future SaaS application that we're close to launching as a company, and I'd like to get some feedback on the provisioning script I've created, which takes a default VPS from our hosting provider, DigitalOcean, and readies it to be a secure hosting environment for our application instance (which runs inside Docker and persists data to an unrelated managed database).

      I'm sticking with a simple infrastructure architecture at the moment: a single VPS which runs both nginx and the application instance inside a containerised docker service as mentioned earlier. There are no load balancers or server duplication at this point. @Emerald_Knight very kindly provided me in the Tildes Discord with some overall guidance about what to aim for when configuring a server (limit damage as best as possible, limit access when an attack occurs)—so I've tried to be thoughtful and integrate that paradigm where possible (disabling root login, etc).

      I’m not a DevOps or sysadmin-oriented person by trade—I stick to programming most of the time—but this role falls to me as the technical person in this business, so the last few days have been a lot of reading and readying. I’ll run through the provisioning flow step by step. Oh, and for reference, this is Ubuntu 20.04 LTS.

      First step is self-explanatory.

      #!/bin/sh
      
      # Name of the user to create and grant privileges to.
      USERNAME_OF_ACCOUNT=
      
      sudo apt-get -qq update
      sudo apt install -qq --yes nginx
      sudo systemctl restart nginx
      

      Next, create my sudo user, add them to the needed groups, require a password change on first login, then copy across any authorised keys from the root user (which you can configure to be seeded to the VPS in the DigitalOcean management console).

      useradd --create-home --shell "/bin/bash" --groups sudo,www-data "${USERNAME_OF_ACCOUNT}"
      passwd --delete $USERNAME_OF_ACCOUNT
      chage --lastday 0 $USERNAME_OF_ACCOUNT
      
      HOME_DIR="$(eval echo ~${USERNAME_OF_ACCOUNT})"
      mkdir --parents "${HOME_DIR}/.ssh"
      cp /root/.ssh/authorized_keys "${HOME_DIR}/.ssh"
      
      chmod 700 "${HOME_DIR}/.ssh"
      chmod 600 "${HOME_DIR}/.ssh/authorized_keys"
      chown --recursive "${USERNAME_OF_ACCOUNT}":"${USERNAME_OF_ACCOUNT}" "${HOME_DIR}/.ssh"

      sudo chmod 775 -R /var/www
      sudo chown -R $USERNAME_OF_ACCOUNT /var/www
      rm -rf /var/www/html
      

      Install docker, run it as a service, and ensure the created user is added to the docker group.

      sudo apt-get install -qq --yes \
          apt-transport-https \
          ca-certificates \
          curl \
          gnupg-agent \
          software-properties-common
      
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      sudo apt-key fingerprint 0EBFCD88
      
      sudo add-apt-repository --yes \
         "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
         $(lsb_release -cs) \
         stable"
      
      sudo apt-get -qq update
      sudo apt install -qq --yes docker-ce docker-ce-cli containerd.io
      
      # Only add a group if it does not exist
      sudo getent group docker || sudo groupadd docker
      sudo usermod -aG docker $USERNAME_OF_ACCOUNT
      
      # Enable docker
      sudo systemctl enable docker
      
      sudo curl -L "https://github.com/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      sudo chmod +x /usr/local/bin/docker-compose
      sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
      docker-compose --version
      

      Disable root logins and any form of password-based authentication by altering sshd_config.

      sed -i '/^PermitRootLogin/s/yes/no/' /etc/ssh/sshd_config
      sed -i '/^PasswordAuthentication/s/yes/no/' /etc/ssh/sshd_config
      sed -i '/^ChallengeResponseAuthentication/s/yes/no/' /etc/ssh/sshd_config
      

      Configure the firewall and fail2ban.

      sudo ufw default deny incoming
      sudo ufw default allow outgoing
      sudo ufw allow ssh
      sudo ufw allow http
      sudo ufw allow https
      sudo ufw reload
      sudo ufw --force enable && sudo ufw status verbose
      
      sudo apt-get -qq install --yes fail2ban
      sudo systemctl enable fail2ban
      sudo systemctl start fail2ban
      

      Swapfiles.

      sudo fallocate -l 1G /swapfile && ls -lh /swapfile
      sudo chmod 0600 /swapfile && ls -lh /swapfile
      sudo mkswap /swapfile
      sudo swapon /swapfile && sudo swapon --show
      echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
      

      Unattended updates, and restart the ssh daemon.

      sudo apt install -qq unattended-upgrades
      sudo systemctl restart ssh
      

      Some questions

      You can assume these questions are cost-benefit focused, i.e. is it worth my time to investigate this, versus something else that may have better gains given my limited time.

      1. Obviously, any critiques of the above provisioning process are appreciated—whether on the micro level of criticising particular lines, or zooming out and saying “well why don’t you do this instead…”. I can’t know what I don’t know.

      2. Is it worth investigating tools such as ss or lynis (https://github.com/CISOfy/lynis) to perform server auditing? I don’t have to meet any compliance requirements at this point.

      3. Do I get any meaningful increase in security by implementing 2FA on login here using Google Authenticator? As far as I can see, as long as I'm using best practices to actually ssh into our boxes, the likeliest risk profile for unwanted access probably isn’t via the authentication mechanism I use personally to access my servers.

      4. Am I missing anything here? Beyond the provisioning script itself, I adhere to best practices around storing and generating passwords and ssh keys.

      Some notes and comments

      1. Eventually I'll use the hosting provider's API to spin up and spin down VPSes on the fly via a custom management application, which gives me an opportunity to programmatically execute the provisioning script above and run some other pre- and post-provisioning things, like deployment of the application and so forth.

      2. Usage alerts and monitoring are configured within DigitalOcean's console, and alerts are sent to our business' Slack for me to action as needed. Currently, I’m settling on the following alerts:
        1. Server CPU utilisation greater than 80% for 5 minutes.
        2. Server memory usage greater than 80% for 5 minutes.
        3. I’m also looking at setting up daily fail2ban status alerts if needed.
      9 votes
    7. Tell me about your early experiences with debugging and software QA

      Are you an “old timer” in the computer industry? I’m writing a story about the things programmers (and QA people) had to do to test their software. It’s meant to be a nostalgic piece that’ll remind people about old methods — for good or ill.

      For example, there was a point where the closest thing to a breakpoint was to insert “printfs” that said “I got to this place in the code!” And all testing was manual testing. Nothing was automated. If you wanted a bug tracking system, you built your own.

      So tell me your stories. Tell me what you had to do to test software, way back when, and compare it to today. What tools did you use -- or build? Is there anything you miss? Anything that makes you especially glad that the past is past?

      C’mon, you know you wanted a “remember when”!

      8 votes