  Showing only topics with the tag "bash".
    1. Share your personal dotfile treats and Unix tool recommendations

      I am currently preparing for a new job and cleaning up my dotfile repository. During the process, I had the idea that it would be nice to create a list of amazing tools, aliases, functions, and recommendations together.

      I will start.

      First, here is a list of nice tools to apt-get install or brew install that I can wholeheartedly recommend:

      • nvim is just an amazing text editor.
      • fzf is a very good fuzzy finder util. For example, you can quickly find files with it (see the small helper after this list).
      • eza is a good ls replacement (and the successor of exa).
      • bat is a great replacement for cat with nice integrations and many options.
      • stow is great for managing your dotfiles. Thanks to @TangibleLight for telling me about it a while ago. I really love it.
      • tmux is a terminal multiplexer, i.e. you can have many sessions in one single terminal window. It's easy to use and super helpful. (When on a mac, I prefer iTerm tabs, though.)
      • nvm is practically a must if you are working with Node.
      • glow is an excellent markdown reader.
      • tldr is a nice man replacement. (You must run tldr -u after installing it to update the local page cache.)
      • z, an amazing tool for switching directories quickly.
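
      For example, a tiny helper along these lines (just a sketch; the name fe and the bat preview are my own choices) lets fzf pick a file and open it in nvim:

      # Fuzzy-pick a file and open it in nvim; fzf exits non-zero on cancel
      fe() {
          local file
          file=$(fzf --preview 'bat --color=always {}') || return
          nvim "$file"
      }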

      Also, I can recommend Oh My ZSH! which I have been using for years.

      Here is a small list of aliases I enjoy (I have 100+ aliases and tried to pick some that others may enjoy as well):

      # Serve current dir
      alias serve="npx serve ."
      
      # What's my IP?
      alias ip="curl --silent --compressed --max-time 5 --url 'https://ipinfo.io/ip' && echo ''"
      
      # This should be the default
      alias mkdir="mkdir -p"
      
      # Nice git helpers
      alias amend="git add . && git commit --amend --no-edit"
      alias nuke="git clean -df && git reset --hard"
      
      # Make which more powerful
      alias which='(alias; declare -f) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot'
      
      # This saves so many keystrokes, honestly
      alias -- +x="chmod +x"
      
      # Turns your path into a nice list and prints it
      alias path='echo -e ${PATH//:/\\n}'
      
      # Map over arguments and run a command
      # Usage: map <command>
      # Example: ls | map cat
      alias map="xargs -n1"
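
      For instance, path prints one directory per line (the output here is illustrative; yours will vary):

      $ path
      /usr/local/bin
      /usr/bin
      /bin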
      

      And, finally, here are some fun functions:

      # Get cheat sheets for almost anything!
      # https://github.com/chubin/cheat.sh
      cheat() {
          WITH_PLUS=$(echo "$@" | sed 's/ /+/g')
          CAT_TOOL=$(command -v batcat || command -v bat || command -v cat)
          curl -s "cheat.sh/$WITH_PLUS" | "$CAT_TOOL"
      }
      
      # Send everything to /dev/null
      nullify() {
        "$@" >/dev/null 2>&1
      }
      
      # Create a new dir and enter it
      mk() {
        mkdir -p "$@" && cd "$_"
      }
      
      # Create a data URL from a file
      # Source: https://github.com/mathiasbynens/dotfiles/blob/master/.functions
      data-url() {
      	local mimeType=$(file -b --mime-type "$1");
      	if [[ $mimeType == text/* ]]; then
      		mimeType="${mimeType};charset=utf-8";
      	fi
      	echo "data:${mimeType};base64,$(openssl base64 -in "$1" | tr -d '\n')";
      }
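
      Example usage (the file and target names are made up):

      cheat bash loop        # fetches cheat.sh/bash+loop
      nullify make all       # runs make with all output discarded
      mk src/new-feature     # creates the directory and cds into it
      data-url logo.png      # prints a data: URL for logo.png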
      
      74 votes
    2. best way to go about with a script that seems to need both bash and python functionality

      Gonna try and put this into words.

      I am pretty familiar with bash and python. I've used both quite a bit and feel more or less comfortable with them.

      My issue is that when I want to accomplish a task that is maybe a bit complex, I feel like I have to wind up making a script, let's call it hello_world.sh, but then I also make a script called .hello_world.py,

      and basically, almost as the first line of the bash script, I call the python script like ./.hello_world.py "$@" and take advantage of the argparse library in python to determine what the user wants to do, among other tasks that are easier to do in python, like for loops and such.

      I try to do the meat of the logic in the python script, write the results to an .env file from it, and then in the bash script I do

      set -o allexport
      source "${DIR}"/"${ENV_FILE}"
      set +o allexport
      

      and then use the variables from that env file to do the rest of the logic in bash.
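
      As a minimal sketch of the whole pattern (every name here is made up; the python side parses the arguments and writes KEY=value lines, the bash side sources them):

      # .hello_world.py
      #!/usr/bin/env python3
      import argparse

      parser = argparse.ArgumentParser()
      parser.add_argument("--name", default="world")
      args = parser.parse_args()

      # Quote values: bash would otherwise word-split on spaces when sourcing
      with open(".env", "w") as f:
          f.write(f'GREETING="hello {args.name}"\n')

      # hello_world.sh
      #!/usr/bin/env bash
      ./.hello_world.py "$@"

      set -o allexport
      source .env
      set +o allexport

      echo "$GREETING"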

      Why do I do anything in bash?

      Because I very much prefer being able to see a terminal command executing in real time, watch what it does, and hit Ctrl+C if I see the command go awry.

      In python, you can run a command with subprocess or other similar system libraries, but you can't get the output in real time or terminate a command preemptively, and I really hate that; you have to wait for the command to end to see what happened.

      But I feel like there is something obvious I am missing (maybe bash has an argparse library I don't know about, and some way to inject the concept of types into it), or maybe there is another language entirely that fits my needs?

      6 votes
    3. A few easy linux commands, and a real-world example on how to use them in a pinch

      This below is a summary of some real-world performance investigation I recently went through. The tools I used are installed on all linux systems, but I know some people don't know them and would straight up jump to heavyweight log-analysis services and whatnot, or write their own solution.

      Let's say you have request log sampling in a bunch of log files that contain lines like these:

      127.0.0.1 [2021-05-27 23:28:34.460] "GET /static/images/flags/2/54@3x.webp HTTP/2" 200 1806 TLSv1.3 HIT-CLUSTER SessionID:(null) Cache:max-age=31536000
      127.0.0.1 [2021-05-27 23:51:22.019] "GET /pl/player/123456/changelog/ HTTP/1.1" 200 16524 TLSv1.2 MISS-CLUSTER SessionID:(null) Cache:

      You might recognize Fastly logs there (IP anonymized). Now, there's a lot you might care about in this log file, but in my case, I wanted to get a breakdown of hits vs misses by URL.

      So, first step, let's concatenate all the log files with cat *.log > all.txt, so we can work off a single file.

      Then, let's split the file in two: hits and misses. There are a few different values for them; the majority are covered by either HIT-CLUSTER or MISS-CLUSTER. We can do this by just grepping for them like so:

      grep HIT-CLUSTER all.txt > hits.txt; grep MISS-CLUSTER all.txt > misses.txt
      

      However, we only care about url and whether it's a hit or a miss. So let's clean up those hits and misses with cut. The way cut works, it takes a delimiter (-d) and cuts the input based on that; you then give it a range of "fields" (-f) that you want.

      In our case, if we cut on spaces, the example line splits into fields like 127.0.0.1, [2021-05-27, 23:28:34.460], "GET, /static/images/flags/2/54@3x.webp, HTTP/2", and so on.

      We care about the 5th value only. So let's do: cut -d" " -f5 to get that. We will also sort the result, because future operations will require us to work on a sorted list of values.

      cut -d" " -f5 hits.txt | sort > hits-sorted.txt; cut -d" " -f5 misses.txt | sort > misses-sorted.txt
      

      Now we can start doing some neat stuff. wc (word count) is an awesome utility: it lets you count characters, words, or lines very easily. wc -l counts lines in an input; since we're operating with one value per line, we can easily count our hits and misses already:

      $ wc -l hits-sorted.txt misses-sorted.txt
        132523 hits-sorted.txt
        220779 misses-sorted.txt
        353302 total
      

      220779 / 132523 is a 1:1.66 ratio of hits to misses. That's not great…

      Alright, now I'm also interested in how many unique URLs are hit versus missed. The uniq tool deduplicates adjacent repeated lines, so the input has to be sorted for the entire file to be deduplicated. We already did that. We can now count our urls with uniq < hits-sorted.txt | wc -l; uniq < misses-sorted.txt | wc -l. We get 49778 and 201178, respectively. It's to be expected that most of our cache misses would be in "rarer" urls; this gives us a 1:4 ratio of cached to uncached URLs.
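
      Spelled out in the same style as the other snippets:

      $ uniq < hits-sorted.txt | wc -l
      49778
      $ uniq < misses-sorted.txt | wc -l
      201178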

      Let's say we want to dig down further into which URLs are most often hitting the cache, specifically. We can add -c to uniq in order to get a duplicate count in front of our URLs. To get the top ones at the top, we can then use sort, in reverse sort mode (-r), and it also needs to be numeric sort, not alphabetic (-n). head lets us get the top 10.

      $ uniq -c < hits-sorted.txt | sort -nr | head
          815 /static/app/webfonts/fa-solid-900.woff2?d720146f1999
          793 /static/app/images/1.png
          786 /static/app/fonts/nunito-v9-latin-ext_latin-regular.woff2?d720146f1999
          760 /static/CACHE/js/output.cee5c4089626.js
          758 /static/images/crest/3/light/notfound.png
          757 /static/CACHE/css/output.4f2b59394c83.css
          756 /static/app/webfonts/fa-regular-400.woff2?d720146f1999
          754 /static/app/css/images/loading.gif?d720146f1999
          750 /static/app/css/images/prev.png?d720146f1999
          745 /static/app/css/images/next.png?d720146f1999
      

      And same for misses:

      $ uniq -c < misses-sorted.txt | sort -nr | head
           56 /
           14 /player/237678/
           13 /players/
           12 /teams/
           11 /players/top/
      <snip>
      

      So far this tells us static files are most often hit, and for misses it also tells us… something, but we can't quite track it down yet (and we won't, not in this post). We're not adjusting for how often the page is hit as a whole, this is still just high-level analysis.

      One last thing I want to show you! Let's take everything we learned and analyze those URLs by prefix instead. We can cut our URLs again by slash with cut -d"/". Since field 1 is the empty string before the leading slash, -f1-2 gives us the first prefix, and -f1-3 the first two. Let's look!

      cut -d'/' -f1-2 < hits-sorted.txt | uniq -c | sort -nr | head
       100189 /static
         5948 /es
         3069 /player
         2480 /fr
         2476 /es-mx
         2295 /pt-br
         2094 /tr
         1939 /it
         1692 /ru
         1626 /de
      
      cut -d'/' -f1-2 < misses-sorted.txt | uniq -c | sort -nr | head
        66132 /static
        18578 /es
        17448 /player
        17064 /tr
        11379 /fr
         9624 /pt-br
         8730 /es-mx
         7993 /ru
         7689 /zh-hant
         7441 /it
      

      This gives us hit-miss ratios by prefix. Neat, huh?
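
      For reference, each per-class analysis collapses into a single pipeline (grep's -h flag suppresses the filename prefixes it adds when given multiple files, which would otherwise break the field positions):

      grep -h HIT-CLUSTER *.log | cut -d' ' -f5 | cut -d'/' -f1-2 | sort | uniq -c | sort -nr | head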

      13 votes
    4. How can I make "whereis" automatically open the file in Nvim when it is the only result?

      EDIT: SOLVED

      It looks like it was much simpler than I thought, and someone had already solved it on Reddit. I won't delete the post; I'll just leave the link in case someone is interested.

      Runtime Environment

      OS: MX Linux

      Issue

      Sometimes I use "whereis" (aliased to "wh", but it doesn't make any difference...) for my own scripts.

      I usually copy their paths manually (using tmux) and paste them into the command line, resulting in something like this:

      nvim /home/my_username/my_scripts_folder/my_script
      

      Could I make that into a single command?

      Thanks in advance!
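
      For reference, one possible bash sketch (not necessarily the Reddit solution): a wrapper that opens the result in nvim only when whereis finds exactly one path (-b limits the search to binaries):

      wh() {
          # Word-split the output: res[0] is "name:", the rest are paths
          local -a res
          res=($(whereis -b "$1"))
          if [ "${#res[@]}" -eq 2 ]; then
              nvim "${res[1]}"
          else
              whereis -b "$1"
          fi
      }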

      3 votes