  • All activity
    1. Best 120mm fans for a desktop?


      I was looking for preferences on 120/140mm case fans. RGB is a want, but not at the expense of quality fans.

      I'm pretty new to the topic and not super familiar with the technical side. So open to reading more in depth too.

      Thanks!

      7 votes
    2. What does your desktop look like? What tools do you swear by?


      Between the recent discussions on the Useful Shell Scripts thread, and some of the tangents on the Desktop Usability thread, I thought it might be an interesting idea to have a desktop screenshot sharing / unixporn thread where we talk about our setups, preferred applications, and share some pointers. This doesn't specifically have to be a Unix circlejerk though. If you have a Windows/Android/ChromeOS/TempleOS setup with some novel innovations, you're more than welcome to share too.

      34 votes
    3. Programming Challenge - It's raining!


      Hi everyone, it's been 12 days since the last programming challenge, so here's another one. The task is to write an algorithm that calculates how long it would take to fill a system of lakes with water.

      It's raining in the forest. The forest is full of lakes, which are close to each other. Every lake is below the previous one (so the 1st lake is higher than the 2nd lake, which is higher than the 3rd lake). The lakes are empty at the beginning, and each fills at a rate of 1 l/h from the rain. Once a lake is full, all water that would normally fall into it flows into the next lake.

      For example, you have lakes A, B, and C. Lake A can hold 1 l of water, lake B can hold 3 l of water and lake C can hold 5 l of water. How long would it take to fill all the lakes?
      After one hour, the lakes would be: A (1/1), B (1/3), C (1/5). After two hours, the lakes would be: A (1/1), B (3/3), C (2/5), because in that hour B received 2 l: 1 l from the rain and 1 l overflowing from lake A. After three hours, the lakes would be: A (1/1), B (3/3), C (5/5). So the answer is 3. Please note that the answer can be any rational number; for example, if lake C could hold only 4 l instead of 5, the answer would be 2.66666...

      Hour 0:

      
      \            /
        ----(A)----
                             \                /
                              \              /
                               \            /
                                ----(B)----
                                                   \           /
                                                    \         /
                                                     \       /
                                                     |       |
                                                     |       |
                                                      --(C)--
      

      Hour 1:

      
      \============/
        ----(A)----
                             \                /
                              \              /
                               \============/
                                ----(B)----
                                                   \           /
                                                    \         /
                                                     \       /
                                                     |       |
                                                     |=======|
                                                      --(C)--
      

      Hour 2:

                  ==============
      \============/           |
        ----(A)----            |
                             \================/
                              \==============/
                               \============/
                                ----(B)----
                                                   \           /
                                                    \         /
                                                     \       /
                                                     |=======|
                                                     |=======|
                                                      --(C)--
      

      Hour 3:

                  ==============
      \============/           |
        ----(A)----            |             ========
                             \================/       |
                              \==============/        |
                               \============/         |
                                ----(B)----           |
                                                   \===========/
                                                    \=========/
                                                     \=======/
                                                     |=======|
                                                     |=======|
                                                      --(C)--
      

      Good luck everyone! Tell me if you need clarification or a hint. I already have a solution, but it sometimes doesn't work, so I'm really interested in seeing yours :-)
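
      If it helps to see the mechanics spelled out, here's a rough simulation sketch in PHP (only an illustration of the rules above, not a polished reference solution, and any language is fine). It advances time in segments during which every lake fills at a constant rate: its own 1 l/h of rain plus the overflow of the full lakes directly above it.

      <?php

      // Illustrative sketch of one possible approach: advance time in segments
      // during which every lake fills at a constant rate, ending each segment
      // when the next lake becomes full.
      function fillTime(array $capacities) {
          $n = count($capacities);
          $levels = array_fill(0, $n, 0.0);
          $time = 0.0;

          while (true) {
              // Work out each lake's current inflow rate (rain + overflow).
              $rates = [];
              $carry = 0.0; // overflow cascading down from full lakes
              $allFull = true;
              for ($i = 0; $i < $n; $i++) {
                  $inflow = $carry + 1.0; // 1 l/h of rain plus any overflow
                  if ($levels[$i] >= $capacities[$i]) {
                      $rates[$i] = 0.0;
                      $carry = $inflow; // a full lake passes everything on
                  } else {
                      $rates[$i] = $inflow;
                      $carry = 0.0;
                      $allFull = false;
                  }
              }
              if ($allFull) {
                  return $time;
              }

              // Advance to the moment the next lake fills at these rates.
              $dt = INF;
              for ($i = 0; $i < $n; $i++) {
                  if ($rates[$i] > 0) {
                      $dt = min($dt, ($capacities[$i] - $levels[$i]) / $rates[$i]);
                  }
              }
              for ($i = 0; $i < $n; $i++) {
                  $levels[$i] = min($capacities[$i], $levels[$i] + $rates[$i] * $dt);
              }
              $time += $dt;
          }
      }

      echo fillTime([1, 3, 5]), "\n"; // 3
      echo fillTime([1, 3, 4]), "\n"; // 2.666...

      ?>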

      21 votes
    4. Share your useful shell scripts!


      Disclaimer: Don't run scripts offered to you by randos unless you trust them or review the scripts yourself

      I use this constantly; it just plays music by file name, specifically matching *NAME* case-insensitively. Requires bash 4.something.

      #!/usr/bin/env bash
      # play -ln SONGS ...
      #   -l  don't shuffle
      #   -n  dry run (just print the matched files)
      mpv_args="--no-audio-display --no-resume-playback \
                --msg-level=all=status --term-osd-bar"
      shopt -s globstar nullglob nocaseglob
      
      shuffle=true
      dry=false
      while [[ "$1" == -* ]]; do
          if [[ "$1" == "-l" ]]; then 
              shuffle=false
          elif [[ "$1" == "-n" ]]; then
              dry=true
          fi
      
          shift 1
      done
      
      if [[ "$shuffle" == true ]]; then
          mpv_args="--shuffle $mpv_args"
      fi
      
      songs=()
      while [[ "$#" != 0 ]]; do
          # Change this glob to match your music directory layout;
          # find(1) could probably be used instead.
          songs+=( ~/music/**/**/*"$1"*.* )
          shift 1
      done
      
      if [[ "$dry" == true ]]; then
          if [[ "$shuffle" == true ]]; then
              printf "Shuffle mode is on\n"
          fi
      
          for song in "${songs[@]}"; do
              printf "$song\n"
          done
        
          exit
      fi
      
      if [[ ${#songs[@]} != 0 ]]; then
          mpv $mpv_args "${songs[@]}"
      fi
      

      I make no claims to the quality of this but it works!

      36 votes
    5. Where would a beginner start with data compression? What are some good books for it?


      Mostly the title. I have experience with Python, and I was thinking of learning more about data compression. How should I proceed? And what are some good books I could read, both about the specifics and the broader theory of data compression, data management, and data in general?

      15 votes
    6. An Alternative Approach to Configuration Management


      Preface

      Different projects have different use cases that can ultimately result in common solutions not suiting your particular needs. Today I'm going to diverge a bit from my more abstract, generalized topics on code quality and instead focus on a specific project structure example that I encountered.


      Background

      For a while now, I've found myself continually frustrated with the state of my project configuration management. I had a single configuration file that contained all of the configuration options for the various tools I've been using--database, API credentials, etc.--and I kept running into the problem of wanting to test these tools locally while not inadvertently committing and pushing sensitive credentials upstream. For me, part of my security process is ensuring that sensitive access credentials never make it into the repository and limiting access to these credentials to only the people who need them.


      Monolithic Files Cause Monolithic Pain

      The first thing I realized was that having a single monolithic configuration file was just terrible practice. There are going to be common configuration options that I want to have in there with default values, such as a local database configuration pointing to a database instance running on the same VM as the application. These should always be in the repo, otherwise any dev who spins up an instance of the VM will need to manually dig through documentation and copy-paste the missing options into the configuration. This would be incredibly time-consuming, inefficient, and stupid.

      I also use different tools which have different configuration options associated with them. Having to dig through a single file containing configuration options for all of these tools to find the ones I need to modify is cumbersome at best. On top of that, having those common configuration options living in the same place that sensitive access credentials do is just asking for a rogue git commit -A to violate the aforementioned security protocol.


      Same Problem, Different Structure

      My first approach to resolving this problem was breaking the configuration out into separate files, one for each distinct tool. In each file, a "skeleton" config was generated, i.e. each option was given a default empty value. The main config would then only contain config options that are common and shared across the application. To avoid having the sensitive credentials leaked, I then created rules in the .gitignore to exclude these files.

      This is where I ran into problem #2: I learned that this just doesn't work. With Git, you can have a file in your repo and have all changes to it tracked; have the file in your repo and make a local-only change so your edits aren't tracked (e.g. with git update-index --skip-worktree); or leave the file out of the repo completely. In my use case, I wanted to be able to leave the file in the repo, have everyone treat it as ignored, and only commit changes to it when there was a new configuration option I wanted added. Git doesn't support this use case whatsoever.

      This problem turned out to be really common, but the suggested solution is to have two separate versions of your configuration--one for dev, and one for production--and a flag to switch between the two. Given the breaking up of my configuration, I would then need twice as many files to do this, and given my security practices, this would violate the no-upstream rule for sensitive credentials. Worse still, if I had several different kinds of environments with different configuration--local dev, staging, beta, production--then for m such environments and n configuration files, I would need to maintain n*m separate files for configuration alone. Finally, I would need to remember to include a prefix or suffix in each file name any time I needed to retrieve values from a new config file, which is itself an error-prone requirement. Overall, there would be a substantial increase in technical debt. In other words, this approach would not only not help, it would make matters worse!


      Borrowing From Linux

      After a lot of thought, an idea occurred to me: within Linux systems, there's an /etc/skel/ directory that contains common files that are copied into a new user's home directory when that user is created, e.g. .bashrc and .profile. You can make changes to these files and have them propagate to new users, or you can modify your own personal copy and leave all other new users unaffected. This sounds exactly like the kind of behavior I want to emulate!

      Following their example, I took my $APPHOME/config/ directory and placed a skel/ subdirectory inside, which then contained all of the config files with the empty default values within. My .gitignore then looked something like this:

      $APPHOME/config/*
      !$APPHOME/config/main.php
      !$APPHOME/config/skel/
      !$APPHOME/config/skel/*
      # This last one might not be necessary, but I don't care enough to test it without.
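
      For concreteness, the resulting layout looks roughly like this (file names other than main.php are hypothetical examples):

      $APPHOME/config/
          main.php          # shared config, committed
          database.php      # local copy with real credentials, ignored
          skel/
              database.php  # empty-default skeleton, committed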
      

      Finally, on deploying my local environment, I simply include a snippet in my script that enters the skel/ directory and copies any files inside into config/, as long as they don't already exist there:

      # Copy any skeleton config files that don't already have a local copy.
      cd "$APPHOME/config/skel/"
      for filename in *; do
          if [ ! -f "$APPHOME/config/$filename" ]; then
              cp "$filename" "$APPHOME/config/$filename"
          fi
      done
      

      (Note: production environments have a slightly different deployment procedure, as local copies of these config files are saved within a shared directory for all releases to point to via symlink.)

      All of these changes ensure that only config/main.php and the files contained within config/skel/ are whitelisted, while all others are ignored, i.e. our local copies that get stored within config/ won't be inadvertently committed and pushed upstream!


      Final Thoughts

      Common solutions to problems are typically common for a good reason. They're tested, proven, and predictable. But sometimes you find yourself running into cases where the common, well-accepted solution to the problem doesn't work for you. Standards exist to solve a certain class of problems, and sometimes your problem is just different enough for it to matter and for those standards to not apply. Standards are created to address most cases, but edge cases will always exist. In other words, standards are guidelines, not concrete rules.

      Sometimes you need to stop thinking about the problem in terms of the standard approach to solving it, and instead break it down into its most abstract, basic form and look for parallels in other solved problems for inspiration. Odds are the problem you're trying to solve isn't as novel as you think it is, and that someone has probably already solved a similar problem before. Parallels, in my experience, are usually a pretty good indicator that you're on the right track.

      More importantly, there's a delicate line to tread between needing to use a different approach to solving an edge case problem you have, and needing to restructure your project to eliminate the edge case and allow the standard solution to work. Being able to decide which is more appropriate can have long-lasting repercussions on your ability to manage technical debt.

      16 votes
    7. As someone with ADHD, I hate the "RTFM" motto


      I'm a student of software engineering. I'm not a programmer yet, but I use software that is common among this crowd, like i3wm, Neovim and Emacs. I know how to find and read documentation. I've read the obnoxious How To Ask Questions the Smart Way. Every time I encounter an issue, I do my due diligence: I go through the manuals, I google, I read the docs. My main editor, Emacs, has an extensive manual with plenty of accurate details. I get that it's a huge program (more like a platform, really), but let's just say that a black-and-white, 650-page PDF is not the most ADHD-friendly thing in the world.

      I'm aware that I chose a career that requires plenty of reading, but I happen to like it a lot and it seems like I have some aptitude for it. I had similar issues in my previous activities anyway. But it's discouraging trying to understand programming and complex software, only to be repelled by people who think everyone has their ability to concentrate. Sometimes I completely lose track of time. I can sit at my computer and hyperfocus for up to 48 hours with 20 Chrome tabs open non-stop and Netflix in the background. I may seem productive, but I'm not reading anything. Maybe I read a paragraph or two, and 30 seconds later I can't remember what I was doing. But I still have tasks to accomplish, and sometimes I need help finding useful information in a 700-page manual.

      Luckily I have great support and determination and have accomplished a lot, but my peers have no idea what I went through to get to where I am. What I don't have in natural-born skills I compensate for with a lot of raw effort. Everyone has their difficulties and I'm not seeking compassion, but I'd like to suggest that people think twice before dismissing someone they know nothing about as "lazy". That person might have a mental disorder, a reading disorder or even an intellectual disability. Do you wanna be the guy who told a dyslexic to just read the fucking manual?

      EDIT: of course I get that time and energy are limited commodities... my point is: don't be an asshole about it. Do what you can and you wanna do, but there's no need to use hostile buzzwords when you communicate with less knowledgeable people. You're not even forced to answer... I much prefer not getting an answer than getting a hostile one.

      26 votes
    8. Help! I'm indecisive and I want a keyboard.


      I know there are at least fifteen threads on ~comp alone about mechanical keyboards, but this one is mine.

      I recently had a run-in with tendinitis, which taught me the importance of ergonomics, but I still wanted the clickety-clack of a mechanical keyboard, so I decided to consider buying an ergonomic mechanical keyboard.

      The first one that I looked at was the ErgoDox EZ (it was the first one I saw). It had a split layout, open source firmware, and a positive review from Linus Tech Tips.

      The second one was the Ultimate Hacking Keyboard (I saw the Hacker News thread). I was interested in it for the Trackball Module.

      These two keyboards are different enough from each other, so it's hard to compare them.

      In conclusion, why should I choose one over the other?

      14 votes
    9. Ask Tilde: How would you improve the ErgoDox


      The ErgoDox has been out for a few years now and spawned many, many new designs based off it. My question is how would you improve it? I've been trying to answer this question for a few weeks now and would like to know what the community thinks. What is important in a keyboard for you?

      I've thrown my hat into the ring with Gergo, which I think comes close. It uses SMD components, reducing the overall size and cost of the board; replaces the ProMicro with a TQFP Atmega32u4; moves the paddles in a tiny bit; and removes the extra keys from the thumb cluster. It's meant to be used without a case (rubbered standoffs keep it off the desk/surface) and the back has some pretty designs. The hardest part for me to justify was lopping off the number row, but seeing as many layouts use a modifier plus the right-hand pad as an ortho numpad, I went with it. Worst case, the default layout will have paddle + top row give numbers. In addition, for occasional mouse users, I designed a trackball that fits inside a 1u key and can be mounted on the right-hand side of the board (or a regular key if wanted). The idea is that for small movements you have something other than QMK's mouse keys to work with. I've gone into a bit more detail about the design considerations on my blog.

      The main thing I tried to optimize with Gergo was cost. Ergo keyboards need not be expensive, and I think the price point on this board drives it home. With a cheap set of caps off Amazon and some Cherry clones, this board can be put together for under $100, shipping included. Compared to an ErgoDox EZ with a starting price of $250 before keys or shipping, I think I've done a decent job.

      As keyboards are highly personal devices, what do you look for in a keyboard?

      5 votes
    10. Meta Discussion: Is there interest in topics concerning code quality?


      I've posted a few lengthy topics here outside of programming challenges, and I've noticed that the ones that seem to have spurred the most interest and generated some discussion were ones that were directly related to code quality. To avoid falling for confirmation bias, though, I thought I would ask directly.

      Is there generally a greater interest in code quality discussions? If so, then what kind of things are you interested in seeing in those discussions? What do you prefer not to see? If not, then what kinds of programming-related discussions would you prefer to see more of? What about non-programming discussions?

      Also, is there any interest in an informal series of topics much like the programming challenges or the a layperson's introduction to... series (i.e. decentralized and available for anyone to participate whenever)? Personally, I'd be interested in seeing more on the subject from others!

      17 votes
    11. Code Quality Tip: Wrapping external libraries.


      Preface

      Occasionally I feel the need to touch on the subject of code quality, particularly because of the importance of its impact on technical debt, especially as I continue to encounter the effects of technical debt in my own work and do my best to manage it. It's a subject that is unfortunately not emphasized nearly enough in academia.


      Background

      As a refresher, technical debt is the long-term cost of the design decisions in your code. These costs can manifest in different ways, such as greater difficulty in understanding what your code is doing or making non-breaking changes to it. More generally, these costs manifest as additional time and resources being spent to make some kind of change.

      Sometimes these costs aren't things you think to consider. One such consideration is how difficult it might be to upgrade a specific technology in your stack. For example, what if you've built a back-end system that integrates with AWS and you suddenly need to upgrade your SDK? In a small project this might be easy, but what if you've built a system that you've been maintaining for years and it relies heavily on AWS integrations? If the method names, namespaces, argument orders, or anything else has changed between versions, then suddenly you'll need to update every single reference to an AWS-related tool in your code to reflect those changes. In larger software projects, this could be a daunting and incredibly expensive task, spanning potentially weeks or even months of work and testing.

      That is, unless you keep those references to a minimum.


      A Toy Example

      This is where "wrapping" your external libraries comes into play. The concept of "wrapping" basically means to create some other function or object that takes care of operating the functions or object methods that you really want to target. One example might look like this:

      <?php
      
      class ImportedClass {
          public function methodThatMightBecomeModified($arg1, $arg2) {
              // Do something.
          }
      }
      
      class ImportedClassWrapper {
          private $class_instance = null;
      
          private function getInstance() {
              if(is_null($this->class_instance)) {
                  $this->class_instance = new ImportedClass();
              }
      
              return $this->class_instance;
          }
      
          public function wrappedMethod($arg1, $arg2) {
              return $this->getInstance()->methodThatMightBecomeModified($arg1, $arg2);
          }
      }
      
      ?>
      

      Updating Tools Doesn't Have to Suck

      Imagine that our ImportedClass has some important new features that we need to make use of that are only available in the most recent version, and we're several versions behind. The problem, of course, is that there were a lot of changes that ended up being made between our current version and the new version. For example, ImportedClass is now called NewImportedClass. On top of that, methodThatMightBecomeModified is now called methodThatWasModified, and the argument order ended up getting switched around!

      Now imagine that we were directly calling new ImportedClass() in many different places in our code, as well as directly invoking methodThatMightBecomeModified:

      <?php
      
      $imported_class_instance = new ImportedClass();
      $imported_class_instance->methodThatMightBecomeModified($val1, $val2);
      
      ?>
      

      For every single instance in our code, we need to perform a replacement; in terms of Big-O notation, that's a linear, O(n) amount of work. If we assume that we only ever used this one method, and we used it 100 times, then there are 100 instances of new ImportedClass() to update and another 100 instances of the method invocation, equaling 200 lines of code to change. Furthermore, we need to remember each of the replacements that need to be made and carefully avoid making any errors in the process. This is clearly non-ideal.

      Now imagine that we chose instead to use the wrapper object:

      <?php
      
      $imported_class_wrapper = new ImportedClassWrapper();
      $imported_class_wrapper->wrappedMethod($val1, $val2);
      
      ?>
      

      Our updates are now limited only to the wrapper class:

      <?php
      
      class ImportedClassWrapper {
          private $class_instance = null;
      
          private function getInstance() {
              if(is_null($this->class_instance)) {
                  $this->class_instance = new NewImportedClass();
              }
      
              return $this->class_instance;
          }
      
          public function wrappedMethod($arg1, $arg2) {
              return $this->getInstance()->methodThatWasModified($arg2, $arg1);
          }
      }
      
      ?>
      

      Rather than making changes to 200 lines of code, we've now made changes to only 2. What was once an O(n) complexity change has now turned into an O(1) complexity change to make this upgrade. Not bad for a few extra lines of code!


      A Practical Example

      Toy problems are all well and good, but how does this translate to reality?

      Well, I ran into such a problem myself once. Running MongoDB with PHP requires the use of an external driver, and this driver provides an object representing a MongoDB ObjectId. I needed to migrate from one hosting provider to a new cloud hosting provider, splitting the application and database services, which had originally been hosted on the same physical machine, onto separate servers. For security reasons, this required an upgrade to a newer version of MongoDB, which in turn required an upgrade to a newer version of the driver.

      This upgrade resulted in many of the calls to new MongoId() failing, because the old version of the driver would accept empty strings and other invalid ID strings and default to generating a new ObjectId, whereas the new version of the driver rejected invalid ID strings with errors. And there were many, many cases where invalid strings were being passed into the constructor.

      Even after spending hours replacing the (literally) several dozen instances of the constructor calls, there were still some places in the code where invalid strings managed to get passed in. This made for a very costly upgrade.

      The bugs were easy to fix after the initial replacements, though. After wrapping new MongoId() inside of a wrapper function, a few additional conditional statements inside of the new function resolved the bugs without having to dig around the rest of the code base.
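
      As a rough illustration (a minimal sketch with an assumed validation rule, not the exact code from that project), such a wrapper function might look something like this:

      <?php

      // Sketch of a wrapper around the driver's ObjectId constructor.
      // Invalid or empty ID strings fall back to generating a fresh id,
      // mimicking the old driver's behavior. (The 24-hex-character check
      // is an assumption for illustration.)
      function makeMongoId($id_string = null) {
          if (!is_string($id_string) || !preg_match('/^[0-9a-f]{24}$/i', $id_string)) {
              return new MongoId(); // let the driver generate a new ObjectId
          }

          return new MongoId($id_string);
      }

      ?>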


      Final Thoughts

      This is one of those lessons that you don't fully appreciate until you've experienced the technical debt of an unwrapped external library first-hand. Code quality is an active effort, but a worthwhile one. It requires you to be willing to throw away potentially hours or even days of work when you realize that something needs to change, because you're thinking about how to keep yourself from banging your head against a wall later down the line instead of thinking only about how to finish up your current task.

      "Work smarter, not harder" means putting in some hard work upfront to keep your technical debt under control.

      That's all for now, and remember: don't be fools, wrap your external tools.

      23 votes
    12. Programming Challenge: Shape detection.


      The programming challenges have kind of come to a grinding halt recently. I think it's time to get another challenge started!

      Given a grid of symbols, representing a simple binary state of "filled" or "unfilled", determine whether or not a square is present on the grid. Squares must be 2x2 in size or larger, must be completely solid (i.e. all symbols in the NxN space are "filled"), and must not be directly adjacent to any other filled spaces.

      Example, where 0 is "empty" and 1 is "filled":

      000000
      011100
      011100
      011100
      000010
      
      // Returns true.
      
      000000
      011100
      011100
      011110
      000000
      
      // Returns false.
      
      000000
      011100
      010100
      011100
      000000
      
      // Returns false.
      

      For those who want a greater challenge, try any of the following:

      1. Get a count of all squares.
      2. Detect squares that are touching (but not as a rectangle).
      3. Detect other specific shapes like triangles or circles (you will need to be creative).
      4. If doing (1) and (3), count shapes separately based on type.
      5. Detect shapes within unfilled space as well (a checkerboard pattern is a great use case).
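
      For the base task, a brute-force sketch in PHP might look something like the following (one possible interpretation, assuming "directly adjacent" means orthogonally adjacent; it's not meant as a reference solution):

      <?php

      // Try every top-left corner and size >= 2: the block must be entirely
      // filled, and no filled cell may touch its border orthogonally.
      function hasSquare(array $grid) {
          $rows = count($grid);
          $cols = strlen($grid[0]);

          for ($r = 0; $r < $rows; $r++) {
              for ($c = 0; $c < $cols; $c++) {
                  for ($size = 2; $r + $size <= $rows && $c + $size <= $cols; $size++) {
                      if (isSolid($grid, $r, $c, $size) && isIsolated($grid, $r, $c, $size)) {
                          return true;
                      }
                  }
              }
          }

          return false;
      }

      function isSolid(array $grid, $r, $c, $size) {
          for ($i = $r; $i < $r + $size; $i++) {
              for ($j = $c; $j < $c + $size; $j++) {
                  if ($grid[$i][$j] !== '1') {
                      return false;
                  }
              }
          }
          return true;
      }

      function isIsolated(array $grid, $r, $c, $size) {
          $rows = count($grid);
          $cols = strlen($grid[0]);
          // Check the columns just left and right of the block...
          for ($i = $r; $i < $r + $size; $i++) {
              foreach ([$c - 1, $c + $size] as $j) {
                  if ($j >= 0 && $j < $cols && $grid[$i][$j] === '1') {
                      return false;
                  }
              }
          }
          // ...and the rows just above and below it.
          for ($j = $c; $j < $c + $size; $j++) {
              foreach ([$r - 1, $r + $size] as $i) {
                  if ($i >= 0 && $i < $rows && $grid[$i][$j] === '1') {
                      return false;
                  }
              }
          }
          return true;
      }

      var_dump(hasSquare(['000000', '011100', '011100', '011100', '000010'])); // bool(true)
      var_dump(hasSquare(['000000', '011100', '011100', '011110', '000000'])); // bool(false)

      ?>
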
      13 votes