Showing only topics with the tag "linux".
    1. Help needed: slow external hard drive

      I've got a 2TB Toshiba drive (formatted as NTFS) that has become very slow, and I was wondering if anyone here has any ideas what the problem could be and how I could fix it. All the data I'd need off the drive is backed up, but I would at least like a working drive to put it back onto!

      In short, it became slow after I had to force power-off the system it was connected to (Pop OS installed on another external drive which I unplugged by mistake) and I haven't bothered to try to fix it in the six months since.

      I've tested it on Pop and it takes about 10-20 minutes to mount, and 2 minutes to unmount and safely remove. The data itself seems fine, but performance is slow: accessing a 20MB image takes several seconds, and selecting the drive in GNOME Disks caused it to freeze.

      The drive sounded louder than normal, especially after plugging in.

      On Windows, the drive was recognised and browsable immediately, but browsing through folders was very slow - opening some folders caused Windows Explorer to freeze for a while. Some of my double-clicks were mis-recognised as click-to-rename, which took several seconds to activate, during which time Task Manager reported average response times between 5000 and 11000 ms.

      Attempting to load an audio file resulted in lots of buffering. Task Manager reported an active time of 100% (even when no files or folders were being loaded), yet activity never exceeded 100 KB/s (and didn't sustain even that for more than a second). Ejecting the drive takes forever: after ejecting it via the tray icon, the icon is not removed (even though no other drives are connected or listed) and the active time stays at 100%, with the indicator LED blinking non-stop. The system also did not enter sleep right away when I asked it to.

      All of that to say, does anyone know what the issue could be, or how I could find and fix it? Thanks!


      Edit: fixed and normal functionality restored (at least enough that I can check the drive more easily) using Scan & Repair in Windows (see my comment).

      4 votes
    2. Whatever happened with UMN vs. Linux Kernel Maintainers?

      Even tech news moves a bit too fast for me to keep up. Did UMN ever get unbanned? I saw a half-hearted apology and then finally this [1], but never heard any update. The most recent article I've seen is this ZDNet article [2] from a couple of weeks ago; it discusses a related issue but mentions that UMN is still banned.

      Anyone following this?

      [1] https://cse.umn.edu/cs/statement-computer-science-engineering-confirming-linux-technical-advisory-board-findings-may-9

      [2] https://www.zdnet.com/article/hard-work-and-poor-pay-stresses-out-open-source-maintainers/

      4 votes
    3. A few easy linux commands, and a real-world example of how to use them in a pinch

      Below is a summary of a real-world performance investigation I recently went through. The tools I used are installed on virtually every linux system, but I know some people don't know them and would jump straight to heavyweight log-analysis services and whatnot, or to writing their own solution.

      Let's say you have request-log samples in a bunch of log files that contain lines like these:

      127.0.0.1 [2021-05-27 23:28:34.460] "GET /static/images/flags/2/54@3x.webp HTTP/2" 200 1806 TLSv1.3 HIT-CLUSTER SessionID:(null) Cache:max-age=31536000
      127.0.0.1 [2021-05-27 23:51:22.019] "GET /pl/player/123456/changelog/ HTTP/1.1" 200 16524 TLSv1.2 MISS-CLUSTER SessionID:(null) Cache:

      You might recognize Fastly logs there (IP anonymized). Now, there's a lot you might care about in this log file, but in my case, I wanted to get a breakdown of hits vs misses by URL.

      So, first step, let's concatenate all the log files with cat *.log > all.txt, so we can work off a single file.

      Then, let's split the file in two: hits and misses. There are a few different values for that field, but the majority are covered by either HIT-CLUSTER or MISS-CLUSTER. We can do this by just grepping for them like so:

      grep HIT-CLUSTER all.txt > hits.txt; grep MISS-CLUSTER all.txt > misses.txt
      

      However, we only care about the URL and whether it's a hit or a miss. So let's clean up those hits and misses with cut. The way cut works, it takes a delimiter (-d) and splits the input on it; you then give it a range of "fields" (-f) that you want to keep.

      In our case, if we cut on spaces, our example line splits into fields like 127.0.0.1, [2021-05-27, 23:28:34.460], "GET, /static/images/flags/2/54@3x.webp, and so on.

      We care about the 5th value only. So let's do: cut -d" " -f5 to get that. We will also sort the result, because future operations will require us to work on a sorted list of values.
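
      As a quick sanity check (not part of the run itself, just to confirm the field number), you can pipe the sample line from earlier through cut:

      $ echo '127.0.0.1 [2021-05-27 23:28:34.460] "GET /static/images/flags/2/54@3x.webp HTTP/2" 200 1806 TLSv1.3 HIT-CLUSTER SessionID:(null) Cache:max-age=31536000' | cut -d" " -f5
      /static/images/flags/2/54@3x.webp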

      cut -d" " -f5 hits.txt | sort > hits-sorted.txt; cut -d" " -f5 misses.txt | sort > misses-sorted.txt
      

      Now we can start doing some neat stuff. wc (word count) is an awesome utility: it lets you count characters, words, or lines very easily. wc -l counts lines in its input; since we're operating with one value per line, we can easily count our hits and misses already:

      $ wc -l hits-sorted.txt misses-sorted.txt
        132523 hits-sorted.txt
        220779 misses-sorted.txt
        353302 total
      

      220779 misses to 132523 hits works out to roughly a 1:1.66 ratio of hits to misses. That's not great…
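
      (If you'd rather not do that division in your head, bc is usually also lying around on the same systems and will do it in the shell:)

      $ echo "scale=2; 220779 / 132523" | bc
      1.66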

      Alright, now I'm also interested in how many unique URLs are hit versus missed. The uniq tool deduplicates adjacent duplicate lines, which is why the input has to be sorted for it to deduplicate the whole file; we already did that. We can now count unique URLs by piping uniq into wc -l, as shown below: we get 49778 for hits and 201178 for misses. It's to be expected that most of our cache misses would be on "rarer" URLs; this gives us roughly a 1:4 ratio of cached to uncached URLs.
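
      Spelled out in the same style as the other snippets, with the outputs from my run:

      $ uniq < hits-sorted.txt | wc -l
      49778
      $ uniq < misses-sorted.txt | wc -l
      201178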

      Let's say we want to dig down further into which URLs are most often hitting the cache, specifically. We can add -c to uniq in order to get a duplicate count in front of our URLs. To get the most frequent ones first, we can then use sort in reverse mode (-r); it also needs to be a numeric sort, not an alphabetic one (-n). head lets us grab the top 10.

      $ uniq -c < hits-sorted.txt | sort -nr | head
          815 /static/app/webfonts/fa-solid-900.woff2?d720146f1999
          793 /static/app/images/1.png
          786 /static/app/fonts/nunito-v9-latin-ext_latin-regular.woff2?d720146f1999
          760 /static/CACHE/js/output.cee5c4089626.js
          758 /static/images/crest/3/light/notfound.png
          757 /static/CACHE/css/output.4f2b59394c83.css
          756 /static/app/webfonts/fa-regular-400.woff2?d720146f1999
          754 /static/app/css/images/loading.gif?d720146f1999
          750 /static/app/css/images/prev.png?d720146f1999
          745 /static/app/css/images/next.png?d720146f1999
      

      And same for misses:

      $ uniq -c < misses-sorted.txt | sort -nr | head
           56 /
           14 /player/237678/
           13 /players/
           12 /teams/
           11 /players/top/
      <snip>
      

      So far this tells us static files are most often hit, and for misses it also tells us… something, but we can't quite track it down yet (and we won't, not in this post). We're not adjusting for how often each page is hit as a whole; this is still just high-level analysis.

      One last thing I want to show you! Let's take everything we learned and analyze those URLs by prefix instead. We can cut our URLs again, this time on slashes, with cut -d"/". Since each URL starts with a slash, field 1 is empty, so -f1-2 gives us the first prefix, and -f1-3 the first two prefixes. Let's look!

      $ cut -d'/' -f1-2 < hits-sorted.txt | uniq -c | sort -nr | head
       100189 /static
         5948 /es
         3069 /player
         2480 /fr
         2476 /es-mx
         2295 /pt-br
         2094 /tr
         1939 /it
         1692 /ru
         1626 /de
      
      $ cut -d'/' -f1-2 < misses-sorted.txt | uniq -c | sort -nr | head
        66132 /static
        18578 /es
        17448 /player
        17064 /tr
        11379 /fr
         9624 /pt-br
         8730 /es-mx
         7993 /ru
         7689 /zh-hant
         7441 /it
      

      This gives us hit-miss ratios by prefix. Neat, huh?
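
      If you wanted actual numbers per prefix rather than eyeballing the two listings, one possible way (just a sketch, reusing the sorted files from above; the intermediate file names here are made up, and join and awk are also standard tools) would be:

      cut -d'/' -f1-2 < hits-sorted.txt | uniq -c | awk '{print $2, $1}' | sort > hits-by-prefix.txt
      cut -d'/' -f1-2 < misses-sorted.txt | uniq -c | awk '{print $2, $1}' | sort > misses-by-prefix.txt
      # join the two files on the prefix, then compute a hit rate per prefix
      join hits-by-prefix.txt misses-by-prefix.txt | awk '{printf "%-10s %6d hits %7d misses %3.0f%% hit rate\n", $1, $2, $3, 100*$2/($2+$3)}'

      (Prefixes that only ever hit or only ever miss get dropped by join unless you pass -a, but for a quick look that's usually fine.)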

      13 votes
    4. Can anyone recommend a printer? (...ahem...) a Linux printer?

      Last time I owned an inkjet was well over a decade ago. I had a nice HP color LaserJet that Just Worked™ for almost a decade (and PS, I bought it used), and then I just lived w/o a printer for the past 3-4 years. Now that I'm window-shopping for inkjets, it sounds like the whole "use-our-ink-or-die" business model has only gotten worse.

      Are there any good inkjet printers I can just use like a normal printer, buying ink (cheaper than the printer was) when I need it, yada? Or should I just write off the entire industry (again) and go straight to the laser printers?

      And does anyone actually have a decent (color, all-in-one) printer that works reasonably well with their (YourDistroHere) Linux machine?

      Danke


      ETA: Thanks for all the feedback. I'm now prioritizing a Brother laser (maybe just mono), or possibly an Epson EcoTank.

      Side-note ... how cool is it that we have so many Linux-folk in our midst!?

      Thanks again.

      13 votes
    5. Share your linux desktop/setup

      I've put quite a bit of work into my i3 setup recently and I'm curious if the people here are interested in that kind of thing.

      I'd be interested in looking through configs to get ideas, and sharing screenshots and such.

      Here is what my desktop looks like right now. Let me know what you think.

      26 votes