whbboyd's recent activity

  1. Comment on Awesome Games Done Quick 2022 has raised $3,416,729 for the Prevent Cancer Foundation in ~games

    whbboyd
    Link Parent

    (can we let “ORB!” die gracefully, please?)

    Absolutely not; I insist on a brutal death executed by a vigilante mob armed with torches and pitchforks.

I strongly agree with you, though. There are clear and significant pros and cons to both forms. Having one of each would, I think, let them capitalize on the strengths of both.

    6 votes
  2. Comment on Has anyone with a WD NAS formatted drives for another system? in ~comp

    whbboyd
    Link Parent

    Interesting.

    A couple more things you could try:

    • It looks like WD Reds have jumper pins to configure the drive. It has historically been possible to configure some drives to lie about their capacity in order to work around software limitations. I couldn't find documentation for recent WD jumper settings, but if your drive has a jumper installed, you could try taking note of where it is, removing it, and seeing if that changes anything. (I wouldn't experiment too much with jumpers otherwise unless you can find documentation of them.)
    • You could try WD's own troubleshooting software.
    • WD's documentation of this is atrocious to nonexistent, but theoretically, it should be possible to upgrade or reset the drive's firmware. If you can figure out how, that would be something to try.
    2 votes
  3. Comment on Has anyone with a WD NAS formatted drives for another system? in ~comp

    whbboyd
    Link

    If you don't care what's on the drives, just replace the partition table. I have no idea why WD would use unpartitioned space on the drive for storage, but weird behavior is par for the course for proprietary systems, so it certainly doesn't surprise me. On Linux, you can run fdisk -l as root to double-check everything; you should see output along the lines of

    $ sudo fdisk -l
    Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk model: Samsung SSD 860 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: <UUID>
    
    Device       Start        End    Sectors   Size Type
    /dev/sda1     2048     194559     192512    94M EFI System
    /dev/sda2   194560    2148351    1953792   954M Linux filesystem
    /dev/sda3  2148352 1953523711 1951375360 930.5G Linux filesystem
    
    
    Disk <etc>
    

    From what you're describing, it sounds like you should expect something along the lines of Disk /dev/whatever: 4000 GiB, 4<trillion> bytes, <many> sectors with partitions not adding up to the full size of the drive. If so, creating a new partition table with a single partition covering the entire drive will give you access to the full amount of storage (though it will delete any data currently on the drive).
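    To make that concrete, here's a sketch of the repartitioning, practiced on a loopback image file so it's safe to run. The same sfdisk invocation works on the real device; substitute your actual /dev/sdX (the image path here is just a stand-in), and note that it destroys the existing partition table and any data on the drive.

    ```shell
    # Safe sketch: practice on a loopback image file. On the real drive you'd
    # point sfdisk at e.g. /dev/sdX instead, which DESTROYS its partition table.
    truncate -s 100M /tmp/fake-disk.img

    # A single ";" input line means "one partition, all defaults", i.e. one
    # partition spanning the whole disk; --label gpt writes a fresh GPT first.
    echo ';' | sfdisk --label gpt /tmp/fake-disk.img

    # Double-check the result the same way as above.
    sfdisk -l /tmp/fake-disk.img
    ```

    After that, a plain mkfs on the new partition gets you the full capacity.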

    5 votes
  4. Comment on Would this be alright for a NAS? in ~comp

    whbboyd
    (edited)
    Link Parent

    If you're not space-constrained, the ProLiant will certainly give you a much better bang for your buck. The biggest downside I can think of is that that Xeon may idle at a much higher power usage than a more consumer-oriented CPU.

    (For what it's worth, the Z420 @vord linked would work fine for your purposes as well: you can mount 3.5" drives in 5.25" bays with cheap adapter brackets. The GPUs wouldn't do anything for you now, but might be fun to experiment with down the line.)

    System-building can be stressful, to be sure. You're much less likely to accidentally destroy something now than in the bad old AT days, but there are definitely still surprise incompatibilities it's easy to run across. It can be fun to improvise around issues, though! Here's a recent war story of mine, which can maybe serve as inspiration:


    As I've mentioned, I run a home server (it used to be just a NAS, but I've been putting more services on it). Relevant to this story, as of a month or so ago, it had 3x 1.5TB HDDs (in a ZFS RAID-Z pool), a basic power supply, a mid-tower ATX case with 3x 3.5" bays, 3x 2.5" bays, and 3x 5.25" bays, a tiny cheap-o video card to convince the motherboard to POST, and a bunch of other hardware that doesn't matter here, running FreeBSD 12 (for reasons). I was running out of storage and saw a good deal on hard drives on Newegg; so I somewhat off-the-cuff sprung for 5x 4TB HDDs.

    • I'm going to bullet the problems I encountered like this. This first one isn't really a problem, but more of a warning: now is an awful time to be buying hard drives. "Shingled magnetic recording" (or "SMR") technology enables significantly improved density at the cost of making some write patterns catastrophically bad. Critically for pooled storage, resilvering is one of the most pathological cases, which is… just really, really bad. Hard drive manufacturers have been quietly rolling out SMR technology because it's cheaper for a given capacity, and mitigating the write performance through tricks like big caches. There was a huge scandal around WD trying to hide this a few years ago. Anyway, the upshot is, do your research and make absolutely certain the drives you're buying for pooled storage are not SMR (the older technology, which you want, is called "conventional magnetic recording" or "CMR"). Drives being marketed "for NAS use" is not good enough.

    Due to limitations of ZFS, I can't expand my pool and replace the devices in-place; I need to set up the new drives as a new pool, migrate all the data, and then decommission the old pool.

    • Problem 1: the server doesn't have remotely enough bays or—more critically—SATA ports to connect nine drives (don't forget the OS drive!) simultaneously.

    Fortunately, ZFS provides a convenient tool called zfs-send which makes it possible to turn a zpool into a streaming image that you can send over the network, or store somewhere, or do whatever with. I don't have enough room to store the image I'm going to create; but my desktop has enough drive bays and connectors to load everything up, so I can use it as a drive mule, send the snapshot over the network to it, and then swap the drives back into my NAS.
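    In command form, the transfer boils down to something like this (a sketch only: the pool name tank, the host desktop, and the target pool newtank are made up, and exact flags vary between openzfs versions):

    ```shell
    # On the server: take a recursive snapshot of the whole pool, then stream it
    # to the desktop. -R replicates all descendant datasets, snapshots, and
    # properties; on the receive side, -F forces a rollback if needed, -d strips
    # the sending pool's name, and -u leaves the received datasets unmounted.
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | ssh desktop zfs receive -Fdu newtank
    ```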

    So, I start doing that, and immediately run into a problem.

    • Problem 2: my desktop's video card is physically incompatible with mounting as many hard drives as I need to; it's too long and interferes with some of the drive bays.

    I can't just remove it, because it's a gaming PC and the rest of the system was built on the assumption you'd drop a big standalone video card in it and doesn't have integrated graphics (and won't boot without a GPU). Fortunately, the server has a tiny little video card because it's mostly scavenged from older revisions of this gaming desktop and has the same limitation! So, I can swap the video cards and mount all the new hard drives in my desktop. This setup is goofy (my server doesn't even have a monitor attached to use its temporary new hefty GPU on), but everything I need works.

    My desktop runs Debian, so it's straightforward for me to install zfs-dkms and initialize the pool. Seeing Size: 11T in df's output is pretty sweet! I do a couple of quick tests of zfs-send to try to get the invocation I want, then kick it off. I'm sending roughly 1.8TB over gigabit ethernet, so this takes a few hours. I go to bed.

    In the morning, the send is complete. I check on it and immediately find an issue.

    • Problem 3: the send didn't transfer all the snapshots.

    I want to keep my snapshots. There's nothing I know I need in them, but, well, that's the thing, isn't it. So I re-research the zfs-send commandline options, wipe the filesystem (the easiest way I found to do this was just to destroy and recreate the pool), and re-send.

    Once the data has been sent again, I still don't see the snapshots! Turns out with the version of openzfs in Debian, you have to add a flag for zfs list to show snapshots. I'm pretty sure the first send actually worked fine. Oops. Anyway, now I'm ready to swap all the drives around.
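    For anyone else bitten by this, the gotcha in command form (assuming a reasonably recent openzfs, where snapshots are hidden from listings by default):

    ```shell
    zfs list                 # datasets only; snapshots are hidden by default
    zfs list -t snapshot     # snapshots only
    zfs list -t all          # datasets, volumes, and snapshots together
    ```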

    Pulling all the drives out of my desktop results in a nasty scraped knuckle, but is otherwise smooth. Swapping the drives in the server is four-fifths smooth.

    • Problem 4: I need to power six drives (five data drives and the OS drive). The basic power supply in my server provides four SATA and two Molex plugs. Somehow—I'm seriously not sure how this is possible—I only have one Molex-to-SATA power adapter.

    I could have sworn I had more adapters than that, but evidently not. Fortunately, the new pool has two-drive redundancy, so I can just leave one of the drives disconnected and operate in a degraded state pretty safely until an adapter arrives. I'll just mount them all, and…

    • Problem 5: I need to mount two of the drives in 5.25" bays, but I only have one set of adapter brackets.

    Oops. I knew about that one, too. Oh well, I'll just leave it out of the case until that adapter arrives. (A fun side effect of these two problems, though not really an issue, is that I have to leave the OS drive loose in the case in order to reach it with a power connector. SSDs don't really care about this, but it's amusingly unpolished and in line with the way this has been going.)

    So, I boot the server, go to import the pool, and…

    • Problem 6: The version of ZFS in FreeBSD 12 is significantly older than the version in Debian Bullseye. The pool has feature flags the server OS doesn't know about. I can only import it read-only.

    Guess it's time to upgrade to FreeBSD 13.

    This goes fine. The correct order of operations with upgrading the OS and packages isn't super clear, but whatever I do works well enough for me to boot. At some point around here, I discover a very orthogonal but fairly serious issue:

    • Problem 7: I use LDAP for user accounts on my systems. This is totally unnecessary, but handy for some things. The LDAP server runs on this same server, and I put the LDAP database on the ZFS storage pool. When I booted the server with the old pool out and the new pool not importable, LDAP looked at its empty, unmounted database directory and decided to come up with an empty database. sssd caches my account if LDAP is down, but it's now up with nothing in it, so my user account on my laptop disappears.

    This is a real problem. It's surmountable—I have local admin accounts on all my systems, and direct hardware access regardless—but really annoying, and it's preventing me from ssh-ing into the server for arcane reasons. I cobble together a connection by su-ing to the local admin account and copying the ssh keys from my regular user account. After the upgrade, importing the pool goes perfectly smoothly. I quickly mount LDAP's database, restart the daemon, restart sssd on my laptop, and check: my user account exists again. Whew.

    At this point, everything is up and running. Woohoo! I order adapter brackets and a Molex-to-SATA adapter. When they come, I mount the last drive. It is extremely tight to finagle it into the bay, but I manage to do so without needing to take anything apart. I can actually mount the OS drive, now, too.

    And that's the end! It was a surprisingly bumpy ride, but in the end, I got everything working, and the only additional stuff I had to buy were the two adapters (and I was able to get up and running without them).

    8 votes
  5. Comment on Would this be alright for a NAS? in ~comp

    whbboyd
    Link

    That's likely to be fine. Some potential concerns to consider include:

    • @asymptomatically's concern about clearances in the case is legit. Compact cases often put a lot of unpublished limitations on the dimensions of things. My NAS is in a cheap ATX mid tower; I don't have to care too much about the size because it lives in my basement, and I don't have to worry about clearances or capacity or having to get unusual or specialized parts to fit.
    • Check that the motherboard has enough SATA ports! A lot of mini-ITX boards only have four ports, which will leave you without a port for your OS drive if you load it up with four data drives.
    • The CPU is weak. For the workloads you've outlined, it'll probably work fine; but if you start trying to do much else with it, it will likely struggle. I'd try to find out what socket it is, and if there are reasonable upgrades you can drop into that socket.
    • The RAM amount is a little weak, too (though again, fine for what you're running), but this should be upgradable if needed.
    2 votes
  6. Comment on Epic Games Holiday Sale 2021 in ~games

    whbboyd
    Link Parent

    2021-12-20: Loop Hero

    I have never heard of this game before. Other than the one-word description of "roguelike", I know nothing about it. Anyone with any experience of this game want to add anything?

    3 votes
  7. Comment on Log4Shell Update: Second log4j Vulnerability Published (CVE-2021-44228 + CVE-2021-45046) in ~comp

    whbboyd
    Link Parent

    Remarkably, this ride's not over yet:

    Apache Log4j2 does not always protect from infinite recursion in lookup evaluation

    This one is denial-of-service (under some? circumstances, attacker-crafted log inputs can cause the lookup code to recurse infinitely (!), using a ton of resources and ultimately crashing the logging thread). It is mitigated by log4j-core version 2.17.0.

    I'll reiterate my advice to stop using log4j-core, I guess.

    3 votes
  8. Comment on Epic Games Holiday Sale 2021 in ~games

    whbboyd
    Link Parent

    Fair enough—I certainly don't follow AAA gaming in any form, including pricing, particularly closely. It is certainly my impression that a few years ago, a headline AAA release dropping below 50% of opening day price just a year after release would have been extraordinary.

    (It's also certainly a good observation that Epic themselves are happy to throw massive discounts around with wild abandon.)

  9. Comment on Log4Shell Update: Second log4j Vulnerability Published (CVE-2021-44228 + CVE-2021-45046) in ~comp

    whbboyd
    Link Parent
    Whoops, a consequential update: Log4Shell Update: Severity Upgraded 3.7 -> 9.0 for Second log4j Vulnerability (CVE-2021-45046) It's RCE again, you may now resume panicking. (To the best of my...

    Whoops, a consequential update:

    Log4Shell Update: Severity Upgraded 3.7 -> 9.0 for Second log4j Vulnerability (CVE-2021-45046)

    It's RCE again, you may now resume panicking.

    (To the best of my knowledge, 2.16 fully disables the vulnerable lookup mechanism, so if you've made that upgrade, you're safe.)

    My recommendation to anyone affected by either of these issues, who doesn't have extensive infrastructure built around log4j, would be to stop using it. You can do this with no changes to client code by removing log4j-core from your application; adding log4j-over-slf4j (the SLF4J API bridge) to your application; and adding an appropriate SLF4J logging backend to your application (I think almost everybody should be using slf4j-simple; but if you have complicated requirements, or you're trying to port a complicated log4j configuration, logback is a substantially more featureful alternative.)

    5 votes
  10. Comment on Epic Games Holiday Sale 2021 in ~games

    whbboyd
    Link Parent

    Cyberpunk is also $20

    Ouch! For a game that's barely more than a year old, that's a harsh sale price.

    (Not to say it doesn't deserve it, because it certainly does. But even so, ouch. Also, I'll second that, if you have a PC that can run it and you're pretty tolerant of glitchiness, it's probably worth purchasing at that price.)

    1 vote
  11. Comment on “Imagine if doctors relied on Google as much as programmers do” in ~tech

    whbboyd
    Link

    Short and sweet. I need to take a potshot at the title, though, because it makes it very clear the author doesn't interact much with physicians outside of being a patient: they do. And yes, they literally use google.com (and look up videos of procedures on youtube), though of course proprietary databases and services are also an important part of most physicians' practice.

    Like programmers, doctors know their own jargon and specific terminology to cut quickly through the chaff of general-interest content to get to information which is targeted to and useful for them when using general-purpose search engines. This is, essentially, the underlying reason why laypeople referring to "Doctor Google" (or "Helpdesk Tech Google") tends to go very, very poorly, while professionals using the same information sources are much more likely to extract useful information.

    17 votes
  12. Comment on Reddit confidentially files to go public in ~tech

    whbboyd
    (edited)
    Link Parent

    It's just gonna take one or two more major scandals for Reddit to have its Digg v4 moment.

    I wish I shared your optimism, but unfortunately, I think you're wrong about this. One of the essential features of Digg v4 was that Reddit was already right there as a single obvious place for dissatisfied users to migrate to en masse. (Jokes comparing the two sites were commonplace, e.g. "Digg is Reddit's frontpage from yesterday".) Both were also much smaller than Reddit is now, and more similar to each other at the time than either is to modern Reddit. I don't know of an obvious destination for modern Reddit émigrés.

    I guess one possibility is that the contributors who provide Reddit's dwindling supply of thoughtful and/or informative content finally all up and leave, leaving behind the teenagers, bots, trolls, and lowest-common-denominator internet point farmers and transforming the site entirely into the marginally-curated 4chan it clearly actually wants to be. But I don't think there'll be a watershed moment for this process. It's been ongoing for years (since Digg v4, ironically, if some of the saltier older users are to be believed) and will probably just continue, gradual but unabated, until the concentration of "quality content" reaches homeopathic proportions.

    3 votes
  13. Comment on Reddit confidentially files to go public in ~tech

    whbboyd
    Link Parent

    I figured it was something like this. Thank you for explaining the specifics.

    You have to admit, though, it is a funny bit of terminology to see in a news headline. =)

    3 votes
  14. Comment on Log4Shell Update: Second log4j Vulnerability Published (CVE-2021-44228 + CVE-2021-45046) in ~comp

    whbboyd
    Link

    Maybe worth noting, for anyone feeling the panic restarting, that this vulnerability is much less severe than CVE-2021-44228 (denial of service versus remote code execution; it's still really bad given the circumstances, but we're talking "1906 San Francisco earthquake" versus "Chicxulub impactor").

    But, since you've already dusted off your build infrastructure and set things up to deal with the dino-killer vulnerability, may as well make use of it again to deal with the city-killer…

    3 votes
  15. Comment on Reddit confidentially files to go public in ~tech

    whbboyd
    (edited)
    Link Parent

    Either the content is worth the pain in which case you’ll find an alternative or will simply use the shitty UI, or the content isn’t worth it in which case… just stop using it now instead of Stockholm syndroming your social media diet.

    As someone who says "the day old.reddit stops working is my last day on the site", my reasoning is this:

    1. There are degrees of worth. Reddit's content could be so amazing that it would be worth crawling through broken glass for; or it could be meh, worth consuming if it's easy to do so, but not otherwise; or it could be total garbage, not worth consuming under any circumstances or even worth going to lengths to avoid. (Any conversations about Reddit specifically are further complicated by the siloing of subreddits; any two people could have arbitrarily different experiences on the site depending on subreddit subscriptions.) For me, personally, it's just barely above the "meh" line—there's content I enjoy on the subs I'm subscribed to, but there's nothing I'd consider valuable with anything approaching regularity, and so if the site got more annoying to use, there's nothing remotely valuable enough for me to put up with that annoyance.
    2. The new interface literally degrades the quality of the content, beyond just being an unpleasant eyesore. It parcels out content in tiny, not-even-bite-sized pieces, making it incredibly laborious to follow conversations and making those conversations just outright less likely to happen.
    3. This is a little meta, but I have a strong suspicion that most of the people posting the rare piece of content I do find valuable are also in the "old. or death" camp. If they follow through on their ultimatum, that's also directly reducing the quality of the content.

    I agree wholeheartedly with your take on the IPO, though. Maybe they can turn meme-and-racism "engagement" numbers into success on the IPO front (I mean, I sure as hell hope not, but I'm certainly not going to bet on the intelligence or insight of the people who buy into tech IPOs), but in the process, they will definitely lose every remaining vestige of real "value", at least from my perspective.

    19 votes
  16. Comment on What are your favorite Christmas songs? in ~music

    whbboyd
    Link
    Frankly, I'll take damn near anything that doesn't rate retail Christmas playlists (Mariah Carey is right the fuck out), but a sampling that I actually enjoy: Loreena McKennitt—A Midwinter Night's...

    Frankly, I'll take damn near anything that doesn't rate retail Christmas playlists (Mariah Carey is right the fuck out), but a sampling that I actually enjoy:

    2 votes
  17. Comment on Log4Shell: RCE 0-day exploit found in log4j2, a popular Java logging package in ~comp

    whbboyd
    Link Parent

    It's certainly possible! Java and the JVM are definitely very widespread, but definitely not universal, and I don't know exactly what the distribution is.

    On the other hand, I imagine there are plenty of orgs like yours which are affected because of packaged software based on the JVM, even if their developers wouldn't touch Java with a twenty-foot pole, or if they have no software devs on staff at all. Even though the only thing to do is wait for your vendor, you're still affected.

    4 votes
  18. Comment on Log4Shell: RCE 0-day exploit found in log4j2, a popular Java logging package in ~comp

    whbboyd
    Link Parent

    There's surprisingly little room for "much" worse IMO, lol. Maybe if someone finds a trivial ring-0 RCE in the Linux TCP stack…?

    6 votes
  19. Comment on Log4Shell: RCE 0-day exploit found in log4j2, a popular Java logging package in ~comp

    whbboyd
    Link

    So, who else got to spend this morning plumbing the depths of the dependency trees of their employer's entire software suite? ;) Pretty much all my colleagues were affected by this; I'd bet this issue hits some 90% of the world's organizations, and probably nearly 100% of orgs big enough to have software developers on staff.
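    If anyone's still hunting, one quick-and-dirty sweep that doesn't need build tooling: jars are just zip archives, and grep will happily scan them as binary blobs, so any jar containing the string JndiLookup.class is shipping log4j-core. The demo directory and fake jar below are stand-ins; point find at your real deployment directory.

    ```shell
    # Demo setup: a fake "jar" standing in for a real deployment directory (a
    # plain file is enough for grep's purposes; real jars match the same way).
    mkdir -p /tmp/scan-demo
    printf 'PK...JndiLookup.class...' > /tmp/scan-demo/app.jar

    # The sweep itself: list every jar that bundles the vulnerable lookup class.
    find /tmp/scan-demo -name '*.jar' -print0 | xargs -0 grep -l 'JndiLookup.class'
    ```

    (This only catches log4j-core bundled as its own jar or with the class name intact; shaded/relocated jars need a closer look.)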

    (I got lucky: our logging backend of choice is slf4j-simple, I only found one place log4j had snuck in anyway, and it wasn't accessible to untrusted data. I ripped it out anyway, but it was a "while I'm looking at it" thing, not a panicked emergency fix.)

    (While I've always thought log4j was massively overengineered crap, now I have a clear and substantial consequence to point to, so at least there's that!)

    8 votes