bme's recent activity
-
Comment on Bridgy Fed, a project to connect the open social web, is now becoming a nonprofit in ~tech
bme But who is doing any plundering here? It's a bridge between two open networks, which requires some specific human to say "hey I want to follow someone over there", and the bridge facilitates that. I'd be a little bit more on the side of calling this person out if they were reselling the content, but that's not it. It isn't scraping, it's just using the APIs according to the users' stated preferences to federate content.
-
Comment on Bridgy Fed, a project to connect the open social web, is now becoming a nonprofit in ~tech
bme I guess I mostly agree with the "technobro sociopaths". ActivityPub is already equipped with opt-out mechanisms for deciding the scope of your posts (require consent to follow / publish only to followers). Stop putting your shit out on public if you want to control who reads it.
It reminds me of the drama around Bluesky exiting invite-only mode and people flipping out because their data was going to be public. I honestly wish the internet would go away (he says, participating in a conversation on the internet). It seems beyond the grasp of most people to understand that when they plug a computer into the internet they are creating an opportunity for anyone, good or bad, to reach out and touch them from anywhere on the globe. It seems to require a shift in thinking that is too big, and there is too much money to be made from exploiting it for it to be left alone.
-
Comment on What are some common terminal aka CLI workflows? in ~comp
-
Comment on Touch typing learning software in ~tech
bme As in you need to be on the same local network to play the game on mobile? Might be worth running a ZeroTier network (decent free tier). I used to use this tons for LAN-on-WAN gaming when I had friends and free time.
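In case it helps, a minimal sketch of what that looks like from the CLI. The network ID below is a made-up placeholder; you create the network (and authorise members) in the ZeroTier console.
```bash
# On each machine that should be on the virtual "LAN":
sudo zerotier-cli join 8badf00d12345678   # placeholder network ID
sudo zerotier-cli listnetworks            # check the join went through and see your virtual IP
```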
-
Comment on Has anyone ever used NixOS as daily-drive distro? in ~tech
bme That makes sense. If you aren't regularly mixing config and code then the dependency linking doesn't buy you much, and you still have to deal with ejecting the FHS, which isn't an obvious win at all.
-
Comment on Has anyone ever used NixOS as daily-drive distro? in ~tech
bme
the rest I run in Docker
What does that workflow look like? Are you using something like vscode dev containers? The thing that I really, really like about nix is how it combines configuration and code in a single closure, which once you get the hang of it is difficult to imagine living without. The snippet below is a heavily redacted version of what I have to install my shell (fish). It is effectively a function that takes many arguments, one of which has to be a package set, and returns a config map which causes:
- fish to be installed
- fish greeting to be disabled
- a fish function (similar to a bash alias) to be created which calls terraform. The ${pkgs.terraform} string interpolation syntax expands to the input-hash addressed path that terraform is stored under, and installs terraform at the same time.
The beauty of this is that in order to refer to the path of a package it has to be installed. Configuration + closure. This snippet will never be missing its dependencies, even if terraform somehow isn't available on the path. For funsies I also include a bit of cross-platform stuff: I run linux at home and travel with a macbook, and I use the same set of nix expressions to manage both. isDarwin checks which platform I am on and selects the right way for me to get my md5 hash.
```nix
{ pkgs, ... }:
let
  md5 = if (!pkgs.stdenv.isDarwin) then "${pkgs.outils}/bin/md5 -qs" else "md5 -s";
in
{
  programs.fish = {
    enable = true;
    interactiveShellInit = ''
      set fish_greeting
    '';
    functions = {
      "ta" = {
        description = "Apply terraform";
        body = ''
          ${pkgs.terraform}/bin/terraform apply /tmp/(${md5} "$PWD").plan $argv
        '';
      };
    };
  };
}
```
I'll do another one: have you ever just wanted to refer to a script and not wanted to care about naming it? I do all of the time, and nix frees me from having to care about it. Another redacted snippet, this time setting up the MFA process to pull OTP codes out of a nitrokey for AWS. This generates my .aws/config file and installs both aws-vault and pynitrokey plus a small script that tidies up the nitrokey output. I don't care where the script lives, and with nix I don't have to name a path, it's just input-hashed. It's also guaranteed to exist as long as it's referenced by the config. Win win win.
```nix
{ config, pkgs, ... }:
let
  credentialProcess = "${pkgs.aws-vault}/bin/aws-vault export --pass-dir=${config.xdg.dataHome}/v --backend=pass --format=json default";
  getOtp = pkgs.writeText "get-otp" ''
    set -l totpout (${pkgs.pynitrokey}/bin/nitropy nk3 secrets get-otp aws 2>/dev/tty)
    # get-otp outputs two lines, the otp is on the second line
    echo $totpout[2]
  '';
in
{
  home.file.".aws/config".text = ''
    [default]
    region=eu-west-1
    credential_process = ${credentialProcess}
    mfa_serial = arn:aws:iam::XXX:mfa/XXX
    mfa_process = ${config.programs.fish.package}/bin/fish ${getOtp}
  '';
}
```
I don't know about you, but my computer is littered with this stuff, and now it's trivial to evolve, track, and roll back from (switching config takes about 10 seconds typically). I can't see myself going back to anything else that doesn't at least bear a passing resemblance to nix.
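To make the "trivial to roll back" bit concrete, assuming these snippets are applied with home-manager (which the programs.fish / home.file options suggest), rolling forwards and backwards looks roughly like this:
```bash
home-manager switch        # build and activate the current config (the ~10 second step)
home-manager generations   # list previous generations and their store paths
# To roll back, run the activation script of any older generation listed above,
# e.g. (the hash is a placeholder):
/nix/store/<hash>-home-manager-generation/activate
```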
-
Comment on What the hell is a Typescript or: Creation ideas above my skill level in ~tech
bme I don't know what the right way to push the project forwards for you is, but this sounds like an eminently feasible project, especially if you have put thought into the design / UI elements / user stories, and have some semi-formal description of how scoring should work, or at least what the objective of scoring should be and what data it should be derived from. As others have already suggested, you just need to find one talented-enough dev that already likes Art Fight and is stoked on pixel clash.
If you ever want any architecture/code review as you move the coding side of it along, shoot me a DM, and if I can help I will (creds: ex-Google / ex-investment-bank distributed systems guy, currently doing startup things). Sounds like a fun project.
-
Comment on What the hell is a Typescript or: Creation ideas above my skill level in ~tech
bme To answer the question on cope: hakuna matata. Life is too short to master everything, so there is no point in worrying about it. Either accept it's a multiyear project and what will be important is doing a little bit consistently, or drop it. Everyone overestimates what they can do in the short term and underestimates just as badly what they can achieve with steady application on a longer timescale.
On the "just get a dev to do it": this is only gross if you have no skin in the game. You've already invested a significant amount in doing. You'd continue to be doing, even if you grabbed a dev to join you on the journey. I read the wiki page on Art Fight. I am mildly curious (but not enough to do more research myself). This looks really trivial to scale out: 2 teams, keep a running total. There's a CRDT for that. The only complex part seems to be scoring attacks. How is that done today? I'm way too busy to be able commit to implementation, but I'd be willing to do some light mentoring on how to structure the thing.
Lastly, on Art Fight being too big for a hobbiest open source competitor. The wiki page says 400k people participated in 2024. Assuming that scoring isn't obscenely expensive then this could totally run on a pretty limited budget. Assuming every participant never slept and interacted with the site every 10 seconds, 40k requests per second can be had on a single boring mid-tier VPS. Assuming scoring is chunky it's still going to be multiple orders of magnitude less. 10s per second, (assumes someone submits art once per 12 hours)? Ez.
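For a rough sanity check on those two rates (same hypothetical numbers as above):
```bash
echo $(( 400000 / 10 ))          # everyone hammering the site every 10 seconds -> 40000 req/s
echo $(( 400000 / (12 * 3600) )) # one submission per participant per 12 hours  -> ~9 req/s
```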
EDIT: it's self-scored???? Ok. 400k users could be handled pretty easily, and if you wanted to make it work for millions, sure. I'd say getting people to use it would be harder than making it usable. I make no judgement on UI complexity, that's not my bag.
-
Comment on Has anyone ever used NixOS as daily-drive distro? in ~tech
bme This depends entirely on how you use your computer. Do you mostly use things out of the box? If so, I'd agree, this probably doesn't sound particularly strong. If you take the "system-crafter" approach of investing in customisation with the long view in mind you might accumulate much more state. I run various bits of nix on all of the computers I use, and it is far, far cheaper to replicate a few kB of text than gigabytes of full system backup. I also use nix for work, so if I get issued a new corp laptop I spend zero time setting it up.
If you are a dev then it gets even more compelling in terms of reproducing environments, sharing scripts / packages across operating systems, etc. If you don't have any problems it solves then it's probably just niche for niche's sake, but I'd say anyone that uses a computer for long enough will one day wish they had a succinct way of capturing the essence of the state of their computer without the attendant cruft. Nix is that.
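A minimal sketch of what that looks like day to day (the package names here are just examples): one command gives you the same tool versions in a throwaway shell on the linux box and the macbook alike, without installing anything system-wide.
```bash
nix-shell -p terraform awscli2 jq --run 'terraform version'
```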
-
Comment on What advantages does Linux have over other operating systems? in ~tech
bme
macOS 10.15 or later enables developers to extend the capabilities of macOS by installing and managing system extensions that run in user space rather than at the kernel level.
I guess I don't really see these as kernel modules, because if any program living in userspace that interacts with the OS via a system-call interface is a "kernel module", then reasonably it seems like every program in macOS could be labelled as a kernel module. I get that given the hybrid nature of the Mach kernel this is perhaps unfair, but I only mentioned this as a counter to the idea that you are getting to peek behind the curtains. I'll concede it's not a particularly strong point :)
-
Comment on What advantages does Linux have over other operating systems? in ~tech
bme You can't develop kernel modules for (modern) macOS: anything you want to do has to run in userspace. We were all definitely forcefully reminded recently (CrowdStrike) of how possible it is to write Windows kernel modules, though.
Windows has logs
It is my general experience that the system Event Viewer on Windows is barely usable compared to the breadth of information available in journald, in part because I don't think you have anything like the linux kernel logs directly interleaved and filterable with respect to application-level logs. Admittedly it's been a long time since I developed anything for Windows, so I am happy to be corrected.
It's also true that typically everything I have loaded in my userspace has source available, so drilling directly to the source of some interesting log message is always possible. However, I'll concede that for the average consumer of PC hardware it's pretty unlikely that this leads to actionable insight. On the other hand, it does lead, I think, to a far more readily available group of people who have direct knowledge of how the sausage is made, and I think that's where a lot of this "you can't fix windows problems" mentality comes from. I don't have direct access to anyone in Redmond. Vendors of most proprietary software put up insurmountable barriers to getting to talk to someone who actually knows what's up. That's just not true in the world of FOSS: you can either interact with the devs, or if you have the skills you can just fix it yourself.
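To make the "interleaved and filterable" claim concrete, a few journalctl invocations (the unit name is just an example):
```bash
journalctl -b          # everything from this boot: kernel, services, apps, all interleaved
journalctl -b -k       # only the kernel ring buffer, same tool, same filters
journalctl -b -p err   # anything at error priority or worse, whatever the source
journalctl -u sshd -f  # follow one unit's logs live
```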
I mean... https://www.youtube.com/watch?v=iyA9DRTJtyE&t=397s watch 10 seconds of this video (at the timestamp provided). This is where this perception comes from, imo. AMD has no such advice for linux, because all of the scheduling is already just working, and if it wasn't AMD could (and does) contribute fixes to the kernel to make it work.
This whole exaggeration of "nothing is possible on Windows and MacOS" is not helpful in these topics.
Agree.
-
Comment on What advantages does Linux have over other operating systems? in ~tech
bme
advanced user
But advanced user of what? I know exactly what you are saying (and you're not wrong), but I really wish these frustrated "advanced users" would take a step back from time to time and realise how completely silly their expectations are. No violin player, no matter how advanced they were at violin, would throw down a trumpet with intense disgust if they weren't able to play it immediately at the same level as their violin, or at least if they did everyone would call them crazy. Sure, their sense of pitch and understanding of musical tone are transferable, and sure: the musical notation is identical, so they have a head start over a total novice. Everything else needs to be learnt again. This confuses no one. And yet nine times out of ten, when someone who has ever used some other operating system tries linux, the first thing that comes out of their mouth is a complaint that can be replaced roughly with "I had to learn". Well no shit, Sherlock. You have to decide if what you are getting is worth the effort, but there will be effort, and you will need to go back down the hill you are on to get to the top of the mountain yonder.
-
Comment on What advantages does Linux have over other operating systems? in ~tech
bme
Headlines
1. No global registry
2. No weird "oh my computer seems to be getting slower and slower, I guess it's time for a fresh windows install"
3. No completely arbitrary user-hostile "offer you can't refuse" bullshit like "now you have to use an MS account and sign in online".
Detail
1. Ok, maybe if you are using gnome really heavily then you could argue gnome settings is kind of like the windows registry, but that's a stretch. For the most part applications configure themselves in an isolated way. You want to know where the settings are for $app? Look in ~/.config/$app if it's well behaved, and if not read the docs / man pages, and at absolute worst open it using strace and see what it's reading on boot (a rough recipe for that is sketched after this list). Once you've located the config it's very likely just a text file. Nice! No need to go dumpster diving in some sprawling, barely documented global database where one wrong move can nuke your system.
2. Due to 1 and "package managers" it is trivial to query your computer for everything that's ever been installed on it, and spring clean whenever you fancy. If you are blessed enough to use a rolling distribution you can keep the same install for as long as you'd like. I think the last time I installed linux fresh on the computer I am writing this on was 10 years ago. It's undergone many kernel upgrades, hardware transplants, and even distro changes (home on a separate partition (ok, btrfs subvol) ftw). I just keep pulling the latest packages, no stress. Aside: opportunity to shill my current love, nix. You want to keep your computer fresh? Try a distro which forces you to name every bit of software you have installed and will delete everything that isn't covered by that config file. Your computer will exist in a perennial state of cleanliness. People have taken it really far, but I have not gone quite that extreme yet. It's quite freeing: I install stuff all the time to try it out without sticking it in config, knowing that at some point it's just going to get nuked. If I find myself coming back to it, I add it to my config and back that up. Now if the worst happens it will take me ~15 minutes of investment and probably an hour of waiting to have the computer back. Are there drawbacks? Yes: you are a niche of a niche, so it's not for everyone, but I can't see myself using anything else that doesn't offer this feature (completely reproducible machines from config).
3. This is fairly obvious, and has been pointed out by others: when the users are the developers, and the profit motive isn't there, you get much better alignment. Linux remains simple enough (despite some people's concerted effort) that there are a whole slew of different ways to interact with the kernel, from login, to window manager, to application suites. It makes it very hard to hate your users and have anyone use your stuff.
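As referenced in point 1, a rough strace recipe for hunting down where a program keeps its config ($APP stands in for whatever you're poking at):
```bash
strace -f -e trace=openat -o /tmp/app-opens.log $APP   # log every file the program (and children) opens
grep -E '\.config|\.conf|/etc/' /tmp/app-opens.log     # then fish out the config-looking paths
```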
-
Comment on Distrohoppers, what's your flavor this week? in ~comp
bme
NixOS
The best way to get to the inner workings of nixos I think depends on what type of person you are. If you are a programmer type, I really recommend doing something like installing nix on whatever distro you currently have and messing about with the repl. Before going any further, I want to say that I deliberately ran all of these examples on a box running arch linux + nix. You can just dip your toes in! Let me show off something dumb: I want to package a shell script, but not only that, I want a function that, given someone's name, produces a package that installs a shell script that greets them, and I want to figure this out on the fly:
```
~ ben@lamorna ❯ nix repl
Welcome to Nix 2.15.0. Type :? for help.
nix-repl> :l <nixpkgs>  # load up the nixpkgs repository (bringing `pkgs` into scope)
Added 17839 variables.
nix-repl> hai = name: pkgs.writeShellScript "hello-tildes" '' echo Hai ${name} ''
nix-repl> haipkg = hai "geniusraunchyassman"  # apply the function to yield a package
nix-repl> :b haipkg  # build my package
This derivation produced the following outputs:
  out -> /nix/store/9r73ihnlybzxbnyaiy6ij4sxxnk352xz-hello-tildes
nix-repl>
~ ben@lamorna ❯ /nix/store/9r73ihnlybzxbnyaiy6ij4sxxnk352xz-hello-tildes
Hai geniusraunchyassman
```
Maybe that isn't compelling. Maybe you like experimenting with a version of a package that doesn't match what is on the system, but you'd like the install to be native and not conflict with what the system is providing. Ok, here is me wanting to get a version of fish that doesn't match the version in nixpkgs:
```
~ ben@lamorna ❯ nix-prefetch-url --type sha256 https://github.com/fish-shell/fish-shell/releases/download/3.5.1/fish-3.5.1.tar.xz
path is '/nix/store/6zisgncm6j3m4cnjnjck4i84mcc57qpy-fish-3.5.1.tar.xz'
0a39vf0wqq6asw5xcrwgdsc67h5bxkgxzy77f8bx6pd4qlympm56
~ ben@lamorna ❯ nix repl
Welcome to Nix 2.15.0. Type :? for help.
nix-repl> :l <nixpkgs>
Added 17839 variables.
nix-repl> fishy = pkgs.fish.overrideAttrs (old: rec { version = "3.5.1"; src = pkgs.fetchurl { url = "https://github.com/fish-shell/fish-shell/releases/download/${version}/${old.pname}-${version}.tar.xz"; sha256 = "0a39vf0wqq6asw5xcrwgdsc67h5bxkgxzy77f8bx6pd4qlympm56"; }; })
nix-repl> :b fishy
This derivation produced the following outputs:
  doc -> /nix/store/3skdvch8sk970d0l6lr9dn9gjpfnafmy-fish-3.5.1-doc
  out -> /nix/store/qnn7gp4q6la4ikgi3bw9g59ix10va7xp-fish-3.5.1
nix-repl>
~ ben@lamorna ❯ ls /nix/store/qnn7gp4q6la4ikgi3bw9g59ix10va7xp-fish-3.5.1
bin/  etc/  nix-support/  share/
```
Now in both of the above cases, I'm just building stuff in the store, which to be usable just gets symlinked to discoverable places by slightly higher level tools. There are many linux distributions, there are even multiple distributions that give you atomic roll forwards and backwards package sets, but nixos and guix stand alone as being supremely hackable by dint of creating a composable package abstraction which
- Makes it nearly impossible to miss a dependency (complete)
- Makes it nearly impossible for packages to clash (isolation)
- Exposes existing packages in a way that makes them open to extension either by composing them or by overriding parts of them in a principled way
Nix / NixOS is not without its downsides, but everything else seems basically crippled in comparison once you climb up the cliff into productivity.
-
Comment on How IoT betrays us: Today, Sonos speakers. Tomorrow, Alexa and electric cars? in ~tech
bme I think something that has ruined a lot of things that used to "last" is the insertion of software everywhere. Remember when cars mostly had compatible slots for aftermarket radios? Then we had external GPS navigation, then it started to be integrated. No more swapping out the tech in the centre console. Now you need a new car if you want an upgrade, and now many fancy new things have arrived which were never available externally (adaptive cruise control etc). Driving a car even a few years old now means substantial features are missing and can't be acquired after the fact.
-
Comment on How IoT betrays us: Today, Sonos speakers. Tomorrow, Alexa and electric cars? in ~tech
bme Specifically on the Sonos front, I am never buying another product of theirs again, and in general I have stopped buying anything that needs an internet connection to a component that I can't host. I haven't got an answer for everything yet, and on the multiroom audio front it's disappointing to see traditional speaker companies follow Sonos' lead with closed ecosystems. There is no multiroom equivalent of line-in or TOSLINK. There is Play-Fi, but it's basically dead as far as I can see.
My next startup after current obligations expire is going to be pluggable multiroom sound and video on published standards. It's not a hard problem. There are multiple DIY solutions already, e.g. Snapcast; it just needs someone to invest in a business around it with some pucks a la Chromecast Audio (discontinued). Shift the brains onto a server, which you could sell too, and the pucks at least would be good for the life of the silicon (presumably a DAC + microcontroller + wifi receiver). Publish the protocols and hopefully grow a decently compatible ecosystem.
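For anyone curious what the DIY version looks like today, a rough Snapcast sketch (package names, config paths and defaults vary by distro and version, so treat these as placeholders):
```bash
# On the box with the "brains": run the server. Older snapserver builds default to reading
# audio from a FIFO (commonly /tmp/snapfifo); check snapserver.conf for the actual source line.
snapserver &
ffmpeg -i some-album.flac -f s16le -ar 48000 -ac 2 /tmp/snapfifo

# On each would-be "puck": connect to the server and it plays in sync with the others.
snapclient -h 192.168.1.10
```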
-
Comment on <deleted topic> in ~tech
bme I think this is probably a metaphor for how almost all software is trending: towards a lowest common denominator, most useful for most people. For those hold-outs that can do more with tools that require investment there will always be options and alternatives. I run Plex and keep all my media locally and with an offsite backup. Do I think this is best for most people? Nope. I do it because I like dicking around with computers. For people that want files, there will always be NAS systems and that jazz.
This isn't even unique to software; I bet every activity under the sun has a control / convenience trade-off.
-
Comment on Please tell me what you think about this idea for a text editor/Linux Distribution combo in ~comp
bme Completely agree with the void / alpine recommendations. I haven't run alpine, but void basically leaves you with nothing, and it's trivial to manage the ttys.
-
Comment on Please tell me what you think about this idea for a text editor/Linux Distribution combo in ~comp
bme Eh, I dig what this guy is trying to do. Often a barrier is enough. Do you still lock your door? You can make it pretty annoying to escape the jail if you fancy it: install the stuff, get the chain all working for some locked-down user, then run a shell script that changes the root password to some unknown random 20-char string and reboots the computer. Congrats, you've made it reasonably challenging for yourself, which is often enough to get yourself into the flow of things.
@mrbig: If I were you I'd look at customizing something like bspwm to basically launch the editor of your choice and simply not launch anything else, have root own the config and make it read-only, change the login shell to something suitably neutered, and I guess you'd also need to kill all the ttys. Change the root password to something else and that's a low-effort lockdown that should be good enough for what you are trying to do. For extra fun: consider pairing a rock64 or pi with a Waveshare e-paper display or something like that.
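For what it's worth, a shell sketch of that lockdown ("writer", the tty range, and rbash availability are all assumptions; adapt to your distro):
```bash
# Run as the admin account before handing the machine over.

# 1. Root owns the window manager config; the restricted user can only read it.
sudo chown root:root /home/writer/.config/bspwm/bspwmrc
sudo chmod 444 /home/writer/.config/bspwm/bspwmrc

# 2. A neutered login shell (restricted bash: no cd, no PATH fiddling, no redirections).
sudo chsh -s /bin/rbash writer

# 3. Kill the spare virtual terminals so Ctrl+Alt+F2 and friends go nowhere.
for n in 2 3 4 5 6; do sudo systemctl mask "getty@tty$n.service"; done

# 4. Finally, scramble the root password so "I'll just fix it later" takes real effort.
echo "root:$(openssl rand -base64 24)" | sudo chpasswd
```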
-
Comment on What are some startup scripts you have on your daily driver? in ~comp
bme I don't think most people's innovations are driven by imagination; they are driven by problems that they look to solve. What currently bugs you about your workflow? Is there a collection of windowing tasks that you regularly perform? Would it make sense to automate them? Do you manually place your applications into windows / workspaces on startup?
I'll give you one example of a typical problem:
You have some environment variables relevant to a given project, which only need to exist within the project folder. This is easily solved with direnv. Ok, fine. What if you want those values to be encrypted, because they are secrets, and only decrypted on demand? I use pass, so naturally I came up with the following: I encode collections of env vars into yaml files that I keep in my pass directory, and a script to decode them lives in my ~/.direnvrc. So in my ~/.direnvrc I have the following function:
```bash
pass-export() {
  [[ -n "$1" ]] || { >&2 echo "Must pass pass arg"; return 63; }
  while read -r key; do
    key=${key#'"'}
    key=${key%'"'}
    read -r value
    value=${value#'"'}
    value=${value%'"'}
    export $key="$value"
  done < <(pass "$1" | yq 'to_entries | .[] | .key, .value')
}
```
and in a .envrc file I might have
```bash
#!/bin/bash
export AWS_DEFAULT_REGION=eu-west-1
pass-export work/outw/web/aws-keys
```
which exports my secret keys for a given aws project so they can be used with terraform or the aws cli or whatever. This is typical of my collection of random bits and bobs.
Search is another regular problem. Learn how to wield things like fzf well and you'll get around your computer with ease, and it can be inserted in all kinds of places. For instance, if I have more than one git remote I want to be prompted to pick where to push an upstream branch for the first time, otherwise just send it! For this I use a little git alias:
```
[alias]
    p = !just-push-it
```
and a dumb script:
```bash
#!/bin/bash
if [[ -n "$(git for-each-ref --format '%(upstream:short)' $(git symbolic-ref -q HEAD))" ]]; then
  exec git push
else
  exec git push -u "$(git remote | fzf -1)" HEAD
fi
```
The script basically does the right thing if there is just one option and if there are multiple it gives me the choices piped into a fuzzy finder and then does the right thing once the choice is made. This pattern can be repeated all over the place. The fzf wiki is full of them.
The point of all this is that efficiency doesn't require imagination, it only requires the will to be lazy enough to learn how to not do things. I don't have the time not to be intentionally improving my workflow. I doubt many people do, and yet somehow they find the time to waste by refusing to learn the skills they need to stop wasting it.