wirelyre's recent activity

  1. Comment on Markdown preview is now available when writing topics/comments/etc. in ~tildes.official

    wirelyre (edited) Link Parent

    Although this is a fine description of the tech stack, it's kind of overwhelming. I spent a lot of time skimming through all that documentation and frankly most of it was unnecessary. The code is all right there after all — you can just read it and see what it's doing!

    So here's where I'd start:

    1. Get the dev environment running.
    2. Find a place on a page.
    3. Change the text using a Jinja template. (Just edit the template, save, and reload the page. Magic!)
    4. Change the text using a new Python function.
    5. Change the text using JavaScript (hint: might require a terminal command if you make a new file).
    6. Change the text using Sass (CSS).
    7. Now skim through the frontend code.

    This way you won't get bogged down in APIs and reading technical documentation. If you need to write more code, open the file where it's going to go; there are probably hints for where to start just a few lines up!

    This might sound like a hacky or shallow way of working with the code, but it's not. You're observing the subset of the API which is actually used in this project, and also accurately reproducing the house style.

    9 votes
  2. Comment on Markdown preview is now available when writing topics/comments/etc. in ~tildes.official

    wirelyre Link Parent

    Yes, but it's a lot of extra code that

    1. Is independent of all the current Tildes rendering (lots of work and maintenance).
    2. Would bloat page sizes.
    4 votes
  3. Comment on Markdown preview is now available when writing topics/comments/etc. in ~tildes.official

    wirelyre Link

    I don't think people give enough credit to the difficulty of accepting open source contributions. It's hard enough keeping track of code you wrote; taking what someone else wrote and even just making sure it works is much harder!

    Now a call to action:

    If you're at all interested in making a contribution to Tildes, please go ahead. I'd never done any kind of web framework stuff before, but the Tildes codebase is exceptionally well organised and easy to read.

    40 votes
  4. Comment on New mod for bsnes emulator makes “Mode 7” SNES games look like new in ~games

    wirelyre Link

    Here is a video that explains the SNES's Mode 7 in detail.

    4 votes
  5. Comment on Web Design in 4 minutes in ~comp

    wirelyre Link Parent

    You might want to adjust your monitor or web browser, because the body text passes WCAG AA. I found it uncomfortable as well though, after reading the perfectly legible black-on-white. Maybe try it again with fresh eyes.

    However, some of the color choices are inaccessible because they don't contrast enough. The inline code has a contrast ratio of 2.1:1 (WCAG requires at least 4.5:1), and it's basically illegible. The <strong> text has a normal font weight (?!) and a contrast ratio of 2.26:1 with the surrounding text (3:1 is required for links).

    I like this article for creative presentation and minimal CSS. But I really dislike lightened body text or changing it to not-black. Under my current extremely bright lights, the article's text is already lighter than an actual book.

    8 votes
  6. Comment on Optimize What? - an article on modern technological approaches in ~comp

    wirelyre Link

    I wish I'd seen this before reading the article:

    Commune is a popular magazine for a new era of revolution

    It would have made the pieces leading to the conclusion a lot easier to pick up and put together.

    I'm really torn on this article. On one hand, I agree with the author that we're obligated to consider the implications of the data collection and manipulation we participate in. And I think it's always worthwhile to reconsider the social and economic systems we unconsciously perpetuate and reinforce.

    On the other hand, I'm unimpressed with the tone, writing style, structure, and conclusion. For instance, the whole section on the history of linear programming seems completely unnecessary — I felt like the author was trying to impress me with facts to distract from the argument.

    Yet in positioning itself as tech’s moral compass, academic computer science belies the fact that its own intellectual tools are the source of the technology industry’s dangerous power.

    What does that mean? That university computer science programs are ivory towers of ethics and morality? I don't know; maybe the culture in Silicon Valley is different. But in my mind they're more like vocational programs.

    I just think it's weird to consider supply chain logistics, creating maps of freely available data, formal artificial neural networks, and racial biases in image recognition as the same kind of thing. They are different tools for different kinds of data with different implications if they draw the wrong conclusions and perpetuate different systems if they draw the "right" ones.

    Also, I can't reconcile the Eightmaps story with the author's point. Are the website owners supposed to do something to prevent people from being harassed? Then surely it's not a problem with ethics in academia, since they are talking about it. Or are they blameless because the data is already publicly available? Then the fault must not lie with "algorithms" at all, but rather that the data is public in the first place.

    3 votes
  7. Comment on How does compression work? A short explanation. in ~comp

    wirelyre Link Parent

    Video stabilization is a related but surprising application of frequency analysis.

    Use some technique to estimate how the camera moves from frame to frame. This is usually done by comparing features in consecutive frames, then concluding how much the camera has moved and in which direction. But it is also possible that the camera has an accelerometer, which can measure the movement directly.

    Next, express that movement in terms of its component frequencies. Higher frequencies represent small jitters, while lower frequencies represent large motions over long periods of time. Large motions are more likely to be intentional, so simply throw out the rest using a "low-pass filter". Now you have the "ideal" camera motion over time.

    Finally, compare the ideal camera to the actual observed motion, and use the difference to adjust the frame picture data.

    4 votes
  8. Comment on Why OpenBSD Rocks in ~comp

    wirelyre Link Parent

    Within the /lib folder, 9front contains plain text copies of The Manifesto of the Communist Party[1] […]

    Somehow it has never occurred to me that /lib could be for plain text documents. You know, like a library.

    3 votes
  9. Comment on Can You Trust Kurzgesagt Videos? in ~misc

    wirelyre Link Parent

    This one? Seems pretty good to me. In fact, for a 7-minute overview of quantum computing, it's remarkably accurate and not misleading.

    I guess I would criticise their discussion of database searching (alluding to Grover's algorithm), because in order to search an arbitrary database, you'd have to construct the whole database in your quantum computer. Grover's algorithm is not really a database search algorithm in the usual sense of "database" and "search".

    8 votes
  10. Comment on Comparing Textile vs. Markdown for mobile use in ~comp

    wirelyre Link

    In Markdown alt text [for images] seems to be obligatory

    I think that's right; it's parallel to link syntax (link text ↦ alt text, link target ↦ image source).

    This also reveals the (main?) difference between Markdown and Textile. Textile is "a shorthand syntax used to generate valid HTML". A Textile document is not the final form. It's just for writing. But Markdown is "a plain text format for writing structured documents". It is for reading and writing.

    If you're reading a structured plain text document, image descriptions are crucial. The embedded image is completely useless, since you can't see it. But if the document is guaranteed to turn into HTML, then alt text is not necessary — for sighted users anyway.

    4 votes
  11. Comment on What programming language do you think deserves more credit? in ~comp

    wirelyre Link Parent

    Erlang takes a lot of inspiration from Prolog too. The syntax in particular resembles Prolog quite strongly, with English punctuation (fac(0) -> 1; fac(N) -> N * fac(N-1).), a strong emphasis on tail recursion (i.e. cuts), capitalization distinguishing variables and atoms, and probably more.

    When I walked through the manual a few years ago, I remember being delighted with bitstrings — it's possible to pattern match directly on bytes and bits.

    There are a few other surprises, like hot-swappable code. Essentially all of the unusual features are stuff you'd typically find in libraries (threading, actors, byte parsing), lifted into the core language. It's a very well-designed system.

    2 votes
  12. Comment on Nintendo Makes It Clear that Piracy Is the Only Way to Preserve Video Game History in ~games

    wirelyre Link

    In fairness, I think it should be pointed out that from Nintendo's perspective, this isn't much different from retiring old consoles and halting production of cartridges. They're just not selling the old games anymore.

    The effect is quite different, of course. Download-only titles can't be resold the way physical cartridges can. But it's better to think of this as Nintendo's inaction and refusal to change process, rather than a specific anti-consumer move.

    3 votes
  13. Comment on Why I use old hardware in ~comp

    wirelyre (edited) Link

    There could be other benefits to programming under extra constraints.

    One night, in a state of hazy insomnia, I used my smartphone (and a touch keyboard) to write a small language interpreter. Reading through it the next morning, I found that I had naturally kept to ~30 columns, uncramped spacing, and crystal clear data structures. I'm sure I have never written better C.

    Iterating on a problem gives you new views of the problem that are increasingly well suited to your tools. If there are two equally clear ways to solve a problem, then the solution using more primitive tools is probably better.

    Edit. I should complete this argument.

    If you use older hardware or tools, besides creating more widely usable software, you might very well create better software. The development process will refine your conception of the problem past what is natural for more powerful tools. It will also make scope creep less sustainable. Constraints are good.

    10 votes
  14. Comment on How do I hack makefiles? in ~comp

    wirelyre Link Parent

    Object files separate the concern of compilation (e.g. language parsing, optimization, instruction selection, register allocation) from that of linking (i.e. executable layout).

    Imagine you're a compiler. You're reading plain text and producing an executable file.

    When you compile a function, you emit machine code (add this to that, check if it's zero, otherwise jump backwards an instruction). Sometimes, you need to reference code or a memory location that hasn't been defined. This is done by allocating space for that instruction, but leaving the operand out (jump to ____; multiply ____ by two).

    Why do you need to reference undefined stuff? Part of your job is determining the memory layout that the machine code will have when the program is running. But if you haven't compiled that other function yet, you don't know where it will live in memory. And you can't really guess, because you don't know how many bytes each compiled function uses until after it's compiled.

    You might also reference code from a dynamic library, which is code that, by definition, has no assigned memory location until the program starts. The location is unknowable at compile time.

    There is a logical point like this for many compiled languages. The strategy is to leave blanks in the machine code, then keep track of those blanks. Blank spots have names called symbols. A file containing blanks, symbols, and machine code is an object file.

    If the job of a compiler is to produce machine code, then the job is finished once all internal references are resolved and the blanks are filled (mostly). Now you have a blob of binary that is independent of language-specific details (mostly). When you want to finish up by merging object files, possibly produced in wildly different ways (with different languages or compilers), you can use a separate program called a linker that doesn't need to know anything about programming languages.

    2 votes
  15. Comment on How do I hack makefiles? in ~comp

    wirelyre Link Parent

    You can also mix object files produced in different ways. For instance, a C compiler, a C++ compiler, and an assembler can all make object files that are usable together.

    1 vote
  16. Comment on How do I hack makefiles? in ~comp

    wirelyre Link Parent

    Typical kids, criticising things That Work and are historically essential, right? :P

    I'll second the Linux Makefile. It's beautiful and well-documented.

    2 votes
  17. Comment on How do I hack makefiles? in ~comp

    wirelyre (edited) Link

    The GNU build system is a huge mess. This comment is an introduction to that mess. It might help you. It's mostly a rant.

    Most ./configure scripts aren't written by hand. That's why they look like casseroles. In fact, most Makefiles also aren't written by hand. We'll get to that.

    Make is nothing more or less than a language for describing dependencies between files: how to build some files from others. This is all very good. If you want to use Make in a sensible way, by expressing e.g. "build object files from these .c files, then link them into this executable", you can read the GNU make manual. Read sections 2 and 4, skim section 6, and keep the rest as reference. Suckless sbase, musl, and the Tiny C Compiler all have excellent Makefiles.
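    For a taste of that sensible usage, a minimal hand-written Makefile might look like this (the file names are invented; recipe lines must start with a literal tab):

```make
# link the executable from its object files
prog: main.o util.o
	cc main.o util.o -o prog

# build each object file from the matching .c file
# ($< is the prerequisite, $@ is the target)
%.o: %.c
	cc -O2 -Wall -c $< -o $@

clean:
	rm -f prog main.o util.o
.PHONY: clean
```

    (`%.o: %.c` is a GNU make pattern rule; the portable POSIX spelling is the suffix rule `.c.o:`.)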

    Here comes the mess.

    Autotools

    Back in the day, software had to deal with a bunch of very incompatible compilation environments. Sometimes functions had different signatures, sometimes they weren't defined. So people started writing shell scripts to explore the computer (libraries, headers, etc.), and generate the Makefile automatically from a template (Makefile.in).

    But configure scripts are tedious to write by hand. So GNU Autoconf was born. Autoconf takes configure.ac scripts and makes configure scripts… which make Makefiles… which make programs. And as a bonus, Autoconf scripts are written in an ancient language, M4, which you will never see anywhere else.

    Unfortunately, Make isn't smart enough to keep track of which files depend on each other. If you do it manually but incorrectly, the program might not build, or worse, might build with outdated parts. Fear not! There is a tool called GNU Automake, which explores your source tree and generates Makefile.in (which configure uses to make the real Makefile) automatically! Except Automake can't determine everything about your project, so you need to give it a file Makefile.am as input.

    Wouldn't it be great if you didn't have to write Autoconf scripts? Surely someone has a list of functions that don't exist on some computers or whatever. Good news! Autoconf includes autoscan, which does that for you!

    To recap:

    • You don't want to type compilation commands by hand, so you need a Makefile.
    • Compilers behave differently on different systems, so you need a configure script and a template Makefile.in to generate the Makefile.
    • Configure scripts are similar to each other, so you need configure.ac to generate them.
      • Actually, you don't; they can be deduced automatically by autoscan.
    • Template Makefiles are tedious to write, so they can be deduced automatically. Mostly. They still need Makefile.am.

    Complexity

    Obviously this is all ~~a load of garbage~~ very complex. It's ironic that, towards the goal of a free Unix, GNU sacrificed the Unix philosophy to portability. These programs are all subtly interconnected and difficult to reason about. The portability concerns are outdated too! There's no reason you couldn't write these tools as, say, libraries in shell script; or as programs in GNU Make, which, by the way, is incompatible with other Makes.

    The upshot is that, incredibly, knowing Make will often not help very much when you need to fix builds that use Make. But do write Makefiles for your own projects where possible (i.e. when simple enough). Make is a great tool.

    Other projects saw this problem and fixed it in different ways. Let's explore.

    Alternatives

    Ninja is a replacement for what Make became: a description of file dependencies and commands which are generated by other tools. Ninja is very fast and simpler than Make. Ninja is in charge of executing builds.

    CMake is a tool for describing software requirements and finding them on the computer; and for determining which source files depend on which others. It is basically a replacement for configure scripts and what's involved in generating them. It's clunky, but better than casserole. CMake can generate Makefiles, Ninja files, and files for other build systems. CMake is in charge of planning builds.

    Meson is like CMake, except the configuration language is quite different. As far as I can tell, the project emphasizes automatic configuration (edit the config specification as necessary) over CMake's occasional manual tweaks (edit the generated variables as necessary). It's hard to recommend one of these over the other, although Meson can use CMake dependencies and its syntax is more C-like. Meson also generates Ninja files.

    SCons and waf replace the entire build system, from configuration to build execution. I can't talk much about these since I don't use them.

    Hopefully this was useful for someone. My relationship with GNU builds became very strained recently. May yours remain cordial.

    Edit. s/a load of garbage/very complex/, don't be rude.

    19 votes
  18. Comment on The International Gymnastics Federation wants to recognise parkour as a new discipline, with a view to Olympic inclusion in 2024. But the parkour community is opposing the FIG’s efforts in ~sports

    wirelyre Link

    We saw similar criticism with the introduction of skateboarding and surfing at the 2020 Olympics. Both of these sports have grown communities that embrace personal expression and counterculture over pure technical skills.

    Plenty of athletes find the inclusion of these sports in the Olympics to be completely contrary to the spirit of the sports. An LA Times article summarizes:

    More than 5,500 people identifying themselves as skateboarders from around the world have signed an online petition asking the International Olympic Committee not to add their sport to the Games.

    […]

    "Skateboarding is not a 'sport' and we do not want skateboarding exploited and transformed to fit into the Olympic program," the online petition states. "We feel that Olympic involvement will change the face of skateboarding and its individuality and freedoms forever."

    And from an Outside article:

    And young surfers don’t want to listen to Bob Costas narrate John John Florence’s top turns. They want to watch a webisode, scroll through heats on demand (if the waves are firing), and then go surfing.

    There are even some parallels with the difficult involvement of community leaders in skateboarding and parkour. The Guardian article reports that members of the FIG's parkour commission left in part due to "no involvement of the international parkour community" in the commission. From a Vice article:

    Getting skating into the Olympics, however, was never Ream's mission. He felt that would happen whether he cooperated or not. But the IOC requires every Olympic sport to have an international federation, and the fear was that an organization with little to no experience with skateboarding or its culture, rules, or people would wind up in charge. So Ream and other icons of skate formed the ISF in 2004. In describing his overall role regarding the Games, Ream said he was "very active in protecting skateboarding in its relationship with the Olympics."

    Having no personal involvement in these sports or communities, I think parkour is obviously a great fit as a sub-discipline of gymnastics. But I worry about the effect of the FIG on future competitive parkour. The 2006 scoring changes in artistic gymnastics represented a huge turning point in the sport. They encouraged significantly trickier skills at the cost of artistic integrity. I hope parkour can stay close enough to its roots to avoid similar changes.

    4 votes
  19. Comment on Where would a beginner start with data compression? What are some good books for it? in ~comp

    wirelyre Link

    My local library has Sayood's Introduction to Data Compression and Salomon/Motta's Handbook of Data Compression, both of which I would recommend. Sayood is a great medium-paced read that's usable as a self-taught course, although it's a little heavy on the mathematics. Salomon/Motta is an awfully dry reference that is unusable for learning the basics, but it's an incredible overview of compression strategies and has a section for basically every algorithm in common use.

    6 votes
  20. Comment on Programming Challenge - It's raining! in ~comp

    wirelyre (edited) Link

    This is very cool.

    Here is a Haskell solution. Try it online!

    import Data.Ratio
    
    main = print . fill $ map lake [1, 3, 5]
    
    data Lake = Lake { unfilled :: Rational -- litres
                     , rate :: Rational -- litres per hour
                     }
    
    lake :: Rational -> Lake
    lake depth = Lake {unfilled=depth, rate=1}
    
    fill :: [Lake] -> Rational
    fill [] = 0
    fill ls = time + fill (spill 0 (map fill' ls))
      where
        time = minimum . map (\lake -> unfilled lake / rate lake) $ ls
        fill' Lake {unfilled=u, rate=r} = Lake {unfilled=u-r*time, rate=r}
    
    spill :: Rational -> [Lake] -> [Lake]
    spill _ [] = []
    spill incoming (Lake {unfilled=0, rate=r} : ls) = spill (incoming+r) ls
    spill incoming (Lake {unfilled=u, rate=r} : ls) = Lake {unfilled=u, rate=incoming+r} : spill 0 ls
    

    Since the answer can be any rational number, we'll use the built-in Ratio library. Haskell will promote literal integers like 0 and 1 into Rationals automatically.

    At any given time, for each lake, we only care about

    1. how much is unfilled; and
    2. how quickly water is flowing into it.
    import Data.Ratio
    
    data Lake = Lake { unfilled :: Rational -- litres
                     , rate :: Rational -- litres per hour
                     }
    

    At the start, each lake has a flow rate of 1 litre per hour, so we'll make a convenience function to construct a lake from its volume.

    lake :: Rational -> Lake
    lake depth = Lake {unfilled=depth, rate=1}
    

    Finally, we'll declare a function fill, which takes a list of lakes and finds out how long it takes to fill them. Now we can write a main function.

    fill :: [Lake] -> Rational
    main = print . fill $ map lake [1, 3, 5]
    

    Here's the plan: At every time step, we find out how long until some lake is filled next. Then, for each lake, we decrease unfilled by the appropriate amount. Finally, we clean up the list of lakes by removing lakes that are completely full.

    But wait! A lake that is full needs to spill incoming water to the next lake. We need one final auxiliary function, spill. spill takes a list of lakes, and removes lakes that are completely full, but adds water intake rates forward into lakes that are not yet full. That is, if a lake is full, all of the incoming water is carried forward to the next lake.

    spill :: Rational -> [Lake] -> [Lake]
    spill _ [] = []
    spill incoming (Lake {unfilled=0, rate=r} : ls) = spill (incoming+r) ls
    spill incoming (Lake {unfilled=u, rate=r} : ls) = Lake {unfilled=u, rate=incoming+r} : spill 0 ls
    

    This is a common form for recursive functions. Let's step through each case.

    1. Base case: If there are no lakes remaining ([]), no need to do anything.
    2. Otherwise, we have at least one lake. If the lake is full (unfilled=0), all of the overflow spilled in so far, plus all of the water that was flowing into this lake, overflows forward. Discard the lake.
    3. Otherwise, we have at least one lake and it is not yet full. Keep the lake, and increase the incoming water rate. Zero water overflows forward (because any incoming water will fill this lake first).

    Now the rest is straightforward. For the base case, it takes no time to fill no lakes.

    fill :: [Lake] -> Rational
    fill [] = 0
    

    How long until the next lake is filled to capacity? (A little math: each lake fills in unfilled / rate hours. The minimum of those is the next to fill.)

      where
        time :: Rational
        time = minimum . map (\lake -> unfilled lake / rate lake) $ ls
    

    Once that amount of time has passed (once the next lake has filled to capacity), what are the water levels in a single lake?

      where
        fill' :: Lake -> Lake
        fill' Lake {unfilled=u, rate=r} = Lake {unfilled=u-r*time, rate=r}
    

    Finally, we recurse. After time has passed, we update each lake (map fill' ls), spill over empty lakes (spill 0 _, because 0 litres per hour overflow into the first lake), and add time to however long it takes to fill the new list of lakes.

    fill ls = time + fill (spill 0 (map fill' ls))
    
    11 votes