Jakobeha's recent activity

  1. Comment on I think I've failed the United States in ~society

    Jakobeha
    Link

    I always think of people (including myself) like machines. We're programmed to survive and form communities, we act from what we've learned and we learn from our surroundings.

    Some people are true sociopaths, but the vast majority have at least some empathy. My understanding is that most people help themselves first, then others: when we're struggling we prioritize our own needs, but when our needs are met well enough we spend effort and resources to make sure others' needs are met too (the amount of sacrifice depends on the person, but people who feel they are well off in societies that encourage charity tend to be very generous).

    Look around you: almost everything was invented, made, and shipped by others, and the laws that protect and provide for you are enforced by others. People have the capacity to do horrible and wonderful things; what we do depends on how the outside world affects and then empowers us.

    Never forget the real enemy: tragedy. Tragedy creates and empowers evil people, and it causes diseases and disasters that have so far been, and always can be, more destructive than anything man-made. Everyone's goal should be "progress", which is to bend nature in a way that minimizes tragedy. A lot of internet discourse involves vilifying and/or insulting large groups, but what every large enough group truly wants is the same thing, serenity, so the ways we differ aren't worth fighting over beyond "live and let live".

    6 votes
  2. Comment on Using AI generated code will make you a bad programmer in ~tech

    Jakobeha
    (edited )
    Link Parent

    I agree with the first part: LLMs hurt education. With LLMs, a student can accomplish a lot without really understanding what their code is doing, but they can't accomplish as much or with as good quality as if they did understand. Students can pass entry-level classes and maybe even graduate and get jobs developing software, barely learning anything, until eventually they reach a point where the LLMs aren't good enough. At this point they're like students who skipped the first half of a course because there are no graded assignments until the midterm. Maybe these students are still able to learn the fundamental skills they missed, but at the very least they wasted a lot of time not learning the skills earlier.

    But I disagree that this is inevitable. Students can still learn the fundamentals to write good code. At minimum, schools can give assignments and exams in an IDE that doesn't support LLMs, and I think this is necessary for entry-level classes. For higher-level classes, I also think it's possible to design assignments that LLMs aren't good enough to solve, so that students still truly learn how to write code even when they have access to LLMs.

    I think in this way an LLM is a lot like a calculator or a parent/friend/tutor who you could convince to do your work for you. In theory, it's easy for someone to "complete" assignments outside of class without truly learning, and this has been the case since before LLMs. But (most?) students still learned the fundamentals, because they still had to pass in-class assignments to get a good overall grade, and because the honor system and the possibility of being caught were enough to deter them (at least me) outside of class. I believe most schools nowadays give every student a cheap locked-down laptop, and colleges should have enough money to do the same. In this case, a teacher can ban LLMs for an assignment by requiring students to do the assignment on their locked-down computer, in a restricted IDE that has syntax highlighting and autocomplete but no LLMs.

    4 votes
  3. Comment on Using AI generated code will make you a bad programmer in ~tech

    Jakobeha
    (edited )
    Link

    I use LLM code generation a lot, but I always check (and fairly often change) what it generates afterward. I'm pretty sure when not using AI, I end up writing the same code by hand, slower. Actually when not using AI, I frequently copy-paste large chunks of code and then heavily edit them, so it's not so different.

    Is AI making me forget how to write code? Probably not, because I still write a lot of the code by hand, and I read over all of it. Code is often buggy and/or needs to be modified (e.g. to handle new features), especially LLM-generated code, so even if at first I don't really understand the LLM-generated code, I often end up learning it later (to debug or modify it).

    Will AI replace me eventually? Maybe, but I don't see how using AI is making that non-negligibly faster. Current models train on written code, presumably without factoring in who wrote it. I could ensure AI companies don't train on my code or writing process by using a local model (I don't, but that would be a separate argument).

    Will AI-generated code retroactively become illegal? If so, that means a lot of recently written code retroactively becomes illegal, so it seems very unlikely.

    There are problems related to LLM-generated code, such as fewer developer positions and bad software. But these were problems before LLMs, and they're exacerbated by IDEs and frameworks like Electron respectively. I don't think getting rid of IDEs and frameworks is the solution, and likewise, I think the root cause (making it easier for more people to write software, albeit most of it low-quality) is a net positive.

    6 votes
  4. Comment on Using AI generated code will make you a bad programmer in ~tech

    Jakobeha
    Link Parent

    I like algorithmic refactoring tools much more than LLMs because I trust that the code is refactored properly. Even when the algorithm fails, it fails in predictable, reasonable ways.

    e.g. a good "rename method" tool in a statically-typed language will:

    • Only rename calls on values of the correct type. If I rename Foo#bar, it won't touch calls to Bar#bar, even though both look like some_value.bar(...) without context.
    • Skip (or ask to confirm) renaming in strings and comments, because the tool isn't smart enough to guarantee a literal occurrence of the method's name actually refers to the method.

    Sure, I have to manually search for and rename the occurrences in strings and comments that really do refer to the method, but that's easy (I can "find and replace in project" the old name). Importantly, I can rely on references outside of strings and comments being renamed properly.
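
    To make the Foo#bar example above concrete, here's a minimal sketch in TypeScript (the classes and functions are made up for illustration):

    ```typescript
    class Foo {
      bar(x: number): number { return x + 1; }
    }

    class Bar {
      bar(x: number): number { return x * 2; }
    }

    function useFoo(foo: Foo): number {
      return foo.bar(1); // a type-aware rename of Foo#bar updates this call...
    }

    function useBar(bar: Bar): number {
      return bar.bar(1); // ...but leaves this one alone, even though it looks identical
    }

    // A literal "bar" in a string or comment is skipped (or flagged for confirmation),
    // because the tool can't prove it refers to Foo#bar:
    const log = "called bar() on a Foo";
    ```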

    An LLM doing "rename method" may be even smarter, because when it finds the method name in a string or comment, it can use English comprehension to determine whether it's an actual reference. But (AFAIK) there is no LLM-based tool that can guarantee it won't rename literal occurrences that are not references to the method, or can guarantee it will rename every reference (and maybe some). So when I ask an LLM to refactor, I have to check true negatives and false positives, and look over every line of code the LLM changed. At this point it's faster to skip the LLM and go straight to "find and replace in project" and looking over every occurrence manually.

    10 votes
  5. Comment on HTML for people in ~tech

    Jakobeha
    Link Parent

    I'd go with a static site generator, but there are downsides:

    • Any SSG adds complexity. Instead of previewing or publishing your code directly, you generate the static site, then preview or publish that. When the site has a problem, you have to figure out whether it's the generator, the source code, or neither but you forgot to re-generate it.

    • Some SSGs let you avoid HTML and build your site entirely in another language, which is presumably the antithesis of this book. At that point, why not skip the book and just read the Hugo tutorial? The ones that do use HTML still augment the language, even if they just process comments in a special way, in order to provide non-HTML features like including common fragments in multiple pages.

    • There are hundreds of options. Each has its own additional downsides, many aren't beginner-friendly, and some are very buggy.

    Personally, I think it would be cool if the book guided readers to build their own tiny static-site generator, to alleviate HTML issues like every page having a common "scaffold", and maybe do other cool things like render data (CSV or JSON). Then it would still be covering front-end web design from the bottom up. But I'm not sure how beginner-friendly or useful that would be. Otherwise, I guess I'd recommend simplest-sitegen (found from this list), although I didn't test it, because judging from the README it provides the simplest (as advertised) way to do the common scaffold.
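
    As a rough illustration of how small such a generator could be, here's a sketch in TypeScript/Node (the scaffold.html file, its <!-- content --> placeholder, and the pages/ and site/ folders are my own made-up conventions, not anything from the book):

    ```typescript
    import { mkdirSync, readFileSync, readdirSync, writeFileSync } from "fs";
    import { join } from "path";

    // scaffold.html is an ordinary HTML page containing the marker <!-- content -->
    // where each page's body should be spliced in.
    const scaffold = readFileSync("scaffold.html", "utf8");
    mkdirSync("site", { recursive: true });

    for (const file of readdirSync("pages")) {
      if (!file.endsWith(".html")) continue;
      const body = readFileSync(join("pages", file), "utf8");
      // Wrap the page body in the shared scaffold and write it to the output folder.
      writeFileSync(join("site", file), scaffold.replace("<!-- content -->", body));
    }
    ```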

    2 votes
  6. Comment on Dr1v3n Wild in ~games

    Jakobeha
    Link Parent

    Something other games do is when the timer reaches 0, the car starts slowing, and when it stops you lose. But if you pass a checkpoint before the car fully stops, you continue.

  7. Comment on Share your personal dotfile treats and Unix tool recommendations in ~comp

    Jakobeha
    Link

    I may add more to this later:

    • Fish shell: alternative to bash and zsh with colors, git integration, auto-complete, better scripting, and more.
      • Note that fish isn't 100% POSIX-compatible. In practice, almost any command that works with bash works with fish (especially because most scripts start with #! /usr/bin/env bash). But in particular, nvm doesn't work; you need nvm.fish.
    • rg (ripgrep): grep (search text in files) but better.
    • fd: find (search files by name and type) but better.
    • ninja: make (build C/C++) but faster. You use it to build CMake projects by passing -GNinja to the cmake command that generates the build files.
    • mold: ld (link C/C++/Rust/Nim object files) but faster. Similarly, you can configure CMake projects to use mold via -DCMAKE_EXE_LINKER_FLAGS="-fuse-ld=mold" -DCMAKE_SHARED_LINKER_FLAGS="-fuse-ld=mold". It also works with Rust and Nim according to the README.
    • rename for macOS (via brew install rename): batch rename files.
    • restic: command-line backup tool.
    • ncdu: command-line disk cleaner (find and remove large files).
    13 votes
  8. Comment on Does anyone have experience working as an independent researcher? in ~science

    Jakobeha
    (edited )
    Link Parent

    It depends on what you want to research.

    If you taper your costs and expectations, you don’t have to be retired or rich to work on hobby projects in your spare time with your spare money.

    I’m biased since I’m in Computer Science: even nowadays, there are a lot of popular open-source projects built by ordinary people. And in some fields like chemistry, biology, and medicine, you can’t really do anything without government and/or corporate support. But the range of fields where you can accomplish something cheaply is very large: hardware and robotics (whatever the popular replacement for Raspberry Pi is), history and literature (there’s a ton of online content that isn’t paywalled), food science, …

    EDIT: Your idea of spending years working on something without even a guarantee of success reminds me of people like Andrew Wiles and Eric Barone, two very different people who spent years working on “moonshots” and succeeded. Granted, they are the exceptions and they both spent full time on their projects, but there are far less well-known projects that I know have been developed by people for years, and although they’ve never taken off, the people developing them seem to make a modest living and their projects have gotten some recognition.

  9. Comment on <deleted topic> in ~tech

    Jakobeha
    Link Parent

    I have a similar solution, except instead of a noise generator I have bone-conduction headphones (AfterShokz). You can wear both ear plugs and bone-conduction headphones, and I also bend the headphones to only cover one ear because I sleep on my side (they're very flexible, and they have a model that goes on each ear separately).

    6 votes
  10. Comment on Does something like a charity fund for FOSS exist? If not, do you think it could be a good idea? in ~tech

    Jakobeha
    Link

    Something similar is thanks.dev: it scans your projects' dependency trees, then you "donate" a set amount of money and it distributes it among the dependencies you use.
    Although I think it's more common for people to donate to specific projects, as others have mentioned.

    You could also donate to a non-profit foundation, such as Mozilla, Wikipedia, or Creative Commons. Sometimes these groups will select and donate to open-source projects themselves; for example, the Mozilla Technology Fund "supports open source technologists whose work furthers promising approaches to solving pressing issues".

    10 votes
  11. Comment on White House urges use of type safe and memory safe programming languages and hardware in ~tech

    Jakobeha
    Link Parent

    A few points:

    • Code doesn't have to be written in C/C++ or Rust to be performant. It's true that in practice, well-optimized C++ or Rust code is faster than well-optimized Python or JavaScript; but well-optimized Python and JavaScript code is fast. I'd argue the excessive slowness of modern software isn't because of the language, but because of other factors like performance-draining abstractions and bloat. If your basic GUI-based desktop app, 2D/low-poly 3D game, or website can't function with zero lag on a modern computer, it's not because of the language; in fact, an inefficient C++ or Rust implementation can be even slower than an efficient Python or JavaScript one. This is reinforced by how slow software also tends to be buggy (and vice versa), but C++ certainly doesn't prevent more bugs than, say, Java or TypeScript (maybe Rust does, and maybe both beat untyped Python or JavaScript; but back to the argument for automatic memory management, the entire point is that it effectively eliminates memory-related bugs that C and C++ keep).
    • The first source focuses on big companies rewriting their software in faster languages. But importantly, all of them are big companies with already-successful products, and all of them started with a small prototype in a dynamic language. In fact, I believe the correct way to write great software is to start with a prototype in something like Kotlin, which prioritizes flexibility over performance, and only rewrite it in something like Rust if performance becomes an issue once the concept is solidified. If you write the initial app in C++ or Rust, you have to think about memory or lifetimes alongside your core app's architecture; this makes development slower, discourages important-but-big changes, and leaves you with a more convoluted product that you may just rewrite again anyways.
    • Some languages, like JavaScript and Python, have design decisions which create much more overhead than their automatic memory management and runtime guards: both are dynamically typed and rely on interpreters or JIT compilers rather than ahead-of-time compilation. In general, code written in Swift, Java, or OCaml performs much closer to C than to JavaScript or Python (here's a comparison of C/Java/Python programs on a simple benchmark, although benchmarks don't really reflect real-world code).

    The bottom line is that there's a lot of "low-hanging fruit" when it comes to slow software. Firstly, bad time complexity and expensive redundant computations will destroy software performance no matter the language. Then there's reducing allocations, swapping inefficient libraries for better or home-grown ones, and other "mid-level" optimizations, which are probably easier in a language like C++ or Rust but are still possible in Java or JavaScript. All these optimizations affect performance much more than removing the overhead of automatic memory management and runtime guards. Eventually you may get to a point where removing the GC and guards is necessary; but people very frequently choose C++ or Rust prematurely, for a project which will never reach that point.
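
    As a toy example of that low-hanging fruit (the function names and sizes are made up): deduplicating a list with a nested scan is quadratic in any language, and fixing the algorithm matters far more than which language runs it:

    ```typescript
    // Quadratic: for 100,000 items this does billions of comparisons,
    // which is slow whether it's written in JavaScript or Rust.
    function dedupSlow(items: string[]): string[] {
      const out: string[] = [];
      for (const item of items) {
        if (!out.includes(item)) out.push(item);
      }
      return out;
    }

    // Linear-ish: a Set makes membership checks cheap, so the same work
    // finishes orders of magnitude faster in the same language.
    function dedupFast(items: string[]): string[] {
      return [...new Set(items)];
    }
    ```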

    12 votes
  12. Comment on White House urges use of type safe and memory safe programming languages and hardware in ~tech

    Jakobeha
    (edited )
    Link Parent

    The author is correct that memory safety vulnerabilities aren't the only vulnerabilities, and it's possible to write C or C++ in a way that ensures you don't get memory errors.

    Except memory safety vulnerabilities are very common: an analysis by Microsoft in 2019 concluded that over 70% of their CVEs were memory safety issues. And it doesn't matter whether memory unsafety is the worst cause of exploits; it's the cause of some high-profile exploits, so when you're writing secure software, it's something you care about.

    Furthermore, while you can technically write memory-safe C/C++, it's very tedious and hard to get right. It involves formal verification (which makes solving borrow-checker issues feel like a cakewalk) and/or using a subset of the language so small that you can barely write anything. The author severely underestimates how pervasive C/C++ footguns are; his example of a "safe" container, std::string, has at least one memory-safety footgun of its own! The entire point of Rust's strict, unintuitive rules is that developers can't be trusted to write safe code on their own, and there are so many seemingly-innocuous ways to get memory corruption; Rust's annoying rules are what's necessary to write truly safe, high-performance software.


    To the author's credit (and ironically, something I don't think he addressed), the discussion around this seems to be missing that Rust is far from the only memory-safe language. In fact, most languages being used today make it so the average developer has to go out of their way to cause memory corruption: I'm talking about Go, Java, Python, JavaScript, Lua, Swift, Haskell, and any other language with garbage collection or some other form of automatic memory management. These languages achieve safety by taking memory control away from the user and managing it themselves (hence "automatic memory management"), and by inserting guards before reads, writes, and other potentially-unsafe operations.

    The drawback is that automatic memory management and guards cause extra overhead, so C++ and Rust are preferred for code that needs to run as fast as possible. But I believe it's rare* that speed is so important that this overhead is a real issue; at least, rarer than the cases where C/C++/Rust are chosen. Modern computers are fast, and well-written Swift/Go/Java is almost as fast as well-written C. Even if your program needs to be efficient, it's usually only a small part that does the heavy lifting, which can be written in a fast language while the rest is written in a scripting language; this is what most game engines (e.g. Unity, Godot) and neural network frameworks (e.g. PyTorch) do.

    * Relatively. I can think of a few categories of programs which are exceptions, and I'm sure there are many programs written in these categories.

    26 votes
  13. Comment on Is an ethical social media platform even possible? in ~tech

    Jakobeha
    (edited )
    Link Parent

    Hacker News is filled with subtle advertisements, and the entire site is a giant advertisement for YCombinator. They also occasionally have “YC company is hiring” posts which can’t be upvoted or commented on, as well as “Launch HN” posts which can be upvoted and commented on but are guaranteed to show up on the front page.

    I will say the advertisements are a lot better than most sites. There’s at most one promoted page at a time (while each page has 30 posts), “subtle” advertisements have to be genuinely interesting or they’ll never make it to the front page, and their comments are practically guaranteed to have criticism (if your app sucks commenters will describe in detail how it sucks; if your app is really good, someone will find something they feel is wrong with it and describe how that makes it suck).

    But it’s far from free of monetization. I’m certain it has netted YCombinator millions if not billions by convincing at least one successful startup to join the YC program. It’s just advertising for a different audience. Even the bare-bones site design is a way to signal “YCombinator is practical and BS-free, so you can trust us with your startup”.

    6 votes
  14. Comment on A 2024 plea for lean software in ~comp

    Jakobeha
    (edited )
    Link Parent

    Modern JIT compilers make it possible to write code in languages like JavaScript, which still looks high-level and doesn't directly manipulate memory, but gets compiled to something efficient.

    The problem is that it's hard to know when said code is being compiled to something efficient, because the JIT compilers are very complicated, and the smallest seemingly-irrelevant change can suddenly disable a lot of optimizations. For instance, in V8, passing objects of more than 4 different "shapes" (hidden classes) to the same function makes the call site megamorphic, which disables a lot of the optimization. There are a lot of these subtle things which cause the compiler to "give up", and it can be hard to figure out why your code suddenly runs slower (static languages have their own "subtle things" which make your code slower, like branch misprediction and cache misses, but there are a lot fewer of them).
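
    For a sense of what those subtle things look like, here's a sketch in TypeScript (the shape threshold and the exact deoptimization behavior are V8 implementation details that vary between versions):

    ```typescript
    function sumX(points: { x: number }[]): number {
      let total = 0;
      for (const p of points) {
        total += p.x; // this property load stays fast while every p has the same shape
      }
      return total;
    }

    // Monomorphic: every element is created with the same shape {x, y}.
    const fast = Array.from({ length: 100_000 }, (_, i) => ({ x: i, y: i }));

    // Megamorphic: five different shapes flow through the same property load,
    // so the engine falls back to a much slower generic lookup.
    const slow = Array.from({ length: 100_000 }, (_, i) =>
      i % 5 === 0 ? { x: i } :
      i % 5 === 1 ? { x: i, a: 0 } :
      i % 5 === 2 ? { x: i, b: 0 } :
      i % 5 === 3 ? { x: i, c: 0 } :
                    { x: i, d: 0 });

    console.log(sumX(fast), sumX(slow)); // same result, very different speed
    ```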

    Another issue doesn't relate to the language itself but to its community: people who write in low-level languages are generally more aware of the performance characteristics of their code, while people who write in very-high-level languages are not. So the libraries you use in C and Rust may be more optimized, and explicitly document which functions are expensive and which are efficient, while some libraries in JavaScript over-use abstractions and design patterns which make it extra hard for the JIT compiler to optimize them. But as mentioned, this isn't intrinsic, just statistical: there are poorly-written, inefficient C libraries and highly-optimized JavaScript libraries.

    4 votes
  15. Comment on Older folks: Do you feel like work ethic has changed? Better or worse? Do you notice any generalizations? Have the times changed that much? in ~talk

    Jakobeha
    Link Parent

    I suspect another reason is just that the work is less interesting.

    Back in the 90s you could get rich making a generic "social media app" or "first-person shooter" because many of the big players hadn't been developed yet, and those things weren't yet "generic". The tools were a lot worse, but the expected quality and size of software were a lot lower, so while I don't really know, it seems like it was a lot easier to make something useful. Nowadays it seems like there's an app for everything, and you can't expect people to use your app unless it's extremely approachable. It seems like there are a lot of "bullshit jobs", and even work on genuinely important software isn't very creative. Although a big part of this perception is just me growing up.

    Also, as someone mentioned, the uncertainty of raises/promotions is a big factor. People won't put in extra effort unless they get an extra reward, whether intrinsic or extrinsic. That leads to non-innovative software, and I suspect eventually bad software, mainly because nobody can effectively measure productivity or quality, so developers need some level of trust.

    Anyways, not everything can be novel, but I think it would help for both companies and developers to focus on what part of their product is essential and what "improvements" make it more useful. That's what people did back then, when computers ran in megahertz and storage was in megabytes, and despite today's computers being much more powerful and people expecting a lot more, I think it's still relevant.

    12 votes
  16. Comment on The Markup iceberg in ~tech

    Jakobeha
    Link Parent

    I should clarify: the language itself is powerful enough; it's the third-party support which makes Typst "not as powerful as LaTeX". LaTeX has decades of packages providing all sorts of extensions, some of which are complex and surely rely on LaTeX-specific behavior that would make them hard to port.

    Even then, I'm sure Typst fits almost every situation, except when you're working with a team that's more familiar with LaTeX, or you have to submit in a very specific format defined by a LaTeX template. Unfortunately those exceptions are pretty common.

    5 votes
  17. Comment on The Markup iceberg in ~tech

    Jakobeha
    Link Parent

    It’s pretty good: I tried having ChatGPT convert some MathTeX by giving it a couple of examples, and it made a couple of mistakes, but it nonetheless saved me time.

    Though nowadays I’d use Pandoc, which can now convert LaTeX (or any of its other input formats) to Typst.

    2 votes
  18. Comment on The Markup iceberg in ~tech

    Jakobeha
    Link

    Typst is a new-ish (beta, but it's been out for a while) alternative to LaTeX for writing papers. My experience using it has been really positive: it's not as powerful as LaTeX, but it has fast live preview, and the syntax is actually intuitive. It has Markdown-like syntax for bold, italic, lists, and headers (lists and headers use different characters than Markdown, but otherwise it's the same), and it has a math mode similar to MathTeX.

    Unfortunately when a paper needs to use a template defined in LaTeX, LaTeX is the only option. But I hope Typst gets more popular because it definitely seems like a serious contender.

    5 votes
  19. Comment on Is there a markdown editor which let me open .md files from Windows? in ~tech

    Jakobeha
    Link

    I assume you also want a sort of WYSIWYG “plain” editor.

    MarkText.

    GhostWriter, except it isn’t “perfect” WYSIWYG (it uses a monospaced font), but it’s close.

    Typora is very good except it costs money (used to be free, now it’s a 15-day trial).

    6 votes
  20. Comment on Why do some educators dislike teaching people who don't already know? in ~life

    Jakobeha
    (edited )
    Link

    2 big issues with the general way we teach:

    1. Students come in with different levels of experience and learn at different rates. Some students get bored because they’re already ahead and learn fast; some students need review and need more of each lesson to grasp its concepts.

    2. Professors teach the same classes over and over, so they get bored and disinterested, especially teaching entry-level classes where there isn’t much room for flexibility or creativity.

    Unfortunately, these issues especially apply when students need review and ask review questions. For 1), some students would benefit from review lectures, but most would’ve already understood, so the lectures would be a waste of their time. Review lectures also mean there’s less time to teach what the course is actually about, so students who do already understand don’t get the amount of learning they signed up for. That means the struggling students email and go to office hours for review, which leads to 2): the professor has to waste their time giving the same replies over and over (in contrast, many professors love it when students ask unique and interesting questions).

    To be clear, I believe the students who need extra support and review deserve it. I also believe that students who learn fast deserve to advance through their classes faster. I think there has to be a better way, so that teachers and professors can spend their time teaching interesting material and answering uncommon questions, which is a better use of their talents, and students can learn at their own rate with more personal support. I’m a big fan of online learning, and I imagine the ideal is platforms like Khan Academy replacing lectures entirely, and teachers/professors only teaching interactive seminars and assisting students who have questions and/or need a form of help that the online tools can’t provide.

    28 votes