Jakobeha's recent activity

  1. Comment on AI is creeping into the Linux kernel - and official policy is needed ASAP in ~comp

    Jakobeha
    (edited )
    Link Parent

    You'd have the power to build and ship any software you can build today. The difference is that some people would be able to build more complex software, using e.g. libraries that are too complex for humans to understand (though these would be new libraries, or new versions of existing ones). However, we already have this, in the sense that very rich people can hire expert programmers in multiple specific fields (e.g. graphics, networking, mobile) to collaborate and build whatever they want; so the real difference is that less rich people could use AI to achieve the same complexity.

    Also note that despite having complex word processors, productivity tools, video games, etc. designed by large companies, many people prefer simpler equivalents (Markdown, CLI tools, indie games). And although people's standards for software have risen, the rise seems to be tapering off: most gamers wouldn't be happy with 2000s-era 3D graphics in a AAA game, but more would be satisfied with 2010s-era graphics, and many wouldn't notice a difference between graphics from 2020 vs. 2025. Even if LLMs substantially increase productivity, I'm skeptical they'll make software of current quality unacceptable, especially if they're restricted or still controversial.

    5 votes
  2. Comment on AI is creeping into the Linux kernel - and official policy is needed ASAP in ~comp

    Jakobeha
    Link Parent

    If you’re talking about software developers ceding the power to write software for high salaries because AI can do it cheaply, that’s a real potential issue.

    But if you’re talking about online LLMs that exist today being restricted in the future, we’ve been writing software without LLMs pre-2021 (before Copilot), and local LLMs are already good enough to do code completion and basic refactors (although online models are still better).

    Also, money has always allowed non-technical people to create highly complex software, by hiring technical people. LLMs strictly decrease the cost. If we eventually get a model that’s as smart and productive as an experienced coder but costs $2,000/mo, consider that rich people can already pay an experienced coder $10,000/mo (a $120k salary); so the only difference (aside from that job going away) is that slightly-less-rich people gain that power.

    8 votes
  3. Comment on Cybernews research team has uncovered over sixteen billion leaked records since the start of 2025 in ~tech

    Jakobeha
    Link Parent

    I hesitate to say "yes" because nobody can be 100% certain with malware, and even Apple has zero-click vulnerabilities (most recently the one exploited by Paragon's spyware). Apple devices seem to have good security, and I assume that if a zero-click were used to mass-leak passwords (not just to target specific journalists and activists like in the linked article), it would make headlines everywhere and Apple themselves would send an alert telling (if not forcing) everyone to change their passwords ASAP. But I'm not in security, so you should probably seek a more qualified opinion.

    At minimum I recommend 2FA, preferably TOTP, which is safer than SMS or email codes (I use Duo, except for Apple/Google/Microsoft, which have their own 2FA). Then even if someone does have your password, they can't log in without your phone.
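
    If you're wondering why the code is tied to your phone: the authenticator app holds a shared secret from enrollment and derives a fresh 6-digit code from it every 30 seconds, so a leaked password alone isn't enough. Here's a minimal sketch of the standard algorithm (RFC 6238) in Java; the raw secret below is a made-up stand-in for the base32 string a real enrollment QR code gives you:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.ByteBuffer;
        import java.time.Instant;

        public class TotpSketch {
            // Derive the current 6-digit TOTP code from a raw shared secret.
            static String code(byte[] secret, long unixSeconds) throws Exception {
                long counter = unixSeconds / 30;                   // 30-second time step
                byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
                Mac mac = Mac.getInstance("HmacSHA1");
                mac.init(new SecretKeySpec(secret, "HmacSHA1"));
                byte[] h = mac.doFinal(msg);
                int offset = h[h.length - 1] & 0x0f;               // dynamic truncation (RFC 4226)
                int bin = ((h[offset] & 0x7f) << 24) | ((h[offset + 1] & 0xff) << 16)
                        | ((h[offset + 2] & 0xff) << 8) | (h[offset + 3] & 0xff);
                return String.format("%06d", bin % 1_000_000);
            }

            public static void main(String[] args) throws Exception {
                byte[] secret = "12345678901234567890".getBytes(); // RFC 6238 test secret
                System.out.println(code(secret, Instant.now().getEpochSecond()));
            }
        }

    In practice you'd use a vetted authenticator app or library rather than rolling this yourself; the sketch is only to show that possession of the secret (your phone), not the password, is what produces the code.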

    3 votes
  4. Comment on Cybernews research team has uncovered over sixteen billion leaked records since the start of 2025 in ~tech

    Jakobeha
    Link Parent

    The article says the passwords were leaked by infostealers, and most servers store passwords hashed and salted so they can't be leaked directly, so the passwords were probably obtained by phishing. In summary, if you only ever entered a password into the real site it's probably safe; however, it's very easy to unknowingly reach a fake login screen practically identical to the real one (the only difference being the URL), e.g. by clicking a link in an official-looking email, and any password entered there would be leaked.
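
    To illustrate what "hashed and salted" means on the server side, here's a rough Java sketch using the JDK's built-in PBKDF2 (the iteration count and storage layout are illustrative, not what any particular site uses). The database stores only the salt and the derived hash, so a dump of it doesn't directly reveal passwords; a password typed into a phishing page, by contrast, is captured in plaintext before any of this happens:

        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;
        import java.security.SecureRandom;
        import java.util.Base64;

        public class PasswordStorageSketch {
            // Hash a password with a fresh random salt; only salt + hash get stored.
            static String hashForStorage(char[] password) throws Exception {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);                // unique salt per user
                PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
                byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                              .generateSecret(spec).getEncoded();
                return Base64.getEncoder().encodeToString(salt) + ":"
                     + Base64.getEncoder().encodeToString(hash);   // the stored value
            }

            public static void main(String[] args) throws Exception {
                System.out.println(hashForStorage("hunter2".toCharArray()));
            }
        }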

    Besides using a password generator, I also recommend using an email service that gives you "masked" emails (I use Fastmail; Firefox and Apple also provide this). These are email addresses that forward everything to your main address, and Fastmail at least also forwards your replies back through the masked address; if a website is breached and the masked email you gave it gets flooded with spam, you can turn off that particular address and still receive email from other sites and your main address.

    5 votes
  5. Comment on What is a non-problematic word that you avoid using? in ~talk

    Jakobeha
    (edited )
    Link Parent

    I used to hate that phrase, but then I realized:

    People use the phrase as an indirect way to state that “it” is bad and can’t get better. But that’s not what the phrase actually says; “it is what it is”, taken literally, means nothing.

    So when someone says the phrase and I disagree, I know what they mean, but I like to pretend they are unintentionally or even subconsciously suggesting that “it” can get better and they’re too stubborn to realize it. They’re trying to say that the situation is hopeless, but taken literally they’re not denying that there’s still hope, and the optimist in me clings to that.

    Alternatively, I can use the phrase in an ironic sense if I know that a situation will improve, to try to shift its perceived meaning back to its literal meaning: nothing.

    3 votes
  6. Comment on Have I been conversing with bots or humans? in ~tech

    Jakobeha
    (edited )
    Link Parent

    You're right, I should've highlighted that em-dashes alone don't signal ChatGPT.

    The suspect posts are those with lots of em-dashes and ChatGPT's stuttered, awkwardly informal writing (IMO the general style is a better signal, but it's hard to describe exactly what sets it apart from human "informal" writing). Even then, any particular one of those posts may be genuine; in aggregate, the fact that there are suddenly many more of them is stronger evidence of bot content.

    2 votes
  7. Comment on Have I been conversing with bots or humans? in ~tech

    Jakobeha
    Link Parent

    Yes, I should also add: accusing someone of being or using AI is low-quality; it's ad hominem. If you feel the need to say anything about a particular piece of writing, you can point out its unsubstantiated claims and vacuous phrases or criticize it for being low on details (assuming it has those issues; most AI-generated text does).

    5 votes
  8. Comment on Have I been conversing with bots or humans? in ~tech

    Jakobeha
    Link

    Well, Reddit definitely has lots of bots or at least LLM users, because there are some dead giveaways (maybe we should make a list, or maybe not so they don't learn).

    The current one is the character "—" (em-dash). Humans type "-" or "--". These are quite common: example, example, example. You can also distinguish the writing style: it's too informal, it gives off a "fellow kids" vibe.
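
    As a toy illustration of how crude that signal is (this sketch is mine, not a real detector), counting the character is trivial, which is also why it can never be conclusive on its own; plenty of careful human writers use em-dashes too:

        public class EmDashTally {
            // Count em-dashes in a post: a weak heuristic at best.
            static long emDashes(String post) {
                return post.chars().filter(c -> c == '—').count();
            }

            public static void main(String[] args) {
                System.out.println(emDashes("Humans usually type - or -- rather than —."));
            }
        }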

    However, I'm sure there are much less obvious bots. Moreover, there's no definite way to tell, and the line between promotion and authenticity is blurry. I'm inclined to believe your 10,000 commenter was a bot and/or paid promoter, but they could've just been someone chronically online with very strong views.

    Personally, I try not to care whether a post is human-written; I care whether it's high-quality. The thing about LLM writing is that it's usually boring, verbose, and inaccurate; if an LLM wrote an interesting, concise, and reliable post, to me that would be a good post. I'm skeptical of anything on any social media, even if there's a consensus, but there are still trusted news sources (e.g. ground.news) and product reviewers (this unfortunately depends on the product; people recommend Consumer Reports, although it's paid).

    4 votes
  9. Comment on Megathread: April Fools' Day 2025 on the internet in ~talk

    Jakobeha
    Link

    This wasn’t intentional, but apparently the word “camel” was causing issues on many websites and services including NPM and Stack Overflow.

    It turned out to be an issue in Cloudflare surrounding a suspiciously-named framework:

    Apache Camel

    Camel is an Open Source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.

    An exploit in the framework just happened to be discovered yesterday, and presumably Cloudflare turned on some mitigation that was way too aggressive.

    8 votes
  10. Comment on What can a software engineer do to help the US? in ~society

    Jakobeha
    (edited )
    Link Parent
    "Community" and "echo chamber" refer to the same thing, just with different connotations. Forming communities is good, the real problems are groupthink (+ believing things are obvious and/or...

    "Community" and "echo chamber" refer to the same thing, just with different connotations. Forming communities is good, the real problems are groupthink (+ believing things are obvious and/or widely-accepted that only are in the community), and forgetting that people exist outside the community (+ what those people are like).

    I think it's important for everyone to interact with people outside their "bubble" at least occasionally, but spend the majority of their time with people they like (i.e. inside the bubble).

    5 votes
  11. Comment on Enough with the bullshit (a letter to fellow bullshit sufferers) in ~tech

    Jakobeha
    (edited )
    Link

    I’m skeptical that we’ve reached peak bullshit. People have surely been lying and exaggerating since before history. The article reminds me of “Catcher in the Rye” except fake outputs instead of fake people.

    But the message is spot-on. The way to solve a specific type of bullshit is by seeing through and counteracting* it; if enough people do, it loses effectiveness and dies out. Although, as the article points out, bullshit in general never dies, it just evolves as new types are invented.

    * Even if you try to ignore bullshit you may be subliminally impacted. You may think to yourself “I’m not affected by shiny branding” but buy the products with shiny logos over the bland ones anyway. So when choosing between options you must weigh bullshit advertising as slightly negative.

    8 votes
  12. Comment on If a new constitution was written, what would you advocate for in it? in ~society

    Jakobeha
    Link
    • Exemplary

    This is dodging the question, but I think culture matters more than government. The most ideal Constitution won't stop a bad supermajority (they'll throw it out or horribly misinterpret it), and no Constitution is necessary for a good supermajority.

    I do think the US Constitution, although "good", has room for improvement. My main suggestions are to replace First Past the Post, have shorter term limits, and stagger elections. I can't think of anything else off the top of my head that's really important. Also, there should be a strong way of holding elected officials accountable for failures and successes, to encourage them to act directly in their constituents' favor.

    But even then, you have an issue, because many people vote for someone who doesn't support their interests (even without FPTP, it seems to happen in Canadian and European elections). And many vote for people who support their interests but not the interests of other citizens, e.g. citizens of a different sub-culture or ethnicity.

    To fix that, we need cultural improvements, which are trends and guidelines, not laws. Note that, though government funding and organization would greatly help, these don't have to involve the government at all. Some examples:

    • Hold regular community activities to help people get to know each other (reducing racism and general meanness) and increase civic duty. Form groups to hold many activities in all different areas, and every once in a while these groups should collaborate and hold a joint activity to get their people to mingle.

    • Improve education, by paying teachers better, personalizing it (I actually think AI would be very helpful here), decreasing "memorization" and "busy work" and increasing interesting topics and hands-on activities. School should be much longer, but include more sports and special classes (which students can pick, as opposed to general classes that are taught to everyone).

    • Incentivize small businesses and innovation. I don't know exactly how this would work, but the goals are: if someone has a great idea, they have ways to demonstrate that it's great, and once they do so, ways to get funding; and if a market is monopolized and "enshittifying", someone can get enough money to form a competitor, and not lose if the old players reverse course and start improving their products again.

    I'm sure there are more. The main point is: if the main goal is to fix society, or create an ideal society from scratch, I believe it's more important to think about soft guidelines and trends to establish a good culture, than hard laws to form a good government.

    5 votes
  13. Comment on Digg is relaunching under Kevin Rose and Alexis Ohanian in ~tech

    Jakobeha
    Link Parent

    I also support AI, and am curious what they have planned, but I'm skeptical it will work. The problem with current AI is that a lot of tasks that seem like "grunt work" but couldn't be solved by pre-LLM algorithms actually require critical thinking that current AI doesn't have.

    AI has been used to artificially grow communities and help moderators before. In particular, the problem with community-building is that the vast majority of AI-generated content is bland (see: AI-generated images), and the problem with moderation is that AI struggles with nuance and is very suggestible (see: jailbreaks). I suspect Digg will try to make the AI a tool, but I don't see how that will avoid the problems. For example, if they make the AI moderator only flag posts and provide context for the human moderators to review, they will run into 1) posts not being flagged (easy to imagine with code words and dog whistles), and 2) context sometimes being inaccurate (see: Apple Intelligence news and text summaries).

    Right now my opinion is that we need to develop fundamentally new architectures to handle these tasks. LLMs are very good at some tasks, including code completion and giving summary answers to questions that have been discussed online. But creative media generation and nuanced alignment/moderation are tasks they have never been good at: ChatGPT was released over two years ago, people started trying to use it for these almost immediately, and I suspect further progress will only be made after another big advancement.

    12 votes
  14. Comment on US$ 30 million to reinvent the wheel (Bluesky vs. Mastodon) in ~tech

    Jakobeha
    Link

    Why do we want another social media? IMO it's for discovery.

    Otherwise we could use Signal, email, Tildes, or any other communication platform. The advantage of Twitter over something like Discord is that you can discover new posts and people.

    Personally, I think Bluesky's architecture and UX are better than Mastodon's. But I think the real goal should be to bridge these platforms (and others), then write shared code for, fund, and promote both of them: for example, a single open client that can create an account on Bluesky or Mastodon, or sync one between both, along with better algorithms and tools for discovery and moderation.

    Also, my understanding is that coding a social media platform is the (relatively) easy part; the hard part is social. Good discovery, moderation, and funding are what make or break a site, and you accomplish these with people: interesting people who create "good vibes", level-headed moderators, and sponsors. The big social media platforms are entrenched, not (I think) because they have better discovery algorithms or UX, but because they have far more users and moderators. Arguments over which platform and which governance completely sidestep this; the main thing I think the $30 million should be for is to figure out how to, and then actually, 1) convince people to make interesting things on the open web 2) while blocking grifters, trolls, etc.

    11 votes
  15. Comment on US appeals court rejects net neutrality: The internet cannot be treated as a utility in ~tech

    Jakobeha
    (edited )
    Link Parent

    A more optimistic silver lining is that courts have ruled in the past that states can enforce their own version of Net Neutrality, and the justifications are very similar (the FCC lacks authority over state governments, and ISPs are information services).

    California has SB-822 that seemingly enforces Net Neutrality within its borders, and many other states have weaker laws.

    AFAIK the federal government can still pass laws banning state-level net neutrality, but likewise it could have passed a law banning net neutrality even if this decision had upheld the FCC's rules. The courts at least have precedent that without a federal law, states can do what they want.

    42 votes
  16. Comment on What's on your Christmas wish list? in ~life

    Jakobeha
    Link

    I've had an M1 MacBook Air since ~2021 and I'm probably getting an M4 Mac Mini to go with it.

    People who've upgraded from one M-series Mac to another, do you notice a difference? Particularly with IntelliJ, which is the most resource-intensive thing I run where latency matters. The M1 has been fine, but this is what I use for school/work, so if it makes me more productive it pays for itself.

    Also, does the M4 Pro have any real benefit over the M4? Because the base model is really cheap, but the Pro model costs over twice as much, and to me it really doesn't seem worth it.

    4 votes
  17. Comment on I think I've failed the United States in ~society

    Jakobeha
    Link

    I always think of people (including myself) like machines. We're programmed to survive and form communities, we act from what we've learned and we learn from our surroundings.

    Some people are true sociopaths, but the vast majority have at least some empathy. My understanding is that most people help themselves first, then others: when we're struggling we prioritize our own needs, but when our needs are met well enough we spend effort and resources to make sure others' needs are met too (the amount of sacrifice depends on the person, but people who feel they are well off in societies that encourage charity tend to be very generous).

    Look around you: almost everything was invented, made, and shipped by others, and the laws that protect and provide for you are enforced by others. People have the capacity to do horrible and wonderful things; what we do depends on how the outside world affects and then empowers us.

    Never forget the real enemy: tragedy. Tragedy creates and empowers evil people, and tragedy causes diseases and disasters that so far have been, and always can be, more destructive than anything man-made. Everyone's goal should be "progress", which is to bend nature in a way that minimizes tragedy. A lot of internet discourse involves vilifying and/or insulting large groups, but what every large enough group truly wants is the same, serenity, so the differences in what we want aren't worth fighting over beyond "live and let live".

    8 votes
  18. Comment on Using AI generated code will make you a bad programmer in ~tech

    Jakobeha
    (edited )
    Link Parent

    I agree with the first part: LLMs hurt education. With LLMs, a student can accomplish a lot without really understanding what their code is doing, but they can't accomplish as much, or at as high a quality, as if they did understand. Students can pass entry-level classes and maybe even graduate and get jobs developing software, barely learning anything, until eventually they reach a point where the LLMs aren't good enough. At that point they're like students who skipped the first half of a course because there are no graded assignments until the midterm. Maybe these students are still able to learn the fundamental skills they missed, but at the very least they wasted a lot of time not learning those skills earlier.

    But I disagree this is inevitable. Students still can learn the fundamentals to write good code. At minimum, schools can give assignments and exams in an IDE that doesn't support LLMs, and I think this is necessary for the entry-level classes. But I also think it's possible to design assignments that LLMs aren't good enough to solve for higher-level classes, so that students still truly learn how to write code even when they have access to LLMs.

    I think in this way an LLM is a lot like a calculator or a parent/friend/tutor who you could convince to do your work for you. In theory, it's easy for someone to "complete" assignments outside of class without truly learning, and this has been the case since before LLMs. But (most?) students still learned the fundamentals, because they still had to pass in-class assignments to get a good overall grade, and because the honor system and the potential of being caught were enough to deter them (at least me) outside of class. I believe most schools nowadays give every student a cheap locked-down laptop, and colleges should have enough money to do the same. In that case, a teacher can ban LLMs for an assignment by requiring students to do the assignment on their locked-down computer, in a restricted IDE that has syntax highlighting and autocomplete but no LLMs.

    4 votes
  19. Comment on Using AI generated code will make you a bad programmer in ~tech

    Jakobeha
    (edited )
    Link

    I use LLM code generation a lot, but I always check (and fairly often change) what it generates afterward. I'm pretty sure that without AI I end up writing the same code by hand, just slower. In fact, when not using AI I frequently copy-paste large chunks of code and then heavily edit them, so it's not so different.

    Is AI making me forget how to write code? Probably not, because I still write a lot of the code by hand, and I read over all of it. Code is often buggy and/or needs to be modified (e.g. to handle new features), especially LLM-generated code, so even if at first I don't really understand the LLM-generated code, I often end up learning it later (to debug or modify it).

    Will AI replace me eventually? Maybe, but I don't see how my using AI makes that happen non-negligibly faster. Current models train on written code, presumably without factoring in who wrote it. I can ensure AI companies don't train on my code or writing process by using a local model (although I don't; that would be a separate argument).

    Will AI-generated code retroactively become illegal? If so, that means a lot of recently written code retroactively becomes illegal, so it seems very unlikely.

    There are problems related to LLM-generated code, such as fewer developer positions and bad software. But these were problems before LLMs, and are exacerbated by IDEs and frameworks like Electron respectively. I don't think getting rid of IDEs and frameworks is the solution, and likewise, I think the root cause (letting more people write software more easily, albeit most of it low-quality) is a net positive.

    6 votes
  20. Comment on Using AI generated code will make you a bad programmer in ~tech

    Jakobeha
    Link Parent

    I like algorithmic refactoring tools much more than LLMs because I trust that the code is refactored properly. Even when the algorithm fails, it fails in predictable, reasonable ways.

    e.g. a good "rename method" tool in a statically-typed language will:

    • Only rename calls on values of the correct type. If I rename Foo#bar, it won't touch calls to Bar#bar, even though both look like some_value.bar(...) without context.
    • Skip (or ask to confirm) renaming in strings and comments, because the tool isn't smart enough to guarantee a literal occurrence of the method's name actually refers to the method.

    Sure, I have to search for and rename the false negatives in the strings and comments manually, but that's easy (I can "find and replace in project" the old name). Importantly, I can rely on references outside of strings and comments being renamed properly.
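
    To make the first bullet concrete, here's a hypothetical Java sketch (the names are made up) of the ambiguity a type-aware rename resolves; a plain find-and-replace would happily rewrite all three occurrences of bar:

        class Foo { void bar() {} }
        class Bar { void bar() {} }

        class Caller {
            void run(Foo f, Bar b) {
                f.bar(); // renaming Foo#bar -> Foo#baz rewrites this call...
                b.bar(); // ...but must leave this one alone, though it looks identical
                String note = "bar() is called twice"; // in a string: skipped, or confirmed manually
            }
        }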

    An LLM doing "rename method" may be even smarter, because when it finds the method name in a string or comment, it can use English comprehension to determine whether it's an actual reference. But (AFAIK) there is no LLM-based tool that can guarantee it won't rename literal occurrences that are not references to the method, or can guarantee it will rename every reference (and maybe some). So when I ask an LLM to refactor, I have to check true negatives and false positives, and look over every line of code the LLM changed. At this point it's faster to skip the LLM and go straight to "find and replace in project" and looking over every occurrence manually.

    10 votes