post_below's recent activity

  1. Comment on “This technology disrupts [...] Democratic—voters, [and] increases the economic power of [...] male, working-class voters” in ~society

    post_below
    Link Parent

    Step one: don't listen to that dude.

    Technology isn't the problem, capital is the problem, and the influence and connections that come with it.

This guy's pitch to the business world and the current administration is just that: a pitch. Palantir's influence comes from its spending, not its technology.

    Societally we need to figure out how we handle direct, open undermining of democracy.

    I believe in order to solve the problem we have to correctly identify it. That means first acknowledging that it isn't new. The world (and especially parts of the west) has been on a steady path of increasing Oligarchy for... well forever, but for conversation's sake let's say the industrial revolution. Or if we want a more recent turning point: The Reagan and Thatcher administrations.

It hasn't been a secret for a long time. Remember when the Bush Jr administration got caught in bed with the energy/military industrial complex and suffered almost no consequences? That was when it became clear to everyone who hadn't already figured it out that you don't even have to pretend to hide it anymore: money governs.

It's a mistake, IMO, to imagine that the core of our society (if by that you mean democracy) wasn't already undermined. That illusion has reliably kept the electorate from effectively rallying behind the right people and causes for a long time. It's so easy to distract us from the core problem.

I think that when that basic truth is common knowledge to the degree that it's boring, we might have a shot at electing people who genuinely want to do something about the problem. Until then it will remain trivial for the Oligarchy to make sure that the Bernie Sanderses of the world never make it onto a ticket that matters.

    17 votes
  2. Comment on Meta to acquire Moltbook, the social network for AI agents in ~tech

    post_below
    Link Parent

To take a break from the Moltbook hate for a second:

    Why Meta would decide to hire this guy

    Side note: it's guys plural. There were two of them, both acquihired.

If I had to guess I'd say it's for the audience, like you said in your last sentence. That and the Moltbook creators' sense of the pulse. Who knows, maybe they also liked the play on Facebook. You're right: the code is trivial, the idea isn't revolutionary, there's no moat.

What Moltbook did was launch at exactly the right moment, during the rise of OpenClaw and the subsequent media attention. Meta likely wants some of that marketing; so far they haven't really managed to make a name for themselves in AI.

It's somewhat similar to OpenAI hiring the OpenClaw creator (again, nothing special about the code), though that guy had previously proven he was a legitimately good engineer. I don't know whether that's true in Moltbook's case, as I don't know much about the founders. But in both cases, by AI-money standards, a few zeros and commas are nothing. These companies are just trying to get as many smart and creative people together as they can. If those people come with some buzz, even better.

In OpenAI's case it's working out pretty well for them so far. As far as Meta goes... well, it will be interesting to see if they can manage not to screw it up as badly as they did the metaverse.

    5 votes
  3. Comment on Why do I almost never catch colds anymore? in ~health

    post_below
    Link

COVID is related to many of the viruses that cause colds, and exposure to a pathogen can sometimes improve your response to similar pathogens in the future through things like cross-reactive immunity and trained innate immunity. There is also some evidence that the immune response can continue refining itself after an illness, improving the response to future mutations. Additionally, the more severe the illness, the harder your immune system will work to protect you against future versions, and the longer it will remember. That's one reason why vaccines use adjuvants.

    3 votes
  4. Comment on Electricity use of AI coding agents in ~enviro

    post_below
    Link Parent

    Ok fair enough, maybe I misread.

The reason I speculated that you avoid LLMs is that it's difficult to square "most do not see as valuable" with having used current-gen models. It's hard to imagine that most people, even for light applications like asking a chatbot for a recipe, wouldn't see it as valuable relative to the minuscule increase in their daily energy footprint it represents. Virtually any other footprint-related action they could take (like biking vs driving) would have an impact bigger by an order of magnitude.

Whereas for power users (most of whom are using a lot less than the author's example of multiple parallel agents for hours) the utility seems entirely worth a fraction of a dishwasher run.

I am concerned about the overall LLM footprint though. It's negligible on an individual basis, but in aggregate it shouldn't be ignored. I only wish regulation moved quicker; some of those tens of billions from each funding round should be going to offsetting the water and power impacts. At a minimum, force them to be carbon neutral and mitigate their impact on local communities.

    1 vote
  5. Comment on Electricity use of AI coding agents in ~enviro

    post_below
    Link Parent

    the concern is that it's using such an incredibly large amount of power to perform tasks that most do not see as valuable

    I freely concede that there's a lot of room for debate about whether LLM agents are a good thing. Also about what they should be used for, and what they shouldn't, and how they should be regulated.

But I think it's pretty evident that a lot of people consider LLMs useful.

    assuming one interprets the METR and those MIT researchers results as meaning that traditional work and business practices not being amenable to AI/LLM usage in their current forms (which goes both ways), we'd need reforms across all sectors, which is only loosely related to late stage capitalism.

I've talked about the issues with the METR study before, so I won't repeat myself. It's not necessary; this is from the header of the study's landing page:

    Measuring the Impact of Early-2025 AI

    From early 2025 to early 2026 the technology has changed dramatically. There is no way, like really no way, to make a case that these tools aren't useful in a wide variety of contexts. This is easy to demonstrate. It's happening all around you.

    Perhaps you avoid AI entirely, in which case I applaud your principles. However, the "they're not really useful, people are just hallucinating" angle is no longer compatible with reality.

    2 votes
  6. Comment on Documents reveal a web of financial ties between Donald Trump officials and the US industries they help regulate in ~society

    post_below
    Link Parent

Sorry for being pedantic... it's not exactly toothless just because the full House-then-Senate process has never panned out. It likely would have in Nixon's case, leaving him little choice but to resign.

    Which is to say that it can happen, even if it's essentially impossible in this moment.

    4 votes
  7. Comment on Documents reveal a web of financial ties between Donald Trump officials and the US industries they help regulate in ~society

    post_below
    Link Parent

    I'm slightly more optimistic, and also I agree that America will never be the same. It's a lot to process.

    5 votes
  8. Comment on Documents reveal a web of financial ties between Donald Trump officials and the US industries they help regulate in ~society

    post_below
    Link Parent

Made more stunning by the fact that many of the crimes/scandals would have been administration-ending at any other point in American history.

How did it happen? Part of it is the investment the right wing has made in misinformation and manipulation over the last 10 years. A non-negligible number of people don't trust, or even see, reputable sources of information these days.

    Another part is organization. For example, it takes a lot of effort, over a lot of time, to corrupt and co-opt the supreme court.

    And of course part is oligarchy.

    Trump is possible because the groundwork has been laid over time.

And still it's mind blowing. It would be a lot easier to comprehend if he were just a little bit less... Trump.

    13 votes
  9. Comment on Documents reveal a web of financial ties between Donald Trump officials and the US industries they help regulate in ~society

    post_below
    Link

    No doubt this will be a shock to no one. Cheers to ProPublica for continuing to propublica.

    A trove of nearly 3,200 disclosure records that ProPublica is making public today. The disclosures, which can be viewed in a searchable online tool, detail the finances of more than 1,500 federal officials appointed by President Donald Trump

    The article goes on to detail some of the most glaring examples.

    25 votes
  10. Comment on Ayatollah Ali Khamenei killed in Israeli and American joint strikes in ~society

    post_below
    Link Parent

It really doesn't matter what the subject is: if you have a reasonable level of expertise, you're guaranteed, eventually, to have a frustrating time talking about it on the internet. Well, really, even if you don't have expertise.

    Let me rephrase: it doesn't matter who you are or what you're talking about, the internet can be frustrating :D

The main reason the frustrating conversations you refer to were with people in tech is that they're over-represented online. It's a statistical thing. I promise you it's just humans on the internet; tech people don't have a monopoly.

    There are two halves to the conversation though. One of the reasons internet conversations can be frustrating is that the rules are slightly different. People will (confidently) say things they wouldn't say offline. The stakes aren't the same.

    I recommend trying to take online discourse less seriously.

    Relevant, ancient: XKCD

    The sun rises, everything eventually dies, and someone is always wrong on the internet. Nothing to be done about it!

    2 votes
  11. Comment on My personal AI assistant project in ~tech

    post_below
    Link Parent

That makes sense, especially if you mean 10-K filings. That's about as unstructured as you can get. Even their structured APIs are challenging, with mix-and-match XBRL concepts.

    That said, agents can be prodded into being useful in that context.
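To make the "structured but still challenging" point concrete, here's a minimal sketch of pulling one concept from the SEC's companyconcept XBRL endpoint. The CIK, the `Revenues` tag, and both helper functions are illustrative assumptions, not anything from the original comment:

```python
import json
import urllib.request

# Illustrative: Apple's CIK and one common us-gaap tag. A real tool has to
# discover which tag each company actually filed under, which is the
# mix-and-match problem.
SEC_CONCEPT_URL = ("https://data.sec.gov/api/xbrl/companyconcept/"
                   "CIK0000320193/us-gaap/Revenues.json")

def fetch_concept(url, user_agent="demo demo@example.com"):
    """Fetch a companyconcept payload; SEC asks for a descriptive User-Agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def latest_annual_values(payload, unit="USD", n=3):
    """Pick the last n datapoints filed on 10-K forms from the payload."""
    points = payload.get("units", {}).get(unit, [])
    annual = [p for p in points if p.get("form") == "10-K"]
    return [(p["end"], p["val"]) for p in annual[-n:]]
```

Even in this happy path you're filtering by form type and unit by hand; different companies report the "same" concept under different tags and units, which is where agents can earn their keep.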

    1 vote
  12. Comment on US Pentagon declares Anthropic a threat to national security in ~society

    post_below
    Link Parent

    For purposes of accuracy, as far as I know the only sources for that are Altman's tweet and 3rd party speculation. Reading the tweet, it appears to me to be carefully worded to imply that the restrictions are the same, while making it pretty clear by omission that they aren't.

The most credible speculation I've seen suggests that Anthropic wanted to be in charge of the guardrails, while OpenAI was willing to leave that part up to the DoD. So a version of "any lawful use", just like Hegseth wanted.

Whatever the details, OpenAI agreed to the deal impressively quickly; it doesn't seem like they had time for much negotiation.

    13 votes
  13. Comment on My personal AI assistant project in ~tech

    post_below
    Link Parent

    Any plans to teach it to do anything specific? Are you using the default context files?

    1 vote
  14. Comment on Anthropic rejects latest US Pentagon offer: ‘We cannot in good conscience accede to their request’ in ~tech

    post_below
    Link Parent

    Would it be that difficult for Google (or OpenAI) to simply step in if Anthropic is kicked to the curb?

Google might have trouble providing the same level of functionality, since Gemini just isn't as good right now (benchmarks aside), but there's certainly nothing stopping them from saying yes. OpenAI's models are close enough that they'd probably be able to swap pretty easily.

    Just gib monies

Agreed, I'd be amazed if the Pentagon couldn't find a replacement; the US government is the ultimate enterprise customer if you're willing to deal with the regulations. The solidarity from the employees is still great to see though. One sliver of hope: the Trump admin is unpopular and this is a good PR opportunity.

    2 votes
  15. Comment on Anthropic rejects latest US Pentagon offer: ‘We cannot in good conscience accede to their request’ in ~tech

  16. Comment on Anthropic rejects latest US Pentagon offer: ‘We cannot in good conscience accede to their request’ in ~tech

    post_below
    Link

    Wow I did not expect that

    30 votes
  17. Comment on Updating Eagleson's Law in the age of agentic AI in ~comp

    post_below
    Link

    My philosophy with agentic coding is that I need to be in the loop. I still write code myself and when the agent writes code I sign off on a detailed plan, with code, in advance, and then review the results.

    The times where I have let that slip, I've regretted it.

Right now I believe what's happening is that a lot of developers are letting it slip; the regret is pending. Though in large orgs, where there's less personal stake and potentially everyone is vibecoding, I expect there won't be much personal regret.

I think there are essentially two possible outcomes. One: agents get so much better that true vibecoding becomes viable, and the agents can fix the industry-wide mess that's currently silently accumulating in the background. Or two: practices will need to change to keep humans in the loop.

    I suspect many teams have already learned to do the latter, vibecoding at scale simply can't work for production if you care about quality and reliability.

I suppose there is a third option: the industry lowers its standards of quality and reliability. I think this strategy is currently being beta-tested at scale. The upside is that it should increase the perceived value of high-quality software and the people who can deliver it.

    Side note: I've revisited 6 month old code plenty of times and almost immediately understood what I was thinking. Other times not so much, but we remember complex details about things that happened 6 months ago all the time. It's possible that Eagleson's law primarily applies to Eagleson.

    9 votes
  18. Comment on New accounts on Hacker News ten times more likely to use em-dashes in ~tech

    post_below
    Link Parent

The thought occurred to me, but I didn't care enough to check; they looked mostly legit. There was one undeniable bot reply that was downvoted to oblivion. Found the link

    1 vote
  19. Comment on New accounts on Hacker News ten times more likely to use em-dashes in ~tech

    post_below
    Link Parent

That matches what I've seen. One recent post at the top of the front page used clearly AI-generated text to promote a (vibecoded) project. The OP also used AI generation in most, maybe all, of their comment responses. The comments were engaging with the post and the user as though they were human.

The last part is what surprised me: the HN crowd is as familiar with LLMs as any out there, so how did they collectively fail to identify slop? Previously, AI-generated prose posted on HN got downvoted to death. Human judgement is really the only thing stopping online discussion from getting overwhelmed with slop.

For now, once you're familiar with AI-generated text, it's pretty easy to spot. It's possible to get AI-generated prose that looks more legitimate, but that requires pretty comprehensive prompting/context strategies. I've been assuming that tech spaces, and most others, would nearly unanimously reject AI writing; it sucks to see that cracking at one of the most famous tech forums.
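The headline stat from the linked post is the kind of signal that's cheap to approximate. A minimal sketch (the sample comments and the per-1,000-characters metric are my own illustrative choices, not the post's methodology):

```python
def em_dash_rate(comments):
    """Em-dashes per 1,000 characters across a batch of comment strings."""
    chars = sum(len(c) for c in comments)
    dashes = sum(c.count("\u2014") for c in comments)  # U+2014 EM DASH
    return 1000 * dashes / chars if chars else 0.0

# Hypothetical samples: one batch from older accounts, one from newer ones.
old_accounts = ["Plain reply, no dashes here.", "Another ordinary comment."]
new_accounts = ["Bold claim \u2014 stated confidently \u2014 and repeated."]

print(em_dash_rate(old_accounts), em_dash_rate(new_accounts))
```

A real replication would obviously need account ages and a large comment sample; the point is just that this particular tell is trivially measurable.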

    If people stop rejecting it, the Claws will happily take over content generation.

    9 votes