vektor's recent activity

  1. Comment on The boy that cried Mythos in ~comp

    vektor
    Link Parent

    Saw on reddit the other day that firefox released a graph of security-related fixes, and that thing positively went to the moon. Think 10x increase in the "Mythos month".

    I don't know where I saw it, but it's a lead if someone wants to chase it down.

    6 votes
  2. Comment on When Richard Dawkins met Claude in ~health.mental

    vektor
    Link Parent

    I have to infer the existence of your mind, but I can experience the existence of mine.

    For human-human interaction this is doing a lot of work: I know the existence of my mind, therefore by similarity (functionally, materially, take your pick), I have an easy time convincing myself of the existence of yours.

    I think that's doing a lot of work to explain why the discussion of consciousness is always so intuitive and "vibes": Because we're used to doing it in a very shorthand way. IOW: No one has reasoned themselves into the position that other humans are conscious; we're brought up with it, and the best proof we have of it is "I think, therefore he thinks too".

    Anyway, I feel massively disconnected from most of the rest of the discussion here. I have been, for I don't know how long, a "functionalist" when it comes to mental phenomena. "If it quacks like a duck, it's a duck". Simply because, due to privacy (thank you for the technical term, btw.), I don't see another way of evaluating the intelligence or consciousness of non-me systems. If it behaves like it's intelligent, then it must be intelligent. And yes, that gives LLMs at least a limited but nontrivial amount of intelligence. No, they're not just regurgitating training data, or at least if you lead them out-of-training-distribution, they don't do so qualitatively differently from humans IMO. They are genuinely recombining information in novel ways. As for consciousness? Hell, if I had a duck test for it to apply to humans, I'd be a step further, but I don't think consciousness as a concept is even reasonably testable in humans, so what are we doing trying to apply it to AI?

    2 votes
  3. Comment on Why I find woke criticism of veganism and effective altruism so outrageous in ~society

    vektor
    Link Parent

    I disagree with that... most EA orgs I know of are pretty transparent about where their money goes, and allow even rather small donations. There's no reason for me to give up my wages to some billionaire to then route some of that to my preferred EA org to then end up aiding the goals I support. I can just take a bigger paycheck and donate it myself, knowing that I have better control over the money and probably a tax advantage through the lack of additional intermediaries.

    Like, the conspiracy theory that this is all to justify billionaires kind of falls apart if it doesn't actually need (or encourage) billionaires being in the loop.

    Yes, they're using it to whitewash their wealth. But they'd find another way of doing that.

    4 votes
  4. Comment on Why I find woke criticism of veganism and effective altruism so outrageous in ~society

    vektor
    (edited )
    Link Parent

    Let me, butting in from the sidelines, put it like this: Upon starting my first job earning the kind of money that enables setting something aside for later, I decided I probably want to donate a smidge of that. Some time previous to that, I must have heard, one way or another, of Effective Altruism. Can't have been in the context of SBF, or any other billionaire for that matter. More of a grass roots thing. So, I go look for the biggest lever by looking at a few EA and EA-adjacent organizations and what they recommend putting your money towards. The org I end up at, pretty much explicitly wearing the EA label, recommended at the time: global poverty (including mosquito nets in Africa), animal suffering and climate change. For each of those goals they recommend different orgs with slightly different goals and different approaches. Just today, I found they've added two more big-picture goals to their list: safeguarding democracy and "preserving the future and AI risks". Looking into it, because I know the mere mention of AI risks will cause people to associate certain things: Their first recommendation is a broader set of longer-term existential risks: pandemics, nuclear war, and yes, AI. Their second recommendation is AI-only. AI-related disbursements from those funds are mostly towards lobbying and policy-setting and less so about pouring money into AI research. This category also gets roughly a tenth of the donations for this org compared to poverty, and about a fourth of the donations to climate change. It's a small, small part of what they recommend.

    That's my background with EA. I've met people who are active in EA organizations and who (presumably) have even less money to give away. I've never seen/heard anyone with fuck-you money recommend or advertise EA philosophy, except perhaps indirectly Bill Gates when he talks about how he allocates his donations.

    I have also occasionally read internet drama about SBF<->EA. It does not color my judgement of EA that someone else would use a good thing as a shield to deflect from their bad deeds. People do it all the time. What, is a bad person going to just declare themselves a villain? Of course they'll try to hide behind something.

    So I urge you to look at this entire discussion from that perspective. In particular, I'd like to call out this statement of yours:

    I don't know what EA billionaire nerve I hit

    You didn't. But maybe you can now imagine what non-billionaire nerve you hit by -basically- painting my small contribution towards climate change mitigation as billionaire bootlicking. Because judging from the comments here, my EA contributions towards climate change are perpetuating a system that justifies the existence of rich people. Because I route donations through an organization that in turn routes most of its donations to addressing global poverty....???

    And I get it. It's easy to go through life never hearing about EA in any capacity except billionaires shilling how morally good they are. That's understandable and I see how that would inform your views on the topic. But that's also just your perspective. To take a deliberately pointed comparison: It's similarly easy for a young man to go through his adolescence never knowingly meeting a feminist, but hearing a lot about feminism through 4chan level caricatures. That's not necessarily a good basis for decision making about feminism.

    8 votes
  5. Comment on Google Chrome silently installs a 4 GB AI model on your device without consent in ~tech

    vektor
    Link Parent

    Those are two massively different things. P2P update distribution makes a lot of sense from all kinds of perspectives. Torrent and related approaches are just ridiculously efficient. If MS didn't do that, people would complain that they were leaving emission reductions on the table...

    Offloading LLM processing to other users is not at all comparable, and I think it's a bit conspiratorial to expect that until privacy-preserving LLMs are here. (The cool stuff, where you encrypt data, run a computation on it, then decrypt the result, and it's all homomorphic. The computer doing the work doesn't actually know what it's computing.) Everything else is a massive user privacy nightmare for Google. They take on that "burden" because they trust themselves not to compromise your privacy too badly (or too noticeably, if you're being cynical) and to keep your data safe. Give user data from one user to another, and you have no guarantees of any kind - a malicious user could compromise any of it at will.
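
    To make the "compute on encrypted data" idea concrete, here's a toy Python sketch of the Paillier scheme, which is additively homomorphic. The primes are tiny and hardcoded purely for illustration, it is in no way secure, and it has nothing to do with anything Google actually ships; it just shows that multiplying two ciphertexts and decrypting yields the sum of the plaintexts, without the party doing the multiplication ever seeing them.

    ```python
    # Toy Paillier cryptosystem (additively homomorphic). Illustrative only:
    # tiny hardcoded primes, no padding, not secure.
    import math
    import random

    # Key generation with small demo primes (a real scheme uses ~1024-bit primes).
    p, q = 293, 433
    n = p * q
    n_sq = n * n
    lam = math.lcm(p - 1, q - 1)   # private key part (Carmichael's lambda)
    mu = pow(lam, -1, n)           # modular inverse; works because g = n + 1

    def encrypt(m: int) -> int:
        """Encrypt 0 <= m < n with fresh randomness r."""
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(n + 1, m, n_sq) * pow(r, n, n_sq)) % n_sq

    def decrypt(c: int) -> int:
        """Decrypt with the private key (lam, mu)."""
        x = pow(c, lam, n_sq)
        return (((x - 1) // n) * mu) % n

    # Homomorphic property: multiplying ciphertexts adds the plaintexts.
    a, b = 41, 1300
    c_sum = (encrypt(a) * encrypt(b)) % n_sq
    assert decrypt(c_sum) == a + b   # whoever computed c_sum never saw 41 or 1300
    print(decrypt(c_sum))            # 1341
    ```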

    2 votes
  6. Comment on Google Chrome silently installs a 4 GB AI model on your device without consent in ~tech

    vektor
    Link Parent

    Exactly my thinking.

    By the way, very convenient thinking if you want to make someone your enemy. The enemies of fascism are simultaneously weak enough as to be undeserving of any sympathy or rights, and strong enough that "we must all band together". Migrants are simultaneously useless unemployed mooches and they're taking your jobs. Google is simultaneously very forward about its on-device AI efforts, and snuck this model onto your device without anyone's knowledge. So which is it? The answer depends on what the author wants you to believe.

    1 vote
  7. Comment on Google Chrome silently installs a 4 GB AI model on your device without consent in ~tech

    vektor
    Link Parent

    Not quite general purpose; there's only so much behavior you can plop into an NN, and whatever I/O harness is built around this NN is where the actual potential for abuse lies. But curiously, nary a mention of that. If the I/O code around the model unnecessarily and unpromptedly sends data to Google, that's bad. But the 4GB blob has nothing to do with that.

    I get that 4GB of space on-device isn't nothing. But as far as malware concerns go, this is very unsubstantiated. On-device LLMs are generally privacy-friendlier than cloud ones, you have a very clear overview of ecological impacts, what's not to like? Oh, it's opt-out instead of opt-in? I haven't seen anyone make the case that this model is somehow activated without a user request, and even then I'd be indifferent about it as long as no sensitive telemetry is triggered.

    I think it's a safe bet there are bigger privacy violations of opt-out telemetry without any AI involvement buried in the Chrome settings... Sometimes I forget how to even.

    1 vote
  8. Comment on US begins blockade in Strait of Hormuz in ~society

    vektor
    Link Parent

    Right, but they lose that from their protection racket falling apart, so revenue neutral in the end.

    4 votes
  9. Comment on Am I German or autistic? in ~health.mental

    vektor
    Link Parent

    When I'm thinking about my internal monologue, the monologue starts to form very slight movements of vocal cords and tongue.

    3 votes
  10. Comment on Megathread: April Fools' Day 2026 on the internet in ~talk

  11. Comment on US regulator bans imports of new foreign-made routers, citing security concerns in ~tech

    vektor
    Link Parent

    Even to a blind moron in a rush, it should be painfully obvious that this kind of intervention would require a grace period: the US government should have gone "alright, stop importing this shit. It'll be banned in two years. Get some domestic alternatives onto the market within two years." instead of "stop importing this shit now". If you don't want imports to surge in anticipation of the ban, put a cap on imports or some shit.

    Like, this should be obvious even to people not in IT. I could forgive them for neglecting some technical details, many of which you mentioned, but this fuckup isn't even funny. They didn't even check whether the alternative they want people to buy exists.

    8 votes
  12. Comment on LLMs can unmask pseudonymous users at scale with surprising accuracy in ~tech

    vektor
    Link Parent

    Username: Goose.

    In hindsight, a bit on the nose.

    14 votes
  13. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    vektor
    Link Parent

    This implies that the process under the hood doesn't matter, only the outcome does, and I just can't agree on that.

    To clarify once more, duck testing like this isn't necessarily a short and simple thing. Let's stick with Ohm's law as an example. I wouldn't be convinced by simply regurgitating a formula. I wouldn't be convinced by an LLM solving a college-level homework exercise. Those are easily in the training data, and I wouldn't be convinced if the answer is plausibly just regurgitated from somewhere. [If the training data were infinite, it might satisfy me, because then I don't run the risk of going beyond the training data.] At least, that's if I'm testing for knowledge beyond those particular quotes; then I'd want to see some understanding that goes beyond the source material.

    But if you have an arbitrarily long conversation with an LLM about Ohm's Law, and you can't find any flaws with its conception of Ohm's Law, IMO at some point you must concede that it knows Ohm's law. Knowledge is IMO a functional property; the process ultimately does not matter if the results are there. Yes, I can easily acknowledge that one might be more easily convinced of someone's knowledge if the process is known. I (presume to) know you are human, therefore I get certain axioms for free about how you work and think. I don't get that for an LLM. But to say the process matters in such a way that any evidence that does not include internal insights will always be insufficient is basically saying that no non-human entity of any kind can ever know anything. Do ravens know how to use tools? Do elephants know what death is? Does your Roomba know the layout of your home? Does chatGPT know Ohm's Law? These are all categorically "no" if you put too much weight on processes rather than results. And I'm not saying the answer is necessarily Yes, but the method of determining that answer must at least in principle permit either answer.

    As an olive branch of sorts, a middle ground if you will, I will easily concede that any artifacts of process that are observed in results are also fair game. If I explain to you my reasoning, or you see a raven experimenting with tools, or an LLM puts its thinking mode on and you read those thinking tokens, all of those are useful in getting a glimpse of the process.

    But the process line of thinking is often restricted to -more or less- something subjective or human-centric: They're human, therefore I know the process, therefore knowledge is possible. And that implicitly and often categorically excludes non-humans.

    Put another way: Say we had the AIs of science fiction. The good ones that is, not the faulty ones. Think, perhaps, more Data and less HAL. How would you make the judgement that they know things? If your answer is "I wouldn't", pick a better AI. I hope you're convinced that for some AIs, the answer must invariably be "it knows things", and that's where we need to think about a judgement that doesn't rely on process.

  14. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    vektor
    (edited )
    Link Parent

    Oh, I don't mean to get into the "the LLM said whatever is most likely to result in a thumbs up reaction, because RL finetuning" part. That's not where I tend to see the denial of knowledge, but it is IMO an absolutely crucial caveat to always keep in mind when working with LLMs. If the honest answer is unsatisfactory, e.g. "you're wrong" or "I don't know", LLMs tend to lie with conviction. Though I believe that to be a transient artifact of current training methods. Yes, it's a useful thing for keeping customers engaged, but customer engagement isn't the promise that drives the bubble, it's customer productivity. They sell these services not to people who have time to kill and want to be entertained, at least not mostly. They sell them to people who want to get a job done. This yes-man lying is a problem to them.

    The way I most often see this denial of knowledge is when people basically say "it's just a semantic parrot, and just selecting the maximum likelihood token according to its training data. It doesn't know anything". And, from an external perspective, I think that's an unhelpful definition of 'to know'. You can make the same argument about human cognition. We're also next token predictors, slightly biased towards predicting lower-utility tokens more often (a phenomenon called negativity bias). LLMs, at least without RL finetuning, are at least unbiased estimators, so mechanistically, they have a leg up on us. Knowledge in both cases is an emergent behavior, because at some point you see more and more complex patterns in the training data, and exploiting those is more efficient. You don't have to build an explicit knowledge engine into the LLM, because us humans didn't get one either. If you have a chat with an LLM and that chat would convince you that a human knows his stuff, then IMO you must be convinced that the LLM knows its stuff as well. Knowledge is, IMO, a functional property. The process doesn't matter, unless it matters in a way that colors the conversation so you are not convinced.
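
    Since we're in the technical weeds anyway, a minimal sketch of what "unbiased" means here: temperature-1 sampling straight from a softmax over the model's output scores, so the long-run token frequencies match exactly the probabilities the model assigns. The vocabulary and logits below are made up for illustration; no real model or API is involved.

    ```python
    # Minimal next-token sampling sketch with made-up logits.
    import math
    import random

    vocab = ["great", "fine", "terrible"]   # hypothetical candidate tokens
    logits = [2.0, 1.5, 0.3]                # hypothetical model scores

    def softmax(xs):
        exps = [math.exp(x) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits)

    def sample_next_token():
        # Temperature-1 sampling: token frequencies converge to `probs`,
        # i.e. the sampler adds no bias of its own on top of the model.
        return random.choices(vocab, weights=probs, k=1)[0]

    print([f"{t}: {p:.3f}" for t, p in zip(vocab, probs)])
    print(sample_next_token())
    ```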

    Apologies everyone for the technical terminology, but if I recall correctly, sparksbet should understand.

    2 votes
  15. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    vektor
    Link Parent

    To be clear, my use of the word nitpicky was more poking at my own argument:

    "I know Ohm's law because I know V = IR." is using my -very nitpicky- definition of "to know". I know it, and only it, but I know nothing about what to do with that knowledge. I wouldn't describe that state of mind as "not knowing", but there's a lot more to know about Ohm's law, and you would easily find out in conversation. Hence me splitting up knowledge about a thing into (basically) being able to quote something back, and then transforming that quote into useful new material. I'm not saying "I know electical engineering because I know V=IR".

    But to pull that back into something resembling a point about the original topic: I don't need the nitpick, when LLMs can at least sometimes cover both definitions of knowledge. An LLM can clearly use the things it can regurgitate to form useful new material. I can, to condense a complex topic, ask it for Ohm's law, but I can also have it apply that law in a way that I am damn sure no part of its training data covers. It isn't quoting back someone's homework, it's actually applying that stuff to a new situation. And at that point I am hard pressed to deny that the LLM knows Ohm's Law, insofar as I'd need to resort to an argument of -basically- "it can't know because it is a thinking rock. Only thinking meat can know."

    I think part of the issue is our tendency to accept that other humans are like us and therefore they can know things. Then we quiz them about electrical engineering and conclude that they know Ohm's law. Why isn't the same process applicable to a machine? Because they aren't like us, and by the types of mistakes they make that is readily apparent. We excuse human errors as "just human", but the same is not true for machines, because machines aren't supposed to make errors. Therefore, any error is not a failure to act on knowledge, but a failure to know. And because we can't empathize with a machine and mirror its thought process, we are less convinced by the same amount of evidence.

    1 vote
  16. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    vektor
    Link Parent

    Depends on what exactly this person knows or doesn't know, you can split hairs there all day long.

    Your person knows the quotes. Easy one. Do they know anything about those topics beyond that? Different question. But they definitely know something. They could, for example, know verbatim some simple physics formula, but not know how to apply it. Those are different bits of knowledge. And a person who knows both can have more in-depth conversations than a person who only knows one. I can tell these knowledge states apart by duck testing.

    But we don't have to go there and make nitpicky distinctions about knowing a fact and knowing how to apply it: LLMs can usually make some inferences that go beyond the retrieved material. They'll easily regurgitate Ohm's Law, but also know how to apply it. They thus know Ohm's Law. Do they know all the edge cases that an EE prof might know? Dunno. But they might compare favorably to undergrad students.

  17. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    vektor
    Link Parent

    Honestly, the same argument can be applied to the argument of "LLMs can't have knowledge". I'd also apply duck testing here: If it knowledges like a knowledgeable person, it knowledges. If an LLM writes as if it knew about a topic, then it knows about that topic. If you can instead lead it to produce inconsistent results (which can be trivial or impossible, depending on the topic), then evidently it doesn't know about that topic well enough - or it doesn't logic very well, leading to false inferences from true knowledge. Which honestly is the more likely culprit. In either case, I'd be on board with denying knowledge.

    But to outright say "LLMs can't know things", with an argument that basically boils down to "because it is thinking rocks and not thinking meat" is asinine. Yes, it's just an engine to predict the next token with some RL finetuning on top. Guess what your brain is? It's an engine to predict its next sensory input, and then produce outputs that best shape the next sensory input. There's no reason to assume, judging from the way our brains are created, that we'd know anything. So arguing from the way LLMs are created seems a moot point to me.

    </rant>
    4 votes
  18. Comment on Cory Doctorow | AI companies will fail. We can salvage something from the wreckage. in ~tech

    vektor
    Link Parent

    Hell, I'd imagine that if we gave copyright more teeth, not only would media corps get massively more powerful, but we'd also not see any functional impact on AI. They'll obfuscate the derivativeness of any derivative outputs slightly better, and that'll be that. I don't think openAI will actually throw out all their ill-gotten training data or delete their models.

    And if we actually force them to by means of massive and unthinkable enforcement action, i.e. raiding their data centers and wiping all their hard drives, we'd just give the game over to the Chinese or any other nation willing enough to take the lead in the AI game.

    My approach would be to go the opposite direction: restrict copyright protection. Your model parameters? Definitely not copyrighted. Model outputs? Yeah, nah. The datasets you're using to train? Fine to use as long as it's just for training. (Personally I'd go complete abolitionist, but that's an admittedly fringe take.) That levels the playing field: 3 (British) guys in a shed have the same access to data as OpenAI, and we'll make sure that this technology isn't oligopolized by roughly 4 companies. Yes, compute is still a limiting factor, but it's much easier to distribute compute fairly by virtue of free markets. Ask Amazon or one of a bajillion other companies if they'll rent you the compute to train a model - this part of the equation isn't what keeps the big few in power.

    11 votes
  19. Comment on US President Donald Trump isn't building a ballroom in ~society

    vektor
    Link Parent

    however the author has absolutely done their homework.

    A detail that the author doesn't cite, or if they do it's buried in the sources but not referenced: That any of this has anything at all to do with AI. Or autonomous decision making. Or anything like that. Data center? Sure. Quite possible. But AI? I mean, I won't categorically say no, but I see no indication of this being built with AI in mind.

    I'm also unconvinced about the insinuation that the Jerusalem data center is doing any kind of AI decision making or targeting in Gaza. For all I know, the data center is just a normal government data center. Holding your tax records, allowing you to file for unemployment benefits, banal things like that. The intelligence branches are involved? Ok, sure, they probably need somewhere to store their satellite imagery.

    Not that any of this diminishes the fucked-up-ness of the project by much, even if I feel AI is being invoked as the boogeyman here. Even if Trump just centralized tax records in the White House, that's already a bad sign for the US, but "the military is very much involved" just makes this even worse. I'm thinking this could be the DOGE data grab, except this time under a mantle of a bit more legitimacy, with even less actual accountability, and bigger.

    6 votes
  20. Comment on Without looking, do you have a vague idea of your coordinates? in ~talk

    vektor
    Link Parent

    You really sure you need elevation? I mean, these things come down nearly vertical, and the margin of error for considering a nuke "properly delivered" is probably large enough. Like, if you're unsure if they're at 9km or 0km, you'll probably only miss by 4km at most, which is still good enough.
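
    (Back-of-the-envelope, and the reentry angle is purely my assumption, not a sourced figure: the horizontal miss is roughly the elevation error divided by the tangent of the descent angle, so "4 km at most" checks out for anything around 65 degrees or steeper.)

    ```python
    # Rough geometric check: horizontal miss caused by an elevation error,
    # assuming a steep (assumed, not sourced) reentry angle.
    import math

    def horizontal_miss(elevation_error_km: float, angle_deg: float) -> float:
        return elevation_error_km / math.tan(math.radians(angle_deg))

    for angle in (65, 70, 80):
        # Worst case: you assume sea level, the target actually sits at 9 km.
        print(f"{angle} deg: ~{horizontal_miss(9.0, angle):.1f} km off")
    # 65 deg: ~4.2 km, 70 deg: ~3.3 km, 80 deg: ~1.6 km
    ```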

    Beyond that, an 18 MB .png in the guidance calculations will probably sort you out a lot better, bringing the elevation uncertainty down to a few hundred meters at most.

    (Is this comment involuntarily ITAR-restricted now? Who knows!)

    6 votes