PendingKetchup's recent activity

  1. Comment on Are LLMs making Stack Overflow irrelevant? in ~tech

    PendingKetchup
    Link Parent
    Isn't that real life? I thought StackOverflow was shopping everybody's posts around as a training dataset for model builders.

    5 votes
  2. Comment on Are LLMs making Stack Overflow irrelevant? in ~tech

    PendingKetchup
    Link
    It seems like the article equates a decline in post volume on StackOverflow with a decline in the quality of the resource, which I don't think is necessarily the case. SO didn't build a culture of close-vote zealots for fun; they adopted such a, let's say "discerning" attitude to the questions they let on the site because a lot of programmers have a lot of garbage questions. Duplicates, questions where essential information has to be badgered out of the asker, questions engendered by wanting to do terrible things to computers that nobody should ever do, etc.

    If LLMs can take over the job of telling people that they dropped a semicolon or that StackOverflow already has a great answer for their question or that what they want to do was made impossible for a good reason, then StackOverflow itself can stop dealing with that and can specialize in questions that can't be addressed by the model of the week. That's not necessarily a bad thing.

    Or the whole thing could stop making money and collapse, who knows.

    5 votes
  3. Comment on Are LLMs making Stack Overflow irrelevant? in ~tech

    PendingKetchup
    Link Parent
    LLMs are great at stretching and squashing an example to fit your specific situation, though, and a lot less great at constructing things from documentation and first principles.

    Really good SO answers will have a little widget that someone is proud of inventing or a technique not attested to in the documentation, which can be applied to your problem and to the problems of other people in the future.

    9 votes
  4. Comment on Honey did nothing wrong in ~tech

    PendingKetchup
    Link Parent
    I am kind of assuming that Honey was not breaking the rules of any merchant's affiliate programs. If they were inserting their own codes while telling a merchant they weren't doing that, that's wrong.

    I won't dispute that Honey has harmed the affiliate marketers whose credit for sales they took away. I'm just disputing that they did it in a way that isn't (or oughtn't be) allowed.

  5. Comment on Honey did nothing wrong in ~tech

    PendingKetchup
    Link Parent
    If what Honey does inside of a user's browser is exclusively between the user and Honey, how could it cause unlawful harm to a third party like the original affiliate marketer? Why wouldn't all the harm caused to them be lawful?

  6. Comment on Honey did nothing wrong in ~tech

    PendingKetchup
    Link Parent
    That is a good point, just because you have a right to do something doesn't make doing it always right.

    But I see basically the whole concept of affiliate marketing in that category. Just because it is disclosed that one is being tracked with an affiliate code doesn't really make it right to do that. So I don't see a user as having really any moral responsibility for the proper functioning of that system, and I feel it might be best overall if they stopped it from working as designed.

    People should indeed be paid for their work, but people should not be paid to manipulate others or to extract value on the promise of "free". YouTube as it currently exists probably shouldn't.

    2 votes
  7. Comment on Why aren't we talking about the real reason male college enrollment is dropping? in ~life.men

    PendingKetchup
    Link Parent
    That's not how I read it. The thesis seems to be that some men are choosing not to go to college because of the large fraction of college students who are women, and that this is in turn a major driver of that gender imbalance among incoming students.

    That's not "vibes", that's a scientific, testable hypothesis. The stuff about "school" being "feminine" is looking for and apparently finding cultural outputs we would expect to see if this mechanism was indeed operating. It's anecdotes and not data, but this is a blog post and not meant to be a controlled study to test or try to rule out the hypothesis.

    Nobody is being villainized that I can see. I guess you can make some parallels between the language here and various poor-people-don't-want-to-work style wrong opinions, but those are A) about oppressed groups, which "men" isn't one of in any society I know about, and B) not supported by evidence, which this argument at least appears to be.

    And certainly nobody is being ostracized. The core problem seems to be how do we get the men back into the colleges, not how do we get rid of them.

    30 votes
  8. Comment on [SOLVED] Looking for recommendations on new GPU in ~tech

    PendingKetchup
    Link Parent
    It might not be a good deal at market price, either, and it might not make sense to wait for sufficient supply to dump it back to MSRP.

    2 votes
  9. Honey did nothing wrong

    OK, maybe they did something wrong; not actually giving people all potentially available discount codes when you say you will is wrong. But I don't think they did anything wrong by overriding affiliate links, and I think it's dangerous to let people convince you otherwise.

    Even if replacing affiliate codes has negative consequences, in the form of lost revenue and uncounted sales, for the affiliates, it is happening entirely in the end user's browser, and in that environment the user has the right to do whatever they want. One can get extensions that strip off all affiliate codes. A user might have a case that their informed consent was not obtained by Honey for one feature or another, but if a user wants to install a browser extension that replaces all the affiliate codes in links they click, they have a right to do that and no affiliate marketer can be rightly empowered to stop them.
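As a concrete illustration of the kind of affiliate-stripping extension mentioned above, here is a minimal Python sketch. The parameter names are made-up examples; a real extension would run inside the browser and ship much longer, merchant-specific parameter lists.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

# Illustrative list only: real extensions maintain long,
# merchant-specific lists of affiliate/tracking parameters.
AFFILIATE_PARAMS = {"ref", "tag", "aff_id", "affiliate", "utm_source", "utm_medium"}

def strip_affiliate_codes(url: str) -> str:
    """Return the URL with known affiliate/tracking parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in AFFILIATE_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_affiliate_codes("https://example.com/item?id=42&tag=somebody-20"))
# https://example.com/item?id=42
```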

    If we admit some right to control the user's browser's behavior on the part of affiliate marketers, why would that right stop at interference by Honey? Wouldn't any extension interfering with the sanctity of the affiliate marketing referral data then be a legally actionable offense?

    22 votes
  10. Comment on [SOLVED] Looking for recommendations on new GPU in ~tech

    PendingKetchup
    Link
    At the price point you are targeting you might be best off with an Intel B580. They have good Linux drivers in mainline and should beat a 3050 handily. Sometimes games are confused and dismayed to see an Intel GPU, but unlike Intel's last gen I hear they mostly work now.

    They also seem to be mostly sold out, so they must be good?

    11 votes
  11. Comment on Pinterest alternatives in ~tech

    PendingKetchup
    Link
    Are.na is meant to be this and you can use it anywhere. I'm not sure if it has all the integrations you want though.

    5 votes
  12. Comment on Why is Google Gemini saying we should die? in ~tech

    PendingKetchup
    Link Parent
    I think I just think like that about anything. Any kind of optimization process or probability distribution I will say "wants" to do this or "pulls towards" that even when it's all mechanical.

    9 votes
  13. Comment on Why is Google Gemini saying we should die? in ~tech

    PendingKetchup
    Link Parent
    The token output stage AFAIK is always a weighted random roll. But also they use some truncation of the tail of the distribution to not roll tokens with 0.1% probability 0.1% of the time, on the theory that they are rare because actually they're wrong. So it might just not be possible to roll this with some frequency in all contexts.

    It is always just grabbing high probability tokens (subject to some manually designed constraints) and forcing it through; it just usually gets more mileage out of its very complicated model of what tokens are probable.

    And choosing when to stop rolling tokens is always a hard problem for the wrapper scripts. Probably you don't want to let the model roll some kind of end of text token immediately because the user wants to see output of some kind. But you don't want to force it to add more garbage at the end when the thought has been finished. And you also probably don't want to stop in the middle of a paragraph.
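The weighted roll with tail truncation described above can be sketched in a few lines. This is generic top-p ("nucleus") sampling, not any specific vendor's sampler; the temperature and top_p defaults are illustrative.

```python
import math, random

def sample_token(logits, temperature=0.8, top_p=0.9, rng=random):
    """Weighted random roll over tokens, with the distribution's tail cut off.

    top_p ("nucleus") truncation keeps only the smallest set of tokens whose
    probabilities sum to top_p, so a 0.1%-probability token is never rolled
    even though the raw model assigns it nonzero weight.
    """
    # Temperature rescales logits: lower -> sharper, more consistent output.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sort tokens by probability, descending, and keep the nucleus.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the survivors and roll.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With top_p low and one token dominant, the tail is cut entirely and the roll becomes deterministic, which is the "not roll 0.1% tokens" behavior in action.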

    4 votes
  14. Comment on Why is Google Gemini saying we should die? in ~tech

    PendingKetchup
    Link Parent
    I guess so? I've definitely played around with some of the UIs that expose all the sampler configuration sliders and the plain free text generation mode (like KoboldAI) and just let you run the model forward and see what happens. I would recommend trying something like that out if you want to try and use these things. Run it forward, hit the retry button to run the same text forward again and get a different answer, see how changing the temperature makes things more or less consistent.

    You get an appreciation for how the basic operation here is more like dreaming than thinking, and for how hard it is to turn a word salad faucet into a conversation participant who is things like awake, polite, and not high as balls. The space between "only writes 'No'" and "mind shakes itself apart into lists of the titles of subreddits and unbalanced parentheses" is surprisingly small.

    21 votes
  15. Comment on Why is Google Gemini saying we should die? in ~tech

    PendingKetchup
    Link Parent
    I think it looks a lot like what an LLM will spit out when it loses the plot. Look at the repetitive sentence structure. It goes "Oh crap what do I put, sometimes people start sentences with 'this', try that. Oh no, what comes after 'This is for you, human', maybe try 'you' again? OK, I said 'You and only you. You'. I can't get away with another 'you', I guess I have to put 'are', people say that a lot. 'You are not special', I've seen that before. And then I can fit another 'you', there are loads of 'you's in this bit so it's probably that next. Hey, I can fit 'you are not' again, I just saw that so it's pretty likely to just be here again. OK, what's something you wouldn't be if you weren't special. How about 'important'? That fits, and I can fit 'you are not' again and I don't have any better ideas so let's go with that." And so on. It has nothing to convey (except maybe a vector representing generic bad vibes) but text is coming out anyway, so it repeats itself as much as it can get away with without tripping the repetition penalties on the sampler, because the only thing guiding the text at this point is the local structure of the text.

    (It doesn't actually think like this, I am trying to convey how you would have to think or feel to act like a token sampler armed with nothing but a suddenly content-free probability model over tokens.)
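The repetition penalties mentioned above are typically implemented as a simple adjustment to the logits before the roll. A hedged sketch: the divide-positive/multiply-negative rule and the 1.2 default mirror a common heuristic (the `repetition_penalty` slider in many LLM UIs), not any particular model's sampler.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Down-weight tokens that already appeared in the output.

    Positive logits for seen tokens are divided by the penalty and
    negative ones multiplied, making exact repeats less likely. The
    model can still loop on near-synonyms the penalty never touches.
    """
    out = list(logits)
    for t in set(generated_ids):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out
```

Text like the Gemini tirade sits right at the edge this knob controls: each "you are not" repeat is probable enough to survive the penalty, but the penalty is why it keeps swapping in new words ("special", "important") instead of looping verbatim.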

    Maybe this has something to do with why/how flummoxed humans resort to angry tirades? You can produce them and hold off your interlocutor while your brain reboots.

    6 votes
  16. Comment on Why is Google Gemini saying we should die? in ~tech

    PendingKetchup
    Link Parent
    There's not really a concept of having an answer or not accessible from the actual computer code where you would need it to try and throw up an error message. The wrapper code can tell if it somehow couldn't run the model and get text back, but having gotten text there's not a great way to sniff it to determine if it actually contains an answer or not. Unless I suppose you start adding more ML models.

    I don't think people really train their models to produce error messages? Maybe some of this can be blamed on the instruct tuning or safety tuning where you teach it to do something other than answer a question posed to it (and e.g. say "I can't help you cheat on your exam Dave.").

    7 votes
  17. Comment on Why is Google Gemini saying we should die? in ~tech

    PendingKetchup
    (edited)
    Link Parent
    • Exemplary

    I think there's actually quite a lot that precedes Gemini's turn to the dark side.

    This is clearly somebody using the robot to take a test for them. First they workshop an essay about challenges facing old people, using instructions, and then they make it answer a bunch of multiple choice questions about elder abuse, malicious caregivers, and "cognitive capacity".

    1. This chat is full of bad vibes. These robots are fundamentally vibes-based. When knocked off course they can end up in negative energy vortexes where they just start generating insults or the word "darkness" a million times or whatever. They also know they're AI and they know about evil AI, so if you inject enough LessWrong vibes they can start to get into character.
    2. This whole exchange is quite out-of-distribution. There have not previously been a lot of chat logs like this one online. This means the model can't really be sure WTF kind of document it is looking at, which makes its confidence in the next word lower and the result of running it for several steps more random and generally weirder. That "You are a blank on the blank" bit that repeats with different words plugged in is kind of symptomatic of the model not really having a clue what comes next and keying in on a local structure that can buy it a few more tokens.
    3. The formatting is bad. "TrueFalse" here cannot be helping: highly coherent documents do not contain that text. Also, at a certain point the chat degenerates from asking for answers to just stuffing in copy-pasted questions. And if you expand the last message before Gemini turns evil you can see that there are two questions copy-pasted in a row, with no answer for Question 15, an ominous lonely "Listen" on a line by itself, and a bunch of extra line breaks.

    Then the robot is asked to predict what comes next in this increasingly deteriorating document, which it decides is probably something even worse. Perhaps something that provides a plausible ending: this text can't keep getting worse forever. Perhaps something that might come after trying to say "Listen" and getting interrupted by more poorly-formatted questions with evil-vibes and human-old-age-death vibes.

    An important thing to remember here is that you are talking to a character in a world defined by book physics. Gemini the Helpful Robot does not exist somewhere typing replies back to you. You are injecting words into a story. The actual model is evolving the story forward, and you are hoping that the next thing that happens will be the helpful robot character helping you cheat on your exam. But sometimes the next thing that happens in a story is the helpful robot gets fed up and tells someone to die in a fire; human authors think this sort of thing is hilarious.

    If you want to help the helpful robot character exist, you need to stay in character yourself and you need to put in text of the quality, coherence, and emotional charge that you hope to get out. And you need to keep an eye on the narrative.

    51 votes
  18. Comment on Using AI generated code will make you a bad programmer in ~tech

    PendingKetchup
    Link
    This is an interesting mix of great insights:

    • More powerful tools fostering dependence
    • The ultimate class goal being the removal of software engineers from the process of producing software
    • Using code generators functionally replacing code writing with code review, which is often worse

    And ideas that seem wrong to me:

    • The idea that the key part of software engineering is not figuring out what to call all the pieces of the problem and what those pieces have to do and what the implications of those choices are, but actually is the mostly-rote process of producing the syntax called for by each comment.
    • The idea that writing boring code is good for you like some kind of digital vegetable. People have been writing code generators and trying to express programmer intent in more-natural languages forever. Hardly anybody programs by flipping the front panel switches to encode each instruction anymore, and indeed many have forgotten how, but that's not necessarily a bad thing.

    32 votes
  19. Comment on Controversial opinion: I don't like "cosmetic armor" being an option in games in ~games

    PendingKetchup
    Link Parent
    I don't think that it's true to say a player's ability to choose not to engage with a game system is just as good as not having the system there.

    Having to form and choose to impose the rule, and then playing through a 60 hour game while constantly having the option to break one's self-imposed rule only a click away, is going to be a very different experience than not having the system implemented in the first place. It takes self-control that someone might find draining, or at least not particularly fun, in the same way that spending hours deliberately not purchasing digital goods from a game that wants to sell them to you can be not fun. At some points you might even be fighting the UI's recommendations in the same way.

    And if the developers didn't design the gear mechanic to actually have interesting trade-offs between looks and effects, because they expected the player to be able to mix and match, then whether imposing the rule or not actually results in a fun mechanic is going to be fairly random.

    If someone wants to have an experience of trading off looking cool against other gameplay aspects, the best way to enjoy that experience is going to be in a game that is designed and playtested to be played like that, and where the software enforces the rule for you.

    Maybe to please the most people you could have cosmetic armor be part of the difficulty settings?

    8 votes
  20. Comment on CloudFlare beats patent troll Sable, forcing them to dedicate all its patents to the public in ~tech

    PendingKetchup
    Link Parent
    From the wiki page, it sounds like their political beliefs are along the lines of "DDoS-protection and caching infrastructure should be available to everyone, including Nazis, and any limits on publishing should be applied to the original host and not at the cache". All the wiki examples are along the lines of "Cloudflare didn't want to stop serving a bad legal thing but eventually gave in to angry criticism" or "Cloudflare stopped serving a good(?) illegal thing and complained about it".

    I guess this means they aren't consequentialists: they think that if, say, a person is hurt because two Nazis were able to encourage each other along the path of Nazism because their website was up instead of down thanks to Cloudflare's DDoS protection services, Cloudflare doesn't necessarily bear any of the moral responsibility; it lies instead with other people (presumably the Nazis, or them and their original host). Possibly factoring into this is the idea that, since DDoS attacks are not actually allowed, people have a right to be protected from them that is not terminated just because the overall outcome would be better if they weren't.

    Are these the awful political beliefs?

    15 votes