Gaywallet's recent activity

  1. Comment on Texas is replacing thousands of human exam graders with AI in ~tech

    Gaywallet
    Link Parent

    the core point isn't about mastering a topic, it's mastering how to learn

    It's more than just that, it's also about building persistence, learning to do work you don't enjoy but is asked of you, and it's about being consistent. All of these are necessary skills to make it in a competitive job market, but they're also useful life skills in general. You may not always click with every subject or skill in the world, and being able to stick with something can be important. It's a heck of a lot easier when it's something you enjoy, however, such as learning a new hobby. The importance of these skills is also a reflection of poorly optimized working environments (as many of the things you don't like doing are tasks someone else enjoys doing very much).

    3 votes
  2. Comment on Why large language models like ChatGPT treat Black- and White-sounding names differently in ~tech

    Gaywallet
    Link Parent

    One thing the user could do though, is use blinding: don’t give the LLM a name when it’s not supposed to make decisions based on it.

    Attempts to remove demographic information, or pieces of information which can be associated with demographics, can result in creating greater inequality.

    I find discussions where we focus on mechanistic methods to "fix" bias in AI/ML/LLMs miss the larger picture. Humans are biased, and the data these models are trained on is biased. I believe that we should focus on what methods reduce bias in humans, and see if we can't transfer some of those ideas mechanistically to models.
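    For what it's worth, here's a minimal sketch (in Python, with a made-up name list and redaction helper) of the kind of blinding the parent comment suggests. Note that it only masks explicit names, which is part of why I don't think mechanistic fixes like this get at the underlying problem: demographics leak through plenty of other signals in the text.

    ```python
    import re

    def redact_names(text: str, names: list[str]) -> str:
        """Replace each known name with a neutral placeholder before the text reaches a model."""
        for name in names:
            text = re.sub(re.escape(name), "[CANDIDATE]", text, flags=re.IGNORECASE)
        return text

    # Hypothetical example: the name and prompt are made up for illustration.
    prompt = "Should we invite Lakisha Washington to a second interview? Resume: ..."
    print(redact_names(prompt, ["Lakisha Washington"]))
    # Should we invite [CANDIDATE] to a second interview? Resume: ...
    ```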

    10 votes
  3. Comment on Chechnya 'bans music that is too fast or too slow' in ~music

    Gaywallet
    Link

    Plenty of deep house and melodic techno are fine. Some bass music is in that range too. Rave music exists at every bpm!

  4. Comment on AI assists clinicians in responding to patient messages at Stanford Medicine in ~science

    Gaywallet
    Link

    Relevant quote:

    After the pilot period, Garcia and the team issued a survey to the clinicians, asking them to report on their experience. They reported that the AI-generated drafts lightened the cognitive load of responding to patient messages and improved their feelings of work exhaustion despite objective findings that the drafts did not save the clinicians’ time. That’s still a win, Garcia said, as this tool is likely to have even broader applicability and impact as it evolves.

    Link to paper in JAMA (currently open access)

    3 votes
  5. Comment on Why do some people posting ChatGPT answer to the discussion/debate/question? in ~tech

    Gaywallet
    Link Parent

    You know, I had never thought of this as a reason why people without expertise on a subject will chime in by repeating common knowledge that is actually incorrect or harmful. I wish people wouldn't chime in with their incorrect knowledge about ideas such as the neurobiology of drugs, given the harm that it can cause. It's extremely frustrating when it happens, and I wish people wouldn't continue to give false credibility to misconceptions, but this gives me a lot more compassion towards people, because they might just want to be included in a conversation 😔

    3 votes
  6. Comment on As obesity rises, Big Food and dietitians push ‘anti-diet’ advice in ~health

    Gaywallet
    Link Parent

    I don't know how well regulated the dietician trade is

    As someone who's spent a ton of time learning about what's real knowledge with regards to the dietary sciences, I can tell you that there are a ton of MDs who don't even understand the real state of dietary sciences. The literature has been so incredibly poisoned, and not just by capitalism representing its own interests, but by other doctors with an idea, such as Ancel Keys being convinced that fat = bad. Some of these highly credentialed individuals have zeroed in on true statements, such as refined sugar = bad, but many go off to make entire careers publishing science which just muddies the waters because they are reaching their results with bad study design or, worse, p-hacking and other forms of deception. To make things more confusing, it became a political issue when the food pyramid was developed, further complicating who was contributing to research and making it difficult to sort the bunk studies from the real ones.

    The best summary of dietary sciences I've seen by someone else was this talk by a Stanford PhD who works in dietary sciences. I've set the link at the relevant point in the talk, where he goes into a summary of what's actually supported by the science, and it's at a high enough level and simple enough to understand that just about anyone can consume it and come away making slightly healthier choices in their day to day.

    3 votes
  7. Comment on ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza in ~tech

    Gaywallet
    Link Parent

    The biggest issue here seems to be one of process, not the tech which lies underneath it. I take issue with any process which increases civilian casualties. There were decisions made about process which optimized for generating more targets, making it easy to ignore killing civilians, making it easy to sign off on killing anyone, and pushing people to keep working the list of people to kill. These are all conscious process decisions, and I believe these decisions, especially considering they are decisions about killing innocents, are what we should be talking about.

    6 votes
  8. Comment on ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza in ~tech

    Gaywallet
    Link Parent

    I'm trying to be as nice as possible here, but have you read the article? You've asked about hypotheticals which are not in the article (hallucinations), you've started a discussion on the semantics of AI (not addressed in the article), and now you're asking questions about a process which is explained in the article.

    The following quote paints a pretty good picture of how much information they were provided, and perhaps more importantly, what actions they were actually taking when presented with data:

    “In any case, an independent examination by an [intelligence] analyst is required, which verifies that the identified targets are legitimate targets for attack, in accordance with the conditions set forth in IDF directives and international law.

    However, sources said that the only human supervision protocol in place before bombing the houses of suspected “junior” militants marked by Lavender was to conduct a single check: ensuring that the AI-selected target is male rather than female. The assumption in the army was that if it were a woman, the machine had likely made a mistake, because there are no women among the ranks of the military wings of Hamas and PIJ.

    And that's not the only quote which touches on the specifics of what is known as well as how the system is actually being used.

    It's a bit tangential here, but I also think you're missing the forest for the trees. The article clearly outlines how there was a conscious decision made here to provide more targets. Another relevant quote:

    “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”

    They wanted to optimize for how quickly they could kill targets, not for how accurate those targets were, and that's reflected in the efficiency choices described above. The push for more targets made it okay to reduce "identified targets are legitimate targets for attack" down to simply "a single check: ensuring that the AI-selected target is male rather than female".

    No amount of information or display of information would make a difference here. In fact, I would be highly surprised if most of the people in charge of interpreting the output of this machine or taking action on its outputs had more than a cursory understanding of statistics. They are probably exactly the kind of people who wouldn't understand any of the points you are bringing up: they wouldn't know the difference between ChatGPT and a random forest, and that's a crucial part of what this article is about and why this discussion feels pedantic to me. The abstraction is being adopted because these people already made a moral decision about their actions. They already weighed whether it was okay to kill civilians, and in fact they put a number on how many civilian casualties were acceptable.

    In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

    Allowing a machine, any machine regardless of whether it's AI or simple statistics, to make these calls removes or at least lessens the burden or requirement of conscious thought about what you are doing. When you had to research these people, inevitably you'd learn about their lives and the lives of all the civilians around them. You'd find some people are not valid targets, and you'd probably make some mistakes where you'd know a lot about someone or several people you just killed who were not the target or were false targets. You get to sit with the burden of taking those lives. The more you put it on a machine, the more you can put that out of mind. Can we please focus on that, or anything else the article is talking about?

    8 votes
  9. Comment on ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza in ~tech

    Gaywallet
    Link Parent

    There's always a lot of talk around AGI, or things like how it's unexplainable, which wouldn't really apply to logistic regression, which is perfectly explainable.

    None of that is the focus of the article. The article is primarily about the tools that are being used here and the processes that cropped up around them. The primary thrust, if I had to put it anywhere, is around how increasing the level of abstraction away from the decision to murder someone makes it a lot easier to accept killing civilians as an outcome. I think it rightfully highlights a lot of the problems with allowing a non-human entity to be given this authority, and how it may have led to the adoption of additional processes which ramped up the killing of civilians.

    13 votes
  10. Comment on ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza in ~tech

    Gaywallet
    Link Parent

    As an example, if you start talking about "AI hallucinations", when the "AI" is boosted trees, like, boosted trees don't "hallucinate".

    The word hallucinate isn't even in the article and no one is saying that. I understand the confusion about AI being applied to ML, but the reality is that, in the vernacular, the two terms are interchangeable and they are rapidly becoming broad umbrella terms to describe more than just generative AI models. Your average human is more likely to know the word hallucination and think about the possibility of AI hallucinations than to understand the history of the terms AI and ML and which is appropriate to classify what (let alone deeper stats knowledge, such as what a random forest is), and the word is currently undergoing a change that you don't have any control over. I say this in the nicest way I possibly can, but spending time and energy arguing about what the "correct" definition is just isn't very productive and at worst can have people writing you off as a pedant.

    7 votes
  11. Comment on ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza in ~tech

    Gaywallet
    Link

    A few choice quotes

    One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

    Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

    In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

    23 votes
  12. Comment on You don't need to document everything in ~tech

    Gaywallet
    Link Parent

    There are actually some fantastic studies out there on the exact experience being described by the author. For example, in a study titled "media usage diminishes memory for experiences", the researchers found that the very act of recording an experience seems to prevent people from 'fully experiencing' the moment. When tested in a variety of ways about the experience, individuals who recorded it for their own use or for sharing all scored lower on measures of memory than people who directly experienced the event and were instructed not to record it. Of note, memory was tested both immediately after and one week later, and the responses might vary if memory were tested again at a later point in time, notably because those who recorded the event could then review and relive the experience in a more robust way than purely through their own memories, but I would caution against jumping to any conclusions.

    I think it's perfectly reasonable to take small snippets or recordings of an experience, both as a means to ensure that you are staying in the moment as much as possible and as a trigger for those memories if you wish to relive them in the future in a more concrete way than just remembering.

    16 votes