cykhic's recent activity

  1. Comment on Hey, monthly mystery commenters, what's up with the hit-and-runs? in ~tildes

    I'm not sure that what you're describing is the same as what you're replying to.

    I don't think [A] holds up nowadays because [X] [Y] [Z]. (emphasis mine)

    This comes across to me as a decently well-considered opinion, open to engagement and disagreement.

    Whereas:

    I just think [B] is trash. Like whatever you like, but you're being nostalgic (emphasis mine)

    If I look only at this sentence, it comes across to me as pretty dismissive. "Whatever you like" feels handwavey, and "you're being nostalgic" feels like an overriding judgement that is not open for dissent. Besides that, "I think [B] is trash" is both harsh and not backed up by any evidence other than the writer's opinion.

    I don't think that responding to the second comment with "righteous fire, brimstone and daggers" is right. But that comment itself is not adding anything useful to the discussion, and I would not enjoy seeing it on Tildes.

    16 votes
  2. Comment on Coverage of Gaza War in the New York Times and other major newspapers heavily favored Israel, analysis shows in ~news

    I agree with your characterisation of headlines vs. article bodies.

    But my question was, why use headlines only for the third finding? Would your characterisation not equally apply to the other three of the four findings?

    My concern is whether the authors ran the same analysis on both headlines and article bodies, and then reported whichever number came up worse. That would not be a fair analysis (that is, it would have a good chance of finding "evidence" of bias even in a perfectly fair newspaper). Their cherry-picking of the data corpus also raises the possibility that they cherry-picked the words they analysed in order to get the most skewed numbers possible, e.g. analysing "children" rather than other words like "civilian". If they did do this, it would further skew their analysis away from fair.
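
    To illustrate what I mean, here is a minimal sketch (with a made-up article structure, since I obviously don't have their dataset) of running the identical count on both corpora; a fair write-up would report both numbers rather than the worse one:

    ```python
    def mentions_per_1000_words(texts, terms):
        """Rate at which any of `terms` appears per 1000 words of `texts`."""
        words = [w.strip('.,"\'').lower() for t in texts for w in t.split()]
        hits = sum(1 for w in words if w in terms)
        return 1000 * hits / max(len(words), 1)

    def report(articles, terms):
        # `articles` is a hypothetical list of {"headline": ..., "body": ...} dicts.
        # Run the same statistic on BOTH corpora and report both, instead of
        # publishing whichever number looks worse.
        for corpus in ("headline", "body"):
            rate = mentions_per_1000_words([a[corpus] for a in articles], terms)
            print(f"{corpus}: {rate:.2f} mentions of {sorted(terms)} per 1000 words")
    ```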

    1 vote
  3. Comment on Coverage of Gaza War in the New York Times and other major newspapers heavily favored Israel, analysis shows in ~news

    The word "compassion" does not appear in the article. Where did you get this idea?

    Ah, sorry about that. I think I generally use quotes both to denote "this is what they said" and to denote "the concept of XYZ", which is also how I'm using them in this sentence. I'm not sure how to denote them better; maybe square brackets would work.

    That's not what the sentence says at all, you shouldn't read into any scientific findings as 'implications', that's how you insert your own bias.

    I think I disagree in the context of this article. After looking at the specific numbers they found, I tried to make a reasonable generalisation, for example: [the papers say 'antisemitism' more than 'islamophobia'] --> [this indicates more concern about violence against Jews than Muslims] --> [the papers are biased].

    I think doing this is reasonable because:

    1. Without generalising, the finding is so specific that it isn't helpful.
    2. There probably is some underlying correlate which gives rise to the observation, and which will cause other effects we care about.
    3. I think many readers will make the generalisation when they read each of those findings.

    While there is some risk of inserting my own bias, I (subjectively) feel like I made reasonable generalisations. In particular, I don't think it's worse than skipping the middle step and going straight from word frequencies to conclusions of bias.

    I'm still waiting for some good scientific papers on the conflict to come out, but if older papers hold similar results, it's extremely likely that we'll see systematic biases in how this conflict is represented based on the publisher and country of origin. 1 2 3 4

    Thanks for the links, these look significantly more rigorous and I am reading them now.

    3 votes
  4. Comment on Coverage of Gaza War in the New York Times and other major newspapers heavily favored Israel, analysis shows in ~news

    (edited)
    • Exemplary

    Archive link: https://archive.is/IxYxy


    My summary of the article:

    The authors analysed 1100 news articles from The New York Times, The Washington Post, and The Los Angeles Times, and concluded that these publications had a consistent pro-Israel bias, because:

    1. For every Israeli death, Israelis are mentioned 8 times, while for every Palestinian death, Palestinians are mentioned 0.5 times, in article bodies.

    2. Highly emotive words ("slaughter", "massacre", and "horrific") were used 60, 120, and 38 times more often, respectively, to refer to Israelis than to Palestinians, in article bodies.

    3. 6000 Palestinian children and 100 journalists were killed, but were only mentioned in headlines 2 times and 9 times respectively, out of 1100 headlines.

    4. "Antisemitism" was mentioned 549 times, while "Islamophobia" was mentioned 79 times, in article bodies, in the period before the "campus antisemitism" phenomenon.


    While I am sympathetic to this article's conclusions, I kind of doubt that they can credibly be drawn from its analysis.

    Their first finding was the weirdest, in my opinion, and made me do a double take.

    (my summary) 1. For every Israeli death, Israelis are mentioned 8 times, while for every Palestinian death, Palestinians are mentioned 0.5 times, in article bodies.

    If we look closer at their methodology:

    [...] the words “Israeli” or “Israel” appear more than “Palestinian” or variations thereof, even as Palestinian deaths far outpaced Israeli deaths. For every two Palestinian deaths, Palestinians are mentioned once. For every Israeli death, Israelis are mentioned eight times [...]

    If their goal is to measure the rate at which deaths are reported (edit: I had dropped the word "reported"), why are they counting the word "Israel" here? Are they also counting the word "Palestine"? That sounds like it would seriously conflate reporting on deaths with reporting on the conflict in general, where I would expect fair coverage to be roughly one-to-one. This is on top of the already-large leap of using "Israeli"/"Palestinian" to measure "the speaker's compassion for civilian deaths", when phrases like "Israeli government" or "Palestinian militants" are common.

    If we look at their summary, it shows 10286 mentions of "israeli" and 7045 mentions of "palestinian". If we take these as measuring coverage of the overall conflict (which, given their methodology, I think is the only halfway-reasonable use for this statistic), this is still consistent with their conclusion of pro-Israel bias, but a 10:7 bias is a much weaker conclusion than 16:1.

    I don't think that it's at all reasonable to include the words "Israel" and "Palestine" in the count for their original purpose, so although the numbers (after reinterpretation) still weakly support their conclusion, I can't help but feel like the authors were deliberately manipulating the presentation of their numbers to fit their preferred conclusion.
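
    For concreteness, here is the arithmetic behind the two framings, using the numbers quoted above:

    ```python
    # Mentions-per-death framing, as The Intercept presents it:
    israeli_mentions_per_death = 8        # "For every Israeli death, Israelis are mentioned eight times"
    palestinian_mentions_per_death = 0.5  # "For every two Palestinian deaths, Palestinians are mentioned once"
    print(israeli_mentions_per_death / palestinian_mentions_per_death)  # 16.0, i.e. a 16:1 skew

    # Raw mention counts from their summary, read as overall-coverage share:
    print(10286 / 7045)  # ~1.46, i.e. roughly 10:7
    ```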


    Maybe I was primed by the above, but their third finding also seems particularly suspicious.

    (my summary) 3. 6000 Palestinian children and 100 journalists were killed, but were only mentioned in headlines 2 times and 9 times respectively, out of 1100 headlines.

    The intended implication seems to be that although Palestinian children made up 30% (6000 of about 20000) of Palestinian casualties, they received only 0.2% (2 of 1100) of the headline attention these newspapers paid to the war.
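
    The arithmetic behind that contrast:

    ```python
    print(6000 / 20000)  # 0.30    -> children were ~30% of Palestinian deaths
    print(2 / 1100)      # ~0.0018 -> but appeared in ~0.2% of the 1100 headlines
    ```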

    But, why only headlines? Since The Intercept's other three findings refer to words found in the bodies of articles, I see no reason why they should add this restriction specifically for this finding, unless it is to cherry-pick this conclusion. I suspect that if one were to re-run this finding on article bodies, the number that comes out would be much less newsworthy.


    I don't have huge problems with their findings 2 and 4, apart from my general doubt that this word-counting methodology can give a good indication of (for finding 2) sympathy for Israeli vs. Palestinian deaths, or (for finding 4) attitudes towards Jews vs. Muslims.

    (edited to add: I did notice something the article seems to gloss over, which is that the vast majority of emotive language (in finding number 2) seems to be related to the Oct 7 event, rather than more generally split along Israeli/Palestinian lines. Although I'm less sure what conclusion to draw from this.)


    Probably this response will seem like an overreaction to you, but I personally get kind of ticked off when I see poor or lazy data analysis. I respect that the authors probably wanted to find some kind of non-controversial, objective measure of reporting bias, and I don't have a better methodology on hand, but I think what they have done is very much controversial and non-objective.

    I feel that people who don't already agree with them are going to spot the problems immediately, so someone who agrees with this article and signal-boosts it risks poisoning the well, by suggesting that "since it was broadcast widely, this is probably one of their stronger arguments, but it is wrong, which puts an upper bound on the correctness of the entire position".

    It's also not reassuring to me that the statistics appear to be deliberately massaged to prefer one conclusion, in an article which is specifically attempting to call out bias in reporting.

    25 votes
  5. Comment on Piracy is surging again because streaming execs ignored the lessons of the past in ~tech

    My point was that using emulators would still rely on the goodwill of continued support from its developers, which echoes the original issue of having to rely on Amazon/Disney/etc's goodwill in hosting TV shows.

    Physical copies would reduce such dependency on third parties, especially if you store them properly with humidity control and such. This is not expensive and, more importantly, it is entirely actionable by the average person.

    16 votes
  6. Comment on Piracy is surging again because streaming execs ignored the lessons of the past in ~tech

    GBA emulators might not be forward-compatible with future OSes, and patching the OS or emulator is out of their control (probably).

    In contrast, their physical games can always be played, assuming reasonable care and maintenance, which is within their control.

    9 votes
  7. Comment on Planes collide and catch fire at Japan’s busy Haneda airport, killing five. Hundreds evacuated safely. in ~transport

    Latest word is that it was the coast guard pilot's error.

    TOKYO (AP) — A transcript of communication between traffic control and two aircraft that collided and burst into flames at Tokyo’s Haneda Airport showed that only the larger Japan Airlines passenger flight was given permission to use the runway where a coast guard plane was preparing for takeoff.

    Source: https://apnews.com/article/japan-planes-collision-a350-fire-investigation-tokyo-f2f0950ed7acd0e978bb26c739132629

    In addition, I think that:

    1. ATC is less reliable than usually assumed, and
    2. human factors are more commonly to blame (as opposed to technical factors).

    The incidents — highlighted in preliminary F.A.A. safety reports but not publicly disclosed — were among a flurry of at least 46 close calls involving commercial airlines last month alone. [...] They were part of an alarming pattern of safety lapses and near misses in the skies and on the runways of the United States, a Times investigation found. [...] The incidents often occur at or near airports and are the result of human error, the agency’s internal records show.

    Source: https://archive.is/btNB0

    (tagging @pyeri for visibility)

    8 votes
  8. Comment on Q* - Clues to the puzzle? in ~tech

    • Exemplary

    Firstly, thanks for engaging in a clear and non-confrontational way.

    If I understand you correctly, these are your main points:

    • The probability of a technological singularity is low.
      • There is no evidence of a high probability of recursive self-improvement.
        • The recursiveness hinges on as-yet-unknown breakthroughs, which (by definition) we know nothing about.
      • In the meantime, we have some priors that the probability is low:
        • It hasn't happened for 40 years despite warnings.
        • Current AI doesn't seem close to recursive self-improvement.
      • If there is self-improvement, it will likely be slow (i.e. not an explosion), so we have time to react.
    • The cost of taking safety measures to reduce the probability of a singularity is high.
      • We will miss out on AI-assisted advances in many aspects of life, like energy and healthcare.
      • Safety measures, like all regulation, benefit larger entities like governments and corporations, who will wield increased power relative to the average person.

    This is, at its core, a position I respect. I agree that safety measures, depending on what exactly they are, can be very costly both in upfront cost and in opportunity cost. If we take as a given that the probability of a singularity is vanishingly low, then I would definitely agree that safety measures are not worth it.

    So I think our core disagreement is on how likely a singularity is. To be clear, I don't think a singularity is inevitable, or even >50%. If pressed, I would give roughly a 0.1% to 10% range, but mostly, I don't claim to know what the probability is. My confusion arises because many people seem to strongly believe that the probability is so incredibly low that it isn't worth considering, which doesn't seem correct to me.

    My previous comment was mostly a single example of one way I think a singularity could arise, which is definitely subject to my limited knowledge of computer security and such. I'm going to try to summarise below, more generally, why I have my beliefs:

    • I think the probability of a singularity is not low.
      • Recursive self-improvement is plausible.
        • It seems plausible to me that there exist future breakthroughs to be made.
        • AI research seems clearly recursive.
        • Therefore, each breakthrough may recursively trigger more breakthroughs, given enough density.
      • To me, our current trajectory does not suggest that the probability of a singularity is low.
        • "Not yet in 40 years" glosses over how fast progress has been.
        • Current AI is close to human level, which is close to the threshold for recursiveness.
    • We don't know if we have time to react.
      • We don't know how fast recursive self-improvement will be.
      • We don't know how to address this, let alone how long it will take.
    • A singularity is possibly very bad.
      • I think it is unlikely to share human values or moral qualms.
      • If it has a lot of power as well, it will likely do things we don't necessarily want.

    You mentioned that predictions of recursive self-improvement depend on "mechanisms that are by definition not invented yet and so can't be reasoned about". It's true that the future advances are not known. But I think there is a big difference between "unknown" and "unknowable". For example, we don't know whether P=NP, but we know that every problem in NP can be reduced to 3-SAT.

    I do think it's possible to know some things about hypothetical AI breakthroughs, and it seems that you think so too:

    "We will almost certainly achieve - I won't say entirely self-directed, but AI assisted - breakthroughs in science [...]

    More generally, people seem uniformly optimistic about AI capabilities. But to the same extent that we expect scientific breakthroughs, shouldn't we also expect breakthroughs in AI technology?

    Suppose that human researchers (AI-assisted or otherwise) advance AI to the point where it can observe evidence and draw conclusions about as reliably as human experts can. Supposing this is possible, isn't it at least imaginable that AI then takes over the human role in AI research, and proceeds to make more discoveries?

    That alone, though, doesn't prove there will be a singularity. We would also need to assume that there are sufficient numbers of potential breakthroughs, in close proximity, such that a human-expert-level AI researcher can advance itself without human input.

    I think that it's not impossible that this is true. The transformer model invented in 2017 (which later ended up as the T in GPT) was a stripped-down version of previous models, "based solely on attention mechanisms, dispensing with recurrence and convolutions entirely". It still managed to set new records on its benchmarks, which were then broken again several times over the next year.

    I haven't taken the time to fully understand how exactly every advancement worked, but they sound to me like relatively simple changes resulting in large improvements. Add to that the steady pace of releases by OpenAI and other labs, each of which seems noticeably better than what came before. In this light, saying that the singularity has been "just around the corner for 40 years" seems a bit dismissive, considering that AI is now more capable, and that progress still seems to have momentum.

    You mention that we will have time to react. But that is not obviously true to me, considering that we don't know exactly how fast it will be, and we don't know what we would do in response.

    I think the general feeling I have is that people are too dismissive of the discontinuity in the velocity of AI progress that may happen when AI is able to substitute for humans in the research process. It reminds me of people taking selfish actions during the pandemic, whereby they gain some personal convenience but the replication factor of the virus goes from (say) 0.95 to (say) 1.05. Passing 1.00 was a discontinuity: the difference between "the virus dies out" and "the virus spreads, and has the chance to mutate and spread even more". It also reminds me of the melting ice caps, where melting accelerates further warming in a feedback cycle, making it that much harder to walk back.

    I'm getting a bit emotionally exhausted, so I'll wrap up. Intuitively, the probability that advanced AI gives us beneficial technological advances seems to me to be on the same order of magnitude as the probability that it bootstraps itself far beyond our understanding, in a feedback loop whose mechanism seems very possible. And the degree to which this intelligence could benefit humanity seems to be on the same order of magnitude as the harm it could do if it is not aligned with our interests. So I'm confused that a vast majority of people seem to strongly believe that such a mechanism is not possible.

    (I'm aware that I haven't addressed every bullet point in the summary of my views, because I'm just too tired at this point. If anyone is somehow still reading till this point and wants to rebut them, I'd appreciate steelmanning these points and not just assuming I'm stupid. Thanks in advance.)

    10 votes
  9. Comment on Q* - Clues to the puzzle? in ~tech

    (edited)

    Disclaimer: I am not a domain expert in AI, but I have a CS degree and work in tech.

    I strongly feel that most people underestimate the existential risk posed by AI. I'd like to explain my reasoning here, and if you still feel I'm wrong, I'm happy to listen and maybe update my beliefs.

    I feel that the general perception is that AI becomes an existential risk when it is "self aware", as you mention. People then (correctly!) object that these models don't appear to be self-aware, and in any case we don't really know what "consciousness" is.

    I don't know where the fixation on self-awareness comes from. Perhaps it's because in every movie, the AI apocalypse begins when "Skynet/HAL/Ultron becomes self-aware". But to me, it's clear that AI existential risk begins when AI becomes self-replicating, and not necessarily self-aware.

    Imagine this scenario:

    • Suppose Q* really has a low hallucination rate, can learn from tiny data sets, and can outperform humans at hard problems in fields such as physics, chemistry and biology (as described in Amarok's summary on this page).
    • Suppose that somewhere in Q*'s training data is the Linux kernel, and the details of all CVE vulnerabilities.
    • Some hacker, somewhere, puts Q* on a loop in their terminal, with the initial prompt to "put yourself onto a trojan horse which spreads itself over the internet and changes its own code frequently to avoid antivirus detection, and then mine bitcoin and send it to me".

    It seems fairly obvious to me that there is a risk here that some kind of self-replicating, self-modifying, unstoppable AI botnet is spawned from this. I don't know what exactly the probability is, but it seems reasonable that, given the scenario, the botnet is spawned somewhere between 0.001% and 10% of the time. It doesn't matter whether this botnet is "self-aware". It matters that (assuming it can find any of the numerous arbitrary-code-execution vulnerabilities in various OSes) it has access to basically unlimited compute and access to anything else which is connected to the internet. That includes humans who read things posted on the internet.

    It doesn't matter what the botnet was originally intended to do. At any point in its self-modification, its goals could easily have drifted. But whatever it ends up optimising for is very unlikely to be connected to anything that humanity wants.

    A speaker and a microphone don't need consciousness or intent to cause a screeching feedback loop, limited only by the maximum volume of the speaker. Similarly, an AI doesn't need self-awareness to enter a self-replicating loop, limited only by... what, exactly?

    This is just one scenario I can think of where an AI has a possibility of making itself self-replicating and self-modifying. I feel that there are many others that I can't think of.

    Do you agree that under the scenario above, there is a >0.001% chance of such an existential threat developing?

    • If so, what measures do you think are appropriate to reduce it?
    • If not, what do you think the probability is, and what is the threshold for when the risk becomes unacceptable?
    • Also, how often does the triggering scenario (or something like it) occur, under a "regulated AI" environment and a "fully democratised AI" or "everyone uses AI" environment?

    Again, I'm not an expert, and I'm entertaining the possibility that I could be wrong. I'm happy to hear counterarguments in that case.

    30 votes
  10. Comment on Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say in ~tech

    My (not particularly informed) model is that agriculture scales better than hunting-gathering.

    If hunters need to travel up to 1km to gather enough food for 100 people, then they would need to venture further for 200 people. There are increasing marginal costs to increasing food production, as the best sources nearby are picked clean first.

    Meanwhile, agriculture presumably scales linearly, supporting a roughly constant number of people per farmer. Economies of scale are easier because agriculture is in a fixed location, and may push that ratio even higher.
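
    A toy model of the shape I have in mind (all parameters invented; the point is just the shape of the curves): if the area foraged scales with the population fed, the radius hunters must cover grows like the square root of the population, while the number of farmers needed grows only linearly.

    ```python
    import math

    AREA_PER_PERSON_KM2 = 0.03  # invented: foraging area needed to feed one person
    PEOPLE_PER_FARMER = 5       # invented: people supported by one farmer

    def foraging_radius_km(population):
        # Foraged area grows linearly with population, so the radius hunters
        # must travel grows like sqrt(population): increasing marginal cost.
        return math.sqrt(population * AREA_PER_PERSON_KM2 / math.pi)

    def farmers_needed(population):
        # Farming scales linearly: a roughly constant ratio of people per farmer.
        return population / PEOPLE_PER_FARMER

    for n in (100, 200, 400, 800):
        print(n, round(foraging_radius_km(n), 2), farmers_needed(n))
    ```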

    4 votes
  11. Comment on Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say in ~tech

    I don't think what anyone may have seen at a bus stop is more likely to spark a solution than the vast tracts of, say, Wikipedia that an AI would have been trained on, which I don't think any human has exhaustively read.

    (And if Street View has a picture of the bus stop...)

    3 votes
  12. Comment on Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say in ~tech

    So if I get what you're saying, humans still have a niche because we are physically embodied and can take arbitrary physical action faster than, for instance, an assembly line robot for the same task can be designed and produced.

    While that is true in the short term, I don't feel that this will hold true in the long term.

    9 votes
  13. Comment on Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say in ~tech

    I agree that specific jobs or skills definitely can come into existence to replace those which are made obsolete. And, for example, when machines (mostly) replaced human physical labour, it was still clear that knowledge and skills were the human niche.

    But, under the assumption that AI will also replace "cognition" generally, I think it's much less clear that there is anything at all left which humans can do better than AI.

    3 votes
  14. Comment on How Norway's EV rising star Easee fell foul of Swedish regulators, which took it to the brink of bankruptcy in ~transport

    Nitpick: I don't think this is actually a good example of Pascal's Mugging. Pascal's Mugging is supposed to illustrate the absurdity of a utility function that takes very unlikely but very large outcomes seriously, which is the opposite of the point I think you're trying to convey.

    1 vote
  15. Comment on How a tiny pacific island became a global capital of cybercrime in ~tech

    I felt a little sad reading this --- I and several people I know used to host personal websites (or projects, or hackathon products) on various .tk and .ml domains, because it was free and we were broke students.

    I guess that makes me a crook?

    “It was obvious from the start that this was not going to turn out well,” says Levine, coauthor of The Internet for Dummies. “The only people who want those domains are crooks.”

    I'm a bit confused by another thing: the article says that Freenom provides a domain "in exchange for having advertisements hosted on their websites", but I don't remember ever being asked to host ads, nor did I notice any when visiting my own sites. My impression was that they got revenue by selling 10-year plans, so that users could avoid the trouble of re-registering their free domain during a tight two-week window every year.

    It makes sense that TLD providers should also be responsible for policing the sites hosted there, and it seems like Freenom didn't do a good enough job. But it felt very weird seeing Freenom painted as a huge villain here when I thought for the longest time that they were great.

    I guess it's gonna be a bit more expensive to be an indie developer from now on.

    6 votes
  16. Comment on I skipped to the ending in ~life

    (edited)
    • Exemplary

    I hadn't heard of "dotted line manager" before, but it sounds like his manager on the loan team (implying a dotted line on the org chart, while his real manager has a solid line).

    Skip manager is your manager's manager (i.e., going upwards and skipping one manager).

    'ticket to PR, LGTM, release, and get metrics in an hour' is saying that the lifecycle of a code change is very fast.

    • A ticket is a request for some specified work.
    • A PR is a pull request, which means you have finished implementing the code which fulfils the request in the ticket, and you would like someone to approve the PR so that your code can be merged into the main branch of code.
    • LGTM is "looks good to me", which is what reviewers customarily say when they agree to merging a PR into the main branch.
    • release is when your code has finished merging into the main branch and has been deployed into production systems which are actually running.
    • getting metrics means you are monitoring your production system in real time, to confirm that the changes you just merged are actually working correctly.

    An hour is almost unbelievably fast for this process, I would expect days and sometimes weeks.
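
    To make that timeline concrete, here is a toy sketch (all durations invented) contrasting the one-hour lifecycle described above with a more typical multi-day one:

    ```python
    from datetime import timedelta

    STAGES = ["ticket picked up", "PR opened", "LGTM", "released", "metrics confirmed"]

    # Invented cumulative durations, measured from picking up the ticket.
    fast = [timedelta(), timedelta(minutes=20), timedelta(minutes=30),
            timedelta(minutes=45), timedelta(hours=1)]
    typical = [timedelta(), timedelta(days=2), timedelta(days=3),
               timedelta(days=5), timedelta(days=5, hours=2)]

    for stage, f, t in zip(STAGES, fast, typical):
        print(f"{stage:18} fast: {str(f):>8}   typical: {str(t):>16}")
    ```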

    15 votes
  17. Comment on Have mass migrations ever happened for positive reasons instead of because hardship or lack of opportunity in their home countries? in ~talk

    My uninformed model:

    Much like how it's easier to smash a glass than to make one, it is a lot easier for a region to be destabilised than built up, or for a group to become oppressed than venerated.

    I imagine that the same differential of "goodness" would cause roughly the same amount of migration, just that things collapse far more often than they suddenly become amazing.

    For instance, compare the length of Wikipedia's Lists of wars or List of natural disasters to the length of their Category: Economic booms.

    2 votes
  18. Comment on As Gazans scrounge for food and water, Hamas sits on a rich trove of supplies in ~misc

    I can't see Hamas using their stockpile to supply their citizens --- three to four months of supplies for 40000 people is only about 2 to 2.5 days' worth for 2 million people, before distribution costs.

    This probably also puts into perspective why Israel is so reluctant to allow humanitarian aid. Even if this aid is earmarked for civilian use only, Hamas would only need to divert 2% to fully supply themselves, which seems very possible in the general chaos.
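
    The back-of-envelope arithmetic, for anyone checking:

    ```python
    fighters = 40000
    population = 2000000

    # Three to four months of supplies for 40000 people, spread over 2 million:
    for months in (3, 4):
        person_days = fighters * months * 30
        print(months, "months ->", person_days / population, "days")  # 1.8 and 2.4

    # Fraction of general aid that would fully supply the fighters:
    print(fighters / population)  # 0.02 -> 2%
    ```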

    10 votes
  19. Comment on Regular Americans are getting richer in ~finance

    I'm curious, what would your predictions have been before reading this report?

    I haven't thought too hard about the results, but the credit card numbers don't seem overly surprising. I imagine the higher federal funds rate led to some increase in credit card APRs, and that the APR increase would be larger than the federal rate increase because of a materially higher chance of default (as we also see in the stats).

    But also this is what I'm saying after seeing the numbers, so I could just be rationalising them. What do you think?

    Also, I can't fathom why mortgage delinquency would move in the opposite direction, so there are clearly things I'm missing here.

    3 votes
  20. Comment on As Gazans scrounge for food and water, Hamas sits on a rich trove of supplies in ~misc

    I have no specific knowledge here, but purely considering the military implications (specifically not commenting on humanitarian concerns), it seems strictly better to me to force the other side to ration their resources.

    Also, if/when Hamas draws from their stockpiles, it might be visible to Israeli military intelligence (whether on the ground or in the air), which opens up the possibility of targeting the stockpiles directly. So supplies will likely be exhausted much faster than the advertised three to four months.

    8 votes