entangledamplitude's recent activity

  1. Comment on Tab viewer/organizer? in ~tech

    entangledamplitude
    Link

    I find OneTab to be convenient for such purposes. All tabs go into a list, then I can export and process the list (Eg: remove all lines with wikipedia) and re-import the list and open all tabs.

    If you're a little more technically adept, TabFS is quite fantastic.

    As for iOS, my solution so far is to transfer all iOS tabs to my laptop (through Firefox Sync) and then handle it the same way.
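The export-and-process step can be sketched in a few lines of Python. This is a minimal illustration, not part of the original comment: the exported lines and the exclusion pattern are hypothetical examples (OneTab's export is roughly "URL | title" per line).

```python
# Filter an exported OneTab list: drop lines matching unwanted
# patterns (e.g. wikipedia), keep everything else for re-import.
import re

def filter_tab_list(lines, exclude_patterns):
    """Return the lines that match none of the exclude patterns."""
    compiled = [re.compile(p, re.IGNORECASE) for p in exclude_patterns]
    return [ln for ln in lines if not any(c.search(ln) for c in compiled)]

# Hypothetical exported list:
exported = [
    "https://en.wikipedia.org/wiki/OneTab | OneTab - Wikipedia",
    "https://example.com/article | An article",
]
kept = filter_tab_list(exported, [r"wikipedia\.org"])
```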

    3 votes
  2. Comment on The world loses under Bill Gates’ vaccine colonialism in ~health.coronavirus

    entangledamplitude
    Link Parent

    Fair enough, but that feels like a lost opportunity on their part :-)

    1 vote
  3. Comment on How should we understand the warnings about artificial intelligence? in ~science

    entangledamplitude
    (edited)
    Link

    Speaking as someone who understands a good chunk of the technical details in “AI”, IMHO, the statement/article OP links to is a massive bait-and-switch. The first part makes it appear as if computing powered AI is somehow going to be much better than humans. Rest assured that we are far from that right now. “AI” could have been used to think up moves in a game of Go, but to actually move the damn pieces reliably required a human!

    The real worry, espoused in the second half of the article, is how we’re currently rushing to apply this stupid AI at scale — from moderating content online, to making decisions about credit, to automated facial recognition, to micromanaging human labor, etc — and how that stupid AI is most often subtly biased and optimizing for something different from what we really want to optimize for.

    Layer on top of this a messed-up system of incentives percolating through our economic system, and that sets up the landscape of the problem. In fact, it might be argued that corporations/bureaucracies which run by process are already an AI — only they use humans as substrates, rather than silicon — and illustrate all the problems we’re worried about with forecasted “runaway AI”. In that sense, Kahneman’s comments in the article are both underwhelming and misdirected.

    4 votes
  4. Comment on The world loses under Bill Gates’ vaccine colonialism in ~health.coronavirus

    entangledamplitude
    Link Parent

    Was this data/report ever released? I’d love to get an understanding of the healthcare facilities in different regions!

    1 vote
  5. Comment on Man against marketing in ~tech

    entangledamplitude
    (edited)
    Link

    By making fungible that which was once not (eg: waiting in the checkout line at the grocery store couldn't have been used to respond to an email, or check a few tweets), technology that promotes "convenience" has the insidious tendency to absorb every nugget of human attention. Coupled with the business model where companies directly monetize the attention rather than any other part of the "transaction" (much older than modern smartphone technology), we've unwittingly set up a bad evolutionary race to the bottom. The heart of the problem is that the system is eroding every bit of "slack" in human attention and cognition -- kneecapping opportunities for the kind of silence and reflection necessary for deep awareness, creative realizations, etc.


    PS: The Adam Curtis documentary The Century of the Self has an interesting take on the historical evolution of this drive towards ubiquitous marketing. I'd also like to hear about other treatises on the subject, and thoughts on how we might move out of it.

    7 votes
  6. Comment on FOSS and UX (twitter thread) in ~comp

    entangledamplitude
    Link Parent

    Understood - so it's the software automatically choosing to install the update that bothers you, not an in-app "update/cancel" prompt à la VLC?

    Yup -- While I personally really like the convenience of system level package management (like in linux), I'm quite okay with the VLC model, and leave it to people's tastes and market/consensus mechanisms to find a healthy equilibrium.

    the reality that we would have a whole lot more botnets out there if Chrome didn't self-patch.

    In such cases, I think security patches ought to be decoupled from feature/interface changes. But honestly, I don't have a great solution.


    My meta concern is that so much of UX effort is focused on moving software to meet the users where they are, rather than helping people become generally better users of computing.

    2 votes
  7. Comment on FOSS and UX (twitter thread) in ~comp

    entangledamplitude
    (edited)
    Link Parent

    Maybe the fact that the application interface can be updated to change the interface willy-nilly after the user has invested countless hours figuring out a way to use it? Or that the app can unilaterally change the terms of use and hold your data/experience hostage? Or that they might update at really inopportune moments?

  8. Comment on FOSS and UX (twitter thread) in ~comp

    entangledamplitude
    Link Parent

    FOSS user experience might be stupid, but it is seldom hostile like a company holding its users captive — either to milk them of value or out of a misguided notion of simplicity.

    5 votes
  9. Comment on California will discourage students who are gifted at math in ~humanities

    entangledamplitude
    (edited)
    Link Parent

    keeping classes heterogeneous will help gifted students really solidify their knowledge. The act of teaching your fellow students what you've acquired is mutually beneficial.

    Let’s pause for a minute and imagine the absurdity of saying: “Let’s rate limit the excelling athletes to speed X — instead of putting effort into running faster, let them teach their peers.”

    Replacing a physical activity by a mental activity doesn’t make it any more sensible. At that young age, most people who “get” math can’t easily explain “how” they get it. It’s very much instinctive, like physical coordination.

    It’s definitely healthy to discuss homework exercises with peers and compare ideas, but the typical excelling student at that age doesn’t have enough self-awareness to understand their own learning process enough to teach lagging students. Forcing stilted interaction (among two parties lacking emotional maturity) is likely to engender frustration on both sides! The lagging student is more likely to consider themselves “stupid” and the excelling student will start considering academics/education “stupid”. Both will start acting out, but for different reasons.

    The idea that every student has to progress at a uniform rate is an artifact of an industrial approach to education (which was never a great idea in the first place). The best way to build humility and empathy in excelling students is to allow them to progress to challenges they struggle with (in at least some areas, while sharing common classes/platforms in other areas). That will allow them to understand their fellows far better than forced interactions ever can.

    20 votes
  10. Comment on Is there anything considered pseudoscientific/unscientific that you suspect has some truth to it and might be re-examined in the future? in ~talk

    entangledamplitude
    Link Parent

    That’s exactly right! What gets called “bias” and what gets called “variance” is a matter of perspective. It’s common to think of regularization as introducing priors/bias when the learning process can lead to high variance in models, but it’s equally useful to add noise/impediments (yet another form of bias, in some meta sense) when the learning process might converge incorrectly in some non-convex landscape. AKA exploration/exploitation trade off.
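The bias/variance point above can be made concrete with a toy sketch (not from the original comment; all numbers are arbitrary illustrations): deliberately biasing a linear estimator with a ridge penalty shrinks its variance across noisy resamples of the same problem.

```python
# Toy illustration: adding bias (an L2 / ridge penalty) reduces the
# variance of the fitted weights across noisy resamples.
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam*I)^(-1) X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

true_w = np.array([1.0, -2.0, 0.5])
ols_fits, ridge_fits = [], []
for _ in range(200):
    X = rng.normal(size=(10, 3))                      # few samples -> high variance
    y = X @ true_w + rng.normal(scale=2.0, size=10)   # noisy targets
    ols_fits.append(fit(X, y, 0.0))                   # unbiased, high variance
    ridge_fits.append(fit(X, y, 5.0))                 # biased, lower variance

var_ols = np.var(ols_fits, axis=0).sum()
var_ridge = np.var(ridge_fits, axis=0).sum()
```

Which form of "bias" helps depends on the landscape, exactly as the comment says; the sketch only shows the mechanical trade-off.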

    2 votes
  11. Comment on Is there anything considered pseudoscientific/unscientific that you suspect has some truth to it and might be re-examined in the future? in ~talk

    entangledamplitude
    Link Parent

    The justification for regularization is exactly to prevent overfitting. It’s incorrect that overfitted models are unsupported by evidence — in fact they (over)fit the evidence to the neglect of everything else. The problem is that they take the biases in the specific data/evidence so seriously that they underperform when generalizing to other situations. Think about what it means to use “early stopping” as a form of regularization (quite common in ML) — it means the optimization goal is so bad that the best way to truly optimize is to stop optimizing the stupid objective.
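For readers unfamiliar with it, the early-stopping mechanism can be sketched minimally (this example is added for illustration; the data, learning rate, and patience threshold are all made up): watch a held-out loss and halt gradient descent once it stops improving, regardless of what the training objective would still do.

```python
# Minimal early-stopping sketch: stop gradient descent when the
# held-out (validation) loss plateaus, keeping the best weights seen.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 10))
true_w = np.zeros(10)
true_w[0] = 1.0                                  # only one informative feature
y = X @ true_w + rng.normal(scale=0.5, size=30)

X_tr, y_tr = X[:20], y[:20]                      # training split
X_va, y_va = X[20:], y[20:]                      # held-out split

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(10)
best_w, best_va, patience = w.copy(), np.inf, 0
for step in range(10000):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
    w -= 0.01 * grad
    va = loss(w, X_va, y_va)
    if va < best_va - 1e-6:                      # validation still improving
        best_w, best_va, patience = w.copy(), va, 0
    else:
        patience += 1
        if patience >= 50:                       # plateaued: stop early
            break
```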

    So all kinds of impediments to naive optimization might serve as very effective regularization. “Rationality/Science” is just a name for a certain class of empirical models (often learned in quite a naive manner), so just like any other modeling, regularization that stops you from taking those models too seriously is likely to be very useful depending on context (it’s hard to give a simple prescription).

    If you want specific examples of how pseudo/randomness is helpful, look at the blog/book link I shared which mentions it helping foragers spread out their consumption and avoid over-grazing a fertile pasture — thereby allowing it time to replenish and helping their overarching goal of long-term survival.

    Regularization is particularly important to prevent short-term motivations from killing long-term survival.

    5 votes
  12. Comment on Is there anything considered pseudoscientific/unscientific that you suspect has some truth to it and might be re-examined in the future? in ~talk

    entangledamplitude
    Link

    Re: astrology/etc... It’s important to understand that at the system level (not individual level) it is often beneficial to promote pseudo-random behavior among individuals, instead of letting them “overfit” to the wrong ideas. In the context of machine learning / optimization, this is called “regularization” and is absolutely crucial for a learning/progress mechanism to work well. So imho, it’s quite possible to have useful practices which are simply sophisticated randomness generators.
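A toy illustration of that point, added here with made-up payoff numbers: a purely greedy agent can lock onto whichever option it happens to try first, while a small dose of injected randomness (epsilon-greedy, a standard exploration trick) guarantees every option keeps getting sampled.

```python
# Sketch: randomness as "regularization" against premature commitment.
# Two-armed bandit with made-up payoff probabilities.
import random

random.seed(0)

def run(epsilon, steps=2000):
    means = [0.3, 0.7]        # arm 1 is actually better
    counts = [0, 0]
    est = [0.0, 0.0]          # the agent's running value estimates
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)            # explore at random
        else:
            arm = 0 if est[0] >= est[1] else 1   # exploit best estimate
        reward = 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]  # incremental mean
        total += reward
    return total / steps

greedy = run(0.0)    # never explores; here it stays stuck on arm 0
explorer = run(0.1)  # 10% random actions eventually reveal the better arm
```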

    ——

    For a more thorough take on your question, I highly recommend this blog post (and the book it reviews): https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/

    4 votes
  13. Comment on Tech people of Tildes, what have you automated in your life? in ~tech

    entangledamplitude
    Link Parent

    Nit: That would be $150/TB.

    Starts at a minimum of $18/yr, and the per-unit price keeps dropping as the total quantity goes up.

    Btw, FYI, “credit card” is a generic placeholder in payments that typically also includes/accepts debit cards.

    3 votes
  14. Comment on Tech people of Tildes, what have you automated in your life? in ~tech

    entangledamplitude
    Link Parent

    Very much in the same boat, but it’s easy to keep putting it off :-/

    FYI, rsync.net has a decent offer for Borg users, which you might find useful: https://www.rsync.net/products/borg.html

    2 votes
  15. Comment on Enzyme: Automatic differentiation of LLVM IR in ~comp

    entangledamplitude
    Link Parent

    Depending on whether this covers ALL LLVM code, that would mean autodiff for all languages that compile through LLVM — including C/C++!!

    1 vote
  16. Comment on How tech loses out over at companies, countries and continents in ~tech

    entangledamplitude
    Link

    Stumbled on this through the 5G discussion, but I think this is a far bigger question worth discussing. The link is a transcript of a talk (slides and video are also linked in there, in case you prefer).

  17. Comment on SCOTUS justice Clarence Thomas argues for regulating large internet platforms as common carriers in ~tech

    entangledamplitude
    Link Parent

    I’m definitely not fond of large cellular/internet service providers, but none of them block you from connecting with people using other service providers (thanks to a previous generation of common carriers regulation), so there’s at least in-principle potential for market mechanisms to sort things out.

    3 votes
  18. Comment on Results from the Fermilab g-2 experiment indicate new physics with 4.2 sigma confidence, stronger than previous measurements in ~science

    entangledamplitude
    Link

    To clarify, for those not familiar with this topic, this experiment is making measurements at such exquisite precision that even the calculations for the theoretical prediction are extremely non-trivial and require careful estimation of many many pieces which are then combined. Which is to say that debugging the theoretical prediction is (almost) as hard as debugging the experiment. So I would expect the particle physics community to be extremely circumspect while the details get ironed out.

    The Quanta magazine article on this topic explains it quite nicely. To quote their example of what has happened in the past:

    “A year after Brookhaven’s headline-making measurement, theorists spotted a mistake in the prediction. A formula representing one group of the tens of thousands of quantum fluctuations that muons can engage in contained a rogue minus sign; fixing it in the calculation reduced the difference between theory and experiment to just two sigma. That’s nothing to get excited about.”

    https://www.quantamagazine.org/muon-g-2-experiment-at-fermilab-finds-hint-of-new-particles-20210407

    5 votes