onyxleopard's recent activity

  1. Comment on The coming software apocalypse in ~tech

    onyxleopard
    Link
    • Exemplary

    With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.

    When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”

    While I liked this overall, this bunk about interfaces and how "text" is somehow an inferior medium for writing code rubbed me the wrong way. Of course making domain-specific tools and visualizations is worthwhile if you are going to spend a lot of time exploring that domain. But, how do you write the domain-specific tools and visualizers? Unless someone's already written a neat tool to give you the "god's eye view" of your Braid clone, you are going to have to build those tools yourself. And somewhere on that toolchain, as you progress toward the bare-metal silicon, you're going to hit a general purpose programming language that is written as text in a text editor (or an IDE, which is ultimately a fancy text editor).

    The takeaway isn't that programmers should throw away their text editors and start using Swift Playgrounds or Jupyter Notebooks exclusively. The point is that programmers shouldn't arbitrarily restrict themselves in their tools. They should use whatever tools are optimal, and if they need to, they should build their own tools! I personally find interactive programming with a REPL to be a far more intuitive and productive way to start prototyping and playing around. I start most of the Python programs I write in an IPython session. But, eventually, I usually end up writing them in a full-fledged text editor.

    Anyone who thinks it would be worthwhile to try to build a photo editor with Photoshop or a word processor with MS Word (or vice versa) is delusional. You can't build arbitrary tools with purpose-specific tools as easily as you can with general purpose tools. Depending on the tools and what you intend to build, it may not even be possible at all. The most general purpose tool is logic itself. Any tool that does not interpret arbitrary logical expressions and execute them faithfully is going to be a hindrance to those who know what they're doing. If you don't know what you're doing, sure, more restricted tools may suffice for exploratory work. And building models is absolutely a good idea. But, you can easily build models with general purpose programming languages if you already fully understand the model before you start coding. If you were taught about Turing completeness and expressivity of languages, you'd know precisely why programmers are loath to give up programming languages as primary tools. And when you are writing a critical program, obviously, the best thing to do would be to write a proof of its correctness and safety. But most projects aren't given the resources to do that, so here we are where programmers have to model and write logic simultaneously, and usually can't comprehensively test for all states that their programs and systems might get into.

    Personally, I wouldn't allow myself to write a truly critical program because I know I don't have the skills to prove any nontrivial program I write is correct. I learned lambda calculus, and I had to learn how to write logical proofs at one point. If I were ever in a position where I had to write a critical program, I would absolutely look into proof assistants and TLA+ and any other tools that can ensure safety. But, for the kind of programs I write, that's not only overkill, it might prevent me from writing anything at all. For instance, creating models of text for natural language processing is incredibly difficult, and there are arcane things like Unicode where the models have been standardized and you don't have any control over them. If I had to formally verify a model of Unicode myself just to start writing a text analysis tool, the barrier to entry would be prohibitive. (And Unicode bugs still plague us.) Most software I write has dependencies on either the standard library of the programming language, or on 3rd party libraries, and most if not all of it is not formally verified, so my dependent code can't possibly be formally verified either.

    10 votes
  2. Comment on The bit short: Inside Crypto’s doomsday machine in ~finance

    onyxleopard
    Link Parent

    I haven’t checked but I don’t think there is enough money involved in cryptocurrency as a whole to affect demand for traditional currencies, especially not the US dollar market.

    The long term trend that I’m expecting is that if cryptocurrencies proliferate and stabilize, fiat currencies will lose value, and vice versa. Ultimately I don’t think the overall amount being exchanged is what matters, it’s the perceived value. Stability is a huge factor in general perception for most people (investors and fraudsters excluded). A currency that is advertised as stable, but really is not, seems dangerous to everyone because it defrauds people of value in any number of currencies that were exchanged for it, and reduces confidence in cryptocurrency even for those who don’t own any.

    2 votes
  3. Comment on The bit short: Inside Crypto’s doomsday machine in ~finance

    onyxleopard
    Link

    Thanks for posting this. I’m really worried about adversarial attacks on traditional, fiat currencies via these sorts of unregulated exchanges.

    Just so I understand this, from the Stablecoin Wikipedia article:

    Advantages of asset backed cryptocurrencies are that coins are stabilized by assets that fluctuate outside of the cryptocurrency space, that is, the underlying asset is not correlated, reducing financial risk. Bitcoin and altcoins are highly correlated, so that cryptocurrency holders cannot escape widespread price falls without exiting the market or taking refuge in asset backed stablecoins. Furthermore, such coins, assuming they are managed in good faith, and have a mechanism for redeeming the asset/s backing them, are unlikely to drop below the value of the underlying physical asset, due to arbitrage.

    Tether is avoiding arbitrage, so far, by largely offering Tethers on exchanges where they can’t be exchanged directly for USD? And the author is making a plea for some well-capitalized entity to essentially test this theory and attempt to buy lots of Tether where the Tether issuer will be forced to buy an underlying stable asset (USD) to maintain the illusion that Tether is worth something?

    I’m somewhat indirectly exposed to BTC since I own stock in Square, Inc. (SQ), which now holds a bunch of BTC. Not very exposed, but exposed. While I would like it if algorithmic cryptocoins were more stable, using stablecoins that are issued in this shady way seems like the opposite of the way to reach legitimacy in the long term. If Tether’s apparent fraud is generally recognized, what will the effect be? It seems like this would tank BTC and other coins? Would this bolster USD and other traditional, fiat currencies? How can someone like the “Bob” in the article be encouraged to use legitimate exchanges like Coinbase instead of Bybit? AFAICT, “Bob” is essentially an unwitting criminal who is defrauding everyone with a stake in USD, but he is effectively impossible to go after legally from the US without outlawing VPNs. This all just seems like a huge powder keg waiting to go off.

    3 votes
  4. Comment on Daily thread - United States 2021 transition of power - January 14 in ~news

    onyxleopard
    Link Parent

    how deeply ingrained racism is in police culture

    I think relegating it to "police culture" is missing the big picture. I don't mean to downplay systemic racism within police departments, but this isn't a police-only thing. The police just happen to be an appealing local power structure for racists to infiltrate, because policing is one of the few public venues where racists can physically enact their fantasies by leveraging the power imbalance inherent to in-person police vs. citizen interactions. These same racists are not only availing themselves of the power structures of local police forces. They are availing themselves of any power structure that isn't categorically hostile to them—and to the discredit of the US, that remains a lot of power structures: local government, state government, federal government, local police, state police, federal law enforcement like ICE, etc.

    17 votes
  5. Comment on Pelosi joins growing call for Trump to be immediately removed from office, may back impeachment in ~news

    onyxleopard
    Link Parent

    Well that opens the question of which senators you believe will actually vote to prevent Trump from running again. I can imagine maybe Romney. Even him I’m not sure. The votes just aren’t there, even for a simple majority AFAICT.

  6. Comment on Pelosi joins growing call for Trump to be immediately removed from office, may back impeachment in ~news

    onyxleopard
    (edited )
    Link Parent

    I guess the transition date of the 20th makes it all not so high stakes in my mind. Personally I think they should impeach him again anyway for the call to the Georgia SoS. It’s their duty to impeach in the House. I don’t think it should be used as a threat over Pence’s head. Either Trump deserves to be impeached or he doesn’t. The Senate isn’t going to convict and remove him before the transition anyway, so it seems like this isn’t a big deal either way. We already saw that Trump is essentially politically immune from the currently sitting Congress. And after his term expires, presumably his legal jeopardy will come into question, rather than his political jeopardy.

    Edit:

    Impeachment means he can't run again.

    Impeachment and successful conviction in the Senate. I don’t think that’s realistic given what happened with the first impeachment. No matter how ashamed the Rs may currently be, I don’t think they are entirely prepared to actually oust Trump and risk alienating his base (which seems to be finally fracturing the Rs).

    2 votes
  7. Comment on Pelosi joins growing call for Trump to be immediately removed from office, may back impeachment in ~news

    onyxleopard
    (edited )
    Link Parent

    It’s not up to Pelosi, though, right? It’s ultimately up to Pence and the cabinet members to invoke Section 3 of the 25th amendment. How does Pelosi putting a timeline on it do anything when she is not lawfully granted the power to invoke Section 3? Pelosi can only offer support of invocation.

    Edit:

    If the concern is she isn’t putting a timeline on the “if you don’t invoke the 25th, then we’ll impeach in the House”, that timeline is basically already drawn at this point, right? Pence and the cabinet only have until January 20th to invoke, so if not by then, the effect will be the same: Trump will no longer be president. I have no idea if a US president can be impeached after they leave office. Is that on the table?

    5 votes
  8. Comment on The illusion only some can see in ~science

    onyxleopard
    Link

    Illusions are great ways to remind ourselves of the fallibility of our senses. I also really enjoyed the explanation of the experiment looking at the priming effect of human children raised in "carpentered environments" vs. those raised elsewhere. I’m not sure how we can defend against people who understand illusions and priming effects and employ them to influence others for their own agenda. It’s basically brain hacking along vectors that are so evolutionarily ingrained that there’s essentially no defense against this sort of manipulation. You can’t not see some illusions, even if you know they’re illusory!

    6 votes
  9. Comment on Why the iPhone timer displays a fake time in ~comp

    onyxleopard
    Link Parent

    Huh, that special case is interesting, though I actually don’t like it due to inconsistency. I can see both sides on that decision.

  10. Comment on Why the iPhone timer displays a fake time in ~comp

    onyxleopard
    (edited )
    Link Parent

    round leads to a step function where the GUI changes the display once every second, but offset by 0.5 seconds from when the real value changes. This isn’t arbitrary or incredible, for the reasons laid out in the article.

    This is certainly not "well-refined, user-focused design"—if it were, they would have used ceil (the next integer up rather than down) …

    For GUI elements that transition after the timer expires (i.e., reaches 0.000…), ceil would lead to a situation where the timer would appear to a human not to end on 0. For human perception, this is wrong. Rewatch the screen capture of the 5 second iOS timer included in the article. After the timer expires, the GUI transitions back to the control widget to create a new timer. If the displayed countdown used ceil, there might be, at most, one frame where a human could see the numeric value of 0. Depending on all sorts of other factors, like the display refresh rate, it’s possible they wouldn’t even see that, or wouldn’t perceive it, because it would flash on the screen for such a short time that they wouldn’t have time to process it. round has the effect of ensuring that the reduced-precision display shows the timer start value (5) for 0.5 seconds at the beginning and the end value (0) for 0.5 seconds at the end. Designing GUIs around human perception is an art. Apple decided to do it this way for perfectly credible reasons.

    To give an extreme example, if you used ceil for a 1 second timer, the display would just show the numeral 1 for all frames except possibly one final frame on 0 (again, depending on the display refresh rate and other timing considerations, it might not display 0 at all). Notice also that there is a higher precision, non-numerical representation of the remaining time displayed in iOS: a circular progress indicator surrounding the integer countdown that depletes counter-clockwise.

    Consider alternatively that for countdown timers like this in iOS’s Clock app, there is a 'pause' button. If you pause a 5 second countdown timer within half a second of starting it, the GUI will still display 5 seconds left. Given the reduced precision of whole integer values, this is the most correct answer: 5 is the integer closest to the true answer to the question, "How many seconds are left on the timer?" And if you pause it within 0.5 seconds of the timer expiring, the GUI will display 0, which, again, is the closest value to the truth within the constraints of the reduced precision. That is, there isn’t even 1 whole second left on the timer, but it hasn’t expired yet.

    So, the point of using round is exactly the intuitive reason the round function exists: you want the integer nearest to the higher-precision real value. You can argue about whether the original decision to limit the precision of the displayed value was principled in the first place, but given that decision, round is very clearly a principled choice.
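    To make the difference concrete, here’s a quick Python sketch (my own guess at the mapping, not Apple’s actual code) comparing what an integer countdown would display under round vs. ceil as the true remaining time ticks down:

    import math

    def shown(remaining: float, policy: str) -> int:
        """Map the true remaining time (in seconds) to the integer the GUI displays."""
        if policy == "ceil":
            return math.ceil(remaining)
        if policy == "round":
            # Round half up; a tie at exactly n.5 s is never observable in practice.
            return math.floor(remaining + 0.5)
        raise ValueError(policy)

    # Compare the two policies across a 5 second countdown, sampled every 0.25 s.
    print(" remaining  round  ceil")
    for i in range(21):
        remaining = 5.0 - i * 0.25
        print(f"{remaining:9.2f}  {shown(remaining, 'round'):5d}  {shown(remaining, 'ceil'):5d}")

    With round, the displayed 5 persists for the first half second and 0 for the last half second; with ceil, 0 would only ever appear at the final instant, if at all.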

    4 votes
  11. Comment on What do you wear? in ~design

    onyxleopard
    (edited )
    Link

    I generally prioritize comfort over style. I prefer clothing that does two things:

    1. Regulates body temperature
    2. Protects my body

    For comfort, Merino wool is one of the best materials for clothing. It helps keep you insulated in the cold and helps wick sweat in the heat. There are some minor annoyances with 100% wool clothing, esp. durability, but these can sometimes be compensated for by combining wool with other materials, like nylon. And concerns like itchiness can be mitigated by using high-grade wool (ultrafine or superfine fibers smaller than ~18 microns in diameter). I esp. prefer Merino wool long and short sleeve T-shirts, socks, and underwear. E.g., this long sleeve T and this short sleeve T.

    While I think synthetic textiles, like Polartec, are quite versatile and technically advantageous, I’ve been trying to avoid synthetic fleece (esp. polyester fabrics) lately due to the shedding of microfibers when washed. I do own a couple Houdini hoodies and sweatpants made of Polartec Power Stretch, but I don’t regularly put these through the washing machine.

    That said, there are other clothing items that are made of synthetic textiles that I wear due to durability concerns. In the past few years I’ve picked up several items from Outlier. Some of their stuff is aesthetically pretty wild, but I like their basic items, like their Strong Dungarees, which are made of a SUPPLEX nylon that feels more like cotton but has far superior durability and does not absorb sweat. I also appreciate that Outlier, while pricey, does not focus on plastering their branding on their clothing. There are logos on the insides of their garments, but you don’t become a walking advertisement while wearing them. I also have picked up some of Outlier’s dressier shirts and pants.

    Though, I don’t wear dressy clothing frequently, and don’t like to dress up outside occasions where there is social pressure to do so. Collared shirts and lapels are some sartorial artifacts that just totally perplex me. Like, anthropologically speaking, I understand the history of shirt collars, but if I have the option between a shirt with or without a collar, I will go without the collar every time. I guess in that sense my personal aesthetic is utilitarian.

    Living in New England, I find there are times when being outdoors requires more than the insulation Merino base layers can provide, so for many months of the year I also wear an insulated down jacket when going out. Specifically, a 1,000 fill power down jacket from Mont·Bell. Nothing beats natural down in warmth to weight, and this is an esp. lightweight jacket that is easy to layer.

    And, in the rain or snow, I layer on top of this a GORE-TEX shell with a waterproof membrane from REI Co-op.

    Overall, in my adulthood, I’m trying to buy clothing items that will last a while or are as close to "buy-it-for-life" as one can reasonably get, from responsible companies, while not sacrificing comfort.

    5 votes
  12. Comment on What’s the point of pointfree programming? in ~comp

    onyxleopard
    Link Parent

    Also, thanks for linking to the A Raku Manifesto, Part 3 piece. I think the trade-offs you discuss there are really good to keep in mind. This is something that bugs the hell out of me. I hate it when I’m working with others who start off writing a shell script and then try to maintain it as it grows and the features creep. At some point, usually after ~3 people have touched significant parts of the code, I think it’s better to take a step back and rewrite the thing in a language that has been designed with full-scale software engineering in mind.

    But, for little functions or quality of life things that you throw in your .*rc or write for personal use, I totally see the appeal of trying to optimize for mastery and individual productivity.

    1 vote
  13. Comment on What’s the point of pointfree programming? in ~comp

    onyxleopard
    (edited )
    Link Parent

    If I'm understanding you correctly, you believe that a program should not only be clear to someone well versed in a particular language but should also be clear to someone who is new to the language/unfamiliar with the language specific idioms. Is that correct?

    I think it’s an ideal to aspire to. It’s not possible, in practice, except for trivial programs. I don’t think I can make the argument any better than Joel Spolsky and his adage "It’s harder to read code than to write it."

    (By the way, .<matches>, .<sets>, and .<name> aren't named captures; they're the keys in each of the hashes in the array we're sorting. I agree 100% that giving those keys names like x, y, and z would make the code much less readable.)

    Ah, I see now. The fact that Raku uses the same symbols <> for regex capture group identifiers and associative keys confused me (I’m obviously illiterate in Raku). I can sort of get behind the motivation of raising regexes to be a first-class thing and, as such, making named capture groups more similar to associative keys. But since regexes still don’t seem to be raised up out of being a domain-specific language (DSL) within Raku, even if they are built in, it’s still confusing. Overloading symbols like this within DSLs makes programs even more difficult to read because you have to remember which language you’re reading in order to interpret the program correctly.

    IMO, ==> sort(…) is just as clear a signpost for "this is the code that does the sorting" as my @sorted = … -- maybe even a bit easier to find when skimming the code.

    I think the main thing is that the well chosen name @sorted makes it very clear what’s going on. Yes, the built-in function name sort() is potentially clear, but declaring @sorted by name is even clearer, IMO.

    1 vote
  14. Comment on What’s the point of pointfree programming? in ~comp

    onyxleopard
    Link

    This was a nice read, but the thing that strikes me about the point free style is that it seems like there is a lot of shifting of load around that ultimately I don’t think fulfills the stated aim:

    Programming in a pointfree style can make code far more readable; done correctly, it makes code less obscure rather than more.

    The reason that I think Raku is often derided as line noise or "write-only" is that programming in Raku fluently requires first learning a lot of new symbols and idioms (i.e., reserved names and syntactic sugar that the language provides). That is to say, Raku allows you to avoid using your own names in many situations because the built-in affordances of the language provide basic objects and functions that are quite powerful, if you can keep the logic of your program in your head and are willing to adopt a functional style.

    Given that the chosen example involved some text processing, there is also a lot of reliance on regex, which is itself another domain-specific language with its own special symbols (i.e., another lexicon of special names that are afforded, but require familiarity).

    I agree that there is a subjective quality that the rewritten version possesses. I wouldn’t call that quality "clarity", but rather some aesthetic quality that borders on cleverness and punctiliousness. But, to the reader who is illiterate in Raku, it is absolutely line noise. That’s not a critique, and if I take the author’s word that the rewrite is not even idiomatic Raku, I dare not imagine what an idiomatic version of this program might look like.

    The main issue I have with the point free style of the rewritten version is that, while the flow of the program is quite clear with the use of the feed (==>) operator, I think modifying or refactoring the point free version would be much more difficult than modifying the 101 version. The amount of context one needs to build up in the point free version between any two ==>s is substantial. You basically need to keep the whole program in your head to grok what any given part of the pipeline is doing. And if you want to make a change, you may have a long-distance dependency in the pipeline that would affect things somewhere far away.

    Programs that are intended to be reused, or are useful enough to end up being reused, will likely need to be modified and refactored down the line. So, for one-off scripts, my critique isn’t consequential. But for any code that is going to be read (esp. by multiple different human readers), I think avoiding line noise is a much higher priority than aiming for any sort of aesthetic punctiliousness.

    I think the concern about readability and density of language-specific idioms is a major reason why languages like Raku are avoided for larger programs that are intended to be used, reused, and extended/modified.

    To give a brief example that occurred to me: what if I wanted to modify each version of this program so that the players with the most wins are printed at the bottom rather than the top of the list? Scanning the 101 version to find the relevant part is so much easier because there is a very obvious line where the sort order is determined:

    my @sorted = @names.sort({ %sets{$_} }).sort({ %matches{$_} }).reverse;
    

    Whereas, in the point free style, we would want to remove this line:

    ==> reverse()
    

    But, we’d basically have to read and grok the entire pipeline to establish the context in order to interpret that line. Or at least we’d have to make some educated guesses based on the named capture groups in the preceding line. If the capture groups were just named <x>, <y>, and <z>, or were just indexed by their order, I’m not sure I’d be able to understand it at all without having seen the 101 version first.
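    The same trade-off shows up outside Raku, too. Here’s a toy Python sketch (made-up player data, not the article’s Raku) of the two styles side by side; note how the named intermediate makes the "flip the order" change a local, obvious edit:

    # Rank players by matches won, then sets won, most wins first (made-up data).
    players = ["Ana", "Bo", "Cam"]
    matches = {"Ana": 3, "Bo": 5, "Cam": 3}
    sets_won = {"Ana": 7, "Bo": 11, "Cam": 9}

    # 101 style: the named variable is an obvious signpost, and reversing the
    # output order is a local, one-line change (drop reverse=True).
    ranked = sorted(players, key=lambda p: (matches[p], sets_won[p]), reverse=True)
    for player in ranked:
        print(player, matches[player], sets_won[player])

    # Chained, pointfree-ish style: one expression, so changing the ordering
    # means re-reading the whole pipeline to rebuild the context first.
    print("\n".join(
        f"{p} {matches[p]} {sets_won[p]}"
        for p in sorted(players, key=lambda p: (matches[p], sets_won[p]), reverse=True)
    ))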

    3 votes
  15. Comment on Sony is pulling Cyberpunk 2077 from the Playstation Store; refunds offered in ~games

    onyxleopard
    Link Parent

    Film FPS and video game FPS are not so easily comparable. Unless a video game has inter-frame post-effects like motion blur or other post-processing, lower frame rates can throw people out of the illusion that they are looking into a simulated world with continuous motion between frames. People often talk about low frame rates turning into a "slide show". With film, by contrast, 24 fps is, for one thing, absolutely constant (no dips below that). And there is no computer simulation, so any motion captured during a single exposure is blurred across that frame (this is why motion blur is motivated as a post-process in video games and CGI generally, though the quality of such post-processing is considered by some to detract from gaming).

    So, a game that hits 24 fps, but inconstantly, dipping below that, and without motion blur, may be unacceptably non-fluid given the expectation of 30, 60, or even higher frame rates. Whereas 24 fps film is expected (in the US at least), and since it captures real, unsimulated motion, it is at least subjectively fluid and realistic.

    7 votes
  16. Comment on The stock market is less disconnected from the “real economy” than you think in ~finance

    onyxleopard
    Link Parent

    the only type of business logic that makes sense with that strategy is if the business exists only to be inflated and sold as quickly as possible.

    There is an alternative, but it’s not much better. Some companies, backed by vast sums of venture capital, will adopt the abstract strategy of selling dollars for cents. Their game plan is to price out competitors who are unwilling to sell at a loss. Once they drive out the competition and establish an effective monopoly, they will then price-gouge. It’s anticompetitive, and yes there should be laws against this, but this has essentially been the initial model for Uber, MoviePass, and some other "disruptive" startups.

    5 votes
  17. Comment on Share a link to a song that tells a story in ~music

    onyxleopard
    Link

    Whenever I think of songs that tell a story, I always think of the absurd ballad of Charlie and the "MTA". And I always think of the Kingston Trio rendition. I also like this version by Tracy Newman (which is nice since she gives some history of the song). As that video mentions, like many folk songs, the tune of "MTA" was taken from "Wreck of the Old ’97". "MTA" is so ingrained in Massachusetts culture that we call T pass cards CharlieCards in tribute to the fictional protagonist.

    1 vote
  18. Comment on Cheap Rejection as a Mental Model Feature in ~misc

    onyxleopard
    Link

    Once you get to a certain model of the world, the kind of information that is useful for updating your model becomes sparser and you get diminishing returns. You realize that there’s a lot of noise and that finding signal amongst the noise requires significant work. I think lots of people don’t have the resources or patience to continue to learn at that point. Willingness to learn, even when you are older, is actually one of the qualities I admire most in people. Being willing to admit that your current model is not the end all be all is the first step in updating it. And spending the effort to be open to new ideas and experiences is definitely work. Learn to filter noise, but don’t pretend you have a perfect noise filter, either.

    That said, I don’t think the model is the filter like OP suggests. The filter is the thing that decides when and how to update the model. I think not everyone’s filter is totally a conscious thing. Not everyone chooses what they believe all the time, they sometimes just end up in a state of belief due to unconscious processes. Having that insight to take stock of what you actually believe, and being willing to change it is a sign of maturity and intelligence. It’s that adage that if you truly care about being right, you have to be willing to change your mind.

    8 votes
  19. Comment on The way we train AI is fundamentally flawed in ~tech

    onyxleopard
    Link Parent

    The point of making ridiculously complex models with billions of parameters and training them on merely hundreds of thousands of data points is that you have many degrees of freedom. This makes it very likely that a good theory can be expressed within your model (because you have the parameters to encode it) but it makes it possible to have different good theories - theories that explain the data you have. But they might differ substantially on the data you don't have. Particularly if the data you don't have (i.e. the real world) is markedly different from the data you do have.

    The issue is fundamentally generalization. We know there are machines that are good at generalizing based on limited training data: healthy, living human minds. When you have umpteen machine-learned models whose accuracies differ by only epsilon on hold-out set 1, but by large deltas on hold-out set 2, one has to ask: if you asked umpteen humans to learn and perform this task on hold-out sets 1 and 2, would you see the same failure to generalize? I would argue that you would not, if the task is well defined.

    What is well defined? Well, we know what that means in terms of mathematical functions, but what does it mean in terms of defining a task? I’d say it’s impossible to concretely define what well-defined means for arbitrary tasks. But we can still do science here (if we’re positing that we’ll be doing supervised learning, anyway). The way that I determine whether an annotation task is well-defined for supervised learning is whether I can write guidelines for humans such that a group of humans annotating independently with those guidelines can achieve a reliable level of inter-annotator agreement relative to chance agreement. We can objectively compute this agreement with metrics like Krippendorff’s alpha.

    If you can’t get a group of humans to independently achieve >= ~0.8 alpha (given an appropriate delta metric on your data and a good-effort set of human-grokkable task guidelines), I would argue that the task is ill-defined, and any model you train on data produced by the humans who failed to achieve >= 0.8 alpha will not generalize, or even necessarily achieve a useful level of accuracy at all. Alpha lets us basically measure whether humans "know it when they see it". Now, there are still issues of data shift, data sparsity, and others.

    But if you look at tasks like sentiment analysis for NLP, you will never see an IAA score >= 0.8 on a representative dataset. E.g., the alpha for the Stanford Sentiment Treebank with an ordinal delta metric is only <= 0.6. That is to say, on a scale from -1.0 to +1.0, where 0.0 is random chance, -1.0 is perfect disagreement, and +1.0 is perfect agreement, independent humans only agree on this task at a level of <= 0.6. So, if you want to make progress, it’s not going to come from better learning algorithms or optimizers. The work to be done is about defining a more well-defined task. Maybe we have to model the problem with representations that are more complex than ordinal sentiment scale values? Maybe there are more dimensions to sentiment? And, if you can define a task that humans actually agree on, all the fancy universal function approximators you apply will have a chance at actually finding optimal solutions.
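    For anyone curious what that agreement calculation looks like, here’s a minimal Python sketch of Krippendorff’s alpha, using a nominal (match/mismatch) delta for brevity rather than the ordinal delta I mentioned, and with made-up labels and annotators:

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        """Krippendorff's alpha for nominal labels.

        `units` is a list of label lists, one inner list per annotated item,
        holding the labels that each (independent) annotator assigned to it.
        """
        # Coincidence matrix: within each item, every ordered pair of labels from
        # different annotators contributes 1/(m - 1), where m is the item's label count.
        coincidences = Counter()
        for labels in units:
            m = len(labels)
            if m < 2:
                continue  # items with a single annotation are not pairable
            for a, b in permutations(labels, 2):
                coincidences[(a, b)] += 1.0 / (m - 1)

        n = sum(coincidences.values())  # total number of pairable values
        totals = Counter()              # marginal frequency of each label
        for (a, _), weight in coincidences.items():
            totals[a] += weight

        observed = sum(w for (a, b), w in coincidences.items() if a != b) / n
        expected = sum(totals[a] * totals[b]
                       for a in totals for b in totals if a != b) / (n * (n - 1))
        return 1.0 - observed / expected

    # Three annotators labeling five snippets as positive / negative / neutral.
    ratings = [
        ["pos", "pos", "pos"],
        ["neg", "neg", "neg"],
        ["neu", "pos", "neu"],
        ["neg", "neu", "neg"],
        ["pos", "pos", "neu"],
    ]
    print(round(krippendorff_alpha_nominal(ratings), 3))

    On this toy data the alpha comes out around 0.43: well above chance, but nowhere near the ~0.8 I’d want to see before trusting the annotations for supervised learning.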

    5 votes
  20. Comment on Cover Your Tracks - A new EFF project designed to better uncover the tools and techniques of online trackers and test the efficacy of privacy add-ons (successor to Panopticlick) in ~tech

    onyxleopard
    Link Parent

    Thanks to modern technology, it's possible to gather and correlate all these data points together to paint a picture of person behind the screen.

    Even if they can’t build a picture of the person behind the screen on the first impression, once you’re fingerprinted, you can be tracked across domains and even browsing sessions. And that tracking can then garner even more information, such as what times you browse, and potentially what sites you visit if you visit a group of sites that can match the same fingerprint in their database (or if they are part of the same ad network that can match your fingerprint).

    3 votes