18 votes

Cheap rejection as a mental model feature

I’m increasingly convinced that worldviews / mental models are not simply modeling devices, but information rejection tools. Borrowing from Clay Shirky's "It's not information overload, it's filter failure", the world is a surprisingly information-rich space, and humans (or any other information-processing system, biological or otherwise) simply aren't equipped to deal with more than a minuscule fraction of it.

We aim for a useful fraction: one that paints an incomplete but workable picture.

Even a bad model has utility if it rejects information cheaply: without conscious effort, without physical effort, and without lingering concerns or apprehensions. It's a no-FOMO mechanism.

Usually, what happens is that we apply our bad model to a given scenario, act, process the new resulting scenario, notice that it is obviously not favourable, and take appropriate action to correct the new circumstance. Net loss: one round of interaction. Net gain: not succumbing to analysis paralysis, or having to hunt for a new and improved worldview (especially a new consensus worldview shared with numerous others, which creates a large coordination problem).

Sometimes that doesn't work out and people (or companies, or governments, or cultures) get stuck in a nonproductive rut, often characterised by "doing the one thing we know how to do, only harder".

The big problem comes when there's a recognition that a previously-held large-scale world model no longer applies. I'm leaning strongly towards the notion that this is behind many psychological conditions: grief, denial, melancholia, depression, PTSD. Possibly burnout and ADHD.[1]

Classic grief is triggered by the loss of a loved one, or in the "five stages of grief" study, news of the subject's own impending mortality (a fatal disease prognosis). That is, an invalidation of a previously-defining mental model. This triggers denial, anger, bargaining, depression, and eventually, for some, acceptance of a new world view.

It's a pattern once recognised that one sees repeated across numerous scenarios, and scales, from individuals to groups to entire countries --- almost any disaster, epidemics, global catastrophic risks, wartime attacks, business failures, relationship breakups, and on. The phenomenon intersects with the problem-solving success (or failure) chain.

What's curious to me is what the threshold for grief or denial is. There are some surprises which don't elicit this response: almost all humour is based on the principle of surprise, and horror films and thrill rides are based on the premise of surprise or extreme experience, but rarely result in a traumatic response. We go through our daily lives experiencing small and medium-sized suprises and disappointments all the time. The grief/denial response seems to be triggered only above a magnitude or repetition threshold, though that can differ markedly between individuals.


Notes:

  1. I'm not claiming that all PTSD, burnout, and ADHD are grief responses, but rather that there are at least strong similarities. Early psychologists linked grief and melancholia (itself then considered a much stronger form of longing, to the point of mental illness). The mechanisms for overload might be internal --- chemical, physical, illness, injury, or genetic in origin --- or external. But there's a common thread that seems to run through these conditions: ultimately, an inability to cope with some level of change.

(Adapted from https://news.ycombinator.com/item?id=22208255.)

6 comments

  1. onyxleopard

    Once you get to a certain model of the world, the kind of information that is useful for updating your model becomes sparser and you get diminishing returns. You realize that there’s a lot of noise and that finding signal amongst the noise requires significant work. I think lots of people don’t have the resources or patience to continue to learn at that point. Willingness to learn, even when you are older, is actually one of the qualities I admire most in people. Being willing to admit that your current model is not the end-all, be-all is the first step in updating it. And spending the effort to be open to new ideas and experiences is definitely work. Learn to filter noise, but don’t pretend you have a perfect noise filter, either.

    That said, I don’t think the model is the filter like OP suggests. The filter is the thing that decides when and how to update the model. I think not everyone’s filter is totally a conscious thing. Not everyone chooses what they believe all the time, they sometimes just end up in a state of belief due to unconscious processes. Having that insight to take stock of what you actually believe, and being willing to change it is a sign of maturity and intelligence. It’s that adage that if you truly care about being right, you have to be willing to change your mind.

    8 votes
  2. [4]
    skybrian

    I'm not sure what to say about your main point, but to comment on one piece of it, you typically don't need much of a mental model to filter things quickly. In terms of "Thinking, Fast and Slow", quickly skimming or filtering sensory information to see what jumps out at you is System 1.

    A change in worldview seems like a deeper thing?

    2 votes
    1. [2]
      Omnicrola

      I think there's probably a few different levels of mental model/filter to consider here. One could be what you're describing, which is a purposeful skim over a text or image to look for something in particular. If it doesn't work, then you probably will just go back through the material at a different pace and be forced to filter out more information but do so more deliberately.

      What @dredmorbius is describing I think encompasses the kind of unconscious mental modeling of the world that you can't turn on/off easily. For instance, maybe someone has a mental model that causes them to automatically reject any suggestion made by women. For whatever reason, it's worked for them, or at least worked well enough. This is obviously incredibly misogynistic, but it's become a part of their unconscious mental model/filter. To change it would require someone to examine some fundamental precepts and assumptions that they have about the world. Which could cause them to exhibit several of the things mentioned, most particularly denial, but also anger and depression.

      4 votes
      1. dredmorbius

        My sense is broad. Some models might be easier to change, others not.

        • Language influencing thought -- Sapir-Whorf and related theories.
        • Our senses and their limitations. Sight, sound, smell, touch, and taste (a partial and inaccurate set, as perceptual psychologists will note) are a small subset of possible channels for gaining insight into the world. We're sensitive across only a small fraction of their ranges, and there are other phenomena we don't directly perceive: magnetism, radioactivity, gravitational waves, much of the electromagnetic and audio spectra, etc. Tools which do register these literally give us new ways of seeing.
        • Debunked scientific or causal theories, say, medicine's several-millennia false-path obsession with humours. Alchemy's Philosopher's Stone. Astronomy's geocentrism. Phrenology. Phlogiston. Aether. Fixed reference frames.
        • Cultural lenses --- virtually any form of tribalism or ethnocentrism, or symbols or indicia of these.
        • Technical or engineering heuristics.
        • Political theories: the Divine Right of Kings, democracy, Communism.
        • Cultural or journalistic narratives. Trying to fit events or facts to a specific predefined arc or set of roles creates much of the dissonance in news stories: hero, everyman, captain of industry, waif, ne'er-do-well, statesman, corrupt politician. Comedy, tragedy, hero's journey, David vs. Goliath. The Conquering Hero / Oppressed Peoples and Moral Subject / Oppressed Subculture narrative flips have been particularly notable over the past 50 years or so.
        • Virtually anything related to religion.
        • Or economics. But I repeat myself.
        • Various folk wisdom, technical practices, folk or social beliefs, myths, hoaxes, fictions.
        2 votes
    2. dredmorbius

      Fair point.

      Simple works in simple domains. Make the environment more complex, more consequential (the win/loss register gets bigger for mistakes or correct responses), with complex interactions or secondary effects, etc., and simple filtering becomes less viable. Or maybe a previously minor signal becomes major.

      There's also the matter of shared worldviews or models. We're communicating through a shared knowledge of English, cultural and vaguely scientific terms and touchstones, computers, online communities, et cetera. That shifts the model space.

      A worldview that's shared amongst a group (family, team, tribe, community, company/organisation, country) provides predictability, clarity, and ease of communication (a shared spoken, idiomatic, pattern, and cultural language). Coordination and response are easier --- people know both what they should do and what others will do. But it comes at the cost of learning those models, something that currently takes about a quarter of a human lifetime.

      If you're a lone wolf or a giant jewel beetle, a simple model, whether yours alone or genetically programmed, is largely sufficient (until it isn't).

      A more complex model maps onto a more complex domain. The trade-off is accuracy in exchange for simplicity, but that complexity also carries the costs of initial development, general adoption (a consensus worldview, a coordination problem, demonstrating advantage over any other new worldview candidates), teaching, maintenance, and ongoing refinement.

      The key ideas here are that simplicity itself may be an advantage, and that the ability to rapidly assess circumstances and discard obviously bad fits can be an advantage over models which are more complex and more accurate, but also more data-intensive, slower, harder to learn and teach, and more cumbersome to use. That's a point not generally emphasized. I don't claim it's original, though I'm not aware of it being put plainly elsewhere.

      Where selection pressures in an environment change over time --- being high-consequence at some times, low-consequence at others --- simple models may propagate during low-consequence regimes ("don't worry, be happy"), and complex ones (fitted to that specific scenario) in high-consequence regimes ("only the clever survive"). Since the selections apply both to worldviews and to those holding them, there are effectively two different selection processes occurring simultaneously.

      And, key: even complex models discard virtually all information. We forget this in our Big Data world, but even that domain is highly lossy. And excess collection, sensing, memory, or storage may itself be a liability.

      Note that this isn't an argument for general advantage of either complex or simple models, or of any specific complex model over others. Fitness is always relative to a specific selection environment.

      3 votes
  3. entangledamplitude

    There is evidence from studies in cognition/neuroscience (e.g. predictive processing, active inference) that our models do help us attribute salience to different aspects of incoming signals, and thereby elevate them into feedback that can influence & refine the model, rather than rejecting them as noise.

    That said, a practical question in the modern world... what happens when your filtering heuristics are exploited by an adversary (often automated) trying to infiltrate your defenses with memes, be it advertising, or propaganda, or just social media “engagement”?

    1 vote