psi's recent activity

  1. Comment on A billion years from now, a lack of oxygen will wipe out life on Earth in ~enviro

    psi
    Link Parent

    To get back to my self-sampling point: I think it's curious that humanity decides to turn up after 500-1000 million years (Ma) of substantial amounts of oxygen in our atmosphere, only to find out that in another 500 Ma, the circumstances enabling human life would be gone.

    On the other hand, ignoring any priors we might have for a moment, shouldn't we expect to be somewhere in the middle of an era where oxygen is plentiful? Wouldn't it be unusual for us to be near one of the tails instead?

    3 votes
  2. Comment on Do any other US citizens think of emigrating? in ~talk

    psi
    Link Parent

    Unless you're quite wealthy, you won't have to pay a cent usually, but it's still a damn hassle.

    To add some numbers, the first ~$100K/yr usually isn't taxable (per the Foreign Earned Income Exclusion [1]), and the remaining tax burden can be lessened a little bit per the Foreign Tax Credit [2]. But yeah, definitely an unnecessary hassle.


    [1] https://www.irs.gov/individuals/international-taxpayers/foreign-earned-income-exclusion

    [2] https://www.irs.gov/individuals/international-taxpayers/foreign-tax-credit

    3 votes
  3. Comment on A billion years from now, a lack of oxygen will wipe out life on Earth in ~enviro

    psi
    Link Parent

    The tone in my first comment was a bit more combative than I'd like, so sorry about that.

    Still, I would push back against the narrative that this research is premature. Luckily I have access through my university, so we can take a peek inside the article. From the introduction:

    Previous work on the future lifespan of Earth’s biosphere has focused principally on the links between secular changes in solar luminosity, the stability of the carbonate–silicate geochemical cycle and the loss of surface water to space [16–20].

    (Even without access, you can view the references here [1].) Notably, those references date from 1992, 1982, 2006, 2002, and 2013, suggesting this has been known for decades. Although I admittedly have no expertise here, I'd be willing to bet that "increasing solar luminosity will destabilize the carbonate-silicate cycle" is an established fact in this field. The conclusion basically says as much:

    In summary, our stochastic analysis suggests that the eventual deoxygenation of Earth’s atmosphere is a robust outcome of increasing solar luminosity, irrespective of large uncertainties in geophysical/biological boundary conditions [...]

    Of course, you're correct that a billion years is a long time, and more generally there are unknown-unknowns that could disrupt our understanding of geoscience in unforeseen ways.


    [1] https://www.nature.com/articles/s41561-021-00693-5#Bib1

    3 votes
  4. Comment on A billion years from now, a lack of oxygen will wipe out life on Earth in ~enviro

    psi
    Link Parent

    This result isn't exactly new -- I'm not sure what particularly differentiates this study, but similar predictions have been made before. In fact, just over a month ago I made a comment outlining our apocalyptic future [1] (though I concentrated on CO2 levels instead of O2 levels, the two are obviously connected; in fact, the study in the OP explores disruptions to the carbonate-silicate cycle, which would be the catalyst for everything I described in my comment).

    More on the nose, the Wikipedia article "Timeline of the far future" (probably my favorite Wikipedia article) has this to say regarding oxygen 500-800 million years from now:

    As Earth begins to rapidly warm and carbon dioxide levels fall, plants—and, by extension, animals—could survive longer by evolving other strategies such as requiring less carbon dioxide for photosynthetic processes, becoming carnivorous, adapting to desiccation, or associating with fungi. These adaptations are likely to appear near the beginning of the moist greenhouse. The death of most plant life will result in less oxygen in the atmosphere, allowing for more DNA-damaging ultraviolet radiation to reach the surface. The rising temperatures will increase chemical reactions in the atmosphere, further lowering oxygen levels. Flying animals would be better off because of their ability to travel large distances looking for cooler temperatures. Many animals may be driven to the poles or possibly underground. These creatures would become active during the polar night and aestivate during the polar day due to the intense heat and radiation. Much of the land would become a barren desert, and plants and animals would primarily be found in the oceans.

    and this to say about oxygen a few hundred million years later:

    Carbon dioxide levels fall to the point at which C4 photosynthesis is no longer possible. Without plant life to recycle oxygen in the atmosphere, free oxygen and the ozone layer will disappear from the atmosphere allowing for intense levels of deadly UV light to reach the surface. In the book The Life and Death of Planet Earth, authors Peter D. Ward and Donald Brownlee state that some animal life may be able to survive in the oceans. Eventually, however, all multicellular life will die out. At most, animal life could survive about 100 million years after plant life dies out, with the last animals being animals that do not depend on living plants such as termites or those near hydrothermal vents such as worms of the genus Riftia. The only life left on the Earth after this will be single-celled organisms.


    [1] (self-plug) https://tildes.net/~enviro/uun#comment-64tg

    5 votes
  5. Comment on Make a loop with the Chrome Music Lab song maker! in ~creative

    psi
    Link

    That was a fun little distraction. Here's my song/loop.

    4 votes
  6. Comment on What games have you been playing, and what's your opinion on them? in ~games

    psi
    Link

    Nexomon: Extinction [2020, Nintendo Switch]

    After finally finishing Cyberpunk, I decided I wanted to play something totally different (ie, not another open world action RPG in my backlog). I decided to give Nexomon a go, which is an unabashed Pokemon knockoff.

    Pros

    • Monsters use mana rather than PP, presumably to discourage you from spamming your strongest attack every turn
    • Similarly, attack speed is particular to the monster and move. Of course, Pokemon does this to some extent with its priority system, but most moves in Pokemon have normal priority. In contrast, strong moves tend to be slower in Nexomon, which adds a layer of complexity to battles
    • Combat is more streamlined (see below)

    Cons

    • Battles are just too time-consuming.
    • Not enough healing.
    • Dialog is perhaps too self-aware. Most of the humor in the game comes from pointing out RPG/Pokemon/anime tropes, which was funny enough the first few times.
    • The story failed to pull me in.

    Other thoughts

    As I wrote above, the combat in Nexomon is more streamlined, and if there's anything that the mainstream Pokemon games could learn from Nexomon, it's this. Consider a typical encounter in a Pokemon game:

    "[Trainer] wants to fight!"

    "[Trainer] sent out [Trainer's Pokemon]!"

    [Player chooses a move.]

    "[Player's Pokemon] uses [move]!"

    [Drawn-out move animation.]

    "It's a critical hit!"

    "[Move] is super effective!"

    "[Trainer's Pokemon] fainted!"

    "[Player] defeated [Trainer]!"

    In that entire encounter, which could easily be thirty seconds or longer (not including loading screen transitions), the player only had a single moment of autonomy. No matter how preordained the battle might be (eg, a lvl 100 Charizard vs a lvl 5 Rattata), the fights are still bogged down by unnecessary text boxes. If I had to guess, this is probably what people least enjoy about turn-based RPGs: they're slow, even when the outcome of the encounter has already been decided. Interrupting exploration to mindlessly mash the "A" button disrupts the flow of the game and is, worse yet, simply not fun.

    Nexomon improves over Pokemon by incorporating that unneeded dialog into the UI. So, for example, the above battle in Nexomon would be more like

    "[Trainer] sent out [Trainer's Nexomon]!"

    [Player chooses a move.]

    [Short animation. If critical, display "Critical!" above opponent's Nexomon. If supereffective, display "Effective!" above opponent's Nexomon. If afflicted by a status ailment, display ailment over Nexomon.]

    [Enemy Nexomon attacks. Same as above except with reversed roles.]

    [Player chooses a move.]

    [etc]

    "[Trainer's Nexomon] fainted!"

    Combat in Nexomon becomes more immediate. You perform your action, your opponent performs their actions, and you read the UI to determine the game state.

    Contrast that with a Pokemon game, in which the game will (for example) explicitly tell you when your Pokemon's been afflicted with the burn ailment. But that's redundant information -- you can already tell you've been afflicted by looking at the UI. That additional dialog box unnecessarily slows down the pacing of the battle.

    "But wait," you ask, "if the battles are more streamlined, then how can they be too time-consuming?"

    And here's where we hit the fundamental flaw in the Nexomon formula. Perhaps to encourage strategy, Nexomon don't hit as hard as Pokemon. Even strong supereffective attacks are unlikely to reduce the opponent's HP by more than half. Therefore players are encouraged to constantly swap Nexomon during battles (opponents will do the same). Every battle becomes a drawn-out game of rock-paper-scissors.

    Moreover, grinding is not an escape. Encounters with wild Nexomon and trainers level-scale based on the levels of the Nexomon in your party. Thus, your weakest Nexomon becomes a liability. Leveling up doesn't make you stronger so much as (maybe) less weak.

    Finally, due to the level-scaling, all encounters tend to deplete a significant amount of HP and mana. Therefore you might only be able to fight through a couple encounters before having to decide whether you should run back to the previous healing station. Effectively, compared to Pokemon, the game devs reduced the UI sluggishness, kept the battle lengths the same, and added mandatory backtracking.

    I'd say I played roughly a third of the game before deciding to drop it. I'd consider revisiting it if the following revisions were made:

    • Cap the level-scaling.
    • Automatically heal all Nexomon after combat encounters.
    8 votes
  7. Comment on Tildes is pushing out the minority voice in ~talk

    psi
    Link Parent

    I think that's a better example than mine. For reference, the SSC post had more engagement (67 comments) than any post on this site in the past week (other than this one).

    5 votes
  8. Comment on Tildes is pushing out the minority voice in ~talk

    psi
    Link Parent

    My comment wasn't an indictment of you, and I certainly don't want you to get the impression that I was talking about you specifically. I consider myself somewhat rationalist-leaning even though I've spent basically no time on SSC or LW. I don't want to get into particulars (though generally, see this thread [1]), but rationalist discourse is pervasive throughout these forums, even if the users of such rhetoric might not self-identify as rationalists.


    (I'm of two minds about this particular thread: on the one hand, I think this is a thought-provoking discussion. And in that sense, we're early participants in what could eventually become a culture war issue, but we're fortunate to be able to engage in the topic before it's been colored by politics. On the other hand, the issue is not academic for some people, and treating the issue as such could cause those people to feel further marginalized/unwelcome.)

    5 votes
  9. Comment on Tildes is pushing out the minority voice in ~talk

    psi
    Link

    (Full disclosure: let me preface my comment by saying that I don't identify with any minority group, but I do find the lack of representation frustrating.)

    What I knew, but didn't truly understand is that if 1 in 100 users have problematic behavior and 1 in 100 users are transgender, we have an equal number of transgender individuals as we do users with problematic behavior. I want you to stop here and reread the last sentence and really absorb it before moving on. Ask yourself what problems might arise by these inequality existing.

    I think this is the crux of the issue, so I'm glad you reiterated it. But I think the situation is actually worse than presented: yes, we have a crappy signal-to-noise ratio, but the noise isn't just the problematic users -- to a large extent, it's also the 98% of users who don't have relevant lived experience.

    I think it's fair to say this forum has a rationalist-leaning philosophy (in the Slate Star Codex sense), hence its willingness to engage in good-faith arguments defending controversial opinions (the ~1% of problematic users not engaging in good faith generally get censored eventually). But as noted elsewhere in this thread, those with relevant lived experience aren't interested in rehashing a sophomoric Devil's advocate argument for the umpteenth time (for instance, I would imagine -- again, not a PoC -- it's infinitely easier to have a dispassionate discussion about racism when you've never been victimized by it).

    I believe that what we're facing here is a crisis of identity: do we want to be a left-leaning rationalist community, or do we want to be a more welcoming one? Ultimately I don't think these ideals are compatible. Personally, I'd like to see us become more welcoming even if that means shedding some of our rationalist tendencies.

    13 votes
  10. Comment on Switching from Mac to PC? in ~tech

    psi
    (edited )
    Link

    If you have recent-ish hardware (and you aren't too intimidated by the command line), maybe consider giving VFIO [1] a shot (that is, virtualization with GPU passthrough -- the performance can be very close to bare metal). Compared to dual-booting, you have the advantage of being able to run both operating systems at the same time. So you can, for instance, temporarily switch to Windows when you need to run some specific program without closing your Linux apps. In my case, I like to play video games on one monitor (Windows/guest) while leaving Discord/an internet browser up on the other (Arch/host).

    Downsides:

    1. As I said, you need recent-ish hardware.
    2. Works much better with two GPUs (one of those can be integrated, however; in fact, technically both can be integrated if your chipset supports GVT-g, but personally I haven't had much success with this).
    3. GPU bugs with older AMD/NVIDIA cards make GPU passthrough difficult (the reset bug and error 43, respectively).
    4. Probably a total pain in the ass to set up on a laptop.

    I suspect the limiting factor here is probably (3). But if you're interested in seeing this setup in action, LinusTechTips has quite a few videos where they experiment with VFIO [3,4].
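    To give a flavor of the setup, the usual first step is checking whether your PCI devices land in well-isolated IOMMU groups -- the passthrough GPU generally needs its own group. This sketch assumes only the standard Linux sysfs layout; it isn't taken from any particular guide:

```shell
#!/bin/sh
# Sketch: list PCI devices by IOMMU group. If the directory is missing or
# empty, the IOMMU is likely disabled in firmware or on the kernel cmdline.
list_iommu_groups() {
    if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                # lspci -nns shows the device with its vendor:device IDs;
                # fall back to the raw PCI address if lspci is unavailable.
                echo "  $(lspci -nns "${d##*/}" 2>/dev/null || echo "${d##*/}")"
            done
        done
    else
        echo "No IOMMU groups found (enable VT-d/AMD-Vi and the iommu kernel parameters)."
    fi
}

list_iommu_groups
```

    If the GPU shares a group with other devices, you'd typically need the ACS override patch, which comes with its own caveats.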


    6 votes
  11. Comment on Let's talk about computer algebra systems in ~comp

    psi
    Link

    You might also consider Mathics [1], an open-source alternative to Mathematica that supports Mathematica-like syntax. However, I have no experience with Mathics, and I haven't needed to use Mathematica in years, so I'm not sure to what extent it can be considered a full replacement.

    That said, Mathematica's success is due in part to its algorithms, which are unfortunately hidden because of its closed-source nature. Even an open-source clone with 100% compatible syntax wouldn't necessarily give the same results, particularly if you're evaluating some wicked integral or simplifying some nasty expression.


    [1] https://mathics.org/

    2 votes
  12. Comment on <deleted topic> in ~enviro

    psi
    Link Parent

    Whoops, good catch. Yes, the sun will transition from a red giant to a white dwarf instead – I'm not sure what that process is called.

    2 votes
  13. Comment on What programming/technical projects have you been working on? in ~comp

    psi
    Link Parent

    Sorry, I'm sure my explanation wasn't perfect. As a bit of a tangent, one of the hardest skills to learn in grad school is how to effectively communicate technical results to any audience (of course, this is an important skill outside academia too). Grad school is somewhat paradoxical in that, compared to your average layperson, grad students know orders of magnitude more about their field; but grad students don't work with laypersons -- grad students work with other researchers, who themselves know orders of magnitude more than the grad students.

    So graduate school is basically the awkward teenage phase of academia. We know enough to understand our own research, but we don't know enough of the technical details to communicate effectively with our seniors, and we generally don't have the intuition to explain our research to a non-technical audience.

    Anyway, I guess my point here is that I wasn't purposefully trying to obfuscate my explanation with professional verbiage. I'm just using this post to practice explaining my work to others, which is something I'm admittedly not great at.


    So now to actually answer your question: fitting correlators basically amounts to staring at a few relevant plots and trying to use your best judgement to determine where the signal starts. For example, if you plot the effective mass

    m_eff(t) = ln[ C(t) / C(t+1) ]
    

    you can work out that for large t, this quantity asymptotically approaches E0 (ie, the mass of the particle on the lattice, which is usually the quantity we're most interested in). But as I wrote before, the noise dominates the signal at late times, so the effective mass might not plateau until after the signal disappears.
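    For a concrete toy example (all values invented), a noiseless two-state correlator shows exactly this behavior: the effective mass is contaminated by the excited state at early times and plateaus at E0 later:

```python
import numpy as np

# Synthetic two-state correlator with made-up energies and overlaps.
# The effective mass ln[C(t)/C(t+1)] approaches E0 once the
# excited-state term has decayed away.
t = np.arange(0, 25)
E0, E1 = 0.5, 1.2   # ground- and excited-state energies (assumed)
A0, A1 = 1.0, 0.7   # overlap factors (assumed)
C = A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

m_eff = np.log(C[:-1] / C[1:])
print(m_eff[:5])   # early times: pulled above E0 by the excited state
print(m_eff[-5:])  # late times: plateau near E0 = 0.5
```

    With real (noisy) data, the late-time values would scatter wildly instead of sitting cleanly on the plateau, which is exactly what makes choosing the fit window hard.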

    Nevertheless, with enough practice, you start to intuit what a good fit looks like (which you augment with statistics such as the p-value [1] and reduced chi-square [2] to check for goodness of fit). In that manner, some people have thought to use machine learning to determine these fits, as you can essentially train a machine to think like a human. Of course, machine learning algorithms can be a bit of a black box, so when a fit appears to go awry, it's hard to know whether the fault is with your judgement or the algorithm's.


    More generally, fitting correlators can be thought of as a data selection problem. To give a dumb example: suppose you're taking an introductory physics course and your instructor asks you to determine the radius of three balls on your desk. You and your lab partners make the following measurements of the three balls:

    Ball A    Ball B    Ball C
    10 cm     15 cm     12 m

    See the mistake here? A 12 m ball probably isn't going to fit on your desk. In principle you shouldn't exclude data from your analysis, but you might be justified in doing that if you have a physical reason for thinking your data is inaccurate (in this case, your lab partner probably made a typo).
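    As a toy version of that sanity check (the names and the desk-scale bound below are invented for illustration):

```python
# A physically motivated bound catches the 12 m "ball" where blind
# statistics might not -- with only three points, outlier tests are weak.
measurements_cm = {"Ball A": 10, "Ball B": 15, "Ball C": 1200}  # 12 m = 1200 cm
DESK_SCALE_CM = 100  # assumed upper bound for anything that fits on a desk

outliers = [name for name, r in measurements_cm.items() if r > DESK_SCALE_CM]
print(outliers)  # ['Ball C']
```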

    But if you start removing outliers willy-nilly, you risk biasing your data (sometimes that blip in your data is actually exciting new physics). In general, data selection problems tend to be about understanding the limitations of your instruments and less so about calculating statistics. To be slightly more technical, the easiest way to compare models in a Bayesian scheme is to compare the Bayes factor [3] of model A to model B, but Bayes factors can only be compared if the models share a common data set. In a data selection problem, the models do not.


    At least, that was more or less my thinking until I came across this paper [4]. The authors managed to convert the correlator data selection problem ("which range of times should I fit") into a model selection problem. Unlike data selection problems, model selection problems are easily amenable to statistical techniques (such as computing Bayes factors). Using the algorithm outlined in the paper, I can automatically select the best time range and the number of terms to include in my sum. And... it works pretty well! Sometimes the algorithm makes some bold choices, but even then they are usually pretty reasonable. And while the automated fits aren't quite convincing enough for us to use this algorithm exclusively, I can at least use this automated technique as a cross-check against the fits I pick by hand.
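    To sketch the flavor of model selection on synthetic data: the paper uses Bayes factors, but even a simpler criterion like the AIC shows how a genuinely needed second exponential term wins despite its extra parameters (all values below are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy model selection: does the data prefer one exponential term or two?
# AIC stands in for the paper's Bayes factors; data and values are invented.
rng = np.random.default_rng(0)
t = np.arange(1, 20, dtype=float)
truth = 1.0 * np.exp(-0.5 * t) + 0.8 * np.exp(-1.3 * t)
C = truth + rng.normal(0.0, 1e-3, t.size)

def one_exp(t, A0, E0):
    return A0 * np.exp(-E0 * t)

def two_exp(t, A0, E0, A1, E1):
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

def aic(model, p0):
    # AIC = 2k + n ln(RSS/n): penalizes parameters, rewards goodness of fit.
    popt, _ = curve_fit(model, t, C, p0=p0, maxfev=10000)
    rss = np.sum((C - model(t, *popt)) ** 2)
    return 2 * len(popt) + t.size * np.log(rss / t.size)

aic1 = aic(one_exp, (1.0, 0.5))
aic2 = aic(two_exp, (1.0, 0.4, 1.0, 1.0))
print(aic1, aic2)  # lower is better; the two-term model should win here
```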


    [1] https://en.wikipedia.org/wiki/P-value
    [2] https://en.wikipedia.org/wiki/Reduced_chi-squared_statistic
    [3] https://en.wikipedia.org/wiki/Bayes_factor
    [4] https://arxiv.org/abs/2008.01069

    5 votes
  14. Comment on <deleted topic> in ~enviro

    psi
    (edited )
    Link Parent

    Earth will go on doing this until the sun undergoes its expansion.

    Somewhat of an aside, but even without climate change the Earth will become uninhabitable long before then (at least, without technological breakthroughs such as [1]). I'd recommend looking at Wikipedia's timeline of the far future [2] starting at approximately the 500 million yr mark.

    To summarize:

    • 500 million CE: the Sun's increasing luminosity disrupts the carbonate-silicate cycle. Consequently carbon dioxide levels begin to drop, causing all plants that utilize C3 photosynthesis to die (99% of plant species).
    • 900 million CE: C4 photosynthesis also becomes impossible, which eventually results in the death of all multicellular life.
    • 1 billion CE: the oceans evaporate.

    By 2.8 billion CE, the Earth's surface temperature is around 300 deg F. For reference, the sun won't swell into a red giant until about 8 billion CE.


    [1] https://en.wikipedia.org/wiki/Space_sunshade
    [2] https://en.wikipedia.org/wiki/Timeline_of_the_far_future

    3 votes
  15. Comment on What programming/technical projects have you been working on? in ~comp

    psi
    Link

    I'm currently fitting correlation functions [1]. In theory, correlation functions are extremely useful since they can be used to, for example, extract the energy spectrum of a particle created on a lattice [2]. In practice, it's the most tedious part of my PhD research.

    Essentially my task is to fit a sum of exponentials

    C(t) = A_0 e^{-E_0 t} + A_1 e^{-E_1 t} + ...
    

    in order to determine the wave function overlaps A_i and energies E_i given data vectors C(t), t and a covariance matrix for C(t) (examples: [3, 4]).

    I've written some code to perform a Bayesian non-linear least squares fit and output some pretty plots (the latter being basically the most important part of my research). In some sense, everything up to this point is "easy": there are a lot of concepts to wrap your head around, but at the end of the day you're just writing a fitter.
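    As a minimal sketch of this task (frequentist and noiseless, with invented values -- the real analysis is Bayesian and uses the full covariance matrix):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: recover the overlaps A_i and energies E_i from synthetic
# correlator data by fitting a truncated two-term sum of decaying
# exponentials. All numbers here are invented for illustration.
def model(t, A0, E0, A1, E1):
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-E1 * t)

t = np.arange(1, 20, dtype=float)
true_params = (1.0, 0.5, 0.8, 1.3)  # A_0, E_0, A_1, E_1
C = model(t, *true_params)

popt, pcov = curve_fit(model, t, C, p0=(1.0, 0.4, 1.0, 1.0))
print(popt)  # should land close to true_params
```

    With noiseless data this recovers the inputs almost exactly; the hard parts described below only appear once noise, correlations, and the truncation choice enter.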

    However, the tricky part is determining what constitutes a "good" fit. For each set of correlator data (there are more than 100 in total), I have to determine:

    1. How many terms to include in the expansion
    2. Which subset of the data to include in the fit

    That means balancing the following considerations:

    1. Obviously it's impossible to fit a function to infinitely many terms in an expansion, so I have to cut off the sum at some point. Therefore I need to make a choice for how many terms to fit.
    2. Consequently, the fit is sensitive to the range of time I fit. If I only include a single term in the expansion, for example, then it wouldn't make sense to include data from the earliest times when the signal is almost certainly contaminated by truncated terms. I need to choose the earliest time that is minimally impacted by terms not included in my expansion.
    3. On the other hand, the noise dominates the signal at late times, so I also have to make a decision on the latest times to include.
    4. Occasionally there are random, correlated fluctuations in the data, which means I have to suss out whether the signal I'm fitting is actually real.

    For all these reasons, fitting correlation functions feels more like an art than a science. Fortunately we have cross-checks, and there is even some work on automating this process. But unfortunately nobody will trust your automated techniques unless you fit all the data by hand too.


    [1] https://en.wikipedia.org/wiki/Correlation_function_(quantum_field_theory)
    [2] https://en.wikipedia.org/wiki/Lattice_QCD#Fermions_on_the_lattice
    [3] Quick example (just imagine adding noise): https://www.wolframalpha.com/input/?i=plot+y+%3D+1+e%5E%28-+0.5x+%29+-++2+e%5E%28-x%29+%2B+0.5+e%5E%28-3x%29+from+x+%3D+0+to+x+%3D+20
    [4] More realistic example (fig 1 & the last few pages): https://arxiv.org/abs/2011.12166

    4 votes
  16. Comment on Weekly US politics news and updates thread - week of January 25 in ~news

    psi
    (edited )
    Link Parent

    Impeaching a former president might be untrodden ground, but impeaching a former official is not. [1] (And honestly, impeachments are rare enough that they're all unprecedented in some way or another.) Regardless, I think Senators should vote their conscience here and let the courts debate the constitutionality of the matter -- they're better suited to settle that dispute. Or at the very least, Senators should wait until after arguments have been made at the impeachment trial.

    But mostly I don't take their concerns in good faith. I'm speculating here (and maybe I'm too optimistic), but I believe there are enough Republican Senators who privately believe Trump committed an impeachable offense. However, they fear the political backlash for voting to convict, so now they're looking for whatever flimsy excuse they can find to weasel their way out of a vote.


    [1] https://en.wikipedia.org/wiki/William_W._Belknap#House_corruption_investigation,_resignation_and_impeachment_(1876)

    7 votes
  17. Comment on How do you manage data backups? in ~tech

    psi
    Link

    I use BorgBackup [1] with rsync.net [2] for daily/weekly/monthly snapshots. As a plus, the snapshots are also encrypted.

    I think I'm paying $18/yr for 100 GB of storage.
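    For concreteness, a daily/weekly/monthly retention policy maps onto borg's prune flags roughly as follows (the repo path and source directories are placeholders, not my actual setup):

```shell
# Hypothetical nightly backup job -- REPO and the paths are placeholders.
REPO="user@usw-s000.rsync.net:backups/borg"

# Create a new encrypted, deduplicated snapshot named after host and date.
borg create --stats "$REPO::{hostname}-{now:%Y-%m-%d}" ~/documents ~/projects

# Thin out old snapshots: keep 7 daily, 4 weekly, and 6 monthly archives.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
```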


    [1] https://github.com/borgbackup/borg
    [2] https://www.rsync.net/products/borg.html (Note that there's special pricing available for borg/technical users.)

    2 votes
  18. Comment on I'm getting spammed by robocalls, what can I do about it? in ~talk

    psi
    Link Parent

    There is something immensely satisfying about making robocallers talk to a robot.

    1 vote
  19. Comment on Daily thread - United States 2021 transition of power - January 22 in ~news

  20. Comment on What's something you wish people outside of your field knew/understood? in ~talk

    psi
    (edited )
    Link Parent

    What I am claiming is that even with such a powerful computer, there are phenomena that are not constructible just from the "basic rules" of QCD alone, for example.

    Hmm, that seems like a stronger statement than mine, and I'm not sure that I'd agree with it. I think most physicists probably assume there is some minimum formulation of physics such that, with the right initial conditions, you could eventually reproduce the richness of the universe (assuming you had a Turing machine with infinite tape/time etc; in practice this would be futile). To give a more specific example: symmetries of a classical field theory can be broken in the corresponding quantum field theory (these are known as quantum anomalies). In fact, I think the examples Anderson lists work better as examples of emergence. The closing paragraph puts it nicely.

    In closing, I offer two examples from economics of what I hope to have said. Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920's sums it up even more clearly:

    FITZGERALD: The rich are different from us.

    HEMINGWAY: Yes, they have more money.

    If there isn't some minimum formulation of physics from which all else follows, then that would mean something like the following.


    Alternate hypothesis 1: the universe is not fundamentally mathematical and any attempts to describe it mathematically cannot work.

    Physics is, at its core, about creating models to explain physical phenomena. And while it might seem inconceivable to separate math from physics, it's worth noting that until Galileo pronounced that the language of nature is mathematics, people generally didn't think about physics mathematically.

    And maybe there's some good reason to think the universe can't be described by math: the lattice QCD calculations I mentioned above require months of computational time, and they simulate a "universe" only about 10 times larger than a proton. Nevertheless, the universe performs similarly difficult calculations at every point simultaneously, each of which depends on the values of the quantum fields at every other point. And unless the universe really is a simulation, there is no physical stratum (eg, HDD, RAM, etc) on which the universe runs these calculations; the universe simply evolves.

    In some sense, I think this is the largest unanswerable question in physics. Yes, we can understand how phenomena emerge from principles; but why do those principles exist to begin with?


    Alternate hypothesis 2: there exist many different fundamental regimes.

    Let's continue with the "universe as a simulation" thought experiment. If you were to build a universe from scratch, and you wanted to simulate some particular critter (say, ants), do you really need general relativity? Sure, maybe you need general relativity sometimes -- that seems to be important for the large scale structure of the universe; and sometimes human beings like to talk about that; and sometimes when human beings are talking about general relativity, they tend to be a bit less considerate of the little critters moseying about below.

    But when we're thinking about the physical effect of general relativity on an ant, it is completely negligible; Newton's laws will suffice. One could imagine that there exist multiple regimes: in some, general relativity perfectly describes (non-quantum) gravity, while in others its effects are "shut off" and Newton's laws perfectly describe (non-quantum) gravity instead. And you could apply similar cutoffs to other fields of science: perhaps at some scale quantum effects "turn off" and animals obey purely chemical/biological axioms.


    Summary

    Of these two alternate hypotheses (and there could be more), I find the first more appealing than the second. But regardless, I think any alternate explanation for emergence (ie, why simple rules can generate complex patterns) would be less intuitive than the more straightforward explanation (sometimes simple rules have hidden secrets).

    2 votes