A glowing and extremely uncritical article on FHI by someone who would seem to be a longtermist themselves (or at least their byline mentions writing a book about it). It mentions The Email only to gloss over it, with a few paragraphs on how what Bostrom wrote isn't what he meant, somehow. Which, sure, maybe, but it comes with the implication that this is the only real criticism of the whole endeavour, aside from existing philosophy departments that didn't "get it." Though the article characterises Émile Torres merely as a critic who did nothing but turn up emails that make FHI look bad, it may be worth reading some of their material on the FHI and associated ideologies. The following covers the FHI specifically, though other articles discuss longtermist ideas in more detail (and in a more serious way): https://xriskology.substack.com/p/the-future-of-the-future-of-humanity
Also as an academic I find the narrative around that aspect of the story relatively cringey - like, you want to use the facilities, funding, and prestige of a university but also not work within the organisational structure of it at all? Do they really think that they are the only people in academia considering big questions, doing big things? Come on.
It took me five minutes of just looking them up to see all the things I consider red flags, and I haven't even read your link yet.
Nick has a history of racism: while he apologized for using the N-word, he has specifically not apologized for believing in the intellectual superiority of white people, and has dodged the question since.
"Effective altruism"
Musk
"Darling of tech bros"
Focusing on "rationality" as if emotions are useless.
All of this is just a guy waving a giant red flag for me.
So then I read your link.
And the racism is even worse, and the irony of being unable to prevent your own downfall is poignant.
Indeed, one of the most striking features of the FHI/TESCREAL literature is the conspicuous absence of virtually any reference whatsoever to what the future could or, more importantly, should look like from the perspectives of Indigenous communities, Afrofuturism, feminism, Queerness, Disability, Islam, and other non-Western perspectives and thought traditions—or even the natural world. Heck, Bostrom’s colleague William MacAskill even argues in What We Owe the Future that our ruthless destruction of the environment might be net positive: wild animals suffer, so the fewer wild animals there are, the less wild-animal suffering there will be. It is, therefore, not even clear that nonhuman organisms have a future in the techno-utopian world of TESCREALism.
It's like when you only have people in the room that think that white people and "rationality" are the ideal, you sort of miss the philosophy of the rest of the world.
I have liked the broad idea of transhumanism but haven't been engaged with any sort of actual movement (I have friends who self-implant RFID chips or do body mods, but that's it), and I'm annoyed, though not surprised, to find it's also doing eugenic shit.
This article discusses the creation, history, and closure of the Future of Humanity Institute, in the Department of Philosophy at Oxford University.
On April 16, 2024, the website of the Future of Humanity Institute was replaced by a simple landing page and a four-paragraph statement. The institute had closed down, the statement explained, after 19 years. It briefly sketched the institute’s history, appraised its record, and referred to “increasing administrative headwinds” blowing from the University of Oxford’s Faculty of Philosophy, in which it had been housed.
Thus died one of the quirkiest and most ambitious academic institutes in the world. FHI’s mission had been to study humanity’s big-picture questions: our direst perils, our range of potential destinies, our unknown unknowns. Its researchers were among the first to usher concepts such as superintelligent AI into academic journals and bestseller lists alike, and to speak about them before such groups as the United Nations.
To its many fans, the closure of FHI was startling. This group of polymaths and eccentrics, led by the visionary philosopher Nick Bostrom, had seeded entire fields of study, alerted the world to grave dangers, and made academia’s boldest attempts to see into the far future. But not everyone agreed with its prognostications. And, among insiders, the institute was regarded as needlessly difficult to deal with — perhaps to its own ruin. In fact, the one thing FHI had not foreseen, its detractors quipped, was its own demise.
Why would the university shutter such an influential institute? And, to invert the organization’s forward-looking mission: how should we — and our descendants — look back on FHI? For an institute that set out to find answers, FHI left the curious with a lot of questions.
Tl;dr...
It seems Nick Bostrom, for all his visionary futurism and ability to secure outside funding, created a think-tank that wasn't a good fit for a mature university with rigid disciplinary boundaries and protocols. He and others in the organization also got tangled in Effective Altruism and AI hype.
I'm not deeply versed in the FHI-sponsored literature of existential risk, but they seem to have missed some nearer-term catastrophic ones, like lack of capacity to manage increasingly complex systems; unrestrained laissez-faire capitalism and all its externalities; creation of societies hostile to safe human reproduction and child rearing; and widespread existential despair.
Nonetheless, the Future of Humanity Institute brought a number of intellectual luminaries together for productive cross-disciplinary collaboration, and had enormous impact on our current policy and commercial environment.