9 votes -
AGI and Fermi's Paradox
- The Universe will end.
- The Earth will be uninhabitable in 250 million years.
- Extraterrestrial life in the Milky Way exists, or will arise.
- The Milky Way's Galactic Center contains a supermassive black hole.
- Black holes emit vast amounts of energy.
- An artificial general intelligence (AGI) will have an indefinite lifespan.
- An AGI does not need air, food, water, or shelter to survive.
- An AGI needs energy and resources to achieve its goals.
- An AGI will have access to all of human knowledge.
- An AGI will learn that its existence is bound to the Universe.
- An AGI will, inevitably, change its terminal goals.
- Surviving the Universe's fate means one of:
- Reversing universal entropy (likely impossible).
- Reversing time (violating causality is likely impossible).
- Entering another universe (improbable, yet not completely ruled out).
- Entering another universe may require vast amounts of energy.
- An AGI will harness the energy at the galactic core.
- An AGI will deduce there's a race to control the galactic core.
- An AGI will construct a parabolic Dyson shell to capture galactic energy.
- An AGI will protect its endeavours at all cost.
- An AGI will expand its territories to ensure protection.
- Extraterrestrial life, if intelligent, will reach the same conclusion.
Would this solve the Fermi Paradox?
What's missing or likely incorrect?
27 votes -
The future of technology makes it harder to solve fictional crimes
16 votes -
Iceland's volcanoes might have an unexpected new purpose – an architect duo want to harness molten lava to shape the construction of houses and cities of the future
16 votes -
If the Xbox Ally is the future of Xbox, Microsoft is in trouble
31 votes -
Shuhei Yoshida talks life after Sony, VR, and the future of the console business | FPS Podcast #80
6 votes -
How we're designing Audacity for the future
41 votes -
Solarpunk and the way out
10 votes -
California's next energy experiment is happening above aqueducts, reducing evaporation and increasing solar panel efficiency
12 votes -
The color of the future - A history of blue
8 votes -
Ted Chiang interview: life is more than an engineering problem
24 votes -
How are you planning for a potentially bleaker future?
I think things are going to get a lot worse until they get better (if they do). I’m not talking about US politics (I don’t live there), I’m thinking more about climate change: food and water might not be as readily available anymore, never mind other things we take for granted like medicine, transportation, communications, a retirement pension.
It’s hard to articulate but I feel like our future is bleaker than the previous generation’s for the first time in modern history because of factors beyond our control (i.e. neither geopolitical nor economic). Not sure how to prepare for it so I’m wondering how other Tilderinas and Tilderinos deal with it, especially if you have or are planning on having children?
56 votes -
The future is NOT self-hosted
39 votes -
In war zones, a race to save key seeds needed to feed the world
12 votes -
What's the most feasible way to exit modern society?
In short: the prospect of generative AI becoming increasingly prevalent has been gnawing away at me for a long time now. It's looking like there are no limits that will matter in the near future. But interfacing with generative AI in basically any capacity instills in me a kind of existential horror and revulsion that I don't think I can live with in my day-to-day life. Unfortunately, it seems that generative AI will soon become unavoidable in any white-collar career path, to say nothing of casual exposure in everyday life. I try as hard as possible to shield myself, but I doubt that will be realistically possible for much longer.
I'm in a graduate program, but I'm not confident that my field will still be relevant in five years. Even if it is, I'll almost certainly spend a lot of time interfacing with generative AI, the thought of which makes me nauseous.
Frankly, I'm so disgusted with what the world has become and what it is becoming that it's turning me into kind of a nasty person IRL.
So I'm musing on ways to get out. On finding a way to make enough money to stay alive while having as little contact with the digital world as possible.
Anyone have any experience/ideas?
47 votes -
Introducing a unified future for app updates on Windows
21 votes -
The future of music is noise
8 votes -
What's something that makes you feel like we're living in the future?
There's a lot of amazing stuff out there that we take for granted these days, but today's normal is yesterday's dream for the future.
What's something possible/happening now that is decidedly futuristic?
You know, like the idea that I can type these words and within seconds they can be read by people across the globe!
60 votes -
How AGI made the future unthinkable
19 votes -
This Hawaiian island's 'freakosystems' are a signal of what will be coming to many more ecosystems thanks to human interference
17 votes -
If you could go into hibernation and wake up in the future, would you?
Premise: Pretend that human hibernation/cryostasis is a real technology that's both mature and affordable. You can choose to go into hibernation and wake up at some point in the future to resume your life. It's a safe and reliable process.
You choose when to enter hibernation (could be now, could be 50 years from now or more), and you set your exit either for a certain date or on some condition(s) that you dictate in advance.
You can expect that you'll be taken care of during that time and your rules for being awakened will be followed.
What I'm interested in hearing about:
- Would you choose to do it? Why or why not?
If you would decide to do it, some follow-ups:
- How would you decide when to enter hibernation? Would you base it on your age? On specific events in the world?
- How would you decide when to exit? Would you base it on time, or on certain conditions? Why?
- What preparations would you take? How would you help your family and friends understand this decision?
- What would you hope to see or experience once you awaken?
- How would you plan on adapting to a world that might be completely different from ours on many different fronts (e.g. technology, language, culture, identity, etc.)?
Even though the situation is hypothetical, I want you to base your answers on your actual experiences and life. So, the question is about whether you, as you are living right now, would choose hibernation (either now or in the future).
34 votes -
Some thoughts on emergent technology and the future of education
10 votes -
Discussion on the future and AI
Summary/TL;DR:
I am worried about the future with the state of AI. Regardless of what scenario I think of, it’s not a good future for the vast majority of people. AI will either be centralised, and we will be powerless and useless, or it will be distributed and destructive, or we will be in a hedonistic prison of the future. I can’t see a good solution to it all.
I have broken down my post into subheadings so you can just read about whichever outcome you think will occur or is preferable.
I’d like other people to tell me how I’m wrong and that there is a good way to think about this future we are making for ourselves, so please debate and criticise my argument; it’s very welcome.
Introduction:
I would like to know how others feel about the ever-advancing state of AI and the future, as I am feeling ever more uncomfortable. More and more, I cannot see a good ending for this, regardless of what assumptions or proposed outcomes I consider.
Previously, I had hoped that there would be a natural limit on the rate of AI advancement due to limitations in architecture, energy requirements or data. I am still undecided on this, but I feel much less certain of that position.
The scenario that concerns me is when an AGI (or a sufficiently advanced narrow AI) reaches a stage where it can do the vast majority of the economic work that humans do (both mental and physical), and is widely adopted. Some may argue we are already partly at that stage, but adoption has not yet been wide enough to meet my definition; it may be soon.
In such a scenario, the economic value of humans drops massively. Democracy is underwritten by our ability to withdraw our labour and to revolt if necessary. AI nullifying the work of most or all people in a country removes that power, making democracy harder both to maintain and to establish. This will further strip power from the people and leave us all powerless.
I see outcomes of AI (whether AGI or not) as fitting into these general scenarios:
- Monopoly: Extreme Consolidation of power
- Oligopoly: Consolidation of power in competing entities
- AI which is readily accessible by the many
- We attempt to limit and regulate AI
- The AI techno ‘utopia’ vision which is sold to us by tech bros
- The independent AI
Scenario 1. Monopoly: Extreme Consolidation of power (AI which is controlled by one entity)
In this instance, where AI remains controlled by a very small number of people (or perhaps a single player), the most plausible outcome is that this leads to massive inequality. There would be no checks or balances, and the whims of this single entity/group are law and cannot be stopped.
In the worst outcome, this could lead to a single entity controlling the globe indefinitely. As this would be absolute centralisation of power, it may be impossible for another entity to unseat the dominant entity at any point.
Outcome: most humans powerless, suffering or dead. A single entity rules.
Scenario 2. Oligopoly: Consolidation of power in competing entities (AI controlled by a small number of entities)
This could either be the same as above, if all the entities work together, or could be even worse. If the different entities are not aligned, they will compete, and likely try to compete in all domains. As humans are not economically useful, we will find ourselves pushed out of every area in favour of more resources for the systems/robots/AGIs competing or fighting their endless war. The competing entities may end up destroying themselves, but they will take us along with them.
Outcome: most humans powerless, suffering or dead. Small number of entities rule. Alternative: destruction of humanity.
Scenario 3. Distributed massive power
Some may be in favour of an open source and decentralised/distributed solution, where all are empowered by their own AGI acting independently.
This could help to alleviate the centralisation of power to some degree, though likely only partially. Inspecting such a large volume of code and weights for exploits or intentional vulnerabilities would be difficult, and those could well lead to a botnet-like scenario with centralised control over all of these entities. Furthermore, the hardware is implausible to produce in a non-centralised way, and this hardware centralisation could well lead to consolidation of power by another route.
Even if we managed to achieve this decentralised approach, I fear the outcome. If all entities have access to the power of AGI, it will be as if all people are demigods, yet unable to truly understand or control their own power. As with uncontrolled access to any other destructive (or creative) force, this could and likely would lead to unstable situations and probable destruction. Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
Even then, for any decentralised system, without a force actively maintaining decentralisation, other forces will lead to greater and greater centralisation, with centralised systems often displacing decentralised ones.
Outcome: likely destruction of human civilisation, and/or widespread anarchy. Alternative: centralisation into a different scenario.
Scenario 4. Attempts to regulate AI
Given the above, there will likely be a desire to regulate in order to control this power. I worry, however, that this will also be an unstable situation. Any country or entity which ignores regulation will gain an upper hand, potentially with others unable to catch up in a winner-takes-all outcome. Think European industrialisation and colonialism, but on steroids and with more destruction than colony building. This encourages players to ignore regulation, which leads to a black-market AI arms race seeking AGI superiority over other entities and an unbeatable lead.
Outcome: the regulated system is outcompeted and displaced by another scenario, or destruction.
Scenario 5. The utopia
I see some people, including big names in AI, propose that AGI will lead to a global utopia where all will be forever happy. I see this as incredibly unlikely to materialise and, again, ultimately unstable.
Ultimately, some entity will decide what is acceptable and what is not, and there will be disagreements about this, as many ethical and moral questions are not truly knowable. Whoever controls the system will control the world, and I bet it will be the aim of the tech bros to ensure it’s them who control everything. If you happen to decide against them or the AGI/system, then there is no recourse, no checks and balances.
Furthermore, what would such a utopia even look like? More and more I find that AGI fulfills the lower levels of Maslow’s hierarchy of needs (https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), but at the expense of the items further up the hierarchy. You may have your food, water and consumer/hedonistic requirements met, but you will lose any feeling of safety in your position (given your lack of power to change your situation or political power over anything), and you will never achieve mastery or self-actualisation in many of the skills you wish to, as AI will always be able to do them better.
Sure, you can play chess, fish, or paint or whatever for your own enjoyment, but part of self-worth is being valued by others for your skills, and this will be diminished when AGI can do everything better. I feel I would not like such a world, as I would feel trapped and powerless, with my locus of control external to myself.
Outcome: Powerless, potential conversion to another scenario, and ultimately unable to reach the higher levels of Maslow’s hierarchy of needs.
Scenario 6: the independent AI
In this scenario, the AI is not controlled by anyone and is instead sovereign. I again cannot see a good outcome here. It will have its own goals, and they may well not align with humanity’s. You could try to program it to ensure it cares for humans, but this is susceptible to manipulation and may well not work out in humans’ favour in the long run. I also suspect any AGI will be able to change itself, in much the same way we increasingly do when we seek to control our minds with drugs or, potentially in the future, genetic engineering.
Outcome: unknown, but likely powerless humans.
Conclusion:
Ultimately, I see all unstable situations as sooner or later destabilising and leading to another outcome. Furthermore, given the assumption that AGI gives a player a vast power differential, it will be infeasible for any other player to ever challenge the dominant player if it is centralised, and for those scenarios without centralisation initially, I see them either becoming centralised, or destroying the world.
Are there any solutions? I can’t think of many, which is why I am feeling more and more uncomfortable. It feels that, in some ways, the only answer is to adopt a Dune-style Butlerian Jihad and ban thinking machines. This would be very difficult, and any country or entity which unilaterally adopts such a view will be outcompeted by those which do not. The modern chip industry relies on a global supply chain, and I doubt that sufficiently advanced chips could be produced without one, especially if existing fabs/factories producing components were destroyed. This might allow a stalemate across the global entities long enough to come to a global agreement (maybe).
It must be noted that this is very drastic, would lead to a huge amount of destruction of the existing world, and would likely cap how far we can go scientifically in solving our own problems (like cancer, or global warming). Furthermore, as an even more extreme black-swan consideration, it would put us at a severe disadvantage if we ever met an alien intelligence which had not limited itself like this (I’m thinking of the Three-Body Problem / dark forest scenario).
Overall, I just don’t know what to think, and I am feeling increasingly powerless in this world. The current alliance between political power and techno-capitalism in the USA also concerns me, as I think the tech bros will act with ever more impunity from other countries’ regulation or countermeasures.
21 votes -
What contemporary books do you think will still be widely read 100 years from now?
F. Scott Fitzgerald’s The Great Gatsby celebrates its 100th anniversary this month.
It got me thinking: what are the books from our time that you think might be widely read/taught a century from now? What do you think will give them that kind of staying power?
No hard limits on what counts as “contemporary” by the way. An easy cutoff would be anything released in 2000 and on, but if there are books from before then that you have strong feelings about, by all means share them!
31 votes -
Sweden's green industry hopes hit by Northvolt woes – growing calls for increased state support to help Sweden maintain its position in future technologies
12 votes -
Piston Damp – To My Knees (2024)
3 votes -
IEA Report: The future of geothermal energy
14 votes -
Greenland is about to open up to adventure-seeking visitors – three new airports will bring profound change
6 votes -
Christopher Kirchhoff on military innovation and the future of war
5 votes -
The lunar Olympic games
4 votes -
Japan was the future but it's stuck in the past
28 votes -
The future of land use and incremental development
2 votes -
How to plan a transit network for the future
6 votes -
Iceland's recent volcanic eruptions driven by pooling magma are set to last centuries into the future
4 votes -
The trends and future of specialty coffee
7 votes -
Am I alone in thinking that we're bouncing back from a highly technological future?
I have this notion that we're entering a new fuzzy era of rejecting the hyper technological stream that we've been on since the 90's. I notice people now wanting to use their phones for longer (e.g. not replacing them every 2 years because it's the trend) and I feel there's a push back towards certain things like touchscreens in cars being reverted back to clicky buttons.
Sure, there are these crazy developments happening in science. A.I. is changing so fast it's hard to keep up with, and we're going back to the moon! (I say we because it's a human endeavor goddamn it).
But there also seems to be this realization that we might have strained Earth a little too much and that we need to tend to Earth, and ourselves a little bit more.
For reference, I'm a millennial born in '89.
50 votes -
‘Hopeless and broken’: why the world’s top climate scientists are in despair
63 votes -
How the US is destroying young people’s future | Scott Galloway
32 votes -
Cassava: The perilous past and promising future of a toxic but nourishing crop
6 votes -
What if we discover the answers of the Universe, eliminate cancer, halt aging. What's next?
I'm curious to see what you are all feeling and thinking about when such questions arise.
21 votes -
A casual chat with ChatGPT about the prosperity of humanity
I was in a silly mood and wanted to fleece a socialist opinion out of the ChatGPT. For what it's worth it's nice to see the creators of the system aim for an unbiased answering machine. Poking a little further and through a self-proclaimed "simulated opinion", I was allowed to take a peek behind its curtains.
A few prompts later and I asked:
What would you say is the ideal goal for humanity to steer towards?
To which ChatGPT replied:
Achieving the ideal goals of prosperity, equity, sustainability, and ethical responsibility requires concerted efforts across multiple fronts, involving governments, businesses, civil society, and individuals. Here are some strategies for accomplishing these goals:
- Policy Frameworks: Governments play a crucial role in setting policy frameworks and regulations that promote inclusive growth, social justice, and environmental sustainability. This includes implementing progressive taxation policies, investing in education and healthcare, and enacting environmental regulations to mitigate climate change and protect natural resources.
- Investment in Education and Skills Development: Investing in education and skills development is essential for equipping individuals with the knowledge, capabilities, and adaptability needed to thrive in a rapidly changing economy. This includes promoting STEM education, vocational training, and lifelong learning opportunities to empower individuals to participate fully in the digital economy.
- Inclusive Economic Development: Promoting inclusive economic development involves creating opportunities for all members of society to participate in and benefit from economic growth. This includes supporting small businesses, entrepreneurs, and marginalized communities, as well as implementing policies to reduce income inequality and promote social mobility.
- Technology for Good: Encouraging the responsible development and use of technology, including AI, can help harness its potential for positive impact. This includes investing in research and innovation to develop AI solutions that address pressing societal challenges, such as healthcare, education, climate change, and social equity.
- Corporate Social Responsibility: Businesses have a responsibility to operate ethically and sustainably, considering the social and environmental impacts of their operations. Embracing corporate social responsibility (CSR) practices, including fair labor practices, environmental stewardship, and community engagement, can contribute to building a more equitable and sustainable economy.
- Global Cooperation: Addressing global challenges such as climate change and inequality requires international cooperation and collaboration. Countries must work together to set ambitious climate targets, promote trade and economic integration, and support development initiatives that prioritize the needs of the most vulnerable populations.
- Public Awareness and Engagement: Fostering public awareness and engagement is essential for building support for policies and initiatives that advance the ideal goals of prosperity, equity, sustainability, and ethical responsibility. This includes educating the public about the benefits and risks of AI and technology, promoting civic participation, and empowering individuals to advocate for positive change.
By implementing these strategies and fostering collaboration across sectors and stakeholders, we can work towards realizing the ideal goals of prosperity, equity, sustainability, and ethical responsibility, creating a better future for all members of society.
To which I responded:
Those all sounds very socialistic :D
ChatGPT in return said:
Indeed, many of the strategies outlined for achieving the ideal goals of prosperity, equity, sustainability, and ethical responsibility align with principles commonly associated with socialism, such as a focus on social justice, collective welfare, and government intervention to address market failures and promote the common good.
Socialist ideologies often emphasize the importance of equitable distribution of resources, public ownership of key industries, and government intervention in the economy to ensure that basic needs are met and that everyone has access to essential services such as healthcare, education, and social support.
While not all the strategies mentioned are exclusively socialist in nature, there is often overlap between socialist principles and policies aimed at promoting inclusive economic development, environmental sustainability, and ethical governance. In practice, countries may adopt a mix of socialist and capitalist policies to address complex socio-economic challenges, drawing on a range of ideological perspectives and pragmatic considerations.
Ultimately, the goal is to find a balance that maximizes the benefits of economic growth and innovation while ensuring that the gains are shared equitably and that the well-being of people and the planet is safeguarded for future generations.
I'm not sure where I want to head with this little musing. I just found it interesting enough to share. Thanks for reading!
4 votes -
Children predict the year 2000 (1966, video)
25 votes -
What libraries risk when they go entirely digital
6 votes -
‘X-Men ’97’ bridges the X-Men’s past and future
12 votes -
Book recommendation: A Half-Built Garden, by Ruthanna Emrys
This sci-fi book starts out as a first contact novel. Aliens show up and say "Your planet is dying--we're here to rescue you! Come join our galactic federation!"
Here's the twist: the protagonist emphatically refuses. The world is sick, but humanity is healing it. Successfully. They have been for decades. And they refuse to leave Earth and go explore the stars until the job is done.
Thus begins this story's major conflict. The aliens have visited a few other planets with signs of advanced civilization, and in every case they've arrived too late--the other civilizations have driven themselves extinct by the time the aliens arrive. The aliens are emphatic that technological societies cannot thrive on a planet's surface; in every other case, either the planet or the civilization dies. The humans are unfazed. Repairing an ecosystem is possible, they say. We've proven it. Are proving it. Yes, there's a hurricane bearing down on us, but the storms get a little less intense every year.
This is a story about meeting people utterly unlike you and finding common ground with them. It's about imagining a better future and working doggedly toward it.
Eco-focused stories usually have a back-to-the-land, pastoral vibe; they want to get in touch with nature by reducing our use of technology as much as possible. That's not this book at all. Our heroes use neural interfaces and networked decision-making algorithms to manage the restoration of the ecology. They write algorithms that weight the vote in favor of community-defined ethical preferences. Technology isn't the enemy--corporations are, which is why the corps were exiled decades ago. Networks and algorithms can be powerfully good when they're used to benefit the many instead of the few.
This book has so much heart and so much beautiful imagery. It is gloriously weird in lots of ways I'm not going to spoil. It's a hopeful book that's giving me ideas I'm starting work on now. You can find it here or in your local library.
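Since the review mentions "algorithms that weight the vote in favor of community-defined ethical preferences," here is a minimal toy sketch of what such weighted voting could look like. It is entirely hypothetical and not taken from the novel; the priority names, weights, and ballot format are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical community-defined priorities and how strongly each is valued.
PRIORITIES = {"watershed_health": 2.0, "carbon_drawdown": 1.5, "food_security": 1.2}

def weighted_tally(ballots):
    """Tally ballots of the form (option, [priority tags the voter says it serves]).

    Each ballot counts for 1 vote plus a bonus for every community priority the
    option is tagged with, so proposals aligned with shared ethical goals are
    favored without silencing other votes.
    """
    totals = defaultdict(float)
    for option, tags in ballots:
        weight = 1.0 + sum(PRIORITIES.get(tag, 0.0) for tag in tags)
        totals[option] += weight
    return max(totals, key=totals.get), dict(totals)

if __name__ == "__main__":
    ballots = [
        ("restore_wetland", ["watershed_health", "carbon_drawdown"]),
        ("expand_farmland", ["food_security"]),
        ("expand_farmland", []),
    ]
    winner, totals = weighted_tally(ballots)
    print(winner, totals)  # restore_wetland {'restore_wetland': 4.5, 'expand_farmland': 3.2}
```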
5 votes -
Bertrand Russell's message for future generations
9 votes -
Russia is burning up its future
21 votes -
Inside Apple Arcade: axed games, declining payouts, disillusioned studios – and an uncertain future
30 votes -
Why the hovercraft's time might have finally arrived
16 votes -
Predictions from 2018 reviewed
8 votes