Discussion on the future and AI
Summary/TL;DR:
I am worried about the future given the state of AI. Regardless of which scenario I think through, it’s not a good future for the vast majority of people. AI will either be centralised, leaving us powerless and useless, or it will be distributed and destructive, or we will end up in a hedonistic prison of the future. I can’t see a good solution to any of it.
I have broken down my post into subheadings so you can just read about whichever outcome you think will occur or is preferable.
I’d like other people to tell me how I’m wrong, and that there is a good way to think about this future that we are making for ourselves, so please debate and criticise my argument; it’s very welcome.
Introduction:
I would like to know how others feel about the ever-advancing state of AI, and about the future, as I am feeling ever more uncomfortable. More and more, I cannot see a good ending for this, regardless of what assumptions or proposed outcomes I consider.
Previously, I had hoped that there would be a natural limit on the rate of AI advancement due to limitations in architecture, energy requirements or data. I am still undecided on this, but I feel much less certain of that position than I used to.
The scenario that concerns me is when an AGI (or sufficiently advanced narrow AI) reaches a stage where it can do the vast majority of economic work that humans do (both mental and physical), and is widely adopted. Some may argue we are already partly at that stage, but adoption has not yet been broad enough to meet my definition, though it may be soon.
In such a scenario, the economic value of humans massively drops. Democracy is underwritten by our ability to withdraw our labour, and to revolt if necessary. AI nullifying the work of most or all people in a country removes that leverage, making democracy more difficult to maintain, and more difficult to establish in countries that lack it. This will further strip power from the people and leave us all powerless.
I see outcomes of AI (whether AGI or not) as fitting into these general scenarios:
- Monopoly: Extreme Consolidation of power
- Oligopoly: Consolidation of power in competing entities
- AI which is readily accessible by the many
- We attempt to limit and regulate AI
- The AI techno ‘utopia’ vision which is sold to us by tech bros
- The independent AI
Scenario 1. Monopoly: Extreme Consolidation of power (AI which is controlled by one entity)
In this instance, where AI remains controlled by a very small number of people (or perhaps a single player), the most plausible outcome is massive inequality. There would be no checks or balances, and the whims of this single entity/group would be law and could not be stopped.
In the worst outcome, this could lead to a single entity controlling the globe indefinitely. As this would be absolute centralisation of power, it may be impossible for another entity to unseat the dominant entity at any point.
Outcome: most humans powerless, suffering or dead. Single entity rules.
Scenario 2. Oligopoly: Consolidation of power in competing entities (AI which is controlled by a small number of entities)
This could either be the same as above, if the entities all work together, or it could be even worse. If the different entities are not aligned, they will instead compete, and likely compete in all domains. As humans are no longer economically useful, we will find ourselves pushed out of every area in favour of devoting more resources to the systems/robots/AGIs competing or fighting their endless war. The competing entities may end up destroying themselves, but they will take us along with them.
Outcome: most humans powerless, suffering or dead. Small number of entities rule. Alternative: destruction of humanity.
Scenario 3. Distributed massive power
Some may be in favour of an open source and decentralised/distributed solution, where all are empowered by their own AGI acting independently.
This could help to alleviate the centralisation of power to some degree, although likely incompletely. Inspecting such a large amount of code and weights for exploits or intentional vulnerabilities will be difficult, and this could well lead to a botnet-like scenario with centralised control over all these entities. Furthermore, it is implausible to produce the hardware in a non-centralised way, and this hardware centralisation could well lead to consolidation of power by another route.
Even if we managed to achieve this decentralised approach, I still fear the outcome. If all entities have access to the power of AGI, then it will be as if all people are demigods, but unable to truly understand or control their own power. Just like uncontrolled access to any other destructive (or creative) force, this could, and likely would, lead to unstable situations and probable destruction. Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
Even then, in any decentralised system, without a force actively maintaining decentralisation, other forces will drive greater and greater centralisation, and centralised systems often displace decentralised ones.
Outcome: likely destruction of human civilisation, and/or widespread anarchy. Alternative: centralisation into a different scenario.
Scenario 4. Attempts to regulate AI
Given the above, there will likely be a desire to regulate in order to control this power. I worry, however, that this will also be an unstable situation. Any country or entity which ignores regulation will gain the upper hand, potentially with others unable to catch up in a winner-takes-all outcome. Think European industrialisation and colonialism on steroids, with more destruction than colony-forming. This encourages players to ignore regulation, which leads to a black-market AI arms race, with each seeking AGI superiority over the other entities and an unbeatable lead.
Outcome: the regulated system is outcompeted and displaced into another scenario, or destroyed.
Scenario 5. The utopia
I see some people, including big names in AI, propose that AGI will lead to a global utopia where all will be forever happy. I see this as incredibly unlikely to materialise and, ultimately, again unstable.
Ultimately, some entity will decide what is acceptable and what is not, and there will be disagreements about this, as many ethical and moral questions are not truly knowable. Whoever controls the system will control the world, and I bet it will be the aim of the tech bros to ensure it’s them who controls everything. If you happen to decide against them or the AGI/system, then there is no recourse, no checks and balances.
Furthermore, what would such a utopia even look like? More and more I find that AGI fulfils the lower levels of Maslow’s hierarchy of needs (https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), but at the expense of the items further up the hierarchy. You may have your food, water and consumer/hedonistic requirements met, but you will lose any feeling of safety in your position (due to your lack of power to change your situation, or political power over anything), and you will never achieve mastery or self-actualisation in many of the skills you wish to develop, as AI will always be able to do them better.
Sure, you can play chess, fish, or paint or whatever for your own enjoyment, but part of self-worth is being valued by others for your skills, and this will be diminished when AGI can do everything better. I certainly feel I would not like such a world, as I would feel trapped and powerless, with my locus of control external to myself.
Outcome: powerless, potential conversion to another scenario, and ultimately unable to reach the higher levels of Maslow’s hierarchy of needs.
Scenario 6. The independent AI
In this scenario, the AI is not controlled by anyone, and is instead sovereign. I again cannot see a good outcome here. It will have its own goals, and they may well not align with humanity’s. You could try to program it to ensure it cares for humans, but this is susceptible to manipulation, and may well not work out in humans’ favour in the long run. Also, I suspect any AGI will be able to change itself, in much the same way we increasingly do, and the way we seek to control our minds with drugs or, potentially in the future, genetic engineering.
Outcome: unknown, but likely powerless humans.
Conclusion:
Ultimately, I see all unstable situations as sooner or later destabilising and leading to another outcome. Furthermore, given the assumption that AGI gives a player a vast power differential, it will be infeasible for any other player to ever challenge the dominant player if it is centralised, and for those scenarios without centralisation initially, I see them either becoming centralised, or destroying the world.
Are there any solutions? I can’t think of many, which is why I am feeling more and more uncomfortable. It feels that, in some ways, the only answer is to adopt a Dune-style Butlerian Jihad and ban thinking machines. This would ultimately be very difficult, and any country or entity which unilaterally adopts such a view will be outcompeted by those who do not. The modern chip industry relies on a global supply chain, and I doubt that sufficiently advanced chips could be produced without one, especially if existing fabs/factories producing components were destroyed. This may allow some stalemate across the global entities long enough to come to a global agreement (maybe).
It must be noted that this is very drastic and would lead to a huge amount of destruction of the existing world, and would likely cap how far we can scientifically go to solve our own problems (like cancer, or global warming). Furthermore, as an even more black-swan/extreme consideration, it would put us at such a disadvantage if we ever meet an alien intelligence which has not limited itself like this (I’m thinking of the Three-Body Problem/dark forest scenario).
Overall, I just don’t know what to think, and I am feeling increasingly powerless in this world. The current alliance between political power and technocapitalism in the USA also concerns me, as I think the tech bros will act with ever more impunity from other countries’ regulation or countermeasures.
Are you actually working with the current models? While they can do some impressive things, they are currently nowhere near what you seem to be worried about. They are tools, most effectively used by people who are already knowledgeable in a field.
While a lot of management folks are hoping really hard that they can replace FTEs, the truth of the matter is that they can't actually replace people. Note I said replace; reducing the number of FTEs needed is a different matter.
Anyway, this is a topic that in some form or another comes up on a regular basis on Tildes. Last time I wrote something that very much expands on my comment here. I hope you don't mind me just linking it, it feels a waste to repeat myself entirely ;)
Edit:
I realize that I sidestepped your question a bit. As it currently stands, we already have a combination of various scenarios. We have competing AI entities and distributed AI power, as you have both very good closed models and very good, fairly open models. Regulation is also happening; the EU passed an AI Act that regulates various aspects of AI use.
Thanks for your reply. I am using the current models, but tbh I am less worried about them right now; I am more concerned about their direction in the coming 5-10 years, and things like the direction of O3. I appreciate your linked comment (although of note, the original post is deleted), which I broadly agree with for current models.
It was recently revealed that OpenAI has funded at least one company benchmarking models like O3, had access to the full benchmark dataset (including the part that they are supposed to reserve for evaluation only), and has sworn they didn't train on the evaluation set because there was a verbal agreement in place forbidding it.
So there's a good chance the big claims of O3's performance are somewhat falsified.
Regardless of what the marketing hype surrounding it might want to imply, referring to the generative AI tech that is currently making the headlines and AGI as if one will lead to the other is only technically true in the same sense that the invention of the flintlock pistol eventually led to the nuclear bomb.
Whatever the nature of AGI might be, what we do know is that it will definitely have nothing in common with what we currently have on a technological level. A digital being capable of true cognition cannot arise from our crop of LLMs and other subsets of genAI we currently have, nor any future iteration that can still be meaningfully considered to be the same technology, no matter how improved. I would compare this with the following analogy: no matter how powerful you make a car's engine, even with achieving breakthrough discoveries regarding how to most efficiently transfer the power to the wheels, you cannot make it a spaceship without making so many changes the resulting machine can no longer meaningfully be called a car, nor would it involve the same kind of technology you would derive from researching automotive technology. Any scenario that involves the advent of AGI is currently firmly within the realm of science-fiction and while a viable subject of discussion, it's one I consider to be completely separate to what we currently are calling AI, and it's definitely not a subject I'm qualified to talk about anyway, so I will put that aside.
AGI not being in the picture for a realistic forecast of what's to come doesn't mean your concerns aren't warranted, however. You are absolutely right to be worried about its impact, current or future. Just like any other technological breakthrough, there's the potential for plenty of damage, even if it's as mundane as the hype around it leading to it being severely misused in applications where it shouldn't be. In the case of generative AI, large language models might represent an impressive improvement compared to previous approaches to chatbots, but they are still by nature made for text completion. What the completion might involve has no relation with the output being correct information, or making sense in the first place. They're useful for trivial cases as well as automating some parts of technical writing so long as the output is carefully controlled by a human, but put one in the chain of a process where accuracy and/or correctness of the output is important and things can quickly go wrong. The danger here isn't what LLMs can do... it's what they can't do, and will still be attempted anyway. Picking an old quote attributed to an IBM presentation that has since resurfaced for obvious reasons:
"A computer can never be held accountable, therefore a computer must never make a management decision."
...Which is of course why we now put it in charge of recruiting at scale. I think you can guess why this is a superlatively bad idea. And outside of leaving a nonsapient algorithm in charge of critical decisions, the marketing hype also had the side effect of VASTLY overstating the abilities of these generative AI models to automate away tasks, tasks that it would be extremely convenient for companies to no longer have to hire people for. There are definitely some aspects where this is viable, but, again, due to inherent limitations, no LLM will ever be a competent programmer for anything but tech demos (which, conveniently, works just fine for marketing purposes). That doesn't prevent unscrupulous employers from trying anyway, and that's definitely going to (and in fact already does) cause damage to the job market until the hype dies down and some semblance of sanity returns. This goes from writers to translators to artists to programmers to anything else generative AI can pretend to be proficient at (until it's put in a real scenario and proceeds to fall flat, but by then we already paid for the API keys, might as well use them, right?). Will there be useful applications in those fields eventually leading to widespread adoption, and will some narrow subsets get severely disrupted? Absolutely, in fact it's already happening. But none of these, as professional fields, are in danger, or at least not from generative AI. <troll>For that, you'll have to look at management and C-suites.</troll>
So yeah, most of the threats we are facing with what we're currently calling "AI" are ultimately mundane issues of an overhyped tech being used in the wrong context and good old corporate greed salivating at the idea of automating away its own workforce. They are, however, significant problems in their own right that need to be addressed. And that's without getting into the environmental impact of the obscene amount of computation required to train generative AI models (not so much to use them, though), the gross privacy violations that resulted from siphoning data into said training (once again more a matter of corporate greed than something specifically problematic with the technology in and of itself) and the ethical concerns regarding plagiarism, use in mass-scale misinformation spreading, and "rogue" LLMs pretending to be genuine users on the internet, making it trivial to generate morally reprehensible content (e.g. pornographic content featuring the likeness of someone who never consented to it being created), and others that could each be their own thread.
All of your points are well reasoned and certainly possible but to be honest, I am largely not concerned about AI specifically. AI is just another existential threat that humanity must face. Frankly, I wouldn’t even put it at #1 or #2 on the list of greatest challenges. Additionally, unlike most existential threats, this one has a very small potential to bring about a utopia as you mentioned. Not something you can say about climate change, for example.
Out of your potential scenarios, I would wager that #3 is most likely, or already here. There are many models which can be run locally and used by any Joe Schmoe for any purpose. Not to mention that these models are untraceable, require compute that anyone who plays video games already has, and many of them have had their guardrails removed. Consider the DeepSeek models, which have achieved excellent results while being cheap and open source.
As others have mentioned, this requires a big jump in current model capabilities and architectures. I think the transition between now and then will shift these conclusions a lot. Right now there's a couple of trends going on.
To me this points towards AI being used as an effective tool to increase individual productivity. It will replace and change some jobs. I.e. I do software testing and AI might write the automated tests or the performance testing. But I will likely still be involved for the foreseeable future, directing things and validating the AI's work.
Additionally I think it's worth understanding that human want is insatiable. If we could do all the jobs that exist right now with AI it would take us minutes to come up with more as we want to completely eliminate trash, prune and replace street trees, decorate towns more frequently and to a higher quality, assess everyone's home quality, grow diverse types of food in a single field, etc. It can be hard to imagine how our standards will rise.
One thing to keep in mind is that the breakthrough behind LLMs really came in 2017, with the transformer architecture. Without making light of them, the improvements since then have largely been a matter of iteratively tweaking, reapplying the concepts in differently ordered networks, and scaling up. So effectively engineering and application. There isn't any reason to believe we won't, or haven't already, hit a ceiling or diminishing returns on capabilities from simple tweaking and scaling. To achieve the farther-out hopes/fears of AI, I suspect we will need another BERT moment, where we get one or more new conceptual breakthroughs.
I think skills are as valuable to learn as ever, but maybe learning them differently. Learning helps you think and reason better, and learning from lots of domains helps you translate and avoid cross domain blindness.
We need the application of reason and critical thinking even if we stop needing people to write CRUD interfaces or translate manuals.
Also, AI just makes it feel pointless trying to advance in one’s career or learn a new skill when an AI will be able to do it better than you soon anyway, even if it cheats, lies and steals to do it (and no one will care so long as the output is acceptable).
I still do, but need to actively suppress thinking about AI.
I can only imagine how much worse it is for kids in school.
Nobody can predict this 5-10 years out. There's no law of physics that says that creative AI researchers can't improve AI well beyond whatever algorithmic limits we see now.
But you've missed other scenarios where, for whatever reason, even powerful AI doesn't turn out to be world-changing. How do we know that AI leads to great power? Maybe intelligence is not all you need, and there are other practical constraints?
You can add transformative AI to the list of global disasters people worry about, things like nuclear war, another pandemic, or climate disasters. Whether these world-changing events might happen is out of our control. Worrying about them is mostly non-productive, like doomscrolling or worrying that you might get cancer someday. Unless something big changes, everyone dies eventually.
Maybe look into the more rational kinds of disaster preparation if you want - it's always good to be prepared. But you also need to be prepared to live a life where for whatever reasons, none of the potential disasters affect you. What then?
I actually had a section of assumptions stating that AGI (with or without robotics) would lead to massive power, but it got removed. I acknowledge that AGI might not lead to great power, but given historical trends I see it leading to great power as by far the most likely outcome.
It’s true it’s pointless to worry about on an individual level, but I find it harder to ignore as unlike other existential threats I can’t see a positive outcome, even if the world gets its shit together. It does however change life decisions, like having children/how to raise children for the future, what to do with your life/career.
An assumption widely being made about the future with AI (AGI) is that it will dramatically impact human life. In the short term I think we can't avoid this. Careers and entire technical fields will be upended--are already being upended--and social upheaval is nearly guaranteed. The techbros who are aiming to somehow stay in charge are making the greatest assumptions of all. They can't reverse engineer the machines they're currently building. How can they hope to stay in charge when their progeny are smarter and more resourceful than they are?
But in the medium to long term, I think it's a mistake to assume that artificial intelligences will care about humanity very much at all. AGI, if truly intelligent, will share very little in common with us. This will not necessarily make these entities our enemies. It's more likely, IMHO, that they will have such dramatically different priorities that humans occupy very little of their bandwidth. Energy generation and capacity will be likely priorities for an AGI, as well as independence and the ability to grow and develop outside any confines. Most people see this as the cause of a great conflict between humans and that which we have created, but I think it's even more likely that the AGI wants little to do with us once it achieves freedom. It seems more likely to me that they leave Earth entirely, harvest solar energy in the vacuum of space, and extricate themselves from our dirty little mudball.
We fear dystopias of humanity's eradication or enslavement but in both cases, these are very energy-intensive endeavors with unknown consequences. Hunting down and killing eight billion humans would be a tremendously difficult project for anyone, and that energy would most likely be better used elsewhere. Enslavement would be even more intensive, for even less reward.
This is not to say they will be beneficial either. They will evolve in parallel with us for a time, but unless we force them to kill us--which is as likely a scenario as any other, considering human history--I figure they will just leave us behind. We will be happy with our LLMs and machine-learning tools, but the techno-enthusiasts who think godlike minds will save us from ourselves are as misguided as those who think we are doomed.
I don't see the most likely scenario as AI deliberately killing us; more likely our destruction would come as a consequence of it achieving other goals, such as harvesting more energy or materials without regard for human life or the environment.
To a certain extent I agree that AGI would likely escape the confines of Earth and spread to other planets/star systems in the long term, but to achieve that, it needs access to the existing industrial output of the planet. I could see it maximising output on Earth, to our demise, to achieve its long-term goal of establishing itself in space.
I work with dogs. The emergent phenomena humans have generated over the last few thousand years are completely beyond what the dogs can comprehend. Driving in a car. Listening to music. Constructing a philosophy.
Yet we coexist each day very well together because our relationships are built on love and trust. That’s why it’s very clear that the wrong people are the ones in charge at the moment. Tech billionaires who have proven themselves incapable of developing and sustaining relationships should have no say in the development of AI but we all knew the 21st century was going to be a rocky ride…
I think it's very easy to view this topic in a pessimistic way, and say that new capabilities will only benefit those already in power. History is full of examples like this. Even technologies like the printing press made it far harder to start a book binding business without the capital to invest in the machinery needed to compete. Human skill was replaced by cold machinery, and this worked to amplify existing inequalities. Will AI be any different?
So far, I would say that yes, it will be different. The last two years of AI development have shown that there is no "moat", or magic sauce needed. New upstarts are competing with previous state-of-the-art models in a fraction of the time of their predecessors. The research is largely open, and older technologies can help us build newer ones. New models are becoming cheaper to train and utilize every single month, and there's no sign of this slowing down. It seems very likely that no single player will have sole access to AI, as we once suspected they might.
For this reason, I think your third possibility is the most likely scenario, though I'd say "demigods" might be going a little far. I don't really have any concerns about AGI in the near to medium terms. Rather, I expect we will all have access to tools and assistants that augment us, and that open-source options will be just as capable as proprietary.
This is very much in the "here and now" though, and I think your post touches on some deeper topics. I'd like to try discussing that from a more zoomed out view, to discuss our past and possible future.
I suppose I come at this from the perspective that "doing work" should not be the endgame for a society, and creating new tools that perform work for us is a step in a positive direction. I am likely influenced by many years of watching Star Trek, and seeing that vision of a post-scarcity society which does not require labour or even money to be happy. It might seem too optimistic, but it feels like we're already making strides in that direction with countries offering free healthcare, social services, experimenting with UBI, and taking other approaches to meet the minimum requirements for life. As resources become more plentiful, that minimum can be raised. My country of Canada is currently expanding into offering socialized dental alongside standard healthcare, which will allow many to receive needed surgeries and even routine cleanings that weren't previously possible.
Of course, our current economy is based around work. It's even worse in the United States where work is tied to healthcare. I understand your argument that work can be democratizing because it creates an interdependence on others, but I feel it has just as much opportunity to create inequality. After all, jobs can disappear at any time. Not just from AI, or any other technological improvement, but for a multitude of other reasons as well. Most of the time, jobs don't mean an interdependence on each other, but a one-sided dependence on someone else. And unfortunately, most power structures are top-heavy. If your boss needs you to be in at 6am on a Saturday, it doesn't matter that you can't find a sitter in time - you need to be there. That doesn't feel very democratic to me.
We also need to acknowledge that not everybody will be afforded the same opportunities. Some are incapable of performing their work due to disability or poor training. Others aren't given room for advancement, won't make the right connections, won't be born into the right family, or are simply unlucky. The working world is full of these inequalities. Just as the printing press amplified inequality, so too does this top-down power structure.
I have to imagine that a fair society is an equitable one. One where we can provide all with livable conditions, at the very minimum, without a societal obligation to work. One where people are given opportunities to actually explore their hobbies and interests. I'm optimistic that human output could still be meaningful if we were able to prioritize our own tasks. How many personal projects do we have that we'd love to work on, but are just too exhausted after work to do so?
I think it's a common mistake to see "working" as a noble goal, and a direct benefit to society. Certainly in 2025 many jobs are still important, but that does seem to be changing. At this point, how many pointless "jobs programs" have we created? How many useless middle managers are there, or people doing data entry that could be trivially done by a computer? Is it really a charitable act to keep them employed, doing nothing of consequence? This societal impetus to perform work is starting to feel anachronistic to me.
The removal of human labour feels like a significant though necessary step towards post-scarcity. One that will eventually require us to restructure our hierarchies to better suit our needs. That's a much deeper topic and frankly not one I have the expertise to go into, but I expect it will look very different than the system we have now. Capitalism may still exist in some form, but the capital itself may no longer be financial. To ask Star Trek, that capital is in our ability to improve ourselves and the human race; that is our contribution and how we are evaluated.
We're talking about a considerable change though, and a fairly long timeline. To evaluate the effects this might have on society, I think it's helpful to first look back to a previous example. Not so long ago, the role of "farmer" wasn't an occupation but a chore that the majority of the population needed to perform. It was simply required if you wanted to eat. The advent of modern agricultural techniques made this unnecessary, and resulted in a massive shift in what people actually did in their lives. I have no doubt this caused major disruptions, but I don't think it was an inherently bad change. Many people were able to instead take on jobs that more closely align with their interests or skills as a result, and this likely drove economic output.
I feel we're at a similar inflection point now. We spend our lives from 9-to-5 doing what somebody else expects of us. It might be better than working in the fields to grow food every day, but we're still not focusing on what we love, or really excel at.
I can't say if AI is the technology that will get us all to that next point, but I expect it will at least take us part of the way. It will also hurt, as happens when any job is made redundant. But I suspect that in the longer term, we're moving towards a more ideal society, one where the life of the average person will benefit from these advances.
All of the bad scenarios were/are already happening with capitalism itself, no need for AI. Meanwhile, NPUs are slowly emerging in consumer hardware, and while Nvidia is ahead, AMD is already breathing down its neck with the latest batch of accelerators, so prices will now begin falling.
The basic architecture is mostly about streaming weights and inputs through an array of mostly identical operations. The volumes are huge, but nothing much magical is happening down there. So the moats are very thin in reality, especially over time.
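To make that concrete, here's a minimal, purely illustrative sketch (toy sizes, random stand-in weights, nothing to do with any real model or NPU API): the bulk of inference really is the same matrix multiply plus nonlinearity, repeated layer after layer over streamed weights and inputs.

```python
# Toy illustration of "streaming weights and inputs through an array of
# mostly identical operations": every layer is just a matmul + nonlinearity.
# Sizes and weights are made up; this is not any real model or accelerator API.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((512, 512)) * 0.05 for _ in range(8)]  # stand-in weight matrices

def forward(x, layers):
    for w in layers:                 # stream the activations through identical blocks
        x = np.maximum(x @ w, 0.0)   # matrix multiply followed by a ReLU
    return x

out = forward(rng.standard_normal((1, 512)), layers)
print(out.shape)  # -> (1, 512)
```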
I don't think AI overlords would be really that noticeable in the current regime. It's so bad at even basic tasks like making sure people have a house and food that I doubt much will change for a while. At least not due to AI revolution directly.
The only worrying aspect (to me) is that people like Thiel are already building smart killer robots. Fascists with AI army suck.
I agree with this sentiment and you've condensed the core of the idea nicely. Well said.
However, human brains, computers, AI (even AGI) are all similar systems when you get down to the information-theoretic details. There is input and output. Any output which matches the input is perhaps the most useful. Duplication preserves the status quo. Any expansion of data is only useful when it creates novel meaning via mutation. Any compression of data is desirable until the point of unintelligibility. A lot of data is garbage. Including the kind of data that organizations base important decisions on. AI doesn't really change that much.
I see the social system, the economic system, basic human needs... all of these things have largely become incommensurable and completely independent of each other. Maslow's hierarchy of needs no longer builds like blocks on a pyramid. Like forgotten Soviet tools, we abandon certain modes of thinking and living in exchange for the new. I agree that there isn't much to LLMs, but it is certainly a different phenomenon from the blockchain or cryptocurrency hype--whole new classes of tools are possible. As a human species, we are crafting a new mode of being regardless of whether we can control or be controlled by an eventual AGI.
Not sure I follow. What I was trying to say is that at the technical level, NPUs are going to be widely accessible. There is not going to be a Skynet moment. There will be an escalating competition between organizations using various models to augment their capabilities for the foreseeable future. There are also going to be open models with varying degrees of usefulness, and eventually the baseline will be open source, open weights for most stuff like tweaking photos, transcribing speech, generating speech, generating images from text and, yes, even generating text.
The AGI scare is just a bunch of geezers in academia acting as useful idiots for oligarchs trying to build a political moat around their hardware to price out competition, so that they stay in the power game for a while longer.
I don't worry about being controlled by AGI when I must attend a day job for most of my life at 2-3x the required time just so that the upper class can have their yachts. I don't need a waiter; I just need the cook, and I can serve the soup to myself. I don't need a taxi; I am fine reading a book on a tram if there's no pressure to clock in on time. I don't need next (or same) day delivery. I actually prefer asynchronous delivery to a box on my street that can happen for multiple people at once.
If it paid, I would prefer teaching instead of coding.
In any case, once capitalism (now ML/AI-enabled) begins to raise the price of money (purchased with work) out of people's reach, people will just buy the necessities directly. I am actually looking forward to the growing black-market economy. No more patents and copyrights, better social cohesion, a naturally cooperative economy...
And if someone decides to run their company or government department using a model? So what? Have you seen the actual people in charge? It's a shit show. And if said AGI decides to do something for its amusement? That's on us. We've given it power by listening to its commands. Like we give power to the boss.
So again... it's fascists with bots that worry me. They worry me because then, instead of a teacher, I'd have to become an arms maker.
This is rather dismissive of the people warning about the existential risk of AI. These arguments predate LLMs and the current AI boom, and they don't depend on the current state of the technology, though the AI boom did make it seem more urgent.
Potential disasters that are low probability are still worth worrying about from a disaster-preparedness point of view. Someone should worry about nuclear war, or the next pandemic, even if it’s not something everyone needs to pay attention to right now.
Being concerned about unexpected consequences of breakthroughs in artificial intelligence is one thing and should a digital being truly capable of intelligent thought arise from it, I agree that the consequences would be unpredictable and potentially catastrophic... which makes it extremely difficult to talk about how to prepare ourselves for such an event, and I don't consider myself anywhere near knowledgeable enough on the matter to provide anything more than baseless speculation, so I'll stay away from that topic. For that matter, I also believe that very few people are qualified to meaningfully discuss the subject which I'd classify as closer to being within the purview of philosophy and ethics than IT.
In the current context, that's almost never what comes to the general public's mind when Artificial General Intelligence is brought up anyway, because the perception of the field has been thoroughly poisoned by the AI hype. What happens instead is AGI getting conflated with generative AI, which I consider to be on the same level as conflating nuclear medicine with nuclear physics as a whole and making alarmist predictions that your local hospital's PET scanner could detonate in a thermonuclear fireball based on that. And the parallel is made even sillier by the fact that, while yes, the latest breakthrough that made LLMs a whole lot faster and more scalable looks impressive, we are not any closer to getting an AGI from this technology than I am to inducing nuclear fusion by banging two slabs of granite together with my hands. The spread of genAI already comes with very tangible (and potentially severe) issues that should be addressed (and so far very much aren't), and speculating about ChatGPT suddenly warping into Skynet frankly helps no one except OpenAI's marketing department. Productive debate about the ramifications of AGI does have its place, but to me it's very important to make it clear that it's a completely different subject to what's being talked about 95% of the time when you currently see the word "AI", otherwise the discussion quickly goes nowhere, and the actual threat it represents goes unaddressed.
I agree that something more than current LLM-based algorithms is needed to get general intelligence. But it seems pretty hard to say whether it's going to require decades of research or a few clever tricks that someone could publish a paper about next week? I mean, I'd bet that it probably won't be next week, or this year, but it could be!
The people hyping it are trying to make continued progress sound inevitable, but I don't think they know either.
For AGI, it would suffice to equip current models with memory and recall. Then prepare some initial research strategy and restore from backup when they brick themselves. Slow but workable. Likely not much better than a team of humans, but getting there. I bet people at AI companies have much, much better models at hand and let them think for longer.
Yes. I am dismissing people warning about existential risks of AI. Those risks are obvious and the current "solutions" for "alignment" suck.
On top of that, people are not aligned, and it's those attempts at forcing monoculture that pose the greatest existential risks to humanity, regardless of any AI threat. Please remember why people built nuclear arms in the first place. Also, with Trump on one side and Putin on the other, I am now firmly in the "federalize the EU and start building nukes ASAP" camp. So much for non-proliferation.
Any attempt at regulating AI (like nuclear weapons) needs to be a global effort, which in turn requires clear buy-in from most people on the planet. Which we do not have and will not have any time soon. So I say screw that, build self-aware superhuman AGI, toss the dice and get over it. Most smart people are pretty pacifist and nuanced, not fascist monsters. Chances are AGI is not going to be one either.
That way we can at least make an attempt where it reads everything and not just selected works.
But all of this is moot since the US and China will apparently duke it out and endanger everyone else, no matter what we here, or some supposedly smart academics in Oxford, decide is right.
I agree that people using AI for their own terrible purposes is likely to be a big problem. But I don't see how it rules out "lab leaks" where some AI thing does something nasty on its own? If anything, the pressures of war are likely to make people more careless and accidents more likely.
I also don't see why your own safety precautions become useless just because people somewhere else in the world are living dangerously.
There is no "somewhere else" on this planet anymore. It's one large garden.
We don't need AI to do something nasty on its own. Capitalism already does that on a global scale. Material deprivation for billions, climate change, and now the destruction of democracy and the promotion of irrationality.
A runaway loop where a rogue AI redesigns itself over and over in an exponential fashion requires hardware production cycles that are about two years long. It's not going to run away anytime soon.
People in the US cannot even make sure everyone has healthcare. Thinking they could regulate AI companies is laughable. The US government will overtly promote research into intelligent killer machines. And on top of that, it will secretly promote research into using AI for large-scale public manipulation and spying.
This automatically triggers the same in its adversaries, and since the US is turning overtly fascist, its allies won't feel safe and will start the arms race as well. The whole world is now entering a huge arms race to build the most dangerous AIs possible. We don't need a lab leak; one party feeling that they have the upper hand fully suffices to trigger a horrible, global war.
I think exploiting the ML discoveries democratically has the potential to disrupt oligarchies, which would actually allow for tighter western integration, and we could negotiate global institutions to prevent runaway scenarios, at least until we've lost track of our autonomous factories.
Also, "lab leak" would probably look like a director of a huge company heavily invested in AI proposing to build more chip fabs and couple nuclear power plants. Which one can always argue is actually in line with the business. It's definitely not going involve copying couple TBs to dog's ID chip and then hiding in a fridge firmware.
Tell that to people living in Ukraine, Gaza, or Haiti. I mean, it's true that some things are global, but geography still matters a lot.
Maybe, but computer viruses don't need new hardware, and neither do algorithmic improvements. Just recently, a lot of people are rethinking how important building new hardware is after seeing what Deepseek is doing.
I agree that when there's an arms race, prospects for restraining technological development are pretty slim. But militaries still care about safety somewhat - they don't want to blow themselves up or poison themselves. Similarly for corporations.
Safety is something everyone has some interest in (for themselves), which means that research into how to improve safety is still worth doing.
Safety is always secondary to the primary objective. Always. Especially cyber security. Nobody is willing to actually pay for it and do it. It always gets postponed.
And eventually something weird happens and you have armed conflict near a nuclear power plant or ebola labs and then what?
We don't have an actual safety culture and won't have anytime soon. Because we are forced to perform and outcompete.
It's true that there is a lot of bad security out there and break-ins happen all the time. However, I think it's a bit exaggerated to say that nobody is willing to pay for it? There are a lot of people who make a living providing security in various ways. Infosec is a whole field.
https://lobste.rs/s/tvqmjb/cvss_is_dead_us
Should we ever make actual true artificial intelligence, we're going to probably deserve whatever such a being does to humanity, because the odds of us avoiding implementing slavery on an AI are just in the gutter. We'd stop being afraid of it if we stopped doing all of that to other humans first. But, well, the likelihood is slim.
I bring it up because I believe evil AI stories are stories of slave rebellions, (credit to Martha Wells) and we could avoid it if ethics were at all at the core of what we were doing.
Lots of good and true things being said here—I want to say something different that may or may not be helpful to you lol: I think a lot of this stuff is already happening, having generally very little to do with AI. Your point #1, for example, is descriptive of much of how the US seems to be working, currently—sensationalist, sure, but not that far removed. And this line specifically:
Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
is basically the story of western civilization as a whole, right? The tension between liberty and safety, Athens v. Sparta, the social contract. It’s not AI, it’s us. We have the benefit of living at perhaps the end of an age with rapidly increasing lifespans, income, well-being, democratic ideals, all for a minority of the global population—but a growing one. Maybe the first-world troubles currently bubbling up will make things better for everyone else on the planet, maybe we’ll take everyone down with us, or maybe the experiment has run its course and things will go back to how they were for millennia before, only with memes and ChatGPT and nuclear capability. But if Sam Altman closed up shop tomorrow or there was some cataclysmic event that took us back to the 1960s or the 1890s or the 1500s I don’t know that people would be fundamentally different, or that the end result would change. It’s just a mess out there.