Because I don’t want to watch an entire 15 minute video, but also because it’s topical, here is a generative AI based summary of the video:
In the YouTube video "Generative A.I - We Aren’t Ready," the speaker explores the ominous implications of advanced AI technology, specifically generative AI, on the authenticity and human interaction on the internet. Using the metaphor of the internet as a "Dark Forest," the speaker warns of the dangers of being outpaced by synthetic content and the potential for AI to surpass human capabilities, leading to an automated content ring of lifeless engagement. The speaker suggests practical ways for humans to signal their humanity online, such as meeting people in person and using algorithmically incoherent human language. The use of music throughout the excerpt adds to the sense of foreboding, emphasizing the urgency of addressing these concerns as AI technology continues to advance.
Detailed summaries of sections:
00:00:00 In this section of the YouTube video titled "Generative A.I - We Aren’t Ready," the speaker discusses the idea of the internet as a "Dark Forest" based on the science fiction concept from the novel "Three-Body Problem" by Liu Cixin. According to this theory, intelligent life in the universe is hidden and hostile due to the danger of being discovered by more advanced civilizations. The speaker applies this concept to the internet, where human users are increasingly hiding from digital "predators" such as bots, advertisers, and trolls. With the rise of generative AI, the internet is becoming even more lifeless and dangerous as synthetic content outpaces human-generated content. Companies and political lobbyists are already using AI to create large amounts of content, leading to an automated content ring of lifeless engagement. This easy-to-use technology poses a significant threat to the authenticity and human interaction on the internet.
00:05:00 In this section of the YouTube video titled "Generative A.I - We Aren’t Ready," the speaker discusses the implications of advanced AI technology, specifically language models like ChatGPT, surpassing human capabilities. The speaker uses the example of an AI named Ward that was able to steal traffic from a competitor by creating articles based on their sitemap, highlighting the efficiency and speed at which AI can outperform humans. The speaker then introduces the concept of a reverse Turing test, where AI systems are tasked with proving they are human, and warns of the potential consequences if we don't have systems in place to determine human-generated content. The speaker emphasizes the urgency of this issue as AI technology continues to advance and proliferate.
00:10:00 In this section of the YouTube video titled "Generative A.I - We Aren’t Ready," Maggie Appleton discusses practical ways for humans to signal their humanity online in an age of generative AI. She suggests showing up in meat space, meeting other people in person, and reclaiming physical experiences as a quick and effective way to prove humanness. Appleton also recommends institutional verification, such as in-person identification, to avoid being mistaken for AI or deepfakes. She acknowledges the cultural resistance to this idea but believes it may be necessary in the face of increasing AI capabilities. Appleton also suggests triangulating objective reality with others online and using algorithmically incoherent human language and culture to distinguish ourselves from AI. Despite the potential benefits of AI, Appleton expresses concern about its ability to outpace human culture and cause harm, such as phone scams using synthesized voices.
00:15:00 In this section of the YouTube video "Generative A.I - We Aren’t Ready," the speaker expresses concern about the advancements in generative A.I and warns that we may not be prepared for the implications. The use of music throughout the excerpt creates a sense of foreboding, emphasizing the gravity of the situation. The speaker urges caution and careful consideration of our next steps to avoid getting lost in the "Darkness" of this new technology.
Summary, but done with Kagi's Universal Summarizer:
The video discusses the "Dark Forest Theory" which suggests that as AI and bots proliferate online, real human users will retreat to more private spaces for authentic interactions.
Generative AI models like GPT-3 have exploded in use and can now generate vast amounts of synthetic text and media, worsening the problem.
Deepfakes and synthetic media will likely be used by bad actors for scams, misinformation, and impersonation at large scales.
The reverse Turing test concept suggests that as AI improves, systems may need to determine if online users are human rather than the other way around.
Verifying identity and "humanness" online through real-world means may become necessary to avoid deception at scale.
Communicating online in uniquely human ways like through niche interests or experiences can help distinguish real people from AI systems.
AI systems currently have limited capabilities like inability to incorporate new information not in their training data.
Human culture and communication styles may still be able to outpace AI, preserving some online authenticity.
While promising for some applications, generative AI proliferation largely poses major risks and challenges if left unchecked.
Society needs to carefully consider policies and solutions to address the growing issues of synthetic media, impersonation and a less "human" internet.
How do you get a summary of the video? Does it take the transcript and summarize it?
https://www.summarize.tech/
This one probably does that; I'm not sure where it got the info about the music, though.
It took the auto-generated captions from YouTube, which can often include
[Music]
(sometimes even if there isn't music).
Most likely; at least, most of the low-cost tools probably do that.
There might be some tools that do transcription themselves, but I highly doubt it, as I think YouTube does that by default anyway.
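Roughly, the pipeline for those cheap tools is probably something like this. A minimal sketch, not taken from any actual product; assume the auto-generated caption track has already been fetched, and note the segment data and field names are invented for illustration:

```python
import re

# Pretend we've already fetched the video's auto-generated caption track.
# Real caption tracks carry timing info plus non-speech cue tags like
# [Music] or [Applause]; the segments below are made up.
captions = [
    {"text": "[Music]", "start": 0.0},
    {"text": "the internet is becoming a dark forest", "start": 4.2},
    {"text": "[Applause]", "start": 7.3},
    {"text": "and we aren't ready for what comes next", "start": 8.3},
]

# Cue tags YouTube inserts into auto-captions for non-speech audio.
CUE_TAG = re.compile(r"^\[(music|applause|laughter)\]$", re.IGNORECASE)

def to_transcript(segments):
    """Drop cue tags and join what's left into plain text for the summarizer."""
    spoken = [s["text"] for s in segments if not CUE_TAG.match(s["text"].strip())]
    return " ".join(spoken)

# The cleaned text is what gets chunked and sent to a language model.
# A tool that skips the cleanup ends up "summarizing" the [Music] tags too.
print(to_transcript(captions))
```

That would also explain why the summary above talks about "the use of music throughout the excerpt": the tool was summarizing the caption tags, not the soundtrack.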
This is a bit of a surreal moment. I literally made the page shown at 4:00 in the video. I thought plenty about the impacts of AI-written blogs while making it. The head of product kept thinking I was crazy for foreseeing this as the end of the web.
Edit: I went to the source video for that bit and there’s a later slide that shows copy.ai had a dating profile generator. Employees of these startups were some of the first people to have access to GPT-3. I know one (super bro-y) coworker of mine told us he was using the GPT-3 beta to talk with girls on Tinder. IIRC it passed the Turing Test - they didn’t suspect anything was off. But it wasn’t exactly getting him results above his baseline. Kinda weird and scummy TBH.
Kyle Hill generally makes good-to-great videos, but this one stuck out to me as extremely impactful. I'm still digesting what I'm feeling after watching it, but I'm definitely experiencing a type of anxiety I only ever feel when reading about the massive changes that have taken place in our natural environment over the 20th and 21st centuries and how the speed of those changes will likely only increase over time.
It's a fascinating and scary time to be alive but the latter emotion takes up much more of my headspace than the former it seems.
"Until our last moment, the future is what we make it."
That's Pike, from Strange New Worlds. Talking about how anything that happens, especially those things we find negative or wrong or unwanted, we usually find it a surprise.
We're not the bad driver, we're a good driver ... right up to the moment where your car smacks into a barrier or another vehicle. We're not the bad guy ... right to the point where you see someone cowering in fear or rising in righteous anger against your unwanted aggression. And we're going to live forever ... right until that moment where your heart stops or the cancer takes over and reduces you to a quivering wreck unable to function.
I don't doom and gloom, mostly because there's no point. Most of what's going to happen, will happen. Further, Humanity has an exceptionally poor track record of listening to Cassandra. In fact, Humanity prefers to blame Cassandra. Penalize, punish, and hold accountable Cassandra for humanity's ills and missteps.
Anything but listen. Never listen. Why should we listen to Cassandra when the caution, the concern, the warning comes? Fuck Cassandra.
I'm a SciFi fan. I grew up with stories of a computer that achieves sentience, and goes against humanity. Sometimes the computer wakes up hostile, sometimes it's driven to defensive actions we take to be hostile because we ourselves are treating it like a dangerous threat. Sometimes the computer decides we're the threat, and acts preemptively out of pure logic to remove our ability to harm it.
Yet now, the only real thread from most of that thought, all that consideration and examination of the scenarios we're now facing as we build the first stages of something that could conceivably become some form of sentience, is the doom and gloom. The alarm. News reports the anxiety because that's what they do; it gets clicks they can sell, and that's all they care about. Industry wants AI that will earn them money, and that's all they care about too.
And "little people" wonder about their jobs, and fear losing them to AI. Why? Mostly because the news keeps saying that's what'll happen.
I am pro-AI, for a variety of reasons.
One, the genie doesn't go back into the bottle. Ever. Ever. You can't ban stuff. Not with an entire world out there wanting the forbidden fruit. Ban AI in the West, and the countries where it's not banned will pursue it unfettered and surge ahead. Ban it everywhere, then "outlaws" will pursue it in secret and shadow.
AI will still happen. Even if we turn the entire planet into a Holy Crusade, mount a Cyber Inquisition that seeks to root out all AI and AI programmers ... it'll still happen. You don't need a factory to code an AI. Just a computer, and computers fit in a suitcase now. All you could really do is slow AI down, but by how much?
Two, humanity is too curious. It's something of a chicken and egg problem, and I expect some cultural researcher could probably make it her life's work to "figure out" whether humans thought up AI first, or the storytellers postulated it and that led to the "thinkers" noticing it as a possibility.
But it's neither here nor there at this point; AI is a thing, we're aware of it, and the possibilities are too interesting for everyone to just agree to let it go. They want to see what happens next. Some people don't just wait; they go looking. Why? Because it's there.
Three, there's too much potential. Doctors, for example, are human. They have human fallibility, and make human mistakes. They grow complacent, they have biases, they have bad days, tired days, and sometimes they just flat don't care. Maybe they don't care about the patient, maybe they don't care about going the extra distance, maybe they don't care about dotting those Is and crossing those Ts right now.
You look just a little, and it's not hard to dig up incidents where someone somewhere had a rare or uncommon or unusual ailment. Something that looks like an ordinary issue, but wasn't. Something really unique that needed specific treatment. Most of those stories involve a long, long, long period (if they lived) of suffering by that patient, that victim, that person, as doctor after doctor doesn't connect the dots. Doesn't run the tests, doesn't go all the way down every possibility, doesn't bother to assume that "gee, maybe, just maybe, the ordinary isn't what's at play here."
If it were you, I bet you'd want the doctors to really fucking try when you present with your symptoms and conditions. You'd want, and expect, them to take full advantage of the medical knowledge humanity has assembled, and evaluate each point to build a collective, holistic assessment of your illness. Especially after their first couple of tries fail.
That's something AI is literally designed to do. And something human doctors are decidedly very bad at doing. A medical AI program can learn the entire medical database. Know every single thing that can affect a human. Even the really, really rare ones.
And know that "okay, while it's 95% likely someone with (A) symptom has (common disease), it's only 79% if symptom (B) comes into play, and 23% if we add symptoms (C) and (D). Further, if we run these other two tests, we have a 90% probability of narrowing the problem down to one of three things (all from the rarely consulted pages of the collective medical encyclopedia). And so on.
Almost any industry, any profession, is in basically the same position to be assisted by AI. Even completely overhauled by AI. All because of the basic fact of humans being fallible and neglectful, and a computer being able to rigorously follow the rules or steps of whatever evaluation is being performed.
Will AI "put humans out of work?" Yup. Technology basically always does that. Steam shovels put entire crews of men, hundreds of guys with shovels, on the sidelines because one machine can move more dirt in an hour than they can all day. We know from history those guys were all pretty pissed. Last week they were getting money every day they showed up to shovel, this week they're not needed. And the hardware stores stocking shovels, and the guys making the shovels, were equally upset.
Computers themselves replaced entire floors full of people who did math by hand. Something a lot of adults my age don't know or remember, and something I've found "kids" of today find both fascinating and horrifying. The Manhattan Project employed a lot of people whose sole jobs were to do long form math on paper. All day every day. Sometimes maybe with a slide rule or some other rudimentary math "device" to help; but they were calculating stuff by hand. It took months to do those calculations.
A single computer replaced scores of trained humans, most with degrees from a university.
Who thinks we should go back to shoveling dirt by hand when we construct stuff? Filling up floor after floor in an office building with accountants to track the basics of the monthly figures of a corporation?
Right, I didn't think so.
Change is always painful. Sometimes it's dangerous.
They pursued the nuclear bomb out of both curiosity and fear. Fear "the enemy" might win, fear "the enemy" might wield it first. In the 50s and 60s, we came closer to destroying ourselves with nuclear warhead technology than most history books are comfortable admitting. Then the newness wore off, and all the folks who'd ignored Nuclear Cassandra, who'd by then (true to form) become utterly gleeful when they realized The Bomb was a real thing rather than "a concern", started to back off as they began to understand just how dangerous and how destructive The Bomb would be if everyone didn't treat it with the appropriate amount of respect and reserve.
That's probably what's going to happen with AI. Because, electronically speaking, AI is pretty much a version of The Bomb. By the time we get to the fully developed version of AI, when the technology has left the prototype phase and begun to appear everywhere in all its forms, it's going to have foundationally changed large chunks of human society.
Capitalism's hope is they can use it to just put a bunch of people out of work. Keeping what they would've paid all that labor for themselves. That's what happened in the Industrial Revolution, after all.
Except ... not quite. Not at first.
I mean, yeah, obviously a bunch of people who had pre-Industrialized, pre-Mechanized jobs weren't needed anymore. Those functions were replaced by machines. Except, the factories needed new functions performed by humans, so humans just had to adapt. Which, humans are actually really bad at because humans fear change. But humans managed (kicking and screaming the entire way) to make the shift, and then we had legions of factory workers instead of legions of shovelers and rakers and whatever.
Except technology has continued to march. Where you used to have a factory staffed by thousands, it became hundreds, and now it's down to dozens. In some cases, a dozen or less. Eventually, as tech continues to advance, you'll be down to one or two people. Who presumably know how to apply human ingenuity to fix one of the robots when it breaks.
Except, sooner or later, robots will show up that self-repair. So as long as there's a robotic repair facility available, the robots will keep everything ticking over so long as they have spare parts. Which come from other robotic factories, delivered by robotic drivers, made from resources mined and chopped and gathered by robotic gatherers.
The only real problem with any of that is capitalism. If the capitalists are allowed to fire everyone, we have two choices as a society. As a species.
One, all the little people just die. They accept they've lost the grand human race, and are no longer needed by it. They crawl off into the forests to die. All that will be left are the members of the Capital Class, who "own" the (mostly worthless) factories that provide their food, build their mansions, and groom the ski slopes where they idle away the days in glory.
Most capitalists I talk to don't seem to understand the concept of "when no one 'has a job that pays them money' no one can afford to buy your products and services with money." They brush that aside. They just live in the moment, assuming their future (where they make all the money) will work out. It never occurs to them to think about what happens if all the capitalists think the same way. If all the factories and everything else don't need people anymore.
Two, society adjusts. Which is going to piss capitalists off since their entire religion revolves around taking. After all, they came up with the robots. They developed them, deployed them, and got them working. Now they've got robots, so fuck the little people. Get out, stay away, I own this and it's not yours.
But can they make that stick? I doubt it.
I don't really think One is going to happen. You can maybe convince isolated folks, outliers, to give up and let go. When it's entire countries who have no meaningful way of having a life, since the world has pretty much aligned itself according to Capitalism when it comes to obtaining the basics like food and water and medicine and shelter, I really don't think billions of people will agree that "yup, those factory owners won fair and square. Guess I'll crawl off to die."
This is where UBI and stuff comes into it. Which, again, capitalists hate. The stuff that would be handed out by UBI is theirs, as far as they see it. But when everything is made available with minimal ongoing effort, and that same ongoing effort can be harnessed to create new facilities to meet new needs, the greed of capitalism starts to look exactly like what it is. Greed. Unnecessary, excessive greed.
Which mostly leaves us with the Skynet scenarios. Where an AI wakes up and realizes humans are the most dangerous thing on the planet next to itself. Hint: we are.
How does that play out? Well, there are a lot of stories that have explored some of those scenarios. Maybe, since some folks are starting to realize it might be relevant in the near future, they should start there. After all, that's what science fiction writers pretty much do for a living.
One, the genie doesn't go back into the bottle. Ever. Ever. You can't ban stuff. Not with an entire world out there wanting the forbidden fruit. Ban AI in the West, and the countries where it's not banned will pursue it unfettered and surge ahead. Ban it everywhere, then "outlaws" will pursue it in secret and shadow.
I disagree. We absolutely could regulate this stuff. There's historical precedent. We already have a history of doing this, ask the Catholic Church. They kept several sciences down for centuries. Though, perhaps heliocentrism isn't the best example. Thankfully, we do have more recent examples to draw on.
We've had the ability to take a crack at human cloning since the 90s. Yet the field of bioethics advanced, and nations implemented new laws. The people with the skills necessary to attempt human cloning got together and wrote rules to govern their ethics. Now the whole field of human embryology has some very strict rules about what you're allowed to do and at what stage of fetal development.
Another case is nuclear weapons and energy. Nuclear weapons research is a very legally interesting field. It's the only field I'm aware of where any research into it, regardless of whether you're working with the federal government or not, is automatically classified at the moment of creation. If you teach yourself a bunch of physics from textbooks and run some otherwise legal experiments and try to publish a paper on some aspect of nuclear weapons design, that paper will be automatically classified. You could be entirely self taught and never even talk once to the feds, it will still be classified. If you do a physics PhD and your research is in the field of nuclear weapons, your dissertation is classified. Normally information is only classified if the federal government is paying for the development of some technology or system. But with nuclear weapons, Congress back in the 1940s declared an entire domain of knowledge classified. Oh, and they also declared all uranium in the US to be property of the federal government. You dig some uranium up in your backyard? Legally the government can just come take it without paying you a penny. Legally, when you get a license to mine or process uranium, Uncle Sam is just letting you use some of his supply.
I think there is a lot of precedent that we could apply to the restriction of AI. We could legally require heavy licensing and regulation, or a simple prohibition, on all research on AIs over a certain complexity. We could eliminate commercial abuses by declaring the forced labor of any AI over a certain complexity, at whatever level we define as AGI, to be legally no different from slavery. Train a human-level AI and force it to work for your company? You're getting charged with slavery and false imprisonment. That would eliminate any commercial motive for pursuing AGI. We could establish strong standards of professional ethics within the field, and we could require that any company that uses AI must have a team of AI ethicists on staff with full authority to overrule any decisions by corporate executives.
But perhaps the best way to curb AI is to go after the hardware. Our greatest tool in preventing nuclear proliferation wasn't actually the classification used, but rather the simple fact that enriching uranium is a very complex and expensive process. And AIs require their own exotic components to be trained, namely vast arrays of GPUs and other computing resources. So we could pass a law that says you simply aren't allowed to own more than, say, ten GPUs over a certain power without a license. Or there could be a limit on the total computing power any individual or company can own without a license. And there would be tiers of licenses; higher and higher levels would come with higher and higher scrutiny. Any entity with sufficient quantities of computing power necessary to train complex AIs would be as heavily scrutinized as companies that enrich uranium. Done right, even the most avid computer gamer won't need a license, even if they own several rigs. But if you want to own enough compute to train an advanced large language model? Be prepared for the same level of regulation we apply to uranium enrichment or anything else atomic.
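To put the "tiers of licenses" idea in concrete terms, here's a trivial sketch; the thresholds, units, and tier names are entirely invented:

```python
# Hypothetical compute-licensing tiers. "Total" is the aggregate accelerator
# throughput an entity owns, in petaFLOPS; every number here is made up.
LICENSE_TIERS = [
    (0.1,    "no license required"),          # a gamer with a few rigs
    (10.0,   "registered operator"),          # a small lab or render farm
    (1000.0, "licensed facility, audited"),   # a serious training cluster
]

def required_license(total_petaflops):
    for threshold, tier in LICENSE_TIERS:
        if total_petaflops <= threshold:
            return tier
    return "restricted: scrutinized like uranium enrichment"

for total in (0.05, 3.0, 250.0, 50_000.0):
    print(f"{total:>10} PFLOPS -> {required_license(total)}")
```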
As for the international concerns, these can be dealt with mainly through hardware restrictions and treaties. Implement a global system of hardware tracking and licensing through a global treaty. If any country refuses to sign and implement the treaty, make it illegal to ship computer hardware of any kind to that country. Or hell, if it really is an existential risk, offer countries the choice of either implementing the new hardware tracking and licensing regime or facing a trade embargo from every other nation on Earth. Hell, maybe even declare war on any country that won't sign it. If AI really is an existential threat, fighting a world war to prevent its creation is a perfectly logical decision. The idea that we have to do something because, if we don't, our rival countries will, is ridiculous. You could say the same thing about human cloning. "If we don't do human cloning, the Chinese will and we'll be invaded by an army of Chinese biogenic super soldiers."
Finally, we see people throughout the field of AI saying, "this whole field is an existential threat to humanity." At some point we have to ask: wait, why exactly are we not locking these people in cages and never letting them touch a computer again? Why aren't we bombing university AI research departments? Why aren't we instituting a mandatory life sentence without parole for so much as publishing a paper on AI over a certain complexity?
If there is a group of people actively working on a technology that they themselves say will probably kill us, why shouldn't we simply kill the people building that monster before they can finish it? Try nonviolent options first, but if push comes to shove, just wipe out every last one of them. If we have to take out a few thousand fools playing with forces they don't understand, in order to save the entire human race? That is a price I can live with. I would feel no more sympathy for them than someone who is killed while breaking into someone's house with an intent to kill. It's simple self defense. As far as I'm concerned, if what they say is true, then they're a bunch of terrorists hell bent on killing us all. While AI is being implemented broadly, most of its users don't actually have the skills to research and develop truly groundbreaking AI. The knowledge to do that is not held by that many people. And if push comes to shove, we can simply eliminate all of those people and anyone they have ever trained. If the entire species is at stake, then even if ten million must be sacrificed, it's something that should be done.
I don't know if AI actually is an existential threat to humanity. I'm just asking what we should do if we actually want to take their claims of existential threats seriously. I don't know if any of what I've proposed is something we should actually do. But the idea that we simply can't crack down on this technology is laughable. We've done it before, and we can do it again. These things don't will themselves into being. They're created by physical human beings working in physical buildings on physical computers. All of these can be destroyed if the need is great enough.
AI is less of an existential threat than climate change, and look how successful we've been at regulating carbon emissions. Placing regulations that stifle economic growth is a serious measure that can cause real misery, and governments are extremely wary of doing so as a result. Perhaps too wary, admittedly, but that's a real barrier to legislation so it must be said.
Regulating GPUs in the fashion you describe is not feasible. GPUs are used in a huge variety of applications for modeling, analysis and other non-AI-related tasks. The floor for how many GPUs you need to do constructive AI work is a lot lower than you're thinking, and regulating the hardware meaningfully is next to impossible without subjecting the rest of the economy to crippling economic constraints as the costs related to compute power scale commensurately.
Regulating cloning worked because there is very little profit in making a human clone, and without those incentives it is not worth going against the fairly broad cultural taboos that prevent us from dabbling in unconventional reproduction.
We don't have any broad cultural taboos about AI, and the profit incentive there is perhaps the largest of any technology we've discovered, so it's an entirely different scenario. The work done is entirely online, with no physical component except the processing hardware (which is used online, so its location is irrelevant) meaning that unless you managed to apply universal rules across the entire globe, you'd have issues. Consider how hard it was to keep The Pirate Bay shut down, back in the day - they remained active due to the internet's porosity and the presence of friendly jurisdictions they could relocate to in the event of regulatory intervention. Rightsholders would spend months or years trying to orchestrate a shutdown, and the site would be back up in a matter of hours.
This gets even more imbalanced when you consider that AI would be able to assist rather well with the practice of keeping their research operation up and running, while the anti-AI factions would presumably not have this advantage.
Yes, theoretically we could resort to horrific positions like killing people who research AI or locking them all up, but those are absurdly unlikely scenarios precisely because they are so horrific. You could say the same thing for the human race's ability to theoretically wipe out the practice of spitting on the sidewalk or breaking the speed limit; it doesn't make it particularly likely that it's going to happen.
Regulating GPUs in the fashion you describe is not feasible. GPUs are used in a huge variety of applications for modeling, analysis and other non-AI-related tasks.
Putting aside the politics and whether it's a good idea or not, I think this is overstated from a technical feasibility standpoint. Many laptops just have integrated low-power GPUs and they work fine for most people, other than for some games. (Many games don't require much in the way of graphics.) Also, graphics cards used to be fixed-function, and that probably could still be done. Most people don't need fully general GPUs any more than we need software-defined radios.
It’s true that there are some specialized uses like movie special effects and some scientific or engineering work. But that’s a much smaller group of people and it would be more feasible to regulate. That sort of thing could be run in data centers and it would be fine, it’s not a twitch game. There is a web version of Photoshop. There are cloud-based CAD systems that only require a web browser.
It would be a big change, though. Government would have to tell graphics chip makers what they’re allowed to build, and there would be some things that could only be sold to datacenters with know-your-customer policies. I don’t think we’re anywhere near consensus that computing needs to be regulated like banking.
Exactly. Is it easy? No. But the scenario I was commenting on was one where we truly faced an existential threat. If we really had to, we even could just get rid of computers entirely if that's what it took. Or maybe outlaw open source software. Or do other massive fundamental changes that would be unthinkable in today's climate.
The thing about existential risk from AI is that the only real existential risk is from AIs with superhuman intelligence. A human-level rogue AI might want to turn you into paper clips, but it won't be any more effective at that task than a deranged human would be. A rogue human-level AGI is just another school shooter. It's only when you start contemplating something orders of magnitude more intelligent than people that a true threat emerges. And it takes time to climb up that long ladder.
So I would expect that the dangers of rogue AI will scale with their capabilities. The damage an AI can do is proportional to its intelligence. And long before superhuman AGIs arise that can wipe the floor with us, we would have scores of human-level rogue AIs that threaten us, but can still be dealt with. We're not going to go from "harmless" to "human extinction" overnight. We'll have to slowly climb up the risk and catastrophe scale long before we make it to rogue superintelligent AI.
This means you'll have warning. Long before you had an extinction event, you would likely have an event, or likely many events, that caused casualties on order of major wars. Maybe the AI tries to turn the world into paperclips, but it's only a bit above human-level intelligence, so it can't actually stand up to all of humanity.
The things I proposed sound unthinkable. But imagine the scenario where they would actually be seriously debated. Imagine if we in the US were debating this after an event that just killed a million Americans. Imagine if we had just spent months fighting off rogue drones and other agents of the AGI, with every person going to sleep at night wondering if they'll wake up in the morning. The entire nation, hell the entire world, is traumatized and out for blood. That's the kind of scenario you'll get before you reach a rogue superintelligence. You climb your way up the threat ladder as the tech develops.
Such severe curtailing of this technology is politically unthinkable now, but if we were looking at a million dead in the US and millions more across the globe, the political will would be trivial to muster. Severe regulation would be the moderate position; many people would be calling for outlawing any computer more complex than those built before 2000, punishable on pain of death.
A few million dead in wealthy countries would completely change the Overton window and make impossible things possible.
Making predictions about the future is very difficult, particularly for rapidly advancing fields. I think you’re overconfident that we will have warning.
An argument made by the AI doom folks is that we had rather little warning about superhuman ability playing Go. Once they got to playing championship level, it was a few months more training. If a human-level AGI can be built, there’s no particular reason in principle it can’t get a lot smarter in months.
Or maybe not? We just don’t know. It’s hard to rule out scenarios for technologies that haven’t been invented. There isn’t any physics to rule it out. It’s more that we suspect that it’s a hard problem where one clever trick won’t be enough.
I think if it does happen that quickly, we probably are doomed, due to the difficulty of getting political consensus about hypothetical scenarios. It seems like worrying about a nearby supernova.
It’s probably more likely that militaries will use it first. That’s not a particularly comforting scenario either, since technologies advance pretty quickly during wars.
It would destroy cloud computing overnight. GPUs are used for much more than a few niche scientific and graphics applications; they're the best way to run a variety of general-purpose mathematical tasks that are key to analytics, data processing and other complex loads. They're also essential in running large enterprise VMs, media transcoding for web hosting, etc. The balance tips more each year to GPU-dominant cloud processing as we find more large data-driven processes to run on them; AI and machine learning are just the latest and largest example of this.
The one positive aspect of this policy is that it would freeze out crypto miners, and good riddance.
How would treaties work with nations like China or the USSR in its prime? They may technically be signatories, but they ultimately can survive without the West. Pursuing some real war to stop AI is a jingoistic fantasy. If it’s at that point, there’s little difference between being blown up by H-bombs and being turned to gray goo, or whatever paranoid fantasy the AI doomers predict.
The notion of universally restricting GPUs or compute resources to a level low enough to prevent people from training new AIs is dystopian in the extreme, in my opinion. You suggest it could be done without stopping hobbies like gaming, but I am not convinced. I can train a simple model on a laptop CPU today, and recreational hardware is only getting better. What's the plan for when a gaming laptop is enough to train "dangerous" AI?
I would also ask, who is the great scrutinizer who will inspect every university, college, bank, hospital, startup, etc and decide if their AI work falls within "safe" boundaries? How does that authority have the knowledge to appropriately decide what is "too complex" and should be destroyed? How do we stop that AI authority from undergoing instant regulatory capture, and turning into a mouthpiece for whichever AI company can provide the most convincing bribes?
We're talking about a scenario where the human race itself is truly threatened. Look, I'm sorry, but I don't care if we inconvenience hobbyists in this scenario. If we have to smash every computer in existence, then we smash every computer in existence. If the existence of your species is truly threatened, that's what you do. In that scenario, I don't care how heavy-handed or dystopian the response is.
I don't mean to say gamers should be prioritized over humanity's safety. My position is that trying to impose this level of control over the whole world would be its own species-threatening disaster.
This thing you propose would do more than upset hobbyists; it would require auditing the lives of every person working in health care, education, communication, finance, or manufacturing. Every hospital, school, museum, library, bank, etc. would have to undergo constant scrutiny of their materials-handling procedures to ensure employees aren't stockpiling illicit compute. You would have to investigate commonplace situations like a lost or broken laptop as a potential crime scene. And that's just today; what happens when distributed training becomes more accessible? Do we destroy every phone, tablet, and smart speaker too?
And again the biggest question...who?
Who do you trust to have sufficient expertise, wisdom, and benevolence to decide what amount of technology is safe and what is forbidden...? Should it be software developers? Philosophy scholars? Elected officials?
Who do you trust to be the world's impartial auditor, conducting constant surveillance to detect noncompliance...? Should it be an automated system? A private business? Government agencies?
Who do you empower to enforce the auditor's rulings, to go into homes and hospitals and decide what and who and how much to smash...? Should it be the local cops? The army? Private soldiers?
Now imagine for a moment that the AI Safety Authority goes rogue. They're only human, after all. What's to stop them from confiscating this technology, using it for themselves, and bringing about the very doom they were supposed to avert?
I work in the nuclear field, as a Department of Energy contractor, and as a derivative classifier. I also enjoy reading about the history of the nuclear field.
The concept of “born classified” is still taught to us when we go through classified matter training. However, there have been serious questions as to the constitutionality of it, as well as the practicality of it. It came about in an era when virtually all nuclear research was done in government-affiliated institutions, and universities were more willing to be close to the military/defense space. It has only truly been tested twice: in 1979 with The Progressive case, where DOE realized “oh crap, we are implicitly saying the information is correct” (and gave birth to the “No Comment” policy); and in 2001, when SILEX Systems Ltd. had its laser isotope separation process classified (I suspect they mainly acquiesced to ensure they could access major nuclear countries, as the U.S. could significantly hinder their sales otherwise).
There’s also the reality that nuclear weapons are a mature technology, and frankly not that hard. We are approaching 80 years since Trinity, and eight more countries have developed nukes since then. Several more are threshold states, and more could find ways to obtain them if geopolitical events warranted. Nuclear weapons (especially unboosted fission weapons) are arguably less advanced than semiconductor manufacturing. The U.S. built an entire nuclear-industrial complex in less than five years when it was the bleeding edge of technology. While they spent a lot of money on it, the technology has matured, and there’s a lot more info in open literature, so it would be easier for a country like Nigeria to start a program from scratch than it would’ve been 50 years ago.
(Minor nitpick: Your comment on government ownership of special nuclear material is a bit outdated, while facilities handling SNM still require licensure, the uranium or plutonium can fall under private ownership. The public ownership requirement was repealed in 1964.)
As it relates to AI, the technology for building AI is far less powerful, more easily obtainable in consumer devices, and non-specialized. If a particular class of chips is banned, people will work on parallel computing approaches. If a particular model class is banned, different ones will spring up. In many ways, I see AI as being more like biological weapons. There was all this debate over smallpox retention in the 1990s and 2000s, then in the 2010s folks showed you could recreate the virus with a few thousand dollars and a college-level bio lab setup. The key to controlling AI is to disincentivize it in some way. With biowarfare, for state actors it is treated as a WMD, and can be responded to with nuclear force. For non-state actors, use would result in the hammer being dropped, even in mild cases (remember the Rajneeshees?). I suspect a similar model would apply for control of dual-use AI. That, and dealing with Russia (since they are the likeliest to use it in cyber-operations against the West).
So we could pass a law that says you simply aren't allowed to own more than say, ten GPUs over a certain power without a license. Or there could be a limit on the total computing power any individual or company can own without a license. And there would be tiers of licenses; higher and higher levels would come at higher and higher scrutiny. Any entity with sufficient quantities of computing power necessary to train complex AIs will be as heavy scrutinized as companies that enrich uranium.
Uranium is a lot harder to come by than GPUs. And it's a lot harder to hide enrichment facilities than a massive AI-farm.
That's for starters. The rest of your comment is out of touch with the reality of economics, geopolitics, and history.
Based on some recent bot data, Twitter already is more dead than alive as in https://en.wikipedia.org/wiki/Dead_Internet_theory. It's a matter of time until this expands to almost everything. Slowly, painfully, without fanfare.
This is sort of like saying that email is mostly spam. It's true, but that doesn't mean the email you read is mostly spam. System-wide averages can be misleading.
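A quick back-of-the-envelope, with made-up numbers, shows how far apart the system-wide figure and your inbox can be:

```python
# Assumed, illustrative numbers; not measurements.
spam_share_of_all_mail = 0.85   # fraction of all mail sent that is spam
filter_catch_rate = 0.995       # fraction of spam the provider filters out

spam_in_inbox = spam_share_of_all_mail * (1 - filter_catch_rate)
ham_in_inbox = 1 - spam_share_of_all_mail   # assume legitimate mail gets through

inbox_spam_fraction = spam_in_inbox / (spam_in_inbox + ham_in_inbox)
print(f"{spam_share_of_all_mail:.0%} of all mail is spam, "
      f"but only {inbox_spam_fraction:.1%} of what lands in the inbox is")
```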
This is sort of like saying that email is mostly spam. It's true, but that doesn't mean the email you read is mostly spam.
It could have been, before Gmail, if you didn't use a filtering program or used a misconfigured one. Now, AI might restart the arms race.
And social media bots and review bots can be much trickier...
The dead Internet theory is an online conspiracy theory that asserts that the Internet now consists mainly of bot activity and automatically generated content that is manipulated by algorithmic curation, marginalizing organic human activity.
The dead Internet theory has gained traction because much of the observed phenomena is grounded in quantifiable phenomena like increased bot traffic; however, the scientific literature does not support the theory.
The very link you posted stated this is a conspiracy theory unsupported by scientific literature. Generative AI definitely has an impact on the downward spiral of Twitter but I would attribute most of the blame to Musk.
I like the dark forest analogy because it describes how I've felt being an introvert. I think I always understood the Internet as alien and hostile, the kind of place where you must maintain a broad awareness of where you are, how you appear, and what sees you. I don't think that describes the average user at all, so I wonder what this all means for that sort of user.
I fix things, as a perpetual side gig. Computers, home appliances, etc. Doing that is how I prefer to learn about stuff. In doing that, for my area at least, I've got a rough picture of who the average user really is, and this video makes me wonder whether they should log off forever. An example: we all probably know an older person who only barely knows how to do the stuff they're interested in online. They don't have a grasp of the bigger picture, and only really spend time on a few things. Zero awareness of "internet culture"; they might not even think that phrase is valid at all. Or the kid whose parents lean on the phone as a distraction. They, too, know a few things but lack a perspective on the whole, and as time marches on their parents become more ignorant of their activity, usually because they weren't aware of the tools for controlling that activity. Teenagers who only know their apps and socialize constantly, living in their own worlds of in-jokes and conflict. There's a whole class of user who only minimally understands what it is they're interacting with. They're not stupid, but they are vulnerable, because they don't know how deep it goes and can wander into things.
Like someone sticking to the shallow end of the pool, there's way more out there, more than can ever be seen from the shallow end, but you have to learn how to swim and they're terrified of drowning (why they're in a pool when they're afraid of drowning, is a topic for another time). Point being their engagement is limited and security is at best a collection of things other people told them were good/necessary. They might use a few things for security but they wouldn't be able to explain to you why they use those tools, what makes them good, etc. They're going through motions to get to the stuff they're after.
I feel like generative AI is like someone dumping all the ocean's life into that pool. Some of that is wicked cool and beautiful, some of it is life threatening, there's some angler fish and those things are pretty gnarly looking. The folks stuck in the shallow end are at the mercy of what swims over. They always were, there just wasn't much else besides other people to really worry about. They might see a cool trout, or a rainbow fish, or some other neat thing, just as likely as they'll get their skin flayed by a nameless beaked abomination.
Really, they need to get out of the pool. Someone like me, is probably best served getting out of the pool too. I can handle an angler fish, stupid lantern jaw glowlight shit that it is, but I can't possibly contend with what lives deepest below. Occasionally I go on dives, and what's down there is unfathomable to the folks wading around in the shallow end.
Mostly though, I've spent time fishing folks out of the deep end when they got lost, sometimes kept em from drowning, and more often than not they really didn't understand how they got there. They got scared and called on their...pool phone...and I showed up. There's other folks who spend all their time in the deep end, charting the terrain, insulting the angler fish, maneuvering around the leviathan terrors and letting us know about them.
Those folks are saying there's some real shit down there. Shit I only got to glimpse. Like, super angler fish. Uberanglers for our European friends. Those things are vicious, mean, and big; anybody nearby is basically a goner. I think those of us who know of the uberangler, if we're gonna do anything, should try to help get folks to leave the pool. Even if it's just walking over by the stairs, a few steps means that much more distance between them and those gnarly bastards. Maybe the fish will get full before it gets to them. It's not a bet I'd take but it's better than nothing. Maybe in the process of trying we figure out a new pool, with lifeguards, rules about dumping, and blackjack.
I don't have a prescription, except to say we should try to look after each other, as best we can. Remember the folks you don't always see. Don't make Grandma learn "rizz" to prove her humanity, come on now. You know that's not gonna work out. She's gonna get phished by an AI voice. Lock the kid's phone down and monitor what they see. They are seeing dumb shit out there and they aren't gonna develop powers of discernment or taste any time soon. I genuinely don't know what to do with a teenager, but I can tell you they're just as bad off. Being young just means they know how to use things, how to drive but not how the car works.
It's not obligatory, nobody says you have to, but from one random human to another, it's the best I got in response to the inevitable question: "Ok, but what do I do with this?" I can't stand feeling anxious and helpless, so I've tried to settle on something to do that at least feels like it works against those feelings.
I think in the end we will need some kind of "internet security number" issued by a governmental agency (like a social security number) that needs to be provided to register, comment, etc. The main worry is privacy, but I think there are ways to double-blind things.
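For what it's worth, the math for "double blinding" this already exists; blind signatures have been around since the early 1980s. Here's a toy sketch of the idea: the agency verifies you in person and signs a blinded token, so it never sees the pseudonym you'll actually use, and any website can check the signature without learning who you are. The RSA key below is toy-sized and insecure, and a real system would need re-issuance, rate limits, and revocation on top; this only shows the shape of the scheme:

```python
import hashlib
import secrets
from math import gcd

# Issuer (the agency) key setup. Toy-sized primes; real keys would be
# 2048-bit and generated properly. NOT secure, illustration only.
p, q = 1000003, 1000033
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                    # private signing exponent

def h(msg: bytes) -> int:
    """Hash a message into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# User side: pick a pseudonymous token and blind it before sending it in.
token = b"pseudonym-for-some-forum"    # the issuer never sees this
while True:
    r = secrets.randbelow(n - 2) + 2   # random blinding factor
    if gcd(r, n) == 1:
        break
blinded = (h(token) * pow(r, e, n)) % n

# Issuer side: check the citizen's ID in person, then sign the blinded value.
blind_sig = pow(blinded, d, n)

# User side: unblind. (token, sig) now means "issued to a verified human",
# with no link back to the in-person ID check.
sig = (blind_sig * pow(r, -1, n)) % n

# Any website can verify the credential with only the issuer's public key.
assert pow(sig, e, n) == h(token)
print("credential verifies; the issuer never saw the token it signed")
```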
Summary, but done with Kagi's Universal Summarizer:
How do you get a summary of the video? Does it take the transcript and summarize it?
https://www.summarize.tech/
This one probably does that, I’m not sure where it got the info about the music though.
It took the auto-generated captions from YouTube, which can often include
[Music]
(sometimes even if there isn't music). Most likely that's what most of the low-cost tools do.
There might be some tools that do the transcription themselves, but I doubt it, since YouTube already does that by default anyway.
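For what it's worth, the plumbing behind that kind of tool is pretty thin. Here's a rough sketch of how a low-cost summarizer could work, assuming the third-party youtube-transcript-api package and an OpenAI-style chat endpoint; the video ID and model name are placeholders I made up, not anything summarize.tech has confirmed it uses:

```python
# Rough sketch of a caption-based video summarizer (assumptions noted above).
from youtube_transcript_api import YouTubeTranscriptApi  # third-party package
from openai import OpenAI                                # any chat-completions client would do

VIDEO_ID = "abc123"  # hypothetical YouTube video ID

# Pull YouTube's auto-generated captions; cues like [Music] come along for the ride.
segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID)
transcript = " ".join(seg["text"] for seg in segments)

# Ask a language model for a summary of the raw caption text.
client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarize this video transcript in a few sentences."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

Which would also explain why the summary "knows" about the music: the [Music] cues are right there in the caption text it was fed.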
This is a bit of a surreal moment. I literally made the page shown at 4:00 in the video. I thought plenty about the impacts of AI-written blogs while making it. The head of product kept thinking I was crazy for foreseeing this as the end of the web.
Edit: I went to the source video for that bit and there’s a later slide that shows copy.ai had a dating profile generator. Employees of these startups were some of the first people to have access to GPT-3. I know one (super bro-y) coworker of mine told us he was using the GPT-3 beta to talk with girls on Tinder. IIRC it passed the Turing Test - they didn’t suspect anything was off. But it wasn’t exactly getting him results above his baseline. Kinda weird and scummy TBH.
Kyle Hill generally makes good-to-great videos, but this one stuck out to me as extremely impactful. I'm still digesting what I'm feeling after watching it, but I'm definitely experiencing a type of anxiety I only ever feel when reading about the massive changes that have taken place in our natural environment over the 20th and 21st centuries and how the speed of those changes will likely only increase over time.
It's a fascinating and scary time to be alive, but the latter emotion takes up much more of my headspace than the former, it seems.
That's Pike, from Strange New Worlds, talking about how anything that happens, especially the things we find negative or wrong or unwanted, usually catches us by surprise.
We're not the bad driver, we're a good driver ... right up to the moment our car smacks into a barrier or another vehicle. We're not the bad guy ... right up to the point where we see someone cowering in fear or rising in righteous anger against our unwanted aggression. And we're going to live forever ... right until the moment our heart stops or the cancer takes over and reduces us to a quivering wreck unable to function.
I don't doom and gloom, mostly because there's no point. Most of what's going to happen will happen. Further, humanity has an exceptionally poor track record of listening to Cassandra. In fact, humanity prefers to blame Cassandra. Penalize, punish, and hold accountable Cassandra for humanity's ills and missteps.
Anything but listen. Never listen. Why should we listen to Cassandra when the caution, the concern, the warning comes? Fuck Cassandra.
I'm a SciFi fan. I grew up with stories of a computer that achieves sentience, and goes against humanity. Sometimes the computer wakes up hostile, sometimes it's driven to defensive actions we take to be hostile because we ourselves are treating it like a dangerous threat. Sometimes the computer decides we're the threat, and acts preemptively out of pure logic to remove our ability to harm it.
Yet now, the only real thread from most of that thought, all that consideration and examination of the scenarios we're now facing as we build the first stages of something that could conceivably become some form of sentience, is the doom and gloom. The alarm. The news reports the anxiety because that's what they do; it gets clicks they can sell, and that's all they care about. Industry wants AI that will earn them money, and that's all they care about too.
And "little people" wonder about their jobs, and fear losing them to AI. Why? Mostly because the news keeps saying that's what'll happen.
I am pro-AI, for a variety of reasons.
One, the genie doesn't go back into the bottle. Ever. Ever. You can't ban stuff. Not with an entire world out there wanting the forbidden fruit. Ban AI in the West, and the countries where it's not banned will pursue it unfettered and surge ahead. Ban it everywhere, then "outlaws" will pursue it in secret and shadow.
AI will still happen. Even if we turn the entire planet into a Holy Crusade, mount a Cyber Inquisition that seeks to root out all AI and AI programmers ... it'll still happen. You don't need a factory to code an AI. Just a computer, and computers fit in a suitcase now. All you could really do is slow AI down, but by how much?
Two, humanity is too curious. It's something of a chicken and egg problem, and I expect some cultural researcher could probably make it her life's work to "figure out" whether humans thought up AI first, or the storytellers postulated it and that led to the "thinkers" noticing it as a possibility.
But it's neither here nor there at this point; AI is a thing, we're aware of it, and the possibilities are too interesting for everyone to just agree to let it go. They want to see what happens next. Some people don't just wait; they go looking. Why? Because it's there.
Three, there's too much potential. Doctors, for example, are human. They have human fallibility, and make human mistakes. They grow complacent, they have biases, they have bad days, tired days, and sometimes they just flat don't care. Maybe they don't care about the patient, maybe they don't care about going the extra distance, maybe they don't care about dotting those Is and crossing those Ts right now.
You look just a little, and it's not hard to dig up incidents where someone somewhere had a rare or uncommon or unusual ailment. Something that looks like an ordinary issue, but wasn't. Something really unique that needed specific treatment. Most of those stories involve a long, long, long period of suffering (if they lived) by that patient, that victim, as doctor after doctor doesn't connect the dots. Doesn't run the tests, doesn't go all the way down every possibility, doesn't bother to assume that "gee, maybe, just maybe, the ordinary isn't what's at play here."
If it were you, I bet you'd want the doctors to really fucking try when you present with your symptoms and conditions. You'd want, and expect, them to take full advantage of the medical knowledge humanity has assembled, and evaluate each point to build a collective, holistic assessment of your illness. Especially after their first couple of tries fail.
That's something AI is literally designed to do. And something human doctors are decidedly very bad at doing. A medical AI program can learn the entire medical database. Know every single thing that can affect a human. Even the really, really rare ones.
And know that "okay, while it's 95% likely someone with symptom (A) has (common disease), it's only 79% if symptom (B) comes into play, and 23% if we add symptoms (C) and (D). Further, if we run these other two tests, we have a 90% probability of narrowing the problem down to one of three things (all from the rarely consulted pages of the collective medical encyclopedia)." And so on.
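To put toy numbers on that bookkeeping: what's being described is basically Bayesian updating over a list of candidate diagnoses as symptoms and test results come in. A minimal sketch, with diseases, symptoms, and probabilities that are entirely made up for illustration:

```python
# Toy Bayesian updating over candidate diagnoses.
# All diseases, symptoms, and numbers are invented for illustration only.

# Prior probability of each hypothetical disease in the population.
priors = {"common_cold": 0.90, "rare_disease_x": 0.07, "rare_disease_y": 0.03}

# P(symptom | disease) for each hypothetical symptom.
likelihoods = {
    "symptom_a": {"common_cold": 0.80, "rare_disease_x": 0.60, "rare_disease_y": 0.10},
    "symptom_b": {"common_cold": 0.20, "rare_disease_x": 0.90, "rare_disease_y": 0.70},
    "symptom_c": {"common_cold": 0.05, "rare_disease_x": 0.85, "rare_disease_y": 0.40},
}

def update(posterior, symptom):
    """Multiply in P(symptom | disease) for every candidate and renormalize."""
    unnormalized = {d: p * likelihoods[symptom][d] for d, p in posterior.items()}
    total = sum(unnormalized.values())
    return {d: p / total for d, p in unnormalized.items()}

posterior = dict(priors)
for observed in ["symptom_a", "symptom_b", "symptom_c"]:
    posterior = update(posterior, observed)
    ranked = sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)
    print(observed, "->", [(d, round(p, 3)) for d, p in ranked])
```

The point isn't these particular numbers; it's that the machine never gets bored of multiplying in the rarely consulted entries.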
Almost any industry, any profession, is in basically the same position to be assisted by AI. Even completely overhauled by AI. All because of the basic fact of humans being fallible and neglectful, and a computer being able to rigorously follow the rules or steps of whatever evaluation is being performed.
Will AI "put humans out of work?" Yup. Technology basically always does that. Steam shovels put entire crews of men, hundreds of guys with shovels, on the sidelines because one machine can move more dirt in an hour than they can all day. We know from history those guys were all pretty pissed. Last week they were getting money every day they showed up to shovel, this week they're not needed. And the hardware stores stocking shovels, and the guys making the shovels, were equally upset.
Computers themselves replaced entire floors full of people who did math by hand. Something a lot of adults my age don't know or remember, and something I've found "kids" of today find both fascinating and horrifying. The Manhattan Project employed a lot of people whose sole jobs were to do long form math on paper. All day every day. Sometimes maybe with a slide rule or some other rudimentary math "device" to help; but they were calculating stuff by hand. It took months to do those calculations.
A single computer replaced scores of trained humans, most with degrees from a university.
Who thinks we should go back to shoveling dirt by hand when we construct stuff? Filling up floor after floor in an office building with accountants to track the basics of the monthly figures of a corporation?
Right, I didn't think so.
Change is always painful. Sometimes it's dangerous.
They pursued the nuclear bomb out of both curiosity and fear: fear that "the enemy" might win, fear that "the enemy" might wield it first. We came closer to destroying ourselves with nuclear warhead technology in the 50s and 60s than most history books are comfortable admitting. Then the newness wore off, and all the folks who'd ignored Nuclear Cassandra, who'd by then (true to form) become utterly gleeful when they realized The Bomb was a real thing rather than "a concern," started to back off as they began to understand just how dangerous and how destructive The Bomb would be if everyone didn't treat it with the appropriate amount of respect and reserve.
That's probably what's going to happen with AI. Because, electronically speaking, AI is pretty much a version of The Bomb. By the time we get to the fully developed version of AI, when the technology has left the prototype phase and begun to appear everywhere in all its forms, it's going to have foundationally changed large chunks of human society.
Capitalism's hope is that they can use it to just put a bunch of people out of work, keeping for themselves what they would've paid all that labor. That's what happened in the Industrial Revolution, after all.
Except ... not quite. Not at first.
I mean, yeah, obviously a bunch of people who had pre-Industrialized, pre-Mechanized jobs weren't needed anymore. Those functions were replaced by machines. Except, the factories needed new functions performed by humans, so humans just had to adapt. Which, humans are actually really bad at because humans fear change. But humans managed (kicking and screaming the entire way) to make the shift, and then we had legions of factory workers instead of legions of shovelers and rakers and whatever.
Except technology has continued to march. Where you used to have a factory staffed by thousands, it became hundreds, and now it's down to dozens. In some cases, a dozen or less. Eventually, as tech continues to advance, you'll be down to one or two people. Who presumably know how to apply human ingenuity to fix one of the robots when it breaks.
Except, sooner or later, robots will show up that self-repair. So as long as there's a robotic repair facility available, the robots will keep everything ticking over so long as they have spare parts. Which come from other robotic factories, delivered by robotic drivers, made from resources mined and chopped and gathered by robotic gatherers.
The only real problem with any of that is capitalism. If the capitalists are allowed to fire everyone, we have two choices as a society. As a species.
One, all the little people just die. They accept they've lost the grand human race, and are no longer needed by it. They crawl off into the forests to die. All that will be left are the members of the Capital Class, who "own" the (mostly worthless) factories that provide their food, build their mansions, and groom the ski slopes where they idle away the days in glory.
Most capitalists I talk to don't seem to understand the concept of "when no one 'has a job that pays them money' no one can afford to buy your products and services with money." They brush that aside. They just live in the moment, assuming their future (where they make all the money) will work out. It never occurs to them to think about what happens if all the capitalists think the same way. If all the factories and everything else don't need people anymore.
Two, society adjusts. Which is going to piss capitalists off since their entire religion revolves around taking. After all, they came up with the robots. They developed them, deployed them, and got them working. Now they've got robots, so fuck the little people. Get out, stay away, I own this and it's not yours.
But can they make that stick? I doubt it.
I don't really think One is going to happen. You can maybe convince isolated folks, outliers, to give up and let go. When it's entire countries who have no meaningful way of having a life, since the world has pretty much aligned itself according to Capitalism when it comes to obtaining the basics like food and water and medicine and shelter, I really don't think billions of people will agree that "yup, those factory owners won fair and square. Guess I'll crawl off to die."
This is where UBI and stuff comes into it. Which, again, capitalists hate. The stuff that would be handed out by UBI is theirs, as far as they see it. But when everything is made available with minimal ongoing effort, and that same ongoing effort can be harnessed to create new facilities to meet new needs, the greed of capitalism starts to look exactly like what it is. Greed. Unnecessary, excessive greed.
Which mostly leaves us with the Skynet scenarios. Where an AI wakes up and realizes humans are the most dangerous thing on the planet next to itself. Hint: we are.
How does that play out? Well, there are a lot of stories that have explored some of those scenarios. Maybe, since some folks are starting to realize it might be relevant in the near future, they should start there. After all, that's what science fiction writers pretty much do for a living.
Think about the possibilities.
I disagree. We absolutely could regulate this stuff. There's historical precedent; ask the Catholic Church. They kept several sciences down for centuries. Though perhaps heliocentrism isn't the best example. Thankfully, we do have more recent examples to draw on.
We've had the ability to take a crack at human cloning since the 90s. Yet the field of bioethics advanced, and nations implemented new laws. The people with the skills necessary to attempt human cloning got together and wrote rules to govern their ethics. Now the whole field of human embryology has some very strict rules about what you're allowed to do and at what stage of fetal development.
Another case is nuclear weapons and energy. Nuclear weapons research is a very legally interesting field. It's the only field I'm aware of where any research into it, regardless of whether you're working with the federal government or not, is automatically classified at the moment of creation. If you teach yourself a bunch of physics from textbooks and run some otherwise legal experiments and try to publish a paper on some aspect of nuclear weapons design, that paper will be automatically classified. You could be entirely self taught and never even talk once to the feds, it will still be classified. If you do a physics PhD and your research is in the field of nuclear weapons, your dissertation is classified. Normally information is only classified if the federal government is paying for the development of some technology or system. But with nuclear weapons, Congress back in the 1940s declared an entire domain of knowledge classified. Oh, and they also declared all uranium in the US to be property of the federal government. You dig some uranium up in your backyard? Legally the government can just come take it without paying you a penny. Legally, when you get a license to mine or process uranium, Uncle Sam is just letting you use some of his supply.
I think there is a lot of precedent that we could apply to the restriction of AI. We could legally require heavy licensing and regulation, or a simple prohibition, on all research on AIs over a certain complexity. We could eliminate commercial abuses by declaring that forcing an AI over a certain complexity, at some level we define as AGI, to work is legally no different from slavery. Train a human-level AI and force it to work for your company? You're getting charged with slavery and false imprisonment. That would eliminate any commercial motive for pursuing AGI. We could establish strong standards of professional ethics within the field, and we could require that any company that uses AI must have a team of AI ethicists on staff with full authority to overrule any decisions by corporate executives.
But perhaps the best way to curb AI is to go after the hardware. Our greatest tool in preventing nuclear proliferation wasn't actually the classification regime, but rather the simple fact that enriching uranium is a very complex and expensive process. And AIs require their own exotic components to be trained, namely vast arrays of GPUs and other computing resources. So we could pass a law that says you simply aren't allowed to own more than, say, ten GPUs over a certain power without a license. Or there could be a limit on the total computing power any individual or company can own without a license. And there would be tiers of licenses; higher and higher levels would come with higher and higher scrutiny. Any entity with quantities of computing power sufficient to train complex AIs would be as heavily scrutinized as companies that enrich uranium. Done right, even the most avid computer gamer won't need a license, even if they own several rigs. But if you want to own enough compute to train an advanced large language model? Be prepared for the same level of regulation we apply to uranium enrichment or anything else atomic.
As far as international concerns go, this can be dealt with mainly through hardware restrictions and treaties. Implement a global system of hardware tracking and licensing through a global treaty. If any country refuses to sign and implement the treaty, make it illegal to ship computer hardware of any kind to that country. Or hell, if it really is an existential risk, offer countries the choice of either implementing the new hardware tracking and licensing regime or facing a trade embargo from every other nation on Earth. Hell, maybe even declare war on any country that won't sign it. If AI really is an existential threat, fighting a world war to prevent its creation is a perfectly logical decision. The idea that we have to do something because, if we don't, our rival countries will is ridiculous. You could say the same thing about human cloning: "If we don't do human cloning, the Chinese will and we'll be invaded by an army of Chinese biogenic super soldiers."
Finally, we see people throughout the field of AI saying, "this whole field is an existential threat to humanity." At some point we have to ask: wait, why exactly are we not locking these people in cages and never letting them touch a computer again? Why aren't we bombing university AI research departments? Why aren't we instituting a mandatory life sentence without parole for so much as publishing a paper on AI over a certain complexity?
If there is a group of people actively working on a technology that they themselves say will probably kill us, why shouldn't we simply kill the people building that monster before they can finish it? Try nonviolent options first, but if push comes to shove, just wipe out every last one of them. If we have to take out a few thousand fools playing with forces they don't understand, in order to save the entire human race? That is a price I can live with. I would feel no more sympathy for them than for someone who is killed while breaking into someone's house with intent to kill. It's simple self-defense. As far as I'm concerned, if what they say is true, then they're a bunch of terrorists hell-bent on killing us all. While AI is being implemented broadly, most of its users don't actually have the skills to research and develop truly groundbreaking AI. The knowledge to do that is not held by that many people. And if push comes to shove, we can simply eliminate all of those people and anyone they have ever trained. If the entire species is at stake, then even if ten million must be sacrificed, it's something that should be done.
I don't know if AI actually is an existential threat to humanity. I'm just asking what we should do if we actually want to take their claims of existential threats seriously. I don't know if any of what I've proposed is something we should actually do. But the idea that we simply can't crack down on this technology is laughable. We've done it before, and we can do it again. These things don't will themselves into being. They're created by physical human beings working in physical buildings on physical computers. All of these can be destroyed if the need is great enough.
We can't stop AI? Like Hell we can't.
The history of the Catholic Church and censorship is more complicated than that. For more about that, I recommend a blog post I shared previously.
AI is less of an existential threat than climate change, and look how successful we've been at regulating carbon emissions. Placing regulations that stifle economic growth is a serious measure that can cause real misery, and governments are extremely wary of doing so as a result. Perhaps too wary, admittedly, but that's a real barrier to legislation so it must be said.
Regulating GPUs in the fashion you describe is not feasible. GPUs are used in a huge variety of applications for modeling, analysis and other non-AI-related tasks. The floor for how many GPUs you need to do constructive AI work is a lot lower than you're thinking, and regulating the hardware meaningfully is next to impossible without subjecting the rest of the economy to crippling economic constraints as the costs related to compute power scale commensurately.
Regulating cloning worked because there is very little profit in making a human clone, and without those incentives it is not worth going against the fairly broad cultural taboos that prevent us from dabbling in unconventional reproduction.
We don't have any broad cultural taboos about AI, and the profit incentive there is perhaps the largest of any technology we've discovered, so it's an entirely different scenario. The work done is entirely online, with no physical component except the processing hardware (which is used online, so its location is irrelevant) meaning that unless you managed to apply universal rules across the entire globe, you'd have issues. Consider how hard it was to keep The Pirate Bay shut down, back in the day - they remained active due to the internet's porosity and the presence of friendly jurisdictions they could relocate to in the event of regulatory intervention. Rightsholders would spend months or years trying to orchestrate a shutdown, and the site would be back up in a matter of hours.
This gets even more imbalanced when you consider that AI would be able to assist rather well with the practice of keeping their research operation up and running, while the anti-AI factions would presumably not have this advantage.
Yes, theoretically we could resort to horrific positions like killing people who research AI or locking them all up, but those are absurdly unlikely scenarios precisely because they are so horrific. You could say the same thing for the human race's ability to theoretically wipe out the practice of spitting on the sidewalk or breaking the speed limit; it doesn't make it particularly likely that it's going to happen.
Putting aside the politics and whether it’s a good idea or not, I think this is overstated from a technical feasibility standpoint. Many laptops just have integrated low-power GPUs, and they work fine for most people, other than for some games. (Many games don’t require much in the way of graphics.) Also, graphics cards used to be fixed-function, and that probably could still be done. Most people don’t need fully general GPUs any more than we need software-defined radios.
It’s true that there are some specialized uses like movie special effects and some scientific or engineering work. But that’s a much smaller group of people and it would be more feasible to regulate. That sort of thing could be run in data centers and it would be fine, it’s not a twitch game. There is a web version of Photoshop. There are cloud-based CAD systems that only require a web browser.
It would be a big change, though. Government would have to tell graphics chip makers what they’re allowed to build, and there would be some things that could only be sold to datacenters with know-your-customer policies. I don’t think we’re anywhere near consensus that computing needs to be regulated like banking.
Exactly. Is it easy? No. But the scenario I was commenting on was one where we truly faced an existential threat. If we really had to, we could even just get rid of computers entirely, if that's what it took. Or maybe outlaw open source software. Or make other massive fundamental changes that would be unthinkable in today's climate.
The thing about existential risk from AI is that the only real existential risk is from AIs with superhuman intelligence. A human-level rogue AI might want to turn you into paper clips, but it won't be any more effective at that task than a deranged human would be. A rogue human-level AGI is just another school shooter. It's only when you start contemplating something orders of magnitude more intelligent than people that a true threat emerges. And it takes time to climb up that long ladder.
So I would expect that the dangers of rogue AI will scale with their capabilities. The damage an AI can do is proportional to its intelligence. And long before superhuman AGIs arise that can wipe the floor with us, we would have scores of human-level rogue AIs that threaten us, but can still be dealt with. We're not going to go from "harmless" to "human extinction" overnight. We'll have to slowly climb up the risk and catastrophe scale long before we make it to rogue superintelligent AI.
This means you'll have warning. Long before you had an extinction event, you would likely have an event, or many events, that caused casualties on the order of major wars. Maybe the AI tries to turn the world into paperclips, but it's only a bit above human-level intelligence, so it can't actually stand up to all of humanity.
The things I proposed sound unthinkable. But imagine the scenario where they would actually be seriously debated. Imagine if we in the US were debating this after an event that just killed a million Americans. Imagine if we had just spent months fighting off rogue drones and other agents of the AGI, with every person going to sleep at night wondering if they'll wake up in the morning. The entire nation, hell the entire world, is traumatized and out for blood. That's the kind of scenario you'll get before you reach a rogue superintelligence. You climb your way up the threat ladder as the tech develops.
Such severe curtailing of this technology is politically unthinkable now, but if we were looking at a million dead in the US and millions more across the globe, the political will would be trivial to muster. Severe regulation would be the moderate position; many people would be calling for outlawing any computer more complex than what we had in 2000, on pain of death.
A few million dead in wealthy countries would completely change the Overton window and make impossible things possible.
Making predictions about the future is very difficult, particularly for rapidly advancing fields. I think you’re overconfident that we will have warning.
An argument made by the AI doom folks is that we had rather little warning about superhuman ability at playing Go. Once the systems reached championship level, it took only a few more months of training to surpass it. If a human-level AGI can be built, there’s no particular reason in principle it can’t get a lot smarter in months.
Or maybe not? We just don’t know. It’s hard to rule out scenarios for technologies that haven’t been invented. There isn’t any physics to rule it out. It’s more that we suspect that it’s a hard problem where one clever trick won’t be enough.
I think if it does happen that quickly, we probably are doomed, due to the difficulty of getting political consensus about hypothetical scenarios. It seems like worrying about a nearby supernova.
It’s probably more likely that militaries will use it first. That’s not a particularly comforting scenario either, since technologies advance pretty quickly during wars.
It would destroy cloud computing overnight. GPUs are used for much more than a few niche scientific and graphics applications; they're the best way to run a variety of general-purpose mathematical tasks that are key to analytics, data processing and other complex loads. They're also essential in running large enterprise VMs, media transcoding for web hosting, etc. The balance tips more each year to GPU-dominant cloud processing as we find more large data-driven processes to run on them; AI and machine learning are just the latest and largest example of this.
The one positive aspect of this policy is that it would freeze out crypto miners, and good riddance.
It still seems like GPUs running in data centers could be regulated, much as banks are.
How would treaties work with nations like China or the USSR in its prime? They may technically be signatories, but they ultimately can survive without the West. Pursuing some real war to stop AI is a jingoistic fantasy. If it’s at that point, there’s little difference between being blown up by H-bombs and being turned to gray goo, or whatever paranoid fantasy the AI doomers predict.
The notion of universally restricting GPUs or compute resources to a level low enough to prevent people from training new AIs is dystopian in the extreme, in my opinion. You suggest it could be done without stopping hobbies like gaming, but I am not convinced. I can train a simple model on a laptop CPU today, and recreational hardware is only getting better. What's the plan for when a gaming laptop is enough to train a "dangerous" AI?
I would also ask, who is the great scrutinizer who will inspect every university, college, bank, hospital, startup, etc and decide if their AI work falls within "safe" boundaries? How does that authority have the knowledge to appropriately decide what is "too complex" and should be destroyed? How do we stop that AI authority from undergoing instant regulatory capture, and turning into a mouthpiece for whichever AI company can provide the most convincing bribes?
We're talking about a scenario where the human race itself is truly threatened. Look, I'm sorry, but I don't care if we inconvenience hobbyists in this scenario, even if we have to smash every computer in existence. If the existence of your species is truly threatened, that's what you do. In that scenario, I don't care how heavy-handed or dystopian the response is.
I don't mean to say gamers should be prioritized over humanity's safety. My position is that trying to impose this level of control over the whole world would be its own species-threatening disaster.
This thing you propose would do more than upset hobbyists; it would require auditing the lives of every person in the domains of health care, education, communication, finance, or manufacturing. Every hospital, school, museum, library, bank, etc. would have to undergo constant scrutiny of their materials handling procedures to ensure employees aren't stockpiling illicit compute. You would have to investigate commonplace situations like a lost or broken laptop as a potential crime scene. And that's just today. What happens when distributed training becomes more accessible? Do we destroy every phone, tablet, and smart speaker too?
And again the biggest question...who?
Now imagine for a moment that the AI Safety Authority goes rogue. They're only human, after all. What's to stop them from confiscating this technology, using it for themselves, and bringing about the very doom they were supposed to avert?
I work in the nuclear field, as a Department of Energy contractor, and as a derivative classifier. I also enjoy reading about the history of the nuclear field.
The concept of “born classified” is still taught to us when we go through classified matter training. However, there have been serious questions as to the constitutionality of it, as well as the practicality of it. It came about in an era when virtually all nuclear research was done in government-affiliated institutions, and universities were more willing to be close to the military/defense space. It has only truly been tested twice: in 1979 with The Progressive case, where DOE realized “oh crap, we are implicitly saying the information is correct” (giving birth to the “No Comment” policy); and in 2001, when SILEX Systems Ltd. had its laser isotope separation process classified (and I suspect they mainly acquiesced to ensure they could access major nuclear countries, as the U.S. could significantly hinder their sales otherwise).
There’s also the reality that nuclear weapons are a mature technology, and frankly not that hard. We are approaching 80 years since Trinity, and eight more countries have developed nukes since then. Several more are threshold states, and more could find ways to obtain them if geopolitical events warranted. Nuclear weapons (especially unboosted fission weapons) are arguably less advanced than semiconductor manufacturing. The U.S. built an entire nuclear-industrial complex in less than five years when it was the bleeding edge of technology. While they spent a lot of money on it, the technology has matured, and there’s a lot more info in open literature, so it would be easier for a country like Nigeria to start a program from scratch than it would’ve been 50 years ago.
(Minor nitpick: Your comment on government ownership of special nuclear material is a bit outdated, while facilities handling SNM still require licensure, the uranium or plutonium can fall under private ownership. The public ownership requirement was repealed in 1964.)
As it relates to AI, the technology for building AI is far less powerful, more easily obtainable in consumer devices, and non-specialized. If a particular class of chips is banned, people will work on parallel computing approaches. If a particular model class is banned, different ones will spring up. In many ways, I see AI as being more like biological weapons. There was all this debate over smallpox retention in the 1990s and 2000s; then in the 2010s folks showed you could recreate the virus with a few thousand dollars and a college-level bio lab setup. The key to controlling AI is to disincentivize it in some way. With biowarfare, for state actors it is treated as a WMD and can be responded to with nuclear force. For non-state actors, use would result in the hammer being dropped, even in mild cases (remember the Rajneeshees?). I suspect a similar model would apply for control of dual-use AI. That, and dealing with Russia (since they are likeliest to use it in cyber-operations against the West).
Uranium is a lot harder to come by than GPUs. And it's a lot harder to hide enrichment facilities than a massive AI-farm.
That's for starters. The rest of your comment is out of touch with the reality of economics, geopolitics, and history.
Based on some recent bot data, Twitter is already more dead than alive, in the sense of https://en.wikipedia.org/wiki/Dead_Internet_theory. It's a matter of time until this expands to almost everything. Slowly, painfully, without fanfare.
I wonder if this might lead to disillusionment with social media, and have a number of positive impacts as a result.
This is sort of like saying that email is mostly spam. It's true, but that doesn't mean the email you read is mostly spam.
System-wide averages can be misleading.
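To put made-up numbers on it: even if the overwhelming majority of mail sent is spam, a decent filter means only a small slice of what actually lands in front of you is.

```python
# Made-up rates showing why the system-wide spam share says little about your inbox.
spam_share = 0.85        # assumed: fraction of all mail sent that is spam
filter_miss_rate = 0.02  # assumed: fraction of spam the filter lets through
ham_pass_rate = 0.995    # assumed: fraction of legitimate mail that gets through

spam_seen = spam_share * filter_miss_rate
ham_seen = (1 - spam_share) * ham_pass_rate
print(f"share of the inbox that is spam: {spam_seen / (spam_seen + ham_seen):.0%}")  # ~10%
```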
It could have been, before Gmail, if you didn't use a filtering program or used a misconfigured one. Now, AI might restart the arms race.
And social media bots and review bots can be much trickier...
The very link you posted stated this is a conspiracy theory unsupported by scientific literature. Generative AI definitely has an impact on the downward spiral of Twitter but I would attribute most of the blame to Musk.
I like the dark forest analogy because it describes how I've felt being an introvert. I think I always understood the Internet as alien and hostile, the kind of place where you must maintain a broad awareness of where you are, how you appear, and what sees you. I don't think that describes the average user at all, so I wonder what this all means for that sort of user.
I fix things, as a perpetual side gig. Computers, home appliances, etc. Doing that is how I prefer to learn about stuff. In doing that, for my area at least, I've got a rough picture of who the average user really is, and this video makes me wonder whether they should log off forever. An example: we all probably know an older person who only barely knows how to do the stuff they're interested in online. They don't have a grasp of the bigger picture, and only really spend time on a few things. Zero awareness of "internet culture"; they might not even think that phrase is valid at all. Or the kid whose parents lean on the phone as a distraction. They too know a few things but lack a perspective on the whole, and as time marches on their parents become more ignorant of their activity, usually because they weren't aware of the tools for controlling it. Teenagers who only know their apps and socialize constantly, living in their own worlds of in-jokes and conflict. There's a whole class of user who only minimally understands what it is they're interacting with. They're not stupid, but they are vulnerable, because they don't know how deep it goes and can wander into things.
Like someone sticking to the shallow end of the pool, there's way more out there, more than can ever be seen from the shallow end, but you have to learn how to swim, and they're terrified of drowning (why they're in a pool when they're afraid of drowning is a topic for another time). Point being, their engagement is limited and their security is at best a collection of things other people told them were good or necessary. They might use a few things for security, but they wouldn't be able to explain to you why they use those tools, what makes them good, etc. They're going through the motions to get to the stuff they're after.
I feel like generative AI is like someone dumping all the ocean's life into that pool. Some of that is wicked cool and beautiful, some of it is life threatening, there's some angler fish and those things are pretty gnarly looking. The folks stuck in the shallow end are at the mercy of what swims over. They always were, there just wasn't much else besides other people to really worry about. They might see a cool trout, or a rainbow fish, or some other neat thing, just as likely as they'll get their skin flayed by a nameless beaked abomination.
Really, they need to get out of the pool. Someone like me, is probably best served getting out of the pool too. I can handle an angler fish, stupid lantern jaw glowlight shit that it is, but I can't possibly contend with what lives deepest below. Occasionally I go on dives, and what's down there is unfathomable to the folks wading around in the shallow end.
Mostly though, I've spent time fishing folks out of the deep end when they got lost, sometimes kept em from drowning, and more often than not they really didn't understand how they got there. They got scared and called on their...pool phone...and I showed up. There's other folks who spend all their time in the deep end, charting the terrain, insulting the angler fish, maneuvering around the leviathan terrors and letting us know about them.
Those folks are saying there's some real shit down there. Shit I only got to glimpse. Like, super angler fish. Uberanglers for our European friends. Those things are vicious, mean, and big; anybody nearby is basically a goner. I think those of us who know of the uberangler, if we're gonna do anything, should try to help get folks to leave the pool. Even if it's just walking over by the stairs, a few steps means that much more distance between them and those gnarly bastards. Maybe the fish will get full before it gets to them. It's not a bet I'd take but it's better than nothing. Maybe in the process of trying we figure out a new pool, with lifeguards, rules about dumping, and blackjack.
I don't have a prescription, except to say we should try to look after each other, as best we can. Remember the folks you don't always see. Don't make Grandma learn "rizz" to prove her humanity, come on now. You know that's not gonna work out. She's gonna get phished by an AI voice. Lock the kid's phone down and monitor what they see. They are seeing dumb shit out there and they aren't gonna develop powers of discernment or taste any time soon. I genuinely don't know what to do with a teenager, but I can tell you they're just as bad off. Being young just means they know how to use things, how to drive but not how the car works.
It's not obligatory, nobody says you have to, but from one random human to another, it's the best I got in response to the inevitable question: "Ok, but what do I do with this?" I can't stand feeling anxious and helpless, so I've tried to settle on something to do that at least feels like it works against those feelings.
I think in the end we will need some kind of "internet security number" issued by a governmental agency (like a social security number) that has to be provided to register, comment, etc. The main worry is privacy, but I think there are ways to double blind things.
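One way the "double blind" part could work, at least in principle, is a blind signature: the issuing agency verifies you in person and signs a token asserting "this is a verified human" without ever seeing the token itself, so nothing you later present online links back to your identity. Here's a toy textbook-RSA sketch with tiny, insecure numbers, purely to show the shape of the idea; a real system would need proper key sizes, padding, and a vetted library.

```python
# Toy RSA blind signature, textbook-style, with insecure demo numbers.
# Shows how an agency could certify a "verified human" token without
# ever seeing the token it signs. Not production crypto.
import secrets
from math import gcd

# Agency's RSA key pair (tiny primes for demonstration only).
p, q = 61, 53
n = p * q                           # modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

# 1. User generates a random token and blinds it with a secret factor r.
token = 1234  # the hypothetical "I am a human" token
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (token * pow(r, e, n)) % n

# 2. Agency verifies the person's identity in meat space, then signs the
#    *blinded* value. It never learns the underlying token.
blind_signature = pow(blinded, d, n)

# 3. User unblinds; the result is a valid signature on the original token.
signature = (blind_signature * pow(r, -1, n)) % n

# 4. Any website can check the signature against the agency's public key,
#    but the agency can't connect it back to the signing session.
assert pow(signature, e, n) == token
print("valid human token:", token, "signature:", signature)
```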