31 votes

The people do not yearn for automation

37 comments

  1. [7]
    skybrian
    Link
    From the article:

    What I see when I encounter clips like this is the true gap between the tech industry and regular people when it comes to AI — the limit of software brain. Like I said, everyone in tech understands how much regular people dislike AI. What I think they’re missing is why. They think this is a marketing problem. OpenAI just spent $200 million on the TBPN podcast because the company thinks it will help make people like AI more. Sam Altman has said so explicitly:

    Sam Altman: Oh, they are genius marketers and I would love to have better marketing. Somebody said to me recently that if AI were a political candidate, it would be the least popular political candidate in history. And given the amazing things AI can do, I think there’s got to be better marketing for AI.

    It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds.

    You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.

    So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with the structured language of software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.

    Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on. Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data.

    [...]

    But: not everything is a business. Not everything is a loop! The entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them.

    Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko; the lights and shades and climate controls of my house are automated in dozens of ways. But huge companies like Apple, Google and Amazon have struggled for over a decade now to make regular people care about smart home automation at all. And they just don’t.

    AI isn’t going to fix that. Most people are not collecting data about every single thing that they do. And if they’re collecting any at all, it’s stored across lots of different systems — your email in Gmail, your messages in iMessage, your work schedule in Outlook, your workouts in Peloton. Those systems don’t talk to each other and maybe they never will, because there’s no reason for them to. Asking people to connect them all freaks them out.

    25 votes
    1. [2]
      hobbes64
      Link Parent
      The company that I work at has completely succumbed to AI fever. Over the last few months, more and more resources have been diverted to work on it. I’ve tried to raise some red flags about this.

      • There’s lost opportunity in working on this instead of our traditional strengths.
      • We’re being asked to build systems with almost no requirements or use cases from management, so the developers either wing it or invent their own.
      • There’s no evidence our customers want any of this; they may well hate and resent it.

      I guess the company is just afraid competitors will be able to market AI stuff before us. But they don’t know why that would matter. Why would someone want to type a request to a bot rather than click a button in a UI?

      Regarding the software brain and everything being in databases: I noticed some time ago that software models only a little bit of the real world. It tries to solve a real-world problem, but it doesn’t contain all the information, just as a plastic model airplane looks like an airplane but can’t fly or take people anywhere. So when you take that data, which is just a shadow of the world, and feed it to AI, you get results that don’t line up with the real world, and the more you process and bend that data, the less “real” the answer.

      But regarding thinking that you can advertise in a way that makes people like things that are harming them: there’s evidence that this works. Notice how often people vote for politicians who are not going to make things better. A specific example is that people in the US vote for Republicans on the premise that they are better for the economy. There’s ample evidence they are worse, except for the rich. But advertising apparently tricks people into falling for this over and over.

      17 votes
      1. skybrian
        Link Parent
        I see populism working when politicians appeal to peoples’ preexisting prejudices. It’s targeting a weakness. It doesn’t mean advertising is effective for promoting anything you like.

        8 votes
    2. [4]
      chocobean
      Link Parent
      AI does have a marketing problem though. They're spending money and influence on marketing, instead of contributing to the market.

      If these companies’ CEOs were saying “once we get rid of all jobs, we will use our profits to financially support every single person,” we would all be AI champions overnight.

      8 votes
      1. [3]
        skybrian
        (edited )
        Link Parent
        No, because people are not that gullible. Nobody would believe them and they’d be right not to believe a promise about what an AI firm would do in the unlikely event that it had all the money in the world.

        I don’t know how many jobs will be eliminated, but the basic problem is that AI firms aren’t going to earn nearly enough from it to pay the affected workers much, even if they wanted to. That’s not how productivity improvements work. When labor is automated away, prices go down.

        9 votes
        1. [2]
          chocobean
          Link Parent
          Conservative governments promising trickle down are elected in all the time; people can be extremely gullible. You're right we have no reason to believe them. But people will.

          17 votes
          1. skybrian
            (edited )
            Link Parent
            Has that ever happened? Yes, conservatives sometimes win, and sometimes there has been talk about trickle-down theories of economics, but they talk about a lot of other things too. Proving causation, that people actually believed a certain thing and acted on it, seems more difficult.

            5 votes
  2. [24]
    chocobean
    Link
    this violence is unacceptable. If you want to meaningfully oppose AI in a way that lasts, you should speak loudly with your dollars in the market and your attention on the internet, and you should speak loudly with your votes. You should participate in the democratic regulatory and political process.

    This isn't software brain, this is soft brain. Asking individuals to protest with dollars against the handful racing each other to become the first Trillionaire. Our attention on the Internet is packaged, sold, and manipulated for more of those dollars toward the Big T. Our votes have been shown to mean nothing in terms of policy no matter who we vote for, precisely because their dollars, their attention and their votes mean billions of times more than ours.

    We have rags. We have empty bottles. And although they're working on it, we can still afford fuel.

    Violence is unacceptable, sure. There's no good reason to beat up a dog, because dogs are our pets and companions. But there's a good time to put down a rabid dog when it's trying to eat our children's faces. Firefighters have a moral obligation to destroy machines that are crushing a human. If someone set up a bomb on our spaceship, we have every right and duty to dismantle it.

    24 votes
    1. [23]
      unkz
      Link Parent
      … isn’t this just terrorism?

      14 votes
      1. [19]
        chocobean
        Link Parent
        It's not. It's a calculated use of violence to create a targeted and specific sense of fear in specific public figures, without stating any specific political objective. It's older than that. It's what people have always done in the face of tyranny, before we had collective bargaining, before we had a legal system where we agreed everyone is equal before the law, and before we had democracy where the state exists of the people, for the people, by the people. These systems existed for the protection of kings from the anger of the people they oppress, and they took them away. I'm not advocating for it; I'm stating that this is what happens, the way I can state that when you toss a ball into the air it comes down at a certain angle with a certain acceleration.

        16 votes
        1. [7]
          unkz
          Link Parent
          That sounds like every justification from a terrorist I’ve ever read. This violence is ok because it is just, unlike the other guys whose cause is unjust. This violence is ok because the perpetrator is weak and the victim is strong.

          I'm not advocating for it

          It sounds like you are arguing that it is inherently justified.

          13 votes
          1. [6]
            DynamoSunshirt
            Link Parent
            Replace "terrorism" with "war" and "terrorist" with "head of state" and now you're cookin'.

            8 votes
            1. [5]
              unkz
              Link Parent
              What does this have to do with trying to kill the CEO of OpenAI? You’ve gone off on some totally disconnected political tangent.

              5 votes
              1. [4]
                DynamoSunshirt
                Link Parent
                This thread began with you asking "...isn't this just terrorism?"

                The next comment:

                It's what people have always done in the face of tyranny, before we had collective bargaining, before we had a legal system where we agreed everyone is equal before the law, and before we had democracy where the state exists of the people, for the people, by the people.

                Your reply indicates that this is justification for terrorist activity.

                My reply pokes fun at your response, since the same logic applies to states: 'the violence is ok because it is just'.

                If you didn't catch my drift, I don't think your logic checks out. You're trying to say that no matter the circumstances, people should just roll over and accept abuse from the powerful, as long as it falls within the bounds of the law.

                Let's say you're a homosexual, and your state declared homosexuality a crime. The state throws you in prison. Should you just accept it, because that's The Law? Or should you resist and stick up for what's right?

                I'm not saying that violence against the oligarch class is justified. But it feels disingenuous to pretend that you don't understand the argument being made here. And honestly it's pretty dickish to call it a 'totally disconnected political tangent.'

                12 votes
                1. [3]
                  unkz
                  (edited )
                  Link Parent
                  If you didn't catch my drift, I don't think your logic checks out. You're trying to say that no matter the circumstances, people should just roll over and accept abuse from the powerful, as long as it falls within the bounds of the law.

                  I'm saying DON'T MURDER PEOPLE. America is still a functioning democracy. Don't pretend it isn't. Extrajudicial killings are not ever a reasonable option in this time and place. You can play the "what if this was North Korea" game if you want, and I do think North Koreans are in a position where assassinating Kim Jong Un is fair game, but this isn't North Korea. Not even close.

                  I'm not saying that violence against the oligarch class is justified. But it feels disingenuous to pretend that you don't understand the argument being made here. And honestly it's pretty dickish to call it a 'totally disconnected political tangent.'

                  No, I understand exactly what the argument being made here is, and I think that accepting political assassinations is absurd. Full stop.

                  I don't like it when people say one thing, and then turn around and say exactly the other thing. And that's really what you're doing. You're making excuses for people who are trying to murder people.

                  3 votes
                  1. [2]
                    DynamoSunshirt
                    Link Parent
                    Interesting that you bring up North Korea. So you acknowledge there is a place for political violence? I do condemn political assassinations. But I also admit there's a fine line; I understand why Malcolm X tended towards more violent behavior than MLK, for instance. Oppress and mistreat people for long enough, and they become desperate.

                    Oddly I think I agree with you: assassination in North Korea? Fine enough. Assassination in the USA, even in 2026? Not OK.

                    Maybe we're stuck on the nuance of where to draw that fine line? But I'm not sure anyone knows precisely where to draw it. Consider Germany in Hitler's time; with the information of the common person, filtered through the propaganda state, in what month of what year of the Nazi takeover would you change from "assassination wrong" to "ok"? I don't think any of us can draw that line. I certainly can't!

                    So I suppose I'm just saying that I empathize with the desperation of these people. If you think assassination in North Korea is OK, don't you feel the same way?

                    2 votes
                    1. unkz
                      Link Parent
                      • Exemplary

                      Political violence is acceptable only in an environment that lacks political process. These assassins don't lack political process, they just have goals that the majority of the population won't support, so they are going outside the system for a quick fix. Don't like Trump so much that you'll literally throw away the rest of your life? Why not invest the next two years door knocking?

                      In my opinion, that was June 30, 1934 (Night of the long knives) in Germany. Aug. 2, 1934 (declared Führer) is probably a line that almost nobody would disagree with except for maybe some pathological adherents of non-violence who wouldn't even kill in self-defence. Some would say July 14, 1933 (Law Against the Formation of New Parties), but I think the mass execution of political opponents clearly has priority.

                      I empathize with those who suffer under Trump. He's terrible, and he's making the world worse every day. I support their intrinsic right to feel sad. But I absolutely condemn anyone who is trying to murder him, and they should face the harshest imprisonment available. Peaceful transfer of power is one of the, if not the, most important achievements of the modern world.

                      4 votes
        2. [2]
          skybrian
          Link Parent
          The sort of lashing out you’re referencing is rarely well-targeted or calculated. Often minorities get the blame.

          8 votes
          1. chocobean
            Link Parent
            well, I definitely don't support poorly targeted lashing out using misplaced violence

            15 votes
        3. [9]
          R3qn65
          Link Parent
          I'm not advocating for [violence]

          We have rags. We have empty bottles. And although they're working on it, we can still afford fuel… there's a good time to put down a rabid dog when it's trying to eat our children's faces.

          Generally, I think most people would consider this advocating for violence. Like, if you saw someone saying these things about some other cause, would you view this speech as a problem?

          In any event, your entire argument rests on violence being necessary to resist tyranny. There are two problems with this:

          1. Violence is less effective than nonviolence, and by some metrics the presence of violence makes nonviolent movements less likely to succeed. Let’s say we accept your premise: that we need a calculated use of violence to create fear in “specific public figures” (i.e. Sam Altman) without stating a specific political objective. What happens next? Let’s say he’s scared out of the market or simply killed. Either way, he’s not the CEO anymore. But OpenAI doesn’t close! AI’s not over!

          2. OpenAI is not tyranny. It is not government-imposed force. Nobody is making you use OpenAI. Yes, I recognize that there’s an argument that OpenAI is lobbying the government, is close with Trump, etc. There’s an argument that AI could be very bad for the economy as a whole. But that is a far stretch from being tyranny, and an even farther stretch from being the sort of tyranny that would then justify violence.

          Hopefully I haven’t crossed from passionate into mean. Generally, I like your posts and you seem a decent sort. But Choco, when you write something like “well, I definitely don't support poorly targeted lashing out using misplaced violence,” don’t you see that that’s what every violent organization in history has said? Violence begets violence. It always grows, and innocent people are always victimized.

          11 votes
          1. [5]
            chocobean
            (edited )
            Link Parent
            • Exemplary

            R3qn65, no, you haven't crossed into being mean, and I've been thinking a lot about what I said and you and several others have said since yesterday. I don't like what I said either.

            Edit: gist -- you are right, but I need help getting back to agreeing with you on an emotional level.

            I'm not stupid enough to think "this time these are the right people so it's okay." There's probably not even good ends to justify here: just angry, desperate people lashing out that'll hurt other people, as SkyBrian (edited typo sorry) mentioned.

            Violence is unacceptable, as I originally said, but two things: (1) what these tech bros are doing is also violence and unacceptable. We're rushing headlong into war, mass slavery and mass starvation, and they have to be stopped because (2) society has the obligation to stop those who enact violence upon others.

            Using what means is the difference. One could keep to the peaceful, rational, nonviolent route. Others will try something else because they don't see the first route working in a timeframe that doesn't kill many, many others. Something has broken inside of me, and I no longer feel any ability to judge the latter group; I don't know how to get that sort of faith back.

            The long term studies you linked are of some help. Remembering and participating in nonviolent resistance might help. But at this point I can only manage to wish for no one to morally injure themselves or to hurt innocents.

            13 votes
            1. [4]
              R3qn65
              (edited )
              Link Parent
              I really admire this response. The world would be better if more people had the humility to say "maybe I went too far" instead of doubling down.

              A couple of thoughts-- overall your despair seems linked to a belief that AI companies are 1) bringing about the end of the world and 2) doing it on purpose. In short, I don't think AI will be the end of the world and I don't think AI companies are doing it on purpose. Or maliciously, at least. More below.

              I agree that society has an obligation to stop those perpetrating violence (though I would double down on your point about the means being important. For force to be legitimate, it must be just, as in carried out by the state.) But: even if we accept that causing massive job losses is systemic violence (by no means a bulletproof claim), we don't even know that that's going to happen yet! Throughout history new technologies have generally created more jobs on net, not less. Honestly I'm not quite sure what you mean about mass slavery or mass starvation. I don't really see how AI leads to that and I'm pretty familiar with the policy space. I can think of a few links to war, but... I'm not calling you out so much as pointing out that these futures are far from guaranteed either.

              And yes, all the major AI labs talk about job replacements a lot. But they're concerned about it, not gleeful. That's why the major CEOs have openly supported UBI, for instance. The point here is that they're trying to create technology that they believe will make the world better - that will help cure diseases, reduce suffering, etc. So regardless of results, even their intentions aren't evil. Dario Amodei has this whole essay titled, in full seriousness, "machines of loving grace."

              Sam Altman does not draw a salary from OpenAI and has no equity in the company. I’m not shilling for Altman - I do not like him - but that's a little-known fact and is vital context. Obviously he'll benefit from the company's success, but not in the way most people think. It is not the move of a supervillain who cares only for his own wallet so that he can retreat to his bunker.

              So: there's no reason to believe the world will end. The possibility that things might get worse is no excuse for violence, and things might even get better. And if we set aside the consequences and judge AI CEOs on their intentions, their intentions seem to mostly be good, even if they are not necessarily good or admirable people.

              Nihilism is so easy. Especially now, when many people feel that times are hard. Many of us find it easy to agree with the frustrations expressed in Ted Kaczynski's manifesto. But what does carrying it out look like in practice? Murdering a bunch of entirely innocent federal workers. And then look at the other side of things: the world, since 1979, when the Unabomber wrote Technological Slavery, is safer, freer, and more compassionate. He was wrong.

              So too are his modern day equivalents.

              4 votes
              1. DynamoSunshirt
                Link Parent
                The AI companies love to talk about their 'concern' for job replacement as a marketing strategy. It's just like the Mythos bit they tried last week: scaring us with ghost stories about how powerful their product is, with little evidence and no objective opinions to back them up.

                But the truth is, we don't have to gallop forward into massive LLM adoption at full speed. The AI companies and their vested interests are pushing this. The rest of us won't benefit. You could stop every datacenter and every LLM chatbot offering on the planet right now and most of us would return to the way we used to live without issue. It would only be a problem for the people who have ~invested~ gambled hundreds of billions of dollars into companies like OpenAI and Anthropic, and the megacorporations like Facebook and Google and Oracle who have pushed hundreds of billions of dollars into LLM development and data centers.

                We're not concerned about the world ending, and that should be obvious. We're actually concerned about a bunch of rich people laying many of the rest of us off to balance their spreadsheets. Meanwhile the rest of us just want to feed and clothe and house our families, and maybe, if we're really really ambitious, save for a cushy retirement of not working ourselves to death after 65-70.

                7 votes
              2. [2]
                chocobean
                Link Parent
                Hey, I appreciate the back and forth. I don't feel like we're fighting on opposite sides at all; it's more like you taking the time to talk while sitting next to me. If you have more time for more stream of consciousness... if not, that is totally okay as well; just more venting coming from a hurt place.

                So: there's no reason to believe the world will end.

                Baby steps for me: for much of my life, it was not despair that the world will end, but a feeling that perhaps the world as it is should end, or at least drastically change, at whatever the cost. With patient and optimistic people like yourself, I don't idly wish for the world to end anymore; this is not a boast, but an embarrassing confession of holding a teenage edgelord view far into adulthood. I stepped up from "everyone sucks" to "there are some who suck, but many who are dear," and then this newfound love is accompanied by newfound pain that the sucky people are hurting the dear. This is where I am now. I have not yet moved to acceptance that there is no way to quickly stop the hurt, only a slow hope for "one day," all the while people continue to be hurt.

                As mentioned, although I know you're right, the anger and hurt feelings are still there and clamor for override all the time. The desire to see "swift retribution" is not realistic: that is not justice, nor does the suck end with one or even a hundred or a thousand dead.

                religious stuff

                If You, Lord, should mark iniquities, O Lord, who could stand?

                If we were to be rid of all the bad people, I'd be pretty far ahead on that list and we'd still have problems however many we kill.

                (Not trying to preach; as demonstrated, I'm in more dire need of kindness and reason. You can think of it as "without even this Chocobean would be even loonier".)

                The proper, reasonable and Christian response is to treasure all human lives, to wish for the well-being of our enemies so that they may have the opportunity to repent, to remember that they are only given power and privilege under the permission of God and that they aren't allowed to do anything that God does not permit. That there will be deaths and troubles in this world, as it always has been, but that is why Christ is Risen and everything that we lose here could be redeemed. To trust that just as I don't agree with capital punishment because we give imperfect judgements and even the worst deserve a chance to change, I need to have the same attitude for the ultra rich or tyrants or whoever I am hating on today: I don't have good enough information to judge, how can I condemn?

                I need to remember that I am already richer than at least half the world, and that were I in the shoes of the ultra rich, I would probably be less charitable, less diligent about using tech for good, less wise with managing a large organization, less sociable and persuasive towards positive societal change, and less responsible with my time or resources or speech... so thank God they are given that platform, and I am not tempted with it. Are they truly my enemy, or is it just more convenient to hate on others rather than be busy doing what I could in my own sphere of influence?

                Hate is easy. Nihilism is easy. Violence is easy. Despair is easy. Hope is hard. Love is harder. Choosing to maintain hope and taking actions out of love, especially when it's hard, needs to be part of my praxis.

                I have not read Amodei's essay, thank you for the mention. Why did he and his sister leave OpenAI? I think even Peter Thiel and Elon Musk and Trump are sincerely trying to be/do good in their own eyes (like myself). It is a good thing that not all of the AI cheerleaders fantasize about replacing all human jobs. The environmental destruction and planetary climate destabilization seem more incidental than supervillain-esque. Perhaps most of them sincerely believe us not owning any property truly is freedom / equitable accessibility for all. Maybe instead of wishing [redacted], I can start with wishing that they could just see the suffering of others. We would get a lot more from them making the right decisions than by them disappearing.

                2 votes
                1. R3qn65
                  Link Parent
                  Yeah, I understand. Basically, they didn't trust Sam Altman and felt OpenAI wasn't taking AI safety seriously enough. There's kind of more to it than that, but then, also there isn't. I think this...

                  Yeah, I understand.

                  Why did he and his sister leave OpenAI?

                  Basically, they didn't trust Sam Altman and felt OpenAI wasn't taking AI safety seriously enough. There's kind of more to it than that, but then, also there isn't.

                  Maybe instead of wishing [redacted], I can start with wishing that they could just see the suffering of others. We would get a lot more from them making the right decisions than by them disappearing.

                  I think this is wise.

                  2 votes
          2. [3]
            raze2012
            Link Parent
            if you change the cause you change the entire context. "I was being attacked and killed in self-defense" "well yes. But if you weren't in danger, wouldn't it be murder?" violence is less effective...

            Like, if you saw someone saying these things about some other cause, would you view this speech as a problem?

            if you change the cause you change the entire context.

            "I was being attacked and killed in self-defense"

            "well yes. But if you weren't in danger, wouldn't it be murder?"

            In any event, your entire argument rests on violence being necessary to resist tyranny. There are two problems with this:

            1. violence is less effective than non-violence. However, there's never been a 100% purely non-violent protest. You can't control every person's actions and not everyone is in the same state of mind.

            2. tyranny isn't purely governmental, it is simply "a rigorous condition imposed by some outside agency or force". And I'd say AI qualifies at this point. It's trying to unregulate itself, it's sucking up significant resources, and it's being forced down our throats everywhere. Maybe it's not the same tyranny as ICE violence, but it meets the definition.

            Violence begets violence. It always grows, and innocent people are always victimized.

            There is no bloodless revolution. Even if we somehow went 100% peaceful right now, blood has already been spilt. I don't say this as an endorsement so much as cold, historical, statistical fact. People have been, are being, and will be hurt.

            2 votes
            1. [2]
              skybrian
              Link Parent
              I don’t think history says what you think it says. The color revolutions in post-Soviet regimes come to mind. Also, many nonviolent changes of government due to elections. In many cases, lone...

              There is no bloodless revolution

              I don’t think history says what you think it says. The color revolutions in post-Soviet regimes come to mind. Also, many nonviolent changes of government due to elections.

              In many cases, lone wolves attempting assassinations are neither necessary nor sufficient.

              That doesn’t mean nonviolent means always work. Most of the time they fail. I’m feeling optimistic about this year’s elections, though.

              4 votes
              1. raze2012
                Link Parent
                "mostly non-violent protests". And we know how that ended. I won't say that non-violent protests aren't effective; they very much are often the most effective form of upheaval when enough people...

                The color revolutions in post-Soviet regimes come to mind. Also, many nonviolent changes of government due to elections.

                "mostly non-violent protests". And we know how that ended. I won't say that non-violent protests aren't effective; they are very often the most effective form of upheaval when enough people amass. But there are always some underpinnings of violence alongside the non-violent majority. It's already a Herculean effort gathering thousands or millions of people under one cause. Making sure they are all on their best behavior despite heightened emotions is nearly impossible.

                In many cases, lone wolves attempting assassinations are neither necessary nor sufficient.

                No, not at all. But we've seen enough of the last few decades of lone shooters in the US to know they certainly don't go ignored.

                And that's the main goal with many of these people: not to enact radical change with one big action, but to get people talking. And it works every time, because every incentive these days revolves around some sort of news cycle.

                I'm here and not on Reddit/Instagram/Twitter, so I despise this current societal structure rewarding big, shocking actions rather than steady progressive change. But I don't see such incentives changing anytime soon.

                1 vote
      2. [3]
        wervenyt
        Link Parent
        You're right, the wannabe technocrats are using terrorism for their goals.

        You're right, the wannabe technocrats are using terrorism for their goals.

        12 votes
        1. [2]
          unkz
          Link Parent
          Can you explain what you mean by terrorism? I’m working with something like this: Are you saying that OpenAI is doing that? Or Anthropic? Or who are the technocrats you are referring to, and...

          Can you explain what you mean by terrorism? I’m working with something like this:

          the calculated use of violence to create a general climate of fear in a population and thereby to bring about a particular political objective.

          Are you saying that OpenAI is doing that? Or Anthropic? Or who are the technocrats you are referring to, and exactly what actions are they performing that are terrorism?

          11 votes
          1. wervenyt
            Link Parent
            It happens on many levels, but here? They incite fear by demonstrating the weight of their whims in the form of saying "we're doing AI now" while literally everyone outside of the AI business said...

            It happens on many levels, but here? They incite fear by demonstrating the weight of their whims in the form of saying "we're doing AI now" while literally everyone outside of the AI business said "but why", not "please", and propagandizing how these LLMs are going to massively shrink the job market. Given that almost everyone in the US is living extremely precariously, that is a huge payload of stochastic terror.

            But of course, they hold no guns themselves. No, they simply know that the poor are liable to resort to crime to survive, in lieu of wages. So this drives people who are especially precarious to go on high alert, because not only are they at risk of losing their livelihoods, they now are worried about Homeless Addicts. This has knock-on effects on policy, police behavior, ground-level culture, and they're all going to be broadly antisocial. They'll make small business owners more stingy, middle managers in large hierarchies will be held to tighter leashes, fewer people who need a subprime loan to pay for a car repair to get to work to pay for their medical debt will be approved. But Big Data will provide all the screening anyone with something to lose will need to get their risk backed by an insurer.

            The AI folks are all firmly grounded in cybernetics and economics. If they aren't doing this intentionally, they're dangerous fools.

            14 votes
  3. [2]
    ThrowdoBaggins
    Link
    I bolded a few bits that stood out to me from this article, but specifically I disagree with the idea that “there’s no reason” for these systems to talk to each other. I think it’s much more...

    Most people are not collecting data about every single thing that they do. And if they’re collecting any at all, it’s stored across lots of different systems — your email in Gmail, your messages in iMessage, your work schedule in Outlook, your workouts in Peloton. Those systems don’t talk to each other and maybe they never will, because there’s no reason for them to.

    I bolded a few bits that stood out to me from this article, but specifically I disagree with the idea that “there’s no reason” for these systems to talk to each other. I think it’s much more likely that each of these companies fights hard to avoid interoperability because, rather than competing with a superior product, they would prefer to build a moat around themselves to prevent competition before it even rears its head. In fact, I’m sure they could save a lot of money by adopting a more broadly supported standard for each of these data sources, but they know that would mean losing a bit more control and therefore maybe a bit more of their market share.

    4 votes
    1. skybrian
      Link Parent
      It’s also true that building, maintaining, and securing APIs requires effort. They often get low adoption, are targeted by attackers, and don’t directly make money. If there’s nobody at the...

      It’s also true that building, maintaining, and securing APIs requires effort. They often get low adoption, are targeted by attackers, and don’t directly make money. If there’s nobody at the company championing them, they often get retired.

      We’re seeing more interest in building APIs now because AIs can use them, but sometimes this is speculative, and the APIs that don’t get serious adoption might get dropped again.

  4. [4]
    kacey
    Link
    Bit of a tangent: may I ask what people think of polls which show that non-Americans have far more trust in AI products? (e.g. Trust in AI far higher in China than West, poll shows, per Edelman...

    Bit of a tangent: may I ask what people think of polls which show that non-Americans have far more trust in AI products? (e.g. Trust in AI far higher in China than West, poll shows, per Edelman via Al Jazeera) I don't have enough of a background in survey design outside of a western context, so I can't tell if there's some sort of systemic bias going on which poisons the results. But if it's accurate, perhaps these sorts of pieces only speak to the American experience, and overgeneralize to some global perspective on AI ...?

    3 votes
    1. DynamoSunshirt
      Link Parent
      American AI is fueled by billionaires and VC, and much larger than AI efforts anywhere else except maybe China. When the oligarchs have placed all-in bets on wiping out our jobs, of course the...

      American AI is fueled by billionaires and VC, and much larger than AI efforts anywhere else except maybe China. When the oligarchs have placed all-in bets on wiping out our jobs, of course the rest of us will be pissed. I imagine if I lived in Europe with a functional social safety net I would be far less concerned, especially with much less investment into AI in the first place.

      8 votes
    2. [2]
      skybrian
      Link Parent
      I think adoption of AI is higher in the US? People might have more experience with it, including with earlier models that were worse.

      I think adoption of AI is higher in the US? People might have more experience with it, including with earlier models that were worse.

      3 votes
      1. kacey
        Link Parent
        Maybe? I'm not sure how to measure that -- e.g. do we go with a simple per-capita adoption rate, does opening ChatGPT once count, what about students and etc. From some sensationalist reporting,...

        Maybe? I'm not sure how to measure that -- e.g. do we go with a simple per-capita adoption rate, does opening ChatGPT once count, what about students, etc. From some sensationalist reporting, everyone and their dog uses Openclaw (somehow) in China, but I assumed that was generic sinophobia spritzed up for the 21st century (i.e. Fox News riling up the perception of a powerful enemy force to unite against).

        True, though, that perhaps early adopters have a worse impression overall.

        2 votes