38 votes

Social media probably can’t be fixed

34 comments

  1. [10]
    polaroid
    Link

    Humans truly weren't meant to be in such large groups.
    I stand by my theory that we are only meant to live in groups of, say, 30 or less. My personal record is ~4. lol

    Social media is horrid. Asocial media (like Tildes) is so much better. The cycle will repeat as the asocial media blows up and becomes "social media" and the asocials go and create something else to keep their sanity until it blows up and...

    28 votes
    1. [6]
      ButteredToast
      Link Parent

      Humans truly weren't meant to be in such large groups.
      I stand by my theory that we are only meant to live in groups of, say, 30 or less.

      Part of me wants to agree, but another wants to disagree simply because I’m not sure how much bearing being “meant to” do something has, or if the idea even has broad validity — it could very well be among the many arbitrary inventions we humans love so dearly. If we chose that as our guiding star, then arguably not just modernity but practically everything dating back to the dawn of technology use and agriculture shouldn’t exist, because we’re not really built for backbreaking work in crop fields either.

      For social media specifically, I believe that perhaps the mistake was the assumption that physical-world polite society norms, where there are any number of sanity checks and filters present to modulate social interactions and keep things relatively sane, would transfer to the digital world. Turns out that not only do they not transfer, but things that would not be given the time of day in the physical world are like addictive candy in the digital one.

      People from across the political spectrum won’t like the answer for various reasons, but I believe that the answer is moderation, as well as algorithms that strongly negatively weight things like highly emotionally charged content. Most people wouldn’t entertain someone screaming hysterically about random things in a shopping mall, so why should we encourage it online?
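
      A toy version of that down-weighting idea (the function name, weights, and posts are all invented for illustration, not any real platform's ranking):

```python
def rank_score(engagement: float, emotional_charge: float, penalty: float = 3.0) -> float:
    """Toy feed score: raw engagement minus a strong penalty for
    emotionally charged content. Inputs assumed to be in [0, 1]."""
    return engagement - penalty * emotional_charge

# Hypothetical posts: one calm, one rage bait with higher raw engagement.
posts = [
    {"id": "calm-post", "engagement": 0.4, "charge": 0.1},
    {"id": "rage-bait", "engagement": 0.9, "charge": 0.8},
]
ranked = sorted(posts, key=lambda p: rank_score(p["engagement"], p["charge"]), reverse=True)
print([p["id"] for p in ranked])  # ['calm-post', 'rage-bait']
```

      With a strong enough penalty the rage bait loses despite more than twice the raw engagement; choosing that penalty is exactly the editorial judgment being argued about.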

      15 votes
      1. NaraVara
        Link Parent

        The thing is they do translate to the digital world as long as everyone involved has some kind of persistent identity. Where things break down is when the majority of your interactions are transient, with accounts that you will probably not see again and you’re only interacting with their thoughts as a sort of snack-sized bit of content unmoored from any sort of social connection or conversational context.

        Even on pseudonymous forums most people didn’t act like total fuckwads. The community fuckwads were a bit of local color within each forum community. What’s changed is that social media has enabled all the fuckwads to get together into a giant fuckwad mob, and then those mobs can be incited into collective action based on rage bait to do Gamergate type shit. This has its own sort of feedback loop where when people marinate in enough of this it makes us all more defensive, more hostile in our interactions, more glib and less charitable or able to assume good faith. At that point the internet stops being a place to socialize (as in have open conversations with people) and turns into a place to sling propaganda around.

        11 votes
      2. [3]
        polaroid
        (edited)
        Link Parent

        I disagree - I think there needs to be a space with little to no moderation because that is how you know how you truly are. Curation is all well and good, but censorship is a serious problem especially as of late. You don't need draconian rule to not be a terrible person. Moderation alone cannot work because it is always biased and weaponized, such as orgs targeting LGBT+ stuff as predatory/inappropriate for one example. Imo the digital world is like this because it's one of the few places you can actually be yourself free from all restriction. Learning to self-curate is an important skill for navigating it.

        Also, of course, there is no real "meant to" for humans... but you know what I mean. Our brains are wired to keep track of only ~100 people or so. I for one am not built for large-scale interactions and I don't think most people are either. Social media designed to keep you in line and addicted to outrage and horror can't be good for us. If you don't know how or why to curate it, you're cooked.

        8 votes
        1. [2]
          NaraVara
          Link Parent

          that is how you know how you truly are.

          There is no “true you” that exists independent of your social context. The social norms and expectations around you are a core part of what make you, you.

          It’s also not like an unmoderated space is free of social pressures. You’re not getting whacked with a moderation stick but you are responding to how other people react to you. You are reacting to abuse or praise or attention or being ignored, all of which is based on the nature and composition of the crowd that the community has drawn in. All of this applies pressures to make you behave like this and not like that. Moderation is meant to maintain healthy dynamics around that and without it you get a lot of ambient pressure to behave like a fuckwad and to have a lot of really strong (and socially unhealthy) fuckwad-resistant defense mechanisms. This is no more of a “true self” than if a moderator shows up to say “Hey you’re being kind of a dick tone it down.”

          19 votes
          1. polaroid
            (edited)
            Link Parent

            True. I suppose the difference is that some of us are more immunized to those kinds of social pressures; it's a highly personal thing, so it does feel different to me even if it's true that no space is free of social pressure. I'm biased because, though I was a bit of an ass back then, historically I haven't had good run-ins with moderators. Most mods are ok, but power corrupts. Just my two cents.

      3. raze2012
        Link Parent

        I believe that the answer is moderation, as well as algorithms that strongly negatively weight things like highly emotionally charged content

        Those are unfortunately the blue sky thinking answers (not the website, the term is literally "blue sky thinking" ). Moderation can fix everything, but it's impractical once your userbase hits around 100k+ active users. And of course, volunteer moderation has its own issues.

        Meanwhile, the incentive for algorithms is to maximize engagement, which simply takes advantage of human nature. Anger, unfortunately, is more engaging than any other emotion. Trying to even define "charged language" would be the mother of all landmines if we wanted to regulate it that way.

        2 votes
    2. NaraVara
      Link Parent

      To be fair to humans, I don’t think the study method is all that great at simulating what the investigator thinks it is. Even if you change the incentive structures, the text the LLMs are trained on is still largely bodies of text from Reddit and Twitter.

      When Elon told Grok to stop being PC and be “unfathomably based” instead as part of its system prompt, it suddenly turned into a psychotic neo-Nazi. This is because LLMs aren’t people; they’re statistical models, where the prompt you give it operates on weighted relationships between a bunch of different tokens/concepts it’s working with. When you change the prompt, it’s not quite like talking to a person to change their behavior about something. You’re altering how it models the web of relationships it has across its body of text. If the prompt makes the subject behave differently, what has happened is that it has switched into simulating a different sort of person. But what if there aren’t any different sorts of people in the training data?

      I would assume there is so much in the corpus based on behavior optimized for the actual existing social media we have that the behavior is not going to change regardless of what you do to it. The bot isn’t responding to incentives like a person would; it’s mimicking the behavior it’s been trained on, and if there isn’t enough training data of behavior under the contexts you’re trying to test, you’re not actually applying the “treatment” you think you are.

      14 votes
    3. slade
      Link Parent

      I stand by my theory that we are only meant to live in groups of, say, 30 or less.

      So you're saying AOL chat rooms had it right?

      5 votes
    4. FishFingus
      Link Parent

      It's kind of like orks gathering into a Waaagh that launches against Armageddon and gets defeated and scatters, only to repeat the cycle.

      Everything is kind of like Warhammer, really.

      4 votes
  2. [9]
    TonesTones
    Link

    So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior.

    I’m pretty unconvinced that LLMs are a suitable simulation of human behavior. You can get an LLM (without oversight/safeguards) to say whatever you want given the right prompts. Humans just don’t behave the same way; we tend to more strongly maintain our current views on society. That difference seems fundamentally important to a study measuring how behavior responds to different environments.

    14 votes
    1. papasquat
      Link Parent

      Yeah, that's the main thing that jumped out at the study for me.

      You can use agents to simulate rational behavior with simple rules based on observations of human behavior (a trading bot that always seeks to maximize returns, or an airline ticket purchasing bot that always seeks to buy the cheapest tickets with the fewest layovers).
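
      A rule-based agent like that ticket bot is just an explicit rule applied to data; a minimal sketch with invented flight records (here the rule is fewest layovers first, ties broken on price):

```python
# Simple deterministic agent: fixed preference ordering, nothing learned.
flights = [
    {"id": "A", "price": 220, "layovers": 1},
    {"id": "B", "price": 180, "layovers": 2},
    {"id": "C", "price": 260, "layovers": 1},
]
# Tuples compare element by element, so layovers dominate and price breaks ties.
best = min(flights, key=lambda f: (f["layovers"], f["price"]))
print(best["id"])  # A
```

      The whole "model" fits in one line, which is why its behavior is auditable in a way an LLM persona isn't.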

      Using an LLM to simulate an entire human seems... Completely misguided and not really science. LLMs don't act like humans whatsoever. The things that make a human click one link over another, or follow one influencer over another are not entirely well understood. If we did understand them, the formula to going viral would be solved, and the only content that would ever go viral would be content created by huge media conglomerates with the resources to apply that formula. Instead, we find things like Disney scrambling to replicate the eyeballs that a girl on the street got by saying "hawk tuah".

      Human behavior on the internet isn't something you can easily model, and it seems like the researchers knew that, so instead just threw their hands up and let LLMs have a field day at it, which isn't modeling. You can't set up deterministic rules for their behavior, so any lessons you supposedly glean from studying them can't be backed up with observations of human behavior.

      I see this becoming more and more of an issue, and it's concerning that this is creeping into the sciences now. LLMs are not humans. They're not even conscious entities. Generalizing anything they do to even slightly apply to a human being, rational actor, or whatever is just flat out poor methodology.

      It's really frightening that this kind of stuff makes it past peer review.

      12 votes
    2. [6]
      NaraVara
      Link Parent

      I’ve found that if you keep pushing back on a chatbot enough, like “no not like that,” “wrong,” “this is incorrect try again,” it eventually starts to sound like a person having a mental breakdown and will begin going through chains of repetitive logorrhea. It even starts to make mistakes, with typos and run-on sentences, and sometimes starts doing harsh self-talk. It’s very bizarre, and I suspect once you push far enough you start to get into exchanges from nasty break-up emails or something.

      5 votes
      1. krellor
        Link Parent

        It sounds like watching people argue on a forum where you can tell they've started watching for replies and posting faster and faster with more typos and unfiltered language. We've truly made the chatbots we deserve, lol

        3 votes
      2. [4]
        skybrian
        Link Parent

        Yeah, pushing back after a wrong answer is basically polluting the chatbot's context with junk that confuses it further. It doesn't learn. Better to erase the wrong answer and improve the question.
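
        In message-list terms (a generic chat-API shape, not any specific SDK), the two approaches look like this:

```python
# Appending a correction leaves the wrong answer in the context:
history = [
    {"role": "user", "content": "What's the capital of Australia?"},
    {"role": "assistant", "content": "Sydney."},       # wrong, but still visible
    {"role": "user", "content": "Wrong, try again."},  # junk accumulates
]

# Erasing the exchange and improving the question gives a clean context:
retry = [
    {"role": "user",
     "content": "What's the capital of Australia? (Hint: not its largest city.)"},
]

assert len(retry) < len(history)  # no wrong answer left to pattern-match on
```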

        2 votes
        1. [3]
          NaraVara
          Link Parent

          What confused me is why the spelling and grammar start to fall apart. It makes sense that it would get repetitive and make less and less sense, but it’s odd that it starts making typos.

          2 votes
          1. [2]
            skybrian
            Link Parent

            I don’t know, but LLMs like to continue whatever patterns they see and are good at spotting patterns. So once it starts making typos for whatever reason, it makes sense that it would notice and continue making typos to “stay in character.” Also, repeating things is a pattern.

            Also, when you tell it that it’s wrong repeatedly, that is a pattern and maybe it will make more mistakes to continue the pattern of doing something wrong and being corrected? That’s staying in character for someone who makes mistakes. (I said before that “it doesn’t learn” but another way to put it is that it learns the wrong patterns.)

            There has been interesting research into “persona vectors.” I wonder if there’s a vector for “someone who makes lots of mistakes” that they could suppress to fix this?
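
            If such a vector existed, "suppressing" it amounts to projecting that direction out of an activation, which is plain linear algebra. A sketch with made-up vectors (real persona-vector work operates on actual model activations, not toy arrays):

```python
import numpy as np

def ablate(h: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of activation h along direction v:
    h' = h - ((h.v) / (v.v)) v, leaving h' orthogonal to v."""
    return h - (h @ v) / (v @ v) * v

h = np.array([2.0, 1.0, 0.5])  # hypothetical activation
v = np.array([1.0, 0.0, 0.0])  # hypothetical "makes lots of mistakes" direction
h_prime = ablate(h, v)
assert abs(h_prime @ v) < 1e-9  # the persona direction is gone
```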

            1 vote
            1. NaraVara
              Link Parent

              There has been interesting research into “persona vectors.” I wonder if there’s a vector for “someone who makes lots of mistakes” that they could suppress to fix this?

              Oh, then we'll also get the fun puzzle of how persona vectors might change as the training data updates.

              For example, suppose I tell an LLM personal assistant to emulate Neil Gaiman. And then at some point the training data updates with a whole lot of new quotes from Neil Gaiman that shift the personality from a dark gothic fantasy writer to being that of a serial sexual abuser. All of a sudden my LLM shifts over? If you asked the virtual assistant why it's being rapey it would deny it because Gaiman denies it!

              Just a thought experiment obviously. I doubt the news stories would be enough to significantly shift the tone. But another example might be picking J.K. Rowling off a training corpus that stops at 2010 and then having your assistant start getting really weird as you add more years.

              1 vote
    3. rogue_cricket
      (edited)
      Link Parent

      I was also uninterested in the study beyond this point basically. So you're telling me that when you train the Reproduce-Trained-Behaviour machine on data from interactions in existing social media and then put the Reproduced-Trained-Behaviour machine in an environment that is a little different from our existing social media, the Reproduce-Trained-Behaviour machine continues to reproduce the behaviour it was trained on? Wow, you don't say! I mean, what else would it possibly do?

      LLMs are not capable of the novelty required to actually get meaningful information out of a study like this. LLMs used as a proxy for humans in a study that really should be psychology is actual madness. It's slop. Why even bother?

      2 votes
  3. [13]
    first-must-burn
    Link

    Regardless of the methods of this study, I think the real assertion I would make is:

    Social media unchecked by regulation and relentlessly driven by profit motive cannot be fixed.

    However, you can probably replace "social media" with anything and have it still be true. Until we decide, as individuals and as a society, to draw a line and say "this is enough, I don't need any more," we'll never stop having these problems.

    11 votes
    1. [9]
      arrza
      Link Parent

      ...and there it is. The real conclusion of all of this: there is no ethical consumption under capitalism.

      But of course, in our current reality this conclusion would be rejected by the mainstream. Therefore the authors need to be circumspect and narrow in what they say, whether or not they actually recognize the broader truth.

      5 votes
      1. [8]
        skybrian
        Link Parent

        Buying food is unethical? We're really sunk then.

        4 votes
        1. [7]
          arrza
          Link Parent

          Sure, explain to me the ethical parts of our food production. They're insignificant.

          The vast majority of our food is made by huge conglomerates, exploiting both nature and their workers to the absolute extreme. And that's just production. Distribution is also exploitative, and grossly uneven. In the US, large amounts of food are just thrown away or left to spoil, yet people die of starvation. Not to mention the rest of the world.

          The works of the roots of the vines, of the trees, must be destroyed to keep up the price, and this is the saddest, bitterest thing of all. Carloads of oranges dumped on the ground. The people came for miles to take the fruit, but this could not be. How would they buy oranges at twenty cents a dozen if they could drive out and pick them up? And men with hoses squirt kerosene on the oranges, and they are angry at the crime, angry at the people who have come to take the fruit. A million people hungry, needing the fruit- and kerosene sprayed over the golden mountains. And the smell of rot fills the country. Burn coffee for fuel in the ships. Burn corn to keep warm, it makes a hot fire. Dump potatoes in the rivers and place guards along the banks to keep the hungry people from fishing them out. Slaughter the pigs and bury them, and let the putrescence drip down into the earth.

          There is a crime here that goes beyond denunciation. There is a sorrow here that weeping cannot symbolize. There is a failure here that topples all our success. The fertile earth, the straight tree rows, the sturdy trunks, and the ripe fruit. And children dying of pellagra must die because a profit cannot be taken from an orange. And coroners must fill in the certificate- died of malnutrition- because the food must rot, must be forced to rot. The people come with nets to fish for potatoes in the river, and the guards hold them back; they come in rattling cars to get the dumped oranges, but the kerosene is sprayed. And they stand still and watch the potatoes float by, listen to the screaming pigs being killed in a ditch and covered with quick-lime, watch the mountains of oranges slop down to a putrefying ooze; and in the eyes of the people there is the failure; and in the eyes of the hungry there is a growing wrath. In the souls of the people the grapes of wrath are filling and growing heavy, growing heavy for the vintage.

          (John Steinbeck, The Grapes of Wrath)

          If you see it another way, please share.

          6 votes
          1. [6]
            skybrian
            Link Parent

            What I'm wondering is, even if you did it ethically, wouldn't it still be capitalism? Farming is a business, even if you're buying organic produce at the local farmer's market from people you trust who didn't do anything wrong while growing it.

            3 votes
            1. [5]
              arrza
              Link Parent

              If the farmer was self-directed and did this on their own land, with their own machinery, that'd constitute them owning their means of production, which would be socialism.

              Again, this represents a tiny amount of our food production compared to the rest of it. It's an edge case, a tiny exception to the maxim.

              2 votes
              1. [4]
                skybrian
                Link Parent

                It seems like you’re defining owning your own business (if done the right way) as socialism? That would probably be news to the owners.

                1 vote
                1. [3]
                  arrza
                  Link Parent

                  Yeah, you're right, that's probably an incorrect usage of socialism and me shoehorning the term in there.

                  Let me try another angle. In your hypothetical, does your consumption of those ethically produced goods negate the existence of the other 99% of unethically produced goods? If one person consumes ethically, what does that show?

                  1. [2]
                    skybrian
                    Link Parent

                    I'm not going to argue with that. I was pushing back on capitalism being unethical by definition. I think businesses can be ethical sometimes, even if there are many, many examples where they aren't.

                    1. arrza
                      Link Parent

                      If you look hard enough at anything, you'll find exceptions. I stand by what I said.

    2. [2]
      raze2012
      Link Parent

      Sadly, America has spent a good 60+ years programming its people to "never settle". Unchecked ambition can be even more dangerous than having no motivation whatsoever.

      Of course, the only way to resolve this is to literally make it impractical to buy anything. I guess that "correction" is looming on the horizon.

      4 votes
      1. ButteredToast
        Link Parent

        The trickiest thing about keeping ambition in check is that, depending on the person, doing so risks losing their reason to get up in the morning and keep going. A lot of people need a carrot to chase to stay sane, and will even put a new carrot on the string themselves if they somehow manage to reach the previous one.

        1 vote
    3. NaraVara
      Link Parent

      Yeah, Facebook, as it initially existed, was a MySpace knock-off that didn’t let you autoplay music and had enough guardrails to keep you from hurting yourself with custom styling. Once they introduced the News Feed it was all over. If it had remained just a personal page with little comment threads for your personal updates, it might have been fine; it was something that was kind of there for stalking people, but it did make stalking a little bit hard. Replacing a model where people put up information for others to go out and seek with one where the platform pushes information/content to you took the control out of the user’s hands.

      2 votes
  4. NaraVara
    Link

    This article is an interview with the author of a paper who used LLM simulated interactions to test different models of social media for their tendency to bring about toxic/anti-social outcomes. His conclusions are not heartening.

    7 votes
  5. MetaMoss
    Link

    It'll be interesting to see long-term how usable LLMs end up being for social studies, but I suspect in this case their utility may be limited. Since these models were likely trained on data from existing platforms, is it any surprise that the tested interventions had little effect? These interventions are largely cultural changes, and I doubt LLMs are capable of realistic cultural adjustment. How many of us have had to "deprogram" our social media brains when coming to Tildes, after all?

    7 votes