15 votes

Politics: USA: Bring call center jobs, not manufacturing back

Obligatory apologies for not knowing what group to place political posts in.

Disclaimer:
Most people know Trump's tariffs aren't about bringing numerous quality manufacturing jobs back to the U.S. They're about establishing automated factories in the U.S. for the profit of the very rich. Some even say the wrecking of the economy with the tariffs is intentional, meant to bring interest rates down and create discounted loans for the rich people who will want to build those automated factories.

Subject:

If it were really about providing jobs for Americans, call center jobs should be "brought back".

There are a lot of Americans in rural and other low-employment-opportunity areas who would do those jobs and value them. The patrons of those help lines would appreciate native English speakers. Those jobs could be made remote, helping even more people who have trouble finding work.

24 comments

  1. DefinitelyNotAFae
    Link
    Call center jobs would have to provide closer to a living wage and (preferably unionized) job security, which is the employee side benefit to local manufacturing jobs. Without that aspect you just have a bunch of near minimum wage jobs where you're fired for pausing too long between calls and the next person is hired.

    We used to have a call center in town when I worked with people on parole. Besides not typically accepting my clients, it wasn't a great place to work for folks trying to support themselves on the wage.

    I don't disagree that it could work, but convincing companies to pay more and not fight unionization will be a major barrier to service jobs feeling like a replacement for manufacturing.

    28 votes
  2. [22]
    unkz
    Link
    Call centers around the world are going to be annihilated in the extreme near term by chatbots. I don’t see this as a promising avenue for employment.

    16 votes
    1. [21]
      DefinitelyNotAFae
      Link Parent
      I don't agree mostly because the chatbots are idiots and the LLMs keep lying to people.

      They'll try, but I think they'll fail; if someone is calling you, they usually need help from a person. At least this is my experience.

      7 votes
      1. [2]
        ShroudedScribe
        Link Parent
        the chatbots are idiots and the LLMs keep lying to people.

        Agree, but if the chatbots can be locked down a bit more, it's less of a concern for the company.

        Air Canada tried to claim that the chatbot on their site could be unreliable and that it wasn't their problem if it lied. They lost the resulting case, because of course any source of information on your official website needs to be accurate.

        The irony in this is that if you lock down an LLM chatbot to only return results that spit out the line-for-line text of a FAQ or similar, it really doesn't need to be backed by an LLM.
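        That point can be made concrete: a responder that only ever returns verbatim FAQ text needs nothing more than keyword matching. Below is a minimal sketch, with the FAQ entries, keywords, and fallback line all invented purely for illustration:

```python
# Minimal sketch of a "locked down" FAQ responder: it only ever
# returns verbatim FAQ text, so no LLM is involved at all.
# The FAQ entries and keywords below are invented for illustration.

FAQ = {
    ("refund", "cancel"): "Refund requests must be submitted within 24 hours of booking.",
    ("baggage", "luggage", "carry-on"): "Carry-on bags must fit under the seat in front of you.",
}

FALLBACK = "I couldn't find an answer to that. Connecting you to a representative."

def answer(question: str) -> str:
    q = question.lower()
    for keywords, text in FAQ.items():
        if any(k in q for k in keywords):
            return text  # line-for-line FAQ text, nothing generated
    return FALLBACK
```

        Since nothing is ever generated, the same table could just as well sit behind a "press 1" phone menu; the LLM only adds value when it is allowed to compose text, which is exactly the behavior being locked out.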

        7 votes
        1. DefinitelyNotAFae
          Link Parent
          We all know how much everyone loves the automated phone systems that already exist, as we scream "REPRESENTATIVE" while it tells us to go to the website for something, which we would have if we could have.

          But the bots can't fully help unless they're given permission to refund things and the like, in which case sure, I'll figure out the exact "refund me" script and use it repeatedly.

          Customers don't like it, and that's the case even when humans are using scripts. I know I'm speaking to someone, likely abroad, using a script they are required to use and I still hate it because it's fake. (Not mad at the humans, they don't have a choice)

          The locked down bots already exist, but they just can't meet customer needs. The LLMs can't not lie, and would be no better than chatbots if they did. So yeah. It's just bad

          7 votes
      2. [10]
        donn
        Link Parent
        The way they're implemented, and I'm not agreeing with it for the record, is that LLMs can be a "first line" that filters the most obvious help requests so that only more complicated requests reach a human. Which allows them to hire fewer humans.

        Humans will never entirely be gone from the loop of course, but it's very much a shrinking job market.

        6 votes
        1. [7]
          unkz
          Link Parent
          This is exactly it. The vast majority of calls do not really require a human attendant, and this goes for outbound and inbound dialling. I think people might be quite surprised at how often they have talked to a robot and not even known it.

          6 votes
          1. tanglisha
            Link Parent
            It’s easy to tell. They’re surprised when I’m nice to them.

            5 votes
          2. [5]
            DefinitelyNotAFae
            Link Parent
            How do you not know? Genuinely, I can tell when the automated system is, well, automated, or the spam caller is a bot, or when I finally get a human on the phone (and even by chat)

            1. [4]
              unkz
              Link Parent
              Well, as I said, I think you would be surprised. The level of voice cloning and inflection that models have now can be very hard to distinguish. Automated agents can now sound bored or irritated, make inane small talk, and can be set up to almost seamlessly hand off to identical sounding humans after dealing with the boilerplate elements of a conversation if it can’t be handled automatically.

              2 votes
              1. [3]
                DefinitelyNotAFae
                Link Parent
                I guess I would be surprised, mostly because I don't think they're making AI models of individual phone reps in most places. Do you have examples of companies that use this? The problem is the experience is unfalsifiable since if I don't know, then I didn't know, but I'm still feeling pretty confident.

                1. [2]
                  unkz
                  (edited)
                  Link Parent
                  Well, the interesting thing is not that we are making AI models of individual phone reps (although this does happen — it’s useful in certain kinds of agent multiplexing cases where we actually want a live agent, but they are not waiting in a conference for the calls to drop and we want to keep the lead’s attention until we can connect), but rather the reverse: we are mutating individual phone reps to sound more generic. Here are some public-facing examples of the technology being used in a slightly different setting:

                  https://www.sanas.ai/

                  https://tomato.ai/

                  So you can start out with a bot's voice, and then end up talking to a human that sounds the same but of course has the full capacity of a human to deal with your unique problems.

                  I suppose this might lead to a funny false positive case where you believe you have identified a bot call, but in fact are just picking out a voice modifier that is attached to a real human.

                  4 votes
                  1. DefinitelyNotAFae
                    Link Parent
                    Yeah I hate that. Heaven forbid we talk to actual people.

                    As I said though it's unfalsifiable so I can't really say anything else.

        2. DefinitelyNotAFae
          Link Parent
          I get that, but at that point you probably just want the chat bot, so it doesn't fuck up and promise things your company can't give. Which is what we have already with automated hold systems and customer support chats.

          3 votes
        3. tauon
          Link Parent
          To be fair, LLMs wouldn’t be founding members of “first line” filters either.

          A lot of simple automations and predetermined paths, paired with “dumb” text-to-speech or just prerecorded voice lines, have been in effect for ages. (Think “press 1 for calling reason X, press 2 for reason Y, press 5 for niche reason Z”)

          3 votes
      3. [6]
        ackables
        Link Parent
        They'll try, but I think they'll fail, if someone is calling you they usually need help from a person.

        This is part of the big problem that a good customer support department tries to solve: How do you determine who really needs help from a real person and who just doesn’t want to use Google?

        My wife doesn’t work in a call center, but she answers phones in a dental office. Lots of people call her to ask questions that they could just google themselves. Most of the time she just googles it and tells them what she found. LLMs are great at doing internet searches for people who can’t be bothered to do it themselves, so those calls could be handled by an LLM. Calls that actually have to do with office policies, scheduling, or their treatment should be handled by a real person.

        It would be great if there was a way to figure out who the “search engine” callers are and who the “valuable business” callers are to be able to route them to the proper lines.
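        A first cut at that routing idea doesn't even need a model; plain keyword rules would do. A hypothetical sketch, where every category and keyword is invented just to show the shape of it:

```python
# Hypothetical sketch of routing "search engine" callers away from
# staff: keyword rules decide whether a caller's stated reason can
# be answered from public info or needs a person. All categories
# and keywords are invented for illustration.

SELF_SERVICE = {"hours", "address", "parking", "directions"}
NEEDS_HUMAN = {"appointment", "reschedule", "cancel", "pain", "billing"}

def route(reason: str) -> str:
    words = set(reason.lower().replace("?", "").split())
    if words & NEEDS_HUMAN:
        return "front desk"       # scheduling/treatment: a real person
    if words & SELF_SERVICE:
        return "automated line"   # lookup-style question
    return "front desk"          # when in doubt, default to a human
```

        The important design choice is the last line: when the rules can't classify the call, it falls through to a human rather than to a recording, so the "valuable business" callers are never lost.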

        4 votes
        1. DefinitelyNotAFae
          Link Parent
          And I'm sure that's why the recordings all say "you can do this on our website." But also, some folks are really bad at using Google, some company websites are confusing, and Google may have given them incorrect info in the past. So I do get it from both sides. But I still wouldn't trust an LLM for it, because you cannot trust that it'll give correct information, as demonstrated by the recent airline case.

          It depends to an extent on the size of the company being called as well. I know I'm the "I only call when I need something, let me talk to a human" type.

          2 votes
        2. [4]
          ShroudedScribe
          Link Parent
          LLMs are great at doing internet searches for people who can’t be bothered to do it themselves, so those calls could be handled by an LLM.

          Sure, but there's a significant risk to this in a medical setting. If the LLM tells someone to yank their own tooth out with a crowbar, the office/practitioner could be held liable.

          Even if this only happens to 1/1000 patients, it's still incredibly concerning.

          1 vote
          1. [3]
            sparksbet
            Link Parent
            While this is a small risk with LLMs (though it's a much lower chance than that, especially if you actually have someone who knows what they're doing setting it up), the bigger actual issue is that LLMs are extremely overengineered for this type of task in a lot of cases. A much cheaper and simpler model could probably cover many use cases.

            1. [2]
              ShroudedScribe
              Link Parent
              LLMs are extremely overengineered for this type of task in a lot of cases. A much cheaper and simpler model could probably cover many use cases.

              I agree. I made this statement in another comment chain:

              The irony in this is that if you lock down an LLM chatbot to only return results that spit out the line-for-line text of a FAQ or similar, it really doesn't need to be backed by an LLM.

              1 vote
              1. sparksbet
                Link Parent
                Ah, I didn't notice that comment was from the same person! We're very much on the same page then.

      4. [2]
        tibpoe
        Link Parent
        chatbots are idiots and the LLMs keep lying to people

        I've worked with the call-center industry, and oh boy do I have news for you!

        3 votes
        1. DefinitelyNotAFae
          Link Parent
          I know. But I can get the help I need from people. I cannot get the help I need from the bot.

  3. daychilde
    Link
    This will not solve the problem you want it to solve.

    People — the ones who aren't racist — who complain about outsourced call centers are probably complaining about the lack of training. I've talked to many such customer service agents. And while I see a correlation between companies outsourcing to save money and not spending money on training, I don't see the outsourcing itself as the cause of the bad support. It's the training.

    Also, call center work is very very shitty work. Some people are pleasant, but most are already irritated that they have to call in the first place, and many have no concept that the person they're talking to is a human just like them.

    I'd much rather see UBI. And frankly, those complaining about AI certainly have a point right now, but it is just a tool that is getting rapidly better. Another 2-5 years and the interaction side (i.e. it understanding you) will be fine. And what the company lets the AI do is likely the same as what it allows frontline agents to do... there's really not going to be any difference once things shake out a bit more.

    Also, sure, force companies somehow to use domestic agents. They'll still train like shit and you'll not have to deal with a foreign accent, but the actual support will not be any better.

    So basically, I think it's a shitshow that this will not solve.

    7 votes