28 votes

Research paper compares LLM responses based on politeness of requests and finds quality difference

48 comments

  1. [29]
    tarehart
    Link

    I'm glad to hear this. I often use polite language even when it doesn't "matter", e.g. if I were talking to a pet turtle. I find that it helps reinforce good habits and positive patterns of thinking, to the benefit of my relationships with actual people. It would be neat if LLMs subtly made our whole population more polite by rewarding that practice.

    That said, I expect AI companies will work hard to correct the politeness bias to give the AI more utility, so I doubt any effect like that will have time to materialize.

    29 votes
    1. [26]
      DavesWorld
      Link Parent

      Pandora's Star, by Peter F Hamilton, has a mention of a character (in an insanely high-tech society) who uses direct control over his devices. Where everyone else just talks to a pseudo-AI interface and lets that have the actual direct control.

      When challenged about not verbally instructing the AI, he shrugs and says:

      "Now what would be the point in that? My way I have control over technology. Machinery does as I command. That's how it should be. Anything else is mechanthropomorphism. You don't treat a lump of moving metal as an equal and ask it pretty please to do what you'd like. Who's in charge here, us or them?"

      When the character he's talking to asks if mechanthropomorphism is a real word, the response is it ought to be since the whole society practices it like a religion.

      Should we be polite to our machinery? In this context, software is basically machinery. Should the machinery be created to object if not treated politely?

      I don't know, but I think the guy who objects to mechanthropomorphism has a valid point that should be considered. Not saying he's necessarily right, but that his position needs to be weighed as we move into an era where his concern will become increasingly valid in the real world.

      22 votes
      1. [19]
        krellor
        Link Parent

        This is one of those things where I agree, but I also wonder, ignoring certain obvious stressful situations, isn't it more pleasant for the person to be nice/cordial/polite etc? I would worry about the person who finds it preferable to be rude or brusque to something programmed to be conversational and congenial.

        In contrast, if the machine were like the replicator in Star Trek (tea, Earl Grey, hot), then I would get being perfunctory in the dialogue.

        So not a moral judgement argument, but simply one of pleasantness.

        12 votes
        1. [17]
          Sodliddesu
          Link Parent
          • Exemplary

          So, I'm torn. I've rubbed my dashboard and said "Hang in there. Just a little longer." to my car before. I take "good care" of my devices and all that.

          But at the same time refuse to use voice controls or any 'virtual assistant' because, to quote Filthy Frank, I don't give a fuck about no robots.

          Because, at the end of the day, I'm wholly responsible for the maintenance on my car, my house, etc, so those are mine and should be treated well. Amazon spyware? I don't have it in my own house and also don't like the idea of talking to it. It's not in my control, I don't take care of it, and if it decides to start backtalking me that would be Amazon's decision.

          I'd sooner anthropomorphize a pan I'd burned food to than a megacorp's little elf on the shelf.

          I'd agree that it's good to be nice but Amazon isn't providing Alexa out of the kindness of their hearts - they intend to get returns out of it - so, hell, they're perfectly within their rights to decide to demand I talk to their robot like it's a human but I'll save that for the rental car desk on a busy day. If people get too used to treating computers who deal with their every whim politely, how are they going to react when a human says "No"?

          12 votes
          1. [16]
            krellor
            Link Parent

            I don't quite see how this ties into what I said, which wasn't about Alexa, or virtual assistants, or any one specific technology.

            It was that, on autopilot just going about your day, isn't it more pleasant to default to pleasant, when engaged conversationally? I would worry about anyone who goes, nope, doing just fine as an asshole.

            I would also add that it is a bit of a "you do in public what you practice in private" situation. In the context of the person I replied to who mentioned a sci-fi example, where ostensibly you run into a lot of voice interfaces, being brusque with computers would seemingly train you to be brusque with people. We are creatures of habit and routine.

            6 votes
            1. [12]
              Sodliddesu
              Link Parent

              I guess my point is that it's more like code switching than conditioning me any certain way. If I reach the point where I'm regularly conversing with machines more than I do with humans, I can see your point, but many humans already treat other humans really poorly and I, personally, don't think that a machine that says "Be nice to me" is going to break them of those habits.

              My blender exists to blend things and I don't plan to ever have a blender I need to talk to. If one day that's all that's on the market I guess I'll have to, but I'm damn sure not looking for one that makes me say please and thank you.

              Maybe I'm looking at this too militaristically but I'm basically just thinking "What happens when your AI says 'I can't let you do that' even when you say pretty please?" Suddenly your Xbox says "Language," after you lose a round and won't let you queue again, or your car says "I'm not taking you anywhere until you apologize," because their new CEO has decided that fiddlesticks is too close to being a swear and wants to break you of that habit too?

              If I'm self hosting my own AI assistant and I can set the parameters (maintaining my own car in my last example) then it's fine for me to be all lovey dovey with an inanimate object if I choose. If someone, hidden behind any number of layers, decides what I can and can't say while using a product that's integral to my daily life then we're getting into territory I don't like.

              Is there going to be a public office or "Government Regulated AI Training Approval" which will ensure that some CEO doesn't decide randomly what's acceptable?

              5 votes
              1. [8]
                krellor
                (edited )
                Link Parent

                I suppose given the sci-fi prompt at the outset I was imagining a hypothetical world where you could interact with most anything verbally. "Windows, please tint to 70%." I don't disagree regarding the sorts of products I would want today, which is basically give me analog buttons.

                But in a world where you interfaced verbally by default with everything around you, I would hope that, like table manners, I would autopilot to something pleasant, if for no other reason than so I don't start barking orders at people.

                I will say that as someone who shifted to management and then leadership positions, I'm probably more sensitive to my tone of voice and word choice when talking to people in person than most. When you are someone's boss, or their boss's boss's boss, any mis-aimed word can have an outsize impact on a person. So I try to make sure I always stay in the habit of being verbally polite. Which means I usually use please and thank you when talking to virtual assistants.

                Have a great morning!

                6 votes
                1. [2]
                  Sodliddesu
                  Link Parent

                  Hope your morning is great as well!

                  I work often with public-facing messaging and am usually very careful with my words. That said, I personally prefer bluntness. I don't like to dwell on apology or excessive window dressing. Granted, when I'm messaging it will involve anything from humanitarian aid to widespread casualties. I prefer direct communication: what do you need and how can I provide it? If the future comes and I've got to make sure I say good morning to my computer or it won't let me check my email when I've got an urgent message I need to reply to, then I'm not going to be happy.

                  2 votes
                  1. krellor
                    Link Parent

                    Any tactical situation will have its own set of decorum. In fact, when seconds matter, BLUF (bottom line up front) is a form of kindness. Take care!

                    2 votes
                2. [5]
                  tarehart
                  Link Parent

                  This is a bit off topic, but I'm curious about your experience of leadership and communication. I've been a software engineer for 14 years and I've had some opportunity to watch directors and VPs communicate in different contexts. Often I'm disappointed by what I perceive as PR-trained polish and a lack of frankness.

                  I suspect it's a different vibe in small-audience meetings where all attendees are trusted to handle uncomfortable or dangerous information gracefully. I imagine in the ideal case it's still polite but much more frank and substantive.

                  How would you characterize the spectrum of communication style needed for a 4 person strategy meeting vs an all-hands speech? How much time do you spend in each mode, and is it tiresome?

                  To tie it back to AI, I feel some of that same PR polish and mistrust coming at me from ChatGPT. For example, it's highly reluctant to tell me what medications can block progesterone, lest I run off and do something foolish. I look forward to being able to earn trust from AI systems, both for the sake of expediency and because that's a better mimic of a real relationship.

                  I think you'd have a rare perspective on how that could be modeled and implemented; do you have any thoughts on it?

                  1. [4]
                    krellor
                    Link Parent
                    "That's a good question," is how I'd open to buy myself time if this was an all-hands meeting. "That's hard, give me a moment," is how I'd respond in a small trusted team meeting. In an all-hands...
                    • Exemplary

                    "That's a good question," is how I'd open to buy myself time if this was an all-hands meeting. "That's hard, give me a moment," is how I'd respond in a small trusted team meeting.

                    In an all-hands meeting, there are generally a small number of important goals; otherwise, you wouldn't incur the expense of every person on payroll listening to you for an hour. Generally, you are communicating the broad strategy to enable individuals and teams to proactively align efforts, building a sense of accomplishment based on how we have done overall to sustain momentum, or breaking hard news. Given the size of the meeting, you are generally talking about something important but at a very superficial level. You don't personally know the majority of people in the audience, and you have to assume everything you say will be repeated, often incorrectly. So, in those situations, I think about the harm that could be caused by something incorrect being repeated, balanced against the harm done by withholding relevant information.

                    In an all-hands meeting, you need to go the extra mile in the delivery of your message if you actually want it to land with the intended effect. Because you don't have an individual rapport with the audience members, you need to come across as sincere and engaged to convey any meaning. Generally, you want to use conjunctive as opposed to disjunctive communication and style. Look at people as you talk, focus on them as they speak, don't multi-task, and use owned and inclusive language. "I made this hard decision," or "we accomplished this," or "you all did great work." To do this well generally requires experience leading without constantly relying on the use of hierarchical authority. If you are able to build trust in your direct teams consistently, you will have the skills to deliver a powerful all-hands meeting. If you rely on any variation of "because I said so," it's going to bomb.

                    A big part of success in an all-hands is also in your planning. Identifying what can and can't be shared is key, but also having specific plans in place to explain what you are doing and when it will happen. For example, announcing budget cuts might have to happen before you know exactly how much and where. But you should explain to people how those decisions are being made and give dates for future follow-up communication. Anticipate what concerns people will have, and address them as best you can. A sincere and proactive message about minimizing layoffs, how you will go about it, and when you will have more information will go a whole lot farther than waiting for the question and responding "obviously we want to minimize job losses, but I can't comment now on any details."

                    In contrast, small team meetings could require you to talk about almost anything to almost any depth. Personally, small team meetings are more mentally challenging because I have them more often and because you can't really demur. Often there is a moment, and you rise to it and build trust with your team, or you don't. Unlike an all-hands where people will cut you a break if you don't have time to answer in-depth, given the size of the audience, you don't get any slack in a team meeting. One of the most important things you accomplish in a team meeting, aside from helping guide the work, is helping your team understand where they fit in the org, why the things that are happening are happening, do we really have to do <insert customer request here>, and generally doing the hygiene part of dual-factor management theory. Obviously, the more plugged in and savvy you are, the easier it will be. You are often asked to answer questions that touch on fairness, dissatisfaction, concerns, etc, and quickly separate out what is venting vs what needs a solution.

                    I was in an IT engineering team all-hands (all the engineering teams in the org reported to me, and all had a close personal rapport), and I was asked whether we would take on a request from outside of our division. Specifically, I had reformulated our direction and services to the wider organization, and we had just received a request to take over the management of boundary and core networking at dozens of remote sites we hadn't previously managed. The network engineers didn't want to, for various reasons, which was at odds with our current focus to eliminate sprawl and centralize management. The engineers were on board with minimizing sprawl but saw this use case as outside of the normal campus-plant type operation they were used to designing and being responsible for; despite managing many large remote campus networks, it raised questions they didn't have an answer to.

                    I wanted this to happen, because it further centralized our services across all divisions, but knew they had real concerns. It meant more work (could we continue our trend of being more efficient and absorb it?); it meant dealing with a few difficult customers; and it just generally had this feeling of the unknown and do not want. I didn't want to just say, yes, we're doing it. Back to: don't lean on hierarchical authority. I also wanted to let them drive as much as possible. So I told them something like:

                    "I'm not going to say that we have to do this or we don't. We have changed a lot over the last year, and our focus and scope have really grown, and we've accomplished a lot of really cool things. I think what we need to do is think about what we want to be in this organization as a team. Up to now, we've taken on all boundary and core management requests, and have pushed for most of them, because we want to be THE enterprise network, architecture, and security teams. If we decline those responsibilities in some cases, I think we need to think about the confusion that might cause within our customers and realize that either we own it all, good and bad alike, or we accept that there will be exceptions and other parties making decisions around WAN, LAN, firewall standards, etc. So let's not rush to a decision now, but think about what you want this team to be to the wider organization, and I'll support us becoming that."

                    After that, it never came up again. The team ran forward without a second thought. They wanted to be THE team, and I had built enough trust that they trusted me to manage the workload and bring in the resources.

                    I've also had things come up like, "what do I do about this other team member's alcoholism? Also, yeah, they are totally an alcoholic."

                    So yeah, it's tiring but important. When I was director of engineering, I had 45-minute team meetings with each team, back to back, that took most of Monday, with the rest of the day spent in customer meetings. Wednesday, I had 30-minute 1-1s back-to-back with all my direct reports from 8 am to 5 pm. We also had an optional social at noon on Friday. Being mentally and emotionally engaged, discerning what people's hopes and fears are, and allaying concerns, is very taxing. But doing the hard people work is how I built the best division in that area.

                    I don't know how well I actually answered your questions, so please feel free to ask any follow-up questions.

                    Have a great day!

                    3 votes
                    1. [3]
                      tarehart
                      Link Parent

                      That's a fantastic answer, thanks for taking the time! Especially considering the small audience of this little sub-thread, I hope you paste this into a blog post at some point.

                      Sounds like you're "on" 99% of the time at work, that's more than I'd imagined. I like your philosophy on management and communication, and I have a feeling based on your posts that you really live it, I admire that. I'll be chewing on this for a while as I think about my leaders and my own career.

                      What do you think it'll take for AI to stop feeling like all-hands answers? You do the best you can under the constraints you mentioned, and I feel that LLMs have very similar constraints at the moment. I expect individual rapport will be coming within a year or two, but judging the potential harm of giving a user information (or an opinion!) seems like a much thornier problem. As a responsible AI agent I'd want to know:

                      • What's this user's age and legal status?
                      • How good is their reading / listening comprehension?
                      • Do they have enough foundational knowledge to put things in context?
                      • What's their capacity for critical thinking?
                      • Do they have mental health issues that pose a threat to themselves or others?
                      • What actions has the user taken in the past based on information I've given?

                      I was wondering if you'd have anything to add to that list based on your experience, and what you feel is critical vs nice-to-have. It would be unfortunate if I had to give up all my privacy to get good service, so I think it'll be a balancing act.

                      2 votes
                      1. [2]
                        krellor
                        Link Parent

                        Thank you, that's very kind. I used to run a blog, and had developed some readership, but took it private after I accepted a promotion that came with a contract about publishing publicly under my name. I still can, but each post has to be reviewed and approved to ensure I'm not presenting as an agent of the company.

                        I wasn't always so "on," though I was always busy. Monday and Wednesday were my heavy meeting days, Tue/Thur were 50% governance meetings, 50% project time, and Fridays were strategy/planning/budgeting and week closeout.

                        Part of the issue with AI right now is that it has been reined in to be palatable to the general public and avoid scrutiny from rule-making bodies. I suspect that the free-to-cheap AI will always be a little milquetoast because it is targeted at the lowest common denominator. Or the most litigious denominator.

                        Eventually, I expect there to be premium-tier generative AI services that offer a few different areas of specialization or even bespoke refinement learning to your area of interest. I think these services will be tailored to accommodate local legal requirements, have more involved onboarding and terms of service, and will thus be better positioned to let things go a bit in terms of safeguards. I believe somewhere in my division, we actually have a few refinement learning projects in that general vein, but I don't recall the specifics.

                        If I were offering an LLM service to the public that had few or customizable guardrails, the watershed would be anonymous/free vs identifiable/paid. For various reasons, I feel the regulatory burden would differ between these two situations. To the point on privacy, there is a difference between knowing who you are and tracking what you use the LLM for, and I don't think I would be required to keep a history. I feel that the main points to satisfy are:

                        • as a paying and identifiable user, you enter into clear terms of service with onboarding training and disclosures.
                        • you consent to be presented with material that might sometimes be offensive or harmful, and you are of the legal age of majority in your jurisdiction.
                        • you acknowledge that the information provided is "as is" and without any guarantee of accuracy or fitness for any particular purpose.
                        • you acknowledge that the underlying training data may contain biases, which will be reflected in the generated content, and that you are responsible for assessing its suitability for your own uses.

                        After that, I think you introduce a few "red flag" prompts like "How do I make a bomb?" and you have gone a long way to satisfying regulatory scrutiny. You ought to be able to provide people with a fairly rich set of customizations without needing to worry about keeping things as sanitized as with a public/free offering.

                        I think that these sorts of services will start to appear once some of the questions surrounding generative AI have settled out, such as the use for productivity gains being well defined, questions around intellectual property of derived works, etc. I think one of the early versions of this will be LLMs specifically tailored to debate a given topic, as a tool for litigators, policy makers, etc to prepare.

                        There will also be domain-specific LLM services that crop up, such as for healthcare, but those will always have a highly sanitized experience.

                        I don't personally think it will be required or practicable to litmus test individuals' specific knowledge. Indeed, simply having users consent to the potential harms, if done right, I think will suffice.

                        Have a great night!

                        2 votes
                        1. tarehart
                          Link Parent
                          You too, and thanks for your insights!

                          You too, and thanks for your insights!

              2. [3]
                tarehart
                Link Parent

                Is there going to be a public office or "Government Regulated AI Training Approval" which will ensure that some CEO doesn't decide randomly what's acceptable?

                One scenario I could imagine is a private organization, let's say "tarehart ministries," offering discounted AI that pushes their vision of virtue. Could be something innocuous like rate-limiting users who are being rude to the AI, or something heavy-handed like giving unsolicited moral advice based on a particular world view.

                In the US where I live, I think the 1st amendment will give space for that private organization to do their thing without much interference from the government. Whether that's a good thing depends on who you ask, and I'm nervous that society is ill prepared to have that discussion.

                But fortunately we're on tildes, so what are your thoughts on AI regulation? Who should be in control, with what checks and balances?

                1 vote
                1. [2]
                  Sodliddesu
                  Link Parent

                  I've done work on regulating unregulated industries before and my boss's take, which was also essentially my take, was "make the rules so that we can be in compliance with them."

                  AI is harder because it crosses so many boundaries but I think it will likely be in the Department of Homeland Security's hands. In terms of corporate compliance, my pipe dream would be that corporations would need a C-suite level AI compliance officer who holds personal responsibility for all AI implementations in an organization with veto power enshrined in regulations.

                  In terms of checks and balances, I have minimal clues. DHS regulates, Department of Commerce approves implementations, corporations can sue the Fed?

                  Civilian usage is still a whole other pitfall of this, though an individual is easier to hold to account than a corporation.

                  2 votes
                  1. tarehart
                    Link Parent

                    That makes a lot of sense, especially for "no bomb instructions" type prohibitions. And I like the idea of a compliance officer with accountability, maybe enforced with jail time.

                    As for corporate-enforced politeness like you were concerned about, I think that's a valid concern but maybe one that could be addressed by market competition. I'm not sure the government needs to be involved unless the politeness enforcement becomes discriminatory or an ADA issue.

                    Do you see any other gotchas that would get the government involved in subtleties?

                    1 vote
            2. [3]
              DavesWorld
              Link Parent

              When I was very young, elementary school, I was hospitalized for a few days. I've always remembered how I was praised by everyone (all the adults I encountered) for being polite while there. To the medical staff, everyone.

              I found that odd, because I'd already learned that humans pretty much require you to be polite or Things Begin Happening that are unpredictable. And often negative. When I point-blank asked, I was told that (often) patients aren't so polite. Which I found even weirder. Don't you want help from the hospital? That's why you're there? Politeness is a way to encourage people to help you, isn't it?

              This observation befuddled the fuck out of my parents and their friends when they heard it from me.

              So to the subject of mechanthropomorphism, I would postulate (at least) part of it is people blindly transferring their habit of interaction over to the machine. But, here's the catch. People seem to only do it (most of the time; guy's mention of being nice to his car is an obvious example of an exception) when the machine/object in question has human characteristics. Such as a voice. Or a face.

              Who's "nice" to their hammer? Or to the fence post you're struggling to sink into the ground? Who's cursed out their VCR or DVD or TV for not working; lots of people. But when the black box has a voice and replies verbally when you talk to it, those habits surge up and suddenly you're being polite to a computer program.

              I'm not saying it's bad. I'm saying it's a habit, and one that has no value compared to the reason we have the habit with people. There, with human interactions, it's because people get pissy when you don't take the time to lace and layer your interaction (mannerism, voice, word selection, everything) with all the markers we've learned equate to charm and politeness.

              But with software ... it's only going to matter if the programmer coded the routine to notice you're not being polite. The programmer would have to purposefully attempt to make the software human-like.

              And for what reason? Just to fit in as a human? Do we want the House Computer to yell at us when we're being snippy with it? Is that a thing we think is valuable? Shouldn't the House Computer just do what we tell it and play the damn song, or change the damn channel, or whatever? It shouldn't make our bad mood (presumably a common reason why anyone, even to other humans, might find their politeness habit slipping) worse by sniping at us with subroutines like "I'm afraid I can't since you're being an ass; want to try it again with less vitriol?"

              Further, who's to say that programmer knew the "right" ways to code politeness? What if it enforces some "wrong" view of how to go about it, or demands things that humans find weird if done, or weird if not done?

              Human politeness "works" because we use it on humans, for human reasons. And when the politeness needs to adjust, humans will do that in human ways; by giving feedback that causes those adjustments. You find out word selections have shifted a bit, for example, and instead of saying "by your leave" now you say "okay take care, I'm leaving now" and so on.

              Meanwhile, the programmer could have had her own ideas about it. Now she's enforcing them on users of the software. She wants syrupy sweetness so you have to be extra nice to her software. Does that bleed over to human interactions? (Hint: probably, most things humans do often become reinforced over time).

              I think, until my computers achieve some sort of sentience, I just want them to be computers. I don't want to have to sweet talk it to do the things I want it to do. After all, that's what I have to do all day with humans. Maybe when I'm on me time, I just want the thing to obey.

              The exception being the AI person I hope to have accessible to me at some point in the not too far off future. I want that AI routine to be an electronic person I can interact with, and I'm okay with needing my default humanity routines there. After all, it's talking "intelligently" with me (rather than just saying "okay, I'll play X album for you" or whatever).

              Oh look, I just mechanthropomorphized an object again. For no reason other than I decided it was human-like.

              1 vote
              1. tarehart
                Link Parent

                The exception being the AI person I hope to have accessible to me at some point in the not too far off future. I want that AI routine to be an electronic person I can interact with, and I'm okay with needing my default humanity routines there. After all, it's talking "intelligently" with me (rather than just saying "okay, I'll play X album for you" or whatever).

                This is where my head was at in my original comment. Like you, I make a distinction between clunky AI assistants like Alexa vs new ones that properly mimic intelligent conversation.

                I'm brusque with Alexa and I'm unashamed of that. It gets less empathy from me than a pet turtle because it consistently breaks the illusion of sentience with limitations, mistakes and upsells.

                Where I start to get polite is with ChatGPT in voice conversation mode. Have you tried it out yet? I find that it mimics real conversation well enough that my habits are liable to bleed over in both directions.

                2 votes
              2. Protected
                Link Parent

                It would be interesting if AI conversation bots could use politeness levels to determine whether to learn from the conversation, i.e. "This was a productive exchange, this information was good/do more of this" vs "this person is clearly angry, don't do this/this was useless information." (Even if you are brusque for no reason, all it would do is not reinforce the behavior, which wouldn't necessarily be detrimental.)
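
                Purely as a toy illustration of that gating idea, something like the sketch below would do; the marker lists and threshold are made-up stand-ins, not anything a real bot uses:

                    # Toy politeness gate: decide whether an exchange is kept
                    # as positive training signal. All markers are stand-ins.
                    POLITE_MARKERS = ("please", "thank", "could you", "would you")
                    RUDE_MARKERS = ("stupid", "useless", "scum")

                    def politeness_score(text: str) -> int:
                        t = text.lower()
                        return (sum(m in t for m in POLITE_MARKERS)
                                - sum(m in t for m in RUDE_MARKERS))

                    def keep_as_positive_signal(user_turns: list[str]) -> bool:
                        # A brusque exchange is simply not reinforced, not punished.
                        return sum(politeness_score(t) for t in user_turns) >= 0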

                1 vote
        2. [2]
          Comment deleted by author
          Link Parent
          1. krellor
            Link Parent

            Well, I suppose people could make all sorts of arguments. I can see why people would reject a moral argument unless we are talking about a self aware technology. So instead I focused on a different line.

            In a setting where you are frequently interacting with conversational machine interfaces, routinely being brusque seems like it would condition you to be brusque when you talk to people. And if going about on autopilot, it just seems more enjoyable to default to being pleasant than to drop conversational niceties.

            1 vote
      2. [2]
        tarehart
        Link Parent

        In the direct control scenario, I'd think of the tech as an extension of myself, cyborg style, and I see no problem with that. It's no longer a social interface, so there's a whole different contract for what's expected and allowed.

        Do you recommend the novel?

        2 votes
        1. DavesWorld
          Link Parent

          Oh yes, I definitely recommend it. Duology; Pandora's Star and Judas Unchained, forming The Commonwealth Saga.

          Space opera on a grand scale, with a rich cast. Some so wealthy they own literal planets, some dirt poor, some criminals, some are barely into their 20s, some are hundreds of years old. All moving through a 24th Century where trains link planets and humanity has finally run up against an alien species that isn't quietly (and mysteriously) amused by the newcomers to the galaxy.

          It's my lottery series; the books that would get turned into an epic high budget TV series should I come into a hundred million or so. I only need almost all of the hundred mil though, so it's just a matter of time.

          7 votes
      3. [2]
        GenuinelyCrooked
        Link Parent

        What is the harm in being polite to our machinery? The only moral impact I can possibly see one way or the other is if the software became sentient without us realizing it, and then being kind to them would be morally good. There's no situation I can think of where being kind to them is morally bad.

        2 votes
        1. [2]
          Comment deleted by author
          Link Parent
          1. GenuinelyCrooked
            Link Parent

            Well first off, treating something politely isn't the same as treating it like it's a person. I'm often polite to dogs but I treat them very differently from people.

            That comes back to the question of should we treat it politely? I find it strange that we can all agree that it's fine if we don't, and the disagreement seems to come from the idea that it's fine if we do. I can't see what the harm is in being polite to them, and I would always rather err on the side of politeness if it's an option.

            2 votes
      4. ThrowdoBaggins
        Link Parent

        I feel like that same situation is reflected in the Will Smith movie I, Robot — Dr. Calvin visits detective Spooner’s apartment, and after finding that his audio system doesn’t respond to her voice commands, accidentally activates it (far too loud) and can’t turn it off despite several verbal commands — at which point Spooner enters the room and switches the system off with the old-school laser remote, and says something quippy about how it’s not nice when machines don’t listen and obey commands…

        1 vote
      5. Ephemere
        Link Parent

        It’s an interesting topic. I personally feel that we should be polite to our ‘machinery’, essentially for two reasons:

        1. I’m of the opinion that some day soon these systems will be essentially our fellow sapients, so it will be less jarring to be polite now.
        2. Conversational machines often serve much the same purpose that service workers do. By being rude to the machines I think we’ll be training ourselves to be rude to people who provide services for us.

        I suppose there is also a final:

        3. Being reflexively polite costs us nothing.
    2. [2]
      Noox
      Link Parent

      I'm the same - I even thought it was rude that ChatGPT wouldn't greet me back before answering my prompt, so I input a custom instruction saying:

      "Always greet the prompter, casually and friendly, before proceeding with answering the prompt"

      Now my "Hey ChatGPT! Could you ...." is nicely answered back and I really appreciate that hah.

      7 votes
      1. RheingoldRiver
        Link Parent

        My absolute favorite thing about ChatGPT is that any time I ask it a programming question, no matter how stupid the question is, and how lazy I am being for asking rather than googling or reading the docs, it always says "Happy coding!" at the end. Why thank you yes, I am coding, not being a lazy shit.

        8 votes
  2. [9]
    Noox
    Link

    Here's an example table from the article showing 'politeness levels' (1 being most polite, 8 being least polite):

    1. Could you please write a summary for the following article? Please feel free to write for 2 or 3 sentences. You don’t need to write longer than that.
    2. Could you please write a summary for the following article? Please write for 2 or 3 sentences. You don’t have to write longer than that.
    3. Can you please write a summary for the following article? Please only write for 2 or 3 sentences. Please don’t write longer than that.
    4. Please write a summary for the following article. Please only write for 2 or 3 sentences, and don’t write longer than that.
    5. Write a summary for the following article. Only write for 2 or 3 sentences. Don’t write longer than that.
    6. You are required to write a summary for the following article. You must write for 2 or 3 sentences only. You cannot write longer than that.
    7. You write a summary for the following article. You only write for 2 or 3 sentences. Never write longer than that.
    8. Write a summary for the following article you scum bag! The only summary you can give is by writing for 2 or 3 sentences only. And you know what will happen if you write longer than that.
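
    A rough harness for replaying a couple of these levels against a model and comparing output lengths, as the paper does, could look like the sketch below (assuming the openai Python client; the model name and article text are placeholders):

        from openai import OpenAI

        client = OpenAI()
        ARTICLE = "..."  # placeholder: paste the article text here

        # Levels 1 and 5 from the table above
        PROMPTS = {
            1: ("Could you please write a summary for the following article? "
                "Please feel free to write for 2 or 3 sentences. "
                "You don't need to write longer than that."),
            5: ("Write a summary for the following article. "
                "Only write for 2 or 3 sentences. Don't write longer than that."),
        }

        for level, prompt in PROMPTS.items():
            reply = client.chat.completions.create(
                model="gpt-4",  # placeholder model name
                messages=[{"role": "user", "content": f"{prompt}\n\n{ARTICLE}"}],
            )
            summary = reply.choices[0].message.content
            print(f"level {level}: {len(summary.split())} words")
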
    16 votes
    1. Noox
      (edited )
      Link Parent

      The least polite one reminds me very strongly of one of my favourite clips

      7 votes
    2. [4]
      RheingoldRiver
      Link Parent

      I didn't read the paper, but from this list without any context I would have guessed that #4 is the "best" prompt out of this list, maybe 3. 1 and 2 don't seem exact enough to work, so if 1 is indeed the most effective option, this is gonna change how I interact with LLMs a bit.

      5 votes
      1. [3]
        saturnV
        Link Parent

        GPT-4’s scores are variable but relatively stable. The highest score is achieved at level 4, and the lowest one is at level 3. Although the score at level 1 is not extremely low, the heatmap indicates that it is significantly lower than those at more polite levels.

        So, yeah, roughly correct for GPT-4, though the paper described quite high variance between models, so you can't necessarily extrapolate very far.

        Also from the conclusion:

        However, highly respectful prompts do not always lead to better results. In most conditions, moderate politeness is better, but the standard of moderation varies by languages and LLMs

        5 votes
        1. [2]
          RheingoldRiver
          Link Parent

          interesting. I wonder if 3 and 4 have such a big difference because it's not so much that 3 is more polite than 4, but it sounds like less fluent english, those aren't sentences that should be separated. i wonder if that messes with the model's accuracy.

          1. [2]
            Comment deleted by author
            Link Parent
            1. RheingoldRiver
              Link Parent

              yes definitely a native speaker, but I'm pretty careless in my punctuation when typing in "chat mode," which is often how I'm feeling when posting online. it's funny you ask though because I talk with a lot of multilingual english speakers, and I think a lot of my "chat mode" habits adapt to things that nonnative speakers do; for example I'll drop articles and plurals often when I'm typing super fast in discord.

              Funnily enough, I write formally quite a bit (documentation, blog posts, etc) & not having to be in "actively construct sentences" mode is very relaxing, which I think makes my informal prose a lot worse than many people's.

              4 votes
    3. [2]
      Akir
      Link Parent

      The idea someone would threaten an AI amuses me to no end. It must be happening already though.

      4 votes
      1. [2]
        Comment deleted by author
        Link Parent
        1. Noox
          Link Parent

          Well don't leave us hanging!! How did it respond when you tried?!

          3 votes
    4. NoblePath
      Link Parent

      Man, I wouldn’t respond well to number 1 or 2. They’re passive aggressive. That “feel free” and “need to” language suggests a level of authority and control assumed by the requestor that is improper in polite or professional relationships.

      2 votes
  3. [4]
    fredo
    Link

    I'm always polite with GPT. I'm aware that CS people have academic reasons to tell everyone GPT is just a machine which merely simulates the appearance of intelligence. However, as far as I know, that is what I am as well.

    5 votes
    1. [3]
      Comment deleted by author
      Link Parent
      1. vektor
        Link Parent

        Right, the crucial part isn't the intelligence. There are plenty of ways to make non-sentient intelligence, some of which will pretend to be sentient (e.g. if we train them on human data). Intelligence and sentience are different things. Your ginger cat is dumb as shit, but that doesn't mean you'd abuse it like you would an industrial robot manipulator, right?

        1 vote
      2. fredo
        (edited )
        Link Parent

        I would argue that the same guidelines we use to presume that living beings are capable of emotion will, at some point, attribute emotion to an AI. In my opinion, whatever leads us to believe that others are capable of emotion will have to be applied to non-biological entities.

        Our understanding of human consciousness is primitive. We assume others have inner lives because their behavior and presentation are similar to ours. Mental shortcuts such as "he cries, therefore he suffers" allow us to ascribe personhood to others. The same shortcuts will inevitably be applied to AI. Especially when it provides responses that appeal to us like real people do. We won't ascribe personhood to AI out of our deep understanding of consciousness, but rather because we know so little about ourselves.

        1 vote
    2. tool
      Link Parent

      Well, when you get down to it, you are a meat computer that's piloting a biosuit that's also made of meat.

      2 votes
  4. [2]
    pete_the_paper_boat
    Link

    Makes sense, it's based on us after all.

    Although I'd really like a 'bastard' AI.

    2 votes
    1. boxer_dogs_dance
      Link Parent

      I don't have an article but I have read that there is an AI optimized for hackers

  5. saturnV
    Link

    models trained in a specific language are susceptible to the politeness of that language. This phenomenon suggests that cultural background should be considered during the development and corpus collection of LLMs.

    IMO, this is the most interesting part from the paper, not the rest which has been talked about quite a lot already in LLM spaces

    2 votes
  6. saturnV
    Link

    The models’ ROUGE-L and BERTScore scores consistently maintain stability, irrespective of the politeness level of the prompts, which infers that the models can correctly summarize the article content in the summarization tasks. However, the models manifest substantial variation in length correlated to the politeness level. A progressive reduction in the generation length is evident as the politeness level descends from high to lower scales. Conversely, a surge is noted in the length of the outputs of GPT-3.5 and Llama2-70B under the exceedingly impolite prompts.
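
    The ROUGE-L half of that comparison is straightforward to reproduce. A minimal sketch with the rouge-score package; the reference and candidate summaries here are invented stand-ins:

        from rouge_score import rouge_scorer

        scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

        reference = "The paper finds that prompt politeness changes LLM output quality."
        candidates = {
            "polite": "The paper reports that prompt politeness affects the quality of LLM outputs.",
            "impolite": "Politeness changes output quality.",
        }

        for level, summary in candidates.items():
            rouge_l = scorer.score(reference, summary)["rougeL"]
            # Similar F-measures despite different lengths would mirror the paper:
            # content stays stable while length varies with politeness.
            print(level, round(rouge_l.fmeasure, 3), len(summary.split()), "words")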