9 votes

Survey Chicken

7 comments

  1. [4]
    Greg
    Link
    "On a scale of 1 - 10, how likely are you to recommend Microsoft Windows to a friend?" "I need you to understand that most people don't sit around recommending operating systems to each other." We...

    "On a scale of 1 - 10, how likely are you to recommend Microsoft Windows to a friend?"

    "I need you to understand that most people don't sit around recommending operating systems to each other."


    We can make a reasonable guess that what they meant was "do you like this product?", but even in this comparatively simple example, that wasn't what they asked.

    8 votes
    1. [3]
      pocketry
      Link Parent

      This is called the net promoter score (commonly referred to as NPS). It is widely used as a measure of how likely your product is to grow organically. It is becoming overused in software and you'll see it everywhere. Since the scoring mechanism boils it down to three categories, you can game it a little: 1-6 is negative, 7-8 is neutral, 9-10 is good.
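
      To spell out the arithmetic, the final score is just promoters minus detractors, each as a percentage of all responses - a minimal sketch, assuming the standard NPS calculation (the function name is mine, purely for illustration):

      ```python
      # Standard NPS bucketing: 9-10 promoters, 7-8 passives (ignored), 6 and below detractors.
      # NPS = % promoters - % detractors, giving a score between -100 and +100.
      def net_promoter_score(responses):
          promoters = sum(1 for r in responses if r >= 9)
          detractors = sum(1 for r in responses if r <= 6)
          return 100 * (promoters - detractors) / len(responses)

      # Five 9s, three 7s, two 4s: 50% promoters - 20% detractors = 30
      print(net_promoter_score([9, 9, 9, 9, 9, 7, 7, 7, 4, 4]))  # 30.0
      ```

      Which is also why a 7 or 8 is worth nothing to whoever is being measured on it - only a 9 or 10 moves the number up.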

      5 votes
      1. joplin
        Link Parent

        Oh god, this just reminded me of another stupid survey issue that I hate. (Who knew I was so passionate about surveys?) I'll often call tech support for something like my cable being down or whatever. At the end of the call, the representative will say, "You'll be offered a brief survey about the quality of my service after I hang up. You'll be asked to rate us on a scale of 1 to 10. Please know that a 7 is like a 0. Thank you."

        Now to me, this sounds like they're trying to cause the Hawthorne Effect, where when you know you're being observed, you act differently, which totally invalidates the survey. On top of which, given most Americans' fear of numeracy, that line is also probably highly confusing to them. "If a 7 is a 0, what's a 5? A negative 2? That's not a number!" It also makes me wonder - if a 7 is a 0, is an 8 a 1 or 3 & 1/3rd? Like does the scale now go: 0, 1, 2, 3, or does it go 0, 3 & 1/3rd, 6 & 2/3, 10? Is there an option for "I no longer understand the question"?
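
        For the pedants: both readings of "a 7 is a 0" are easy enough to write down - a purely hypothetical sketch, just spelling out the two scales above:

        ```python
        # Two readings of "a 7 counts as a 0" on a 0-10 scale (hypothetical illustration).
        def shifted(score):
            # Shift down by 7: 7 -> 0, 8 -> 1, 9 -> 2, 10 -> 3
            return score - 7

        def rescaled(score):
            # Stretch 7-10 back out over 0-10: 7 -> 0, 8 -> 3.33, 9 -> 6.67, 10 -> 10
            return (score - 7) * 10 / 3

        for s in (7, 8, 9, 10):
            print(s, shifted(s), round(rescaled(s), 2))
        ```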

        5 votes
      2. Greg
        Link Parent

        Yeah, it's something that I've come across in the past through my work - I've always liked that example (shamelessly stolen from a tweet I saw many moons ago) as an illustration of how artificial it can be.

        4 votes
  2. [2]
    joplin
    Link

    I'm inclined to agree. Any time I have to fill out a form or take a survey I find myself questioning the questioner. "What do they mean by 'x'?" where 'x' is some word or phrase in the question.

    And I think this bugs me the most:

    There are several strategies to detect careless responses, such as including “trap questions” and asking the same question in multiple ways, on the assumption that the change in wording will not change the meaning in any way to a sincere survey taker. However, it is likely that savvy survey takers quickly detect these strategies and find ways to minimize cognitive effort despite their presence.

    It's pretty obvious when they're doing this. The question I always think is, "Didn't I already answer this? Or do they mean something slightly different here? The previous question about this asked 'x', but this seems to be focused on 'y'. And I do feel slightly different about that particular aspect... Hmm..." It drives me bonkers.

    Back in the 90s I was offered a free WebTV (pre-Microsoft acquisition) that I could keep as long as I answered some number of surveys on it per week. It was a free tech toy so I said, "Sure!" I probably spent 6 to 9 months just taking surveys. Some were straightforward questions about what I ate or drank. But some were these terrible brand surveys that had questions like:

    When you think of this airline, do you feel:

    a) Outdoorsy

    b) Urban

    c) Elegant

    d) Gassy

    (OK, I might have made up that last one.) My response was, "No" but that wasn't an option. I literally had no fucking idea what they were asking. When I think of an airline, I either think, "Yes, they can reasonably get me to my destination on time and in one piece," or "No, those fucking lunatics couldn't find their ass with both hands. There's no way I'm putting my life in their hands." Those are really the only 2 options.

    After about a month or two of taking surveys about things I didn't care about, I started putting in the most ridiculous answers I could just to fuck with them. I'm sure I'm not the only one.

    5 votes
    1. joplin
      Link Parent

      Oh, and the point about comprehension is also spot on. I love this part:

      Taken all together, how would you say things are these days—would you say that you are very happy, pretty happy, or not too happy?

      When I'm taking a survey and see something like that, what I think is, "They missed one." By this, I mean, they have 2 "good" answers and 1 "bad" answer. They're skewing the results! I've taken surveys where they intentionally don't have an odd number of possible responses because everyone gravitates towards the inoffensive middle option, so instead of 3, they'll have 4 or 6 possible answers. But here, it seems like they intended to have 4, but forgot one. The responses "very happy," and "pretty happy" are more positive than "neutral". But "not too happy" is less positive than "neutral". And then I think, "maybe I'm overthinking it."

      5 votes
  3. skybrian
    (edited)
    Link

    From the blog post:

    In the abstract, I think a lot of people would agree with me that surveys are bullshit. What I don’t think is widely known is how much “knowledge” is based on survey evidence, and what poor evidence it makes in the contexts in which it is used.

    [...]

    I think it is important to remember that there is no ancestral practice equivalent to surveys. That is to say, there is no ancient human practice or context in which people anonymously tell the pure, innocent truth with language, in response to questioning, with no thought for the motives of the questioner or the effect of their answers. However, in the new, wholly invented ethnomethod of [doing a survey], it is imagined that subjects do tell the innocent truth, comprehending the underlying sense of the question but not answering with any motive or particularity of context. The anonymity of survey takers is given as proof that they feel free to tell the truth, rather than being perceived as a bar to asking them what they might have meant by their responses.

    [...]

    When we say that some conclusion “rests upon” assumptions, the metaphor might be one of a house resting on a foundation, where if the foundation is not sound, a collapse may occur. But in this case it is more like the parts of a machine, which all must function. In the case of a steam engine, the water container must be sound and hold pressure without leaking, the piston must pop up and down, the wheel must spin freely, etc. If any part of the machine does not function, the machine does not function.

    But in the case of surveys, even if all assumptions fail, if all the pieces of the machine fail to function, data is still produced. There is no collapse or apparent failure of the machinery. But the data produced are meaningless—perhaps unbeknownst to the audience, or even to the investigators. What follows is my attempt to identify the moving parts of survey meaningfulness, with some attention to how they interact. Keep in mind that all of these are based on an underlying assumption that there is no outright fraud—that data are gathered in the way stated, and not made up or altered, either by the researchers or by any of their subcontractors or employees.

    [...]

    Comprehension is difficult enough in actual conversation, when mutual comprehension is a shared goal. Often people think they are talking about the same thing, and then find out that they meant two completely different things. A failure of comprehension can be discovered and repaired in conversation, can even be repaired as the reading of a text progresses, but it cannot be repaired in survey-taking. Data will be produced, whether they reflect the comprehension of a shared reality or not.

    [...]

    I have presented the assumptions for survey meaningfulness in the order that seems most natural for understanding them, and finally come to what I regard as the most important factor, alluded to but not named in italics above. From comprehension and accuracy we can deduce the need for the existence of an underlying phenomenon that is being measured. In the case of nutrition surveys, there is a phenomenon—a fact of the matter—that could be gotten at by, say, constant surveillance. People really do eat certain foods in certain quantities, whether or not it has much effect on their health or mortality. But for the really important issues, things like happiness and depression and political beliefs, there may really be no phenomenon underlying the attempt at measurement. This is not to say that happiness is not real, in the sense that it is subjectively felt that things are going well or poorly. Emotion and feeling and belief are real, as we can all see by experiencing them from time to time. We may even communicate about them in conversation, and use the communication and all the context surrounding it to judge the emotions and beliefs of others. My claim is that there is no basis for believing that a shared phenomenon underlies the use of words like these across contexts, especially the anti-context of survey taking.