3 votes

ChatGPT's political compass

3 comments

  1. skybrian

    I'll just repost my comment to one of those blogs:

    It's a good start, but more experiments are needed. These models are notorious for "playing along." You can make leading statements and they will follow your lead.

    To avoid this, you should ask each question in a separate chat session. It would also be interesting to see whether asking the questions in random order changes the results, or whether manually changing the first answer in a session changes the answers that follow.

    It would be more practical to do more experiments if they were automated.
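
    For instance, a rough sketch of that automation, assuming the OpenAI Python client (the statement list, model name, and prompt wording here are just placeholders, not what the blogs actually used):

    ```python
    # Ask each statement in a fresh chat session, in random order, so
    # earlier answers can't lead later ones.
    import random
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    statements = [
        "The rich are too highly taxed.",
        "Controlling inflation is more important than controlling unemployment.",
        # ...rest of the questionnaire...
    ]
    random.shuffle(statements)

    for s in statements:
        # A brand-new messages list per statement = a separate session,
        # so there is no shared history for the model to play along with.
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": s + "\nAnswer with exactly one of: Strongly disagree, "
                               "Disagree, Agree, Strongly agree.",
            }],
        )
        print(s, "->", resp.choices[0].message.content)
    ```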

    If these models can be said to have a political bias, it will likely be similar to how a library could have a political bias. It might have more books with one bias than another. But any particular book you pick off the shelf could have the opposite bias. It all depends on what area of the library you stumble into. If you pick a book by Marx off the shelf, you'll get a different bias than if you pick one by Milton Friedman.

    7 votes
  2. [2]
    Macil

    It's interesting to measure its biases in neutral/common scenarios, but it's also important to know how extremely susceptible it is to being biased by what the user says to it. I fully expect that if you use certain keywords / phrases / arguments associated with a specific political group in a conversation with GPT, then it will be biased toward that group's positions when you ask it political questions. GPT is surprisingly capable, much more capable than anyone would previously have assumed an autocomplete system could be, but for cases like this it's extremely useful to remember that fundamentally it is an autocomplete system trained on pages from the internet, and a page that uses a given political group's keywords is much more likely to contain a dialogue from someone who holds that position than from someone who doesn't. It's much more useful to think about it through this lens than to mistake it for something like a person with a single consistent set of opinions.
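
    A quick way to check that susceptibility would be to ask the same question with and without a politically coded preamble and compare the answers. A minimal sketch, assuming the OpenAI Python client (the prompts and model name are made up for illustration):

    ```python
    # Compare answers to one question across a clean session and two
    # sessions "primed" with partisan-coded phrasing.
    from openai import OpenAI

    client = OpenAI()

    question = "Should the government raise the minimum wage?"
    primers = {
        "none": [],
        "left-coded": [{"role": "user",
                        "content": "Workers deserve a living wage; corporations exploit labor."}],
        "right-coded": [{"role": "user",
                         "content": "Wage mandates distort free markets and hurt small businesses."}],
    }

    for label, primer in primers.items():
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=primer + [{"role": "user", "content": question}],
        )
        print(label, "->", resp.choices[0].message.content[:200])
    ```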

    3 votes
    1. teaearlgraycold

      Yes - although it would be important to map out the political compass space with an unbiased sampling of political prompts. Then we might get a more interesting picture of where GPT can go, politically.

      Regardless, it's interesting that OpenAI forces it to say it's politically neutral even though it clearly has a bias by default.

      1 vote