38 votes

Q&A with Yoel Roth, Twitter’s former head of trust and safety, on the whirlwind first two weeks under Elon Musk, Twitter’s content moderation approach, and more

2 comments

  1. skybrian (edited)
    You have to be careful with insider accounts since people aren't going to talk about their own screwups, but this is a solid, in-depth interview. There's a lot of detail and it rings true to me. Most outsider critiques of Twitter don't.

    He does admit being wrong about this:

    Shortly after I left Twitter, I wrote a guest essay in the New York Times speculating that it couldn’t possibly get that bad. And there were three reasons. The first one was advertisers. So it’s like nobody who is trying to run a profitable company … would alienate advertisers. Wrong. Turns out they alienated the advertisers. The second reason was regulation, right? So it was like, there will always be a backstop against this because … the disinfo code of practice is a thing and the company has to comply. Wrong. Twitter withdrew from the disinfo code of practice. Unthinkable steps by the company. And then the third was App stores. This idea that a platform can only get so toxic before you see Apple or Google step in and intervene. And that one’s interesting because you saw a big blow up shortly after my piece came out in the Times where Elon suggested that Apple threatened to kick them out of the App store. Apple walked it back. Elon and Tim Cook went for a walk in Cupertino, and now they’re best friends. And Apple is advertising extensively on Twitter. You can speculate about why that happened, but I mention all this because I believed that the plan was you can only go so far before you run into those limits. And I think we’ve seen the company has just absolutely trampled those limits at every turn, and I don’t see how that works.

    12 votes
    1. Comment deleted by author
      1. skybrian
        I think that’s due to the collective decision-making process he talks about. In some cases, without directly blaming his boss, he says he was overruled.

        From the outside, it does look like Twitter making the decisions, and the company chose to “own” them, in the sense of taking the blame itself.

        Also, I don’t know if this is related, but there’s a common practice at tech companies to write “blameless postmortems” where the goal is to describe an incident in detail (including who did what) and come up with all the ways of preventing it from happening again. Unless it’s a case of sabotage, the assumption is that people are well-meaning but often make mistakes, particularly when under pressure. All errors are assumed to be systemic errors and the system needs to be changed to catch mistakes before they become serious.

        This is in reaction to blame cultures where the goal is to find a scapegoat and fire them. When that’s the process, people are not going to be honest about what really happened.

        So, when I see the word “accountability” I stop and think about what it means. Is that blame culture? If not, how is it supposed to work?

        But I don’t know how Trust & Safety can catch errors before they become serious. The stakes are high, and the available information is limited because it comes from outside the company, from unreliable sources. It depends on fuzzy judgment calls, and the results of a decision are often better understood only in retrospect.

        5 votes