8 votes

The rise of the compliant speech platform

1 comment

  1. skybrian
    From the article:

    Once upon a time, operating a speech platform was something like running a bookstore and a bar full of dangerous brawling hooligans all at once. The platform employees who interpreted laws or enforced the companies’ own discretionary rules decided individual, human questions about speech and communication. As a lawyer for Google in 2004, for example, I recall fielding a fraternity’s complaint about the disclosure of its secret handshake one day, and a Turkish politician’s demand that we suppress critical news reporting about him the next. The Turkish politician went away when we declined his request. In other cases, we litigated over removal demands that we thought overstepped national laws and violated users’ rights to speak and access information.

    That artisanal and sometimes combative model of trust and safety persists at some smaller companies. For the biggest platforms, it became largely untenable as the sheer volume of online content grew. By the 2010s, a platform like Facebook or YouTube needed its speech rules to work at an industrial scale. A rule about artistic nudity, for example, had to be concrete and simple enough for tens of thousands of contractors around the world—or even machines—to apply with some modicum of consistency. Speech moderation got scaled up and standardized. Still, the rules that platforms adopted could never quite keep up with the sprawling diversity of speech on the internet, or with the human capacity to say entirely new things, whether wonderful or terrible.

    The current wave of change takes platforms’ existing routinized speech governance to another level entirely. Framing content moderation as a compliance function puts it firmly within a legal practice area that often has stated goals like “avoiding trouble” with regulators and creating a “culture of compliance.” Corporate compliance teams typically do things like building oversight systems to avoid violating anti-bribery laws, tax laws, or securities regulation. Platforms and other companies might use this approach for relatively predictable and formulaic obligations, such as those under privacy and data protection laws like the General Data Protection Regulation (GDPR). Building such systems to reduce complexity and risk by erring on the side of overcompliance often makes sense. Doing more than the law requires might cost the company a little extra money, but it’s worth it to stay out of trouble. And the public may be better off when companies opt to overcomply with laws in areas like privacy.

    Overcompliance with laws about speech is different. Laws like the Digital Services Act (DSA) in Europe and the Digital Millennium Copyright Act in the U.S. already effectively encourage platforms to protect themselves by silencing users’ lawful expression under “notice and takedown” systems for potentially unlawful content. We should be alert to the possibility that the current “compliance-ization” of trust and safety may make that problem worse.

    ...

    Auditing firms, civil society groups, and platforms have all complained about the effort to turn complex human decision-making into something quantifiable and auditable. Auditors point to the lack of established standards, methodologies, or benchmarks for their assessments. They also object to the law’s requirement that auditors attest to their findings at the highest degree of certainty, exposing the firms to liability if their findings are wrong.

    6 votes