5 votes

"Change the Terms - Reducing Hate Online" - a model policy framework for social media platforms

3 comments

  1. alyaza

    this initiative seems better late than never, but man this feels like something the tech industry should have been doing from the start. i feel like you can attribute basically all of the difficulties social media platforms have now with increasing radicalization, militancy, and organization to the fact that they never considered any of this when they got started and as a consequence didn't start fighting any of these things until well after they became a problem. it's more excusable for some platforms than others, but christ can you imagine if these had been the ground rules for most social media from the start?

    2 votes
    1. patience_limited

      In the large organizations I've worked in, the priority is always to make money first, then go resolve the things that impair making money in order of magnitude. Until ignoring hate costs social media platforms real money (lost user base, reputational damage, regulatory action, lawsuits, etc.), the companies won't allocate significant resources for resolution.

      2 votes
  2. patience_limited

    A broad coalition of civil rights organizations has created model policies for reduction of hate online, and an initiative to ensure their widespread adoption across multiple social media platforms.

    From the FAQ page:

    IF A COMPANY ADOPTS THESE POLICIES AS ITS TERMS OF SERVICE, WHAT WOULD THE RULES BE?

    Under these policies, a company commits to not allowing their services to be used for hateful activities.

    ENFORCEMENT. The company will use the best available tools—with appropriately trained and resourced staff, technological monitoring, and civil rights expertise—to enforce the rules in a comprehensive and non-arbitrary manner.
    RIGHT OF APPEAL. The company will provide notice and a fair right of appeal to someone if their content is taken down. This is particularly important for creators of color.
    TRANSPARENCY. The company will regularly provide robust transparency reports and data so that outside groups and researchers can effectively monitor the company’s progress, study trends, and recommend improvements.
    EVALUATION AND TRAINING. The company will invest in its staff and training practices to ensure that it is providing sufficient resources to address the problem, and regularly audit its practices.
    GOVERNANCE AND AUTHORITY. The company will make a clear commitment to the importance of this issue by designating a senior executive, appointing a board of directors committee, and engaging a committee of external advisors, all dedicated to addressing hate and discrimination on the platform.
    STATE ACTORS, BOTS, AND TROLLS. Recognizing that social media in particular is a new front for information warfare, the company will take affirmative steps to identify, prohibit, and disrupt those who try to conduct coordinated hateful campaigns on the service.

    WHAT IS A “HATEFUL ACTIVITY”? WILL THIS INITIATIVE BLOCK FREE SPEECH?

    As defined in the model terms of service, “hateful activity” means “activities that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.”

    Because of the strict definition of hateful activity found in the terms of service, these policies will not block free speech. As an initial matter, the First Amendment does not apply to the policies of a private company; it applies only to actions taken by a U.S. federal, state, or local government.

    But even if it did apply, the First Amendment does not protect all speech. We carefully wrote the definition of hateful activity to cover types of speech that courts have said are not protected as free speech: incitement, violence, intimidation, harassment, threats, and defamation.

    We also looked to hate crimes laws to determine what types of characteristics to protect. It is wrong to discriminate against someone based on their immutable characteristics—those personal traits that one cannot change or that are fundamental to one’s identity—such as race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation, or disability.

    1 vote