13 votes

You can sue media companies over Facebook comments from readers, Australian court rules

6 comments

  1. KapteinB

    Note that this seems to apply only to comments posted on the Facebook pages run by the media companies. So for example, if I post an article from Sky News Australia on my own Facebook wall, and one of my friends writes a defamatory comment on the article, then I don't believe the subject of the article can sue Sky News. (But maybe they can sue me?)

    Also that "hack" they describe to delay comments sounds pretty awful:

    The judge wrote that each company had the power to effectively delay reader comments on Facebook and monitor if they were defamatory before "releasing" them to the audience.

    This was based on evidence from social media expert Ryan Shelley, who testified that although you can't turn off comments on Facebook posts, you can deploy a "hack" to pre-moderate them.

    Shelley's hack involves putting 100 of the most commonly used words in the English language ("a", "the", etc) on a Facebook filter list, causing any comment containing those words to be automatically hidden from the public.
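    The word-filter trick is simple enough to sketch. A minimal illustration in Python (the word list is abbreviated and the function is hypothetical — this is obviously not Facebook's actual matching logic, just the idea of hiding any comment that contains a common word until a moderator releases it):

    ```python
    # Sketch of the pre-moderation "hack": any comment containing one of the
    # most common English words is auto-hidden, so a moderator can review it
    # before manually "releasing" it to the public.

    COMMON_WORDS = {"a", "the", "and", "to", "of", "in", "is", "it", "you", "that"}

    def is_auto_hidden(comment: str) -> bool:
        """Return True if the comment would be caught by the filter list."""
        words = {w.strip(".,!?\"'").lower() for w in comment.split()}
        return bool(words & COMMON_WORDS)

    print(is_auto_hidden("The article is wrong"))  # caught: contains "the", "is"
    print(is_auto_hidden("Criminal!"))             # slips through: no common word
    ```

    The second example shows the weakness raised at the hearing: a one-word comment with an uncommon word sails straight through, as would a picture comment.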

    The result of this might be that Australian media companies stop posting things to their Facebook pages, and maybe go as far as deactivate their Facebook pages, until Facebook creates better tools for moderating comments (tools that really should have been there from the beginning).

    8 votes
  2. [2]
    MetArtScroll

    As KapteinB wrote in an earlier comment here, the judge's decision was based on the “availability” of the “hack” described, though

    At the hearing, Shelley conceded that a one-word post containing a word that is not among the most common — e.g. "criminal" or "rapist" — would not be blocked, and neither would picture comments.

    I have a question for people who know the Australian legal system: can this nonsensical decision be reversed by a higher court (i.e., is there a higher court)?

    This also demonstrates the importance of CDA 230 for the global Internet (I am neither a citizen nor a resident of the USA). I would totally support the plaintiff had he sued the commenters.

    P.S. Can the moderation tag be added?

    4 votes
    1. Algernon_Asimov

      I have a question for people who know the Australian legal system: can this nonsensical decision be reversed by a higher court (i.e., is there a higher court)?

      It took a bit of detective work to find out which court this case is being heard in. Turns out it's the Supreme Court of New South Wales. That's the highest court in the state of New South Wales.

      The Federal Court of Australia is the general appellate court for the Supreme Courts of the various states. That's usually the highest judicial level. It may be possible to appeal the case to the High Court of Australia, but the High Court is primarily for cases involving constitutional matters, and they have to give special leave before they'll hear other types of cases - and that's usually reserved only for cases of high public importance. This case might not qualify for that special leave. In that case, the Federal Court would be the highest available court of appeal.


      the judge's decision was based on the “availability” of the “hack” described,

      That wasn't the only justification: "But Rothman found each company could reasonably monitor comments if they had sufficient staff to do so." It's not about the hack. It's about the fact that there are methods - whether algorithmic or manual - which would permit the media companies to review all comments posted on their social media pages if they wanted to. But they didn't do that.

      As the plaintiff's case states, "the defendants should have known there was a 'significant risk of defamatory observations' after placing the articles online". Dylan Voller is a relatively high profile person in Australia, for all that he's only a teenager - and a criminal, at that (he was the trigger for a royal commission about the treatment of young people in juvenile detention). He's claiming that the media companies should have known in advance that any story about him would attract more attention than usual, and that they therefore could have - and should have - paid more attention to the comments posted on that story. By choosing not to monitor the story about him, the companies implicitly chose to allow people to write whatever they wanted about Mr Voller, including the potentially defamatory statements he's now suing them about.

      I agree that the commenters should also be part of the case, but those commenters would not have been able to publish their allegedly defamatory statements if the media companies hadn't actively turned a blind eye to their own comments pages.

      4 votes
  3. [3]
    NaraVara

    Hmm. If you’re held liable for it on Facebook it stands to reason you’d be held liable on any social media including Twitter, Tumblr, Tildes, Reddit. . .

    3 votes
    1. [2]
      MetArtScroll

      IIUC,

      • the media companies either had Facebook-powered comment sections on their pages or published the original news on their Facebook accounts
      • they “could” “moderate” using the “hack” described

      For Twitter or Tumblr or any other user-centred platform, the court's argument could be used only if the platform allowed pre-moderation of a similar type.

      For Reddit, that argument could be used only if the subreddit in question were moderated by the company, since the company's moderators could then set up filters or AutoModerator. Otherwise, a non-mod user has no control over what others reply. The same applies to most other topic-centred social media.
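      For what it's worth, Reddit's AutoModerator supports this kind of pre-moderation directly — a rule roughly like the following (field names per AutoModerator's rule syntax; the word list is purely illustrative) would remove matching comments to the mod queue for manual review:

      ```yaml
      # Hypothetical AutoModerator rule mimicking the "common words" hack:
      # comments containing any listed word are filtered to the mod queue.
      type: comment
      body (includes-word): ["a", "the", "and", "of"]
      action: filter
      action_reason: "Held for manual review (pre-moderation)"
      ```

      So a company-run subreddit could pre-moderate without any hack at all — which is exactly why the court's reasoning would bite harder there than on a user-centred platform.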

      In particular, for Tildes as of now, the court's argument would basically not stand.

      However, the court's argument should not stand at all in the first place.

      1 vote
      1. NaraVara

        For Twitter or Tumblr or any other user-centred platform, the court's argument could be used only if the platform allowed pre-moderation of a similar type.

        Ah, I thought it was about having moderation functions at all, rather than "pre-moderating". That seems odd; they're functionally penalizing a platform by holding it accountable for permitting moderation tools - even "unauthorized" mod capabilities that someone has to hack together.

        2 votes