An astronomer from Oxfordshire has been locked out of her Twitter account since August 2022, when she shared a video of a meteor which was flagged by the site's automated moderation tools.
Mary McIntyre was told that her six-second animated clip featured "intimate content", shared without the consent of the participant.
Her only option was to delete the tweet.
However, in doing so she would have had to agree that she had broken the rules.
Her initial 12-hour ban has now gone on for three months - and she has exhausted the online appeals process.
Whenever things like this happen, someone will say that "human moderation at this scale is not possible", and what they mean is "human moderation at this scale is expensive". I can understand why the initial decisions are automatic, but I can't understand why there isn't a final decision process that includes human review.
My guess is that spambots would then be coded to push appeals as far as possible, in the hope that an overworked human moderator would let 1% of their spam through, hopelessly flooding the manual review process - if they don't do that already.
I agree that online platforms are badly underinvested in manual review for moderation, but I think there's a balancing factor - I don't think any of us appreciate the sheer scale of spam on the internet. If every single spambot could push an appeals process to its final stage and get a human review, no number of people could review it all - spam is faster to create than to review, and the volume is always increasing.
Maybe people could pay a deposit for a thorough manual review by a human, refunded if the appeal succeeds. That would deter most spammers.
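To make that concrete, here's a minimal sketch of how a deposit-backed appeal might work. Everything in it is an assumption for illustration - the deposit amount, the charge/refund hooks, and the names are placeholders, not any platform's actual API:

    from dataclasses import dataclass

    DEPOSIT_CENTS = 500  # assumed flat deposit, e.g. $5

    @dataclass
    class Appeal:
        user_id: str
        deposit_cents: int = 0
        resolved: bool = False

    def file_appeal(user_id, charge):
        """Collect the deposit up front, then the case joins the human review queue."""
        charge(user_id, DEPOSIT_CENTS)
        return Appeal(user_id=user_id, deposit_cents=DEPOSIT_CENTS)

    def resolve_appeal(appeal, overturned_by_human, refund):
        """Refund the deposit only if the human reviewer overturns the ban.
        A spammer filing appeals in bulk forfeits a deposit on every rejection."""
        if overturned_by_human:
            refund(appeal.user_id, appeal.deposit_cents)
        appeal.resolved = True

The deterrent is purely economic: a legitimate user gets the money back when the appeal succeeds, while bulk spam appeals lose a deposit every time they fail.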