It’s easy to point fingers at social media moderators for doing a poor job, or to grumble about a decision from a “fact checker”, but things aren’t as simple as they seem, for two big reasons.
It’s a brutal job
First, being a social media moderator is an awful job. While we sometimes see things on social media that we shouldn’t, it’s rare to see a video of a murder or pornography or anything like that on Facebook, yet people try to post that stuff literally all day long.
From an article in The Verge a few years ago about a young woman named Chloe training to be a moderator:
She spent the past three and a half weeks in training, trying to harden herself against the daily onslaught of disturbing posts: the hate speech, the violent attacks, the graphic pornography. In a few more days, she will become a full-time Facebook content moderator, or what the company she works for, a professional services vendor named Cognizant, opaquely calls a “process executive.”
For this portion of her education, Chloe will have to moderate a Facebook post in front of her fellow trainees. When it’s her turn, she walks to the front of the room, where a monitor displays a video that has been posted to the world’s largest social network. None of the trainees have seen it before, Chloe included. She presses play.
The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed.
I occasionally see people post content on social media with the explicitly stated purpose of “making life more difficult for moderators”. That, in itself, is a pretty evil thing to do.
What needs to be moderated?
The second reason is determining what should be allowed and what shouldn’t. Some calls are easy, like the example above, but there’s a whole lot of grey area in the world.
In this great piece from Techdirt, they describe an exercise at a content moderation summit in which a group of attendees worked together to determine the best course of action for various pieces of content that a moderator might encounter. Even in that environment, it was impossible to reach a consensus:
With each question there were four potential actions that the “trust & safety” team could take and on every single example at least one person chose each option.
They then expand on what this means in the real world, outside the confines of that summit:
Now, imagine (1) having to do that at scale, with hundreds, thousands, hundreds of thousands or even millions of pieces of “flagged” content showing up, (2) having to do it when you’re not someone who is so interested in content moderation that you spent an entire day at a content moderation summit, and (3) having to do it quickly where there are trade-offs and consequences to each choice — including possible legal liability — and no matter which option you make, someone (or perhaps lots of someones) are going to get very upset.
This isn’t to take Facebook or Twitter off the hook. Moderation can and should be continually improved. It’s simply important to remember that there are real people behind most of those decisions, faced with a horrible onslaught of things that none of us should ever have to see, making quick decisions as best they can so they can keep their jobs.
They’re imperfect, but we owe them some compassion.