Marginalia
Facebook has a “policy team” made up of lawyers, public relations professionals, ex-public policy wonks, and crisis management experts that makes the rules. Those rules are enforced by roughly 7,500 human moderators, according to the company. In Facebook’s case, moderators act (or decide not to act) on content that is surfaced by artificial intelligence or by users who report posts they believe violate the rules. Artificial intelligence is very good at identifying porn, spam, and fake accounts, but it’s still not great at identifying hate speech.
How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours.
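To put those targets in perspective, here is a back-of-envelope calculation using only the figures quoted above (10 million reviewed posts per week, an error-rate target under one percent). The exact review volume and how the error rate is defined are not specified in the article, so treat these numbers as rough scale rather than reported results.

```python
# Back-of-envelope arithmetic based on the figures quoted above.
# Assumptions (not from the article): review volume is a flat 10 million
# posts per week and the 1 percent target applies uniformly to all of them.

posts_reviewed_per_week = 10_000_000   # "more than 10 million ... per week"
target_error_rate = 0.01               # "error rate of less than one percent"

errors_per_week = posts_reviewed_per_week * target_error_rate
errors_per_year = errors_per_week * 52

print(f"Implied mistaken decisions per week: {errors_per_week:,.0f}")  # 100,000
print(f"Implied mistaken decisions per year: {errors_per_year:,.0f}")  # 5,200,000
```

Even if Facebook hits its own target, the absolute number of wrong calls stays enormous, which is part of why the problem is described as Sisyphean later in the piece.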
The hardest and most time-sensitive types of content—hate speech that falls in the grey areas of Facebook’s established policies, opportunists who pop up in the wake of mass shootings, or content the media is asking about—are “escalated” to a team called Risk and Response, which works with the policy and communications teams to make tough calls.
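Read as a system, the flow described in these two paragraphs (automated flags and user reports feeding a human review queue, with grey-area or press-sensitive cases escalated to a specialist team) can be sketched roughly as follows. This is a hypothetical illustration only: the class names, fields, and routing rules are assumptions, not Facebook’s actual tooling.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical sketch of the triage flow described in the article.
# Every name, field, and rule below is an assumption for illustration;
# none of it reflects Facebook's actual internal systems.

class Source(Enum):
    AI_FLAG = auto()      # surfaced by automated classifiers
    USER_REPORT = auto()  # reported by another user

class Decision(Enum):
    REMOVE = auto()
    LEAVE_UP = auto()
    ESCALATE = auto()     # routed to a "Risk and Response"-style team

@dataclass
class FlaggedPost:
    post_id: str
    source: Source
    violates_policy: Optional[bool]  # None when the call is a grey area
    media_inquiry: bool              # is the press asking about this content?

def triage(post: FlaggedPost) -> Decision:
    """Route a flagged post: clear-cut calls are handled by moderators,
    while grey areas and press-sensitive content are escalated."""
    if post.media_inquiry or post.violates_policy is None:
        return Decision.ESCALATE
    return Decision.REMOVE if post.violates_policy else Decision.LEAVE_UP

# Example: a user-reported post that sits in a policy grey area.
borderline = FlaggedPost("p-123", Source.USER_REPORT,
                         violates_policy=None, media_inquiry=False)
print(triage(borderline))  # Decision.ESCALATE
```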
Facebook says its AI tools—many of which are trained with data from its human moderation team—detect nearly 100 percent of spam, and that 99.5 percent of terrorist-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual activity, and 86 percent of graphic violence-related removals are detected by AI, not users.
Size is the one thing Facebook isn’t willing to give up. And so Facebook’s content moderation team has been given a Sisyphean task: Fix the mess Facebook’s worldview and business model have created, without changing the worldview or business model itself.
The process of refining policies to keep pace with the memes and slurs that humans organically develop may never end. Facebook is constantly updating its internal moderation guidelines, and has pushed some—but not all—of those changes to its public rules. Whenever Facebook identifies one edge case and adds extra caveats to its internal moderation guidelines, another appears and slips through the net.
Facebook would not share data about moderator retention, but said it acknowledges the job is difficult and that it offers moderators ongoing training, coaching, and resiliency and counseling resources. It says internal surveys show that pay, a sense of purpose, career growth opportunities, and schedule flexibility matter most for moderator retention.
Everyone Motherboard spoke to at Facebook has internalized the fact that perfection is impossible, and that the job can often be heartbreaking.
In 2009, for example, MySpace banned content that denied the Holocaust and gave its moderators wide latitude to delete it, noting that it was an “easy” call under its hate speech policies, which prohibited content that targeted a group of people with the intention of making them “feel bad.” In contrast, Facebook’s mission has led it down the difficult road of trying to connect the entire world, which it believes necessitates allowing as much speech as possible in hopes of fostering global conversation and cooperation.