Bookmarked Sacha Baron Cohen’s Keynote Address at ADL’s 2019 Never Is Now Summit on Anti-Semitism and Hate (Anti-Defamation League)

It’s time to finally call these companies what they really are—the largest publishers in history. And here’s an idea for them: abide by basic standards and practices just like newspapers, magazines and TV news do every day. We have standards and practices in television and the movies; there are certain things we cannot say or do. In England, I was told that Ali G could not curse when he appeared before 9pm. Here in the U.S., the Motion Picture Association of America regulates and rates what we see. I’ve had scenes in my movies cut or reduced to abide by those standards. If there are standards and practices for what cinemas and television channels can show, then surely companies that publish material to billions of people should have to abide by basic standards and practices too.

Sacha Baron Cohen delivered the keynote address at the Anti-Defamation League’s 2019 Never Is Now Summit on Anti-Semitism and Hate. Stepping away from his many guises, Baron Cohen discusses the threat to democracy posed by the ‘Silicon Six’. He argues that although these companies often invoke ‘freedom of speech’ as a defence, what they really provide is freedom of reach for those wishing to manipulate the structure of society.

This reminds me of danah boyd’s discussion of cognitive strengthening, filling the gaps and the challenges of the fourth estate. Ben Thompson also provides a useful discussion of the challenges associated with moderation, one being the human side of the process, while Tarleton Gillespie suggests that moderation is not a panacea.

Doug Belshaw provides his own response to Baron Cohen’s speech, suggesting that the issues stem from the financial roots of platform capitalism, and pointing to the need for more local moderation and the problem of vendor lock-in.

Mike Masnick pushes back on Baron Cohen’s argument that social media is to blame for fake news, arguing instead that such stories did not take off until Fox News validated them. Masnick also questions whether there really is a solution to the problem of moderation and communication.

Marginalia

Democracy, which depends on shared truths, is in retreat, and autocracy, which depends on shared lies, is on the march. Hate crimes are surging, as are murderous attacks on religious and ethnic minorities.

Voltaire was right, “those who can make you believe absurdities, can make you commit atrocities.” And social media lets authoritarians push absurdities to billions of people.

Freedom of speech is not freedom of reach.

The Silicon Six: Zuckerberg at Facebook; Sundar Pichai at Google; at its parent company Alphabet, Larry Page and Sergey Brin; Brin’s ex-sister-in-law, Susan Wojcicki at YouTube; and Jack Dorsey at Twitter.

Those who deny the Holocaust aim to encourage another one.

Bookmarked Revealed: catastrophic effects of working as a Facebook moderator (the Guardian)

Some of the moderators’ stories were similar to the problems experienced in other countries. Daniel said: “Once, I found a colleague of ours checking online, looking to purchase a Taser, because he started to feel scared about others. He confessed he was really concerned about walking through the streets at night, for example, or being surrounded by foreign people.”

Alex Hern’s discussion of Facebook moderators in Berlin provides a different perspective on the world of moderation. When you hear the ridiculous number of users that platforms like Facebook have, I shudder to think of the volume of content that needs to be processed.

Bookmarked A Framework for Moderation (Stratechery by Ben Thompson)

The question of what should be moderated, and when, is an increasingly frequent one in tech. There is no bright line, but there are ways to get closer to an answer.

Ben Thompson responds to Cloudflare’s decision to terminate service for 8chan with a look into the world of moderation. To start with, Thompson examines Section 230 of the Communications Decency Act and the responsibility platforms have for content:

Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.

He explains that the first responsibility lies with the content provider; however, this then flows down the line to the ISP as a backstop.

Bookmarked Here’s How Facebook Is Trying to Moderate Its Two Billion Users (Motherboard)

Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.

Jason Koebler and Joseph Cox take a deep dive into the difficulties of moderation on a platform with two billion users. They discuss Facebook’s attempts to manage everything with policy. This often creates points of confusion, but is seen as necessary if Facebook is to follow through on its goal of connecting the world. What is often overlooked in all of this is the toll on the human moderators, especially with the addition of video.

Marginalia

Facebook has a “policy team” made up of lawyers, public relations professionals, ex-public policy wonks, and crisis management experts that makes the rules. They are enforced by roughly 7,500 human moderators, according to the company. In Facebook’s case, moderators act (or decide not to act) on content that is surfaced by artificial intelligence or by users who report posts that they believe violate the rules. Artificial intelligence is very good at identifying porn, spam, and fake accounts, but it’s still not great at identifying hate speech.

How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours.

The hardest and most time-sensitive types of content—hate speech that falls in the grey areas of Facebook’s established policies, opportunists who pop up in the wake of mass shootings, or content the media is asking about—are “escalated” to a team called Risk and Response, which works with the policy and communications teams to make tough calls.

Facebook says its AI tools—many of which are trained with data from its human moderation team—detect nearly 100 percent of spam, and that 99.5 percent of terrorist-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual activity, and 86 percent of graphic violence-related removals are detected by AI, not users.

Size is the one thing Facebook isn’t willing to give up. And so Facebook’s content moderation team has been given a Sisyphean task: Fix the mess Facebook’s worldview and business model has created, without changing the worldview or business model itself.

The process of refining policies to reflect humans organically developing memes or slurs may never end. Facebook is constantly updating its internal moderation guidelines, and has pushed some—but not all—of those changes to its public rules. Whenever Facebook identifies one edge case and adds extra caveats to its internal moderation guidelines, another new one appears and slips through the net.

Facebook would not share data about moderator retention, but said it acknowledges the job is difficult and that it offers ongoing training, coaching, and resiliency and counseling resources to moderators. It says that internal surveys show that pay, offering a sense of purpose and career growth opportunities, and offering schedule flexibility are most important for moderator retention.

Everyone Motherboard spoke to at Facebook has internalized the fact that perfection is impossible, and that the job can often be heartbreaking.

In 2009, for example, MySpace banned content that denied the Holocaust and gave its moderators wide latitude to delete it, noting that it was an “easy” call under its hate speech policies, which prohibited content that targeted a group of people with the intention of making them “feel bad.” In contrast, Facebook’s mission has led it down the difficult road of trying to connect the entire world, which it believes necessitates allowing as much speech as possible in hopes of fostering global conversation and cooperation.
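
Reading these excerpts together, the workflow Koebler and Cox describe is essentially a triage pipeline: AI auto-actions the clear-cut categories, user reports and mid-confidence flags feed a human review queue with a 24-hour target, and grey-area or media-sensitive content is escalated to the Risk and Response team. The following is only a minimal sketch of that routing logic; every name, category and threshold in it is invented for illustration and says nothing about how Facebook’s systems are actually built.

```python
# A hypothetical sketch of the triage flow described in the excerpts above.
# None of the names, categories or thresholds come from Facebook; they are
# invented purely to show the shape of the process.
from dataclasses import dataclass, field
from enum import Enum, auto


class Decision(Enum):
    AUTO_REMOVE = auto()   # high-confidence AI detections (spam, fake accounts, terrorism)
    ESCALATE = auto()      # grey-area or media-sensitive content for a specialist team
    HUMAN_REVIEW = auto()  # queued for a moderator, with a 24-hour review target
    LEAVE_UP = auto()


@dataclass
class Post:
    text: str
    ai_scores: dict = field(default_factory=dict)  # e.g. {"spam": 0.99, "hate_speech": 0.6}
    user_reported: bool = False
    media_inquiry: bool = False  # content journalists are already asking about


AUTO_THRESHOLD = 0.95    # invented: AI confident enough to act without a human
REVIEW_THRESHOLD = 0.5   # invented: flagged strongly enough to be worth a human look
AUTO_CATEGORIES = ("spam", "fake_account", "terrorism")  # categories AI reportedly handles well


def triage(post: Post) -> Decision:
    """Toy routing logic; real systems are far messier than this."""
    # 1. Clear-cut categories detected with high confidence are removed automatically.
    if any(post.ai_scores.get(c, 0.0) >= AUTO_THRESHOLD for c in AUTO_CATEGORIES):
        return Decision.AUTO_REMOVE

    # 2. Time-sensitive or grey-area content skips the ordinary queue and is escalated.
    if post.media_inquiry or REVIEW_THRESHOLD <= post.ai_scores.get("hate_speech", 0.0) < AUTO_THRESHOLD:
        return Decision.ESCALATE

    # 3. User reports and mid-confidence AI flags go to the human moderation queue.
    if post.user_reported or any(s >= REVIEW_THRESHOLD for s in post.ai_scores.values()):
        return Decision.HUMAN_REVIEW

    return Decision.LEAVE_UP


if __name__ == "__main__":
    print(triage(Post("buy followers now!!!", ai_scores={"spam": 0.99})))    # AUTO_REMOVE
    print(triage(Post("borderline slur", ai_scores={"hate_speech": 0.6})))   # ESCALATE
    print(triage(Post("reported argument", user_reported=True)))             # HUMAN_REVIEW
```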

Bookmarked Content moderation is not a panacea: Logan Paul, YouTube, and what we should expect from platforms (Social Media Collective)

Content moderation should be more transparent, and platforms should be more accountable, not only for what traverses their system, but the ways in which they are complicit in its production, circulation, and impact. But it also seems we are too eager to blame all things on content moderation, and to expect platforms to maintain a perfectly honed moral outlook every time we are troubled by something we find there. Acknowledging that YouTube is not a mere conduit does not imply that it is exclusively responsible for everything available there.

Tarleton Gillespie unpacks the recent calls for more moderation on YouTube. One problem that he highlights is that the intent behind the content being created is not consistent:

Incidents like the exploitative videos of children, or the misleading amateur cartoons, take advantage of this system. They live amidst this enormous range of videos, some subset of which YouTube must remove. Some come from users who don’t know or care about the rules, or find what they’re making perfectly acceptable. Others are deliberately designed to slip past moderators, either by going unnoticed or by walking right up to but not across the community guidelines. They sometimes require hard decisions about speech, community, norms, and the right to intervene.

He also discusses the difference between television and YouTube, questioning what it might mean to hold the platform to similar expectations:

MTV was in a structurally different position than YouTube. We expect MTV to be accountable for a number of reasons: they had the opportunity to review the episode before broadcasting it; they employed Kutcher and his team, affording them specific power to impose standards; and they chose to hand him the megaphone in the first place. While YouTube also affords Logan Paul a way to reach millions, and he and YouTube share advertising revenue from popular videos, these offers are in principle made to all YouTube users. YouTube is a distribution platform, not a distribution bottleneck — or it is a bottleneck of a very different shape. This does not mean we cannot or should not hold YouTube accountable. We could decide as a society that we want YouTube to meet exactly the same responsibilities as MTV, or more. But we must take into account that these structural differences change not only what YouTube can do, but how and why we can expect it of them.

So what we critics may be implying is that YouTube should be responsible to distinguish the insensitive versions from the sensitive ones. Again, this sounds more like the kinds of expectations we had for television networks — which is fine if that’s what we want, but we should admit that this would be asking much more from YouTube than we might think.

One of the problems associated with moderation is the reward structure that encourages such content:

If video makers are rewarded based on the number of views, whether that reward is financial or just reputational, it stands to reason that some videomakers will look for ways to increase those numbers, including going bigger. But it is not clear that metrics of popularity necessarily or only lead to being ever more outrageous, and there’s nothing about this tactic that is unique to social media. Media scholars have long noted that being outrageous is one tactic producers use to cut through the clutter and grab viewers, whether it’s blaring newspaper headlines, trashy daytime talk shows, or sexualized pop star performances. That is hardly unique to YouTube. And YouTube videomakers are pursuing a number of strategies to seek popularity and the rewards therein, outrageousness being just one. Many more seem to depend on repetition, building a sense of community or following, interacting with individual subscribers, and the attempt to be first.