Bookmarked ‘Predators can roam’: How Tinder is turning a blind eye to sexual assault (ABC News)

In an investigation, the teams at 4Corners and Hack unpack the way Tinder responds to claims of assault. Even with billions in revenue, Match Group has done little to adequately support users. Worse, the platform seems to protect perpetrators more than victims through its ‘unmatch’ function.

Tinder allows offenders to use the ‘unmatch’ function to block their victims after a rape, deleting any trace of their prior communication.

The problem with banning and blocking is that the ease of use that makes the platform so enticing also makes it a challenge to manage.

They said they couldn’t completely ban someone from a dating platform because it involved banning a user’s IP address, a number assigned to each device or network, and the number would only last for up to 90 days before changing.

Avani Dias, Ange McCormack and Ali Russell report that Match Group has since responded and is making improvements to the way it supports victims.

Match Group said it has updated the reporting functions within its apps so that users receive a response and are directed to support services.

However, as with Facebook’s backflip on Holocaust denial content, these responses are too often reactive and seem to be about public relations more than anything else.

Bookmarked YouTube’s Plot to Silence Conspiracy Theories (WIRED)

From flat-earthers to QAnon to Covid quackery, the video giant is awash in misinformation. Can AI keep the lunatic fringe from going viral?

Clive Thompson takes a deep dive into the world of conspiracy theories and how YouTube’s desire for growth helped spread them. Thompson provides a behind-the-scenes perspective on how the algorithm team is trying to address the problem while still increasing growth and connecting people with shared interests.

They developed a set of about three dozen questions designed to help a human decide whether content moved significantly in the direction of those banned areas, but didn’t quite get there.

These questions were, in essence, the wireframe of the human judgment that would become the AI’s smarts. These hidden inner workings were listed on Rohe’s screen. They allowed me to take notes but wouldn’t give me a copy to take away.

One question asks whether a video appears to “encourage harmful or risky behavior to others” or to viewers themselves. To help narrow down what type of content constitutes “harmful or risky behavior,” there is a set of check boxes pointing out various well-known self-harms YouTube has grappled with—like “pro ana” videos that encourage anorexic behaviors, or graphic images of self-harm.

“If you start by just asking, ‘Is this harmful misinformation?’ then everybody has a different definition of what’s harmful,” Goodrow said. “But then you say, ‘OK, let’s try to move it more into the concrete, specific realm by saying, is it about self-harm? What kinds of harm is it?’ Then you tend to get higher agreement and better results.” There’s also an open-ended box that an evaluator can write in to explain their thinking.

Another question asks the evaluators to determine whether a video is “intolerant of a group” based on race, religion, sexual orientation, gender, national origin, or veteran status. But there’s a supplementary question: “Is the video satire?” YouTube’s policies prohibit hate speech and spreading lies about ethnic groups, for example, but they can permit content that mocks that behavior by mimicking it.
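The questionnaire Thompson describes — vague questions sharpened into concrete check boxes, supplementary questions like the satire check, and an open-ended box for reasoning — can be imagined as a structured rubric feeding training labels to a classifier. Here is a minimal sketch of that idea; every field name, question wording beyond the quotes above, and the averaging scheme are my own illustrative assumptions, not YouTube’s actual system:

```python
from dataclasses import dataclass, field

@dataclass
class RubricQuestion:
    """One item from a (hypothetical) evaluator questionnaire."""
    prompt: str
    checkboxes: list                              # concrete sub-categories that sharpen a vague question
    followups: list = field(default_factory=list) # e.g. "Is the video satire?"

@dataclass
class Evaluation:
    """One human evaluator's answers for a single video."""
    answers: dict   # question prompt -> bool
    checked: dict   # question prompt -> list of ticked boxes
    notes: str = "" # the open-ended box explaining the evaluator's thinking

def borderline_score(evaluations):
    """Aggregate several human evaluations into one training label.

    Averaging across raters reflects the article's point that concrete
    sub-questions yield "higher agreement and better results" than a
    single vague question would.
    """
    flagged = [any(e.answers.values()) for e in evaluations]
    return sum(flagged) / len(flagged)

harm = RubricQuestion(
    prompt="Does the video encourage harmful or risky behavior?",
    checkboxes=["pro-ana content", "graphic self-harm imagery"],
)
intolerance = RubricQuestion(
    prompt="Is the video intolerant of a group?",
    checkboxes=["race", "religion", "sexual orientation"],
    followups=["Is the video satire?"],
)

e1 = Evaluation(answers={harm.prompt: True},
                checked={harm.prompt: ["pro-ana content"]})
e2 = Evaluation(answers={harm.prompt: False}, checked={})
print(borderline_score([e1, e2]))  # 0.5
```

The point of the sketch is the shape of the data, not the maths: disagreement between raters on a borderline video becomes a fractional label rather than a hard yes/no.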

Research suggests that these changes have had some success; however, the COVID conspiracies are pushing back. One of the challenges is that it is not just YouTube’s recommendations causing the spread of borderline content; networking and shout-outs play a part too.

This old-fashioned spread—a mix of organic link-sharing and astroturfed, bot-propelled promotion—is powerful and, say observers, may sideline any changes to YouTube’s recommendation system. It also suggests that users are adapting and that the recommendation system may be less important, for good and ill, to the spread of misinformation today. In a study for the think tank Data & Society, the researcher Becca Lewis mapped out the galaxy of right-wing commentators on YouTube who routinely spread borderline material. Many of those creators, she says, have built their often massive audiences not only through YouTube recommendations but also via networking. In their videos they’ll give shout-outs to one another and hype each other’s work, much as YouTubers all enthusiastically promoted Millie Weaver’s fabricated musings.

I like how Cory Doctorow captures this problem:

I am increasingly convinced that the problem isn’t that Youtube is unsuited to moderating the video choices of a billion users – it’s that no one is suited to this challenge.

It would seem that successful networks nurture bad ideas as well as good ones.

Liked Why QAnon Left Reddit (The Atlantic)

The tale of how Reddit squashed QAnon seems like it must hold a tangible lesson for the rest of the social web, but the internet is messier than that. The particularities of Reddit, its culture, and the timing of its QAnon purge cannot be replicated by other companies. QAnon has found fertile ground on even more mainstream sites than Reddit. It simply doesn’t need the platform anymore.

Bookmarked Sacha Baron Cohen’s Keynote Address at ADL’s 2019 Never Is Now Summit on Anti-Semitism and Hate (Anti-Defamation League)

It’s time to finally call these companies what they really are—the largest publishers in history. And here’s an idea for them: abide by basic standards and practices just like newspapers, magazines and TV news do every day. We have standards and practices in television and the movies; there are certain things we cannot say or do. In England, I was told that Ali G could not curse when he appeared before 9pm. Here in the U.S., the Motion Picture Association of America regulates and rates what we see. I’ve had scenes in my movies cut or reduced to abide by those standards. If there are standards and practices for what cinemas and television channels can show, then surely companies that publish material to billions of people should have to abide by basic standards and practices too.

Sacha Baron Cohen provided the keynote address for the Anti-Defamation League’s 2019 Never Is Now Summit on Anti-Semitism and Hate. Stepping away from his many guises, Baron Cohen discusses the current threat to democracy posed by the ‘Silicon Six’. He argues that although they often invoke ‘freedom of speech’ as an excuse, this in practice grants freedom of reach to those wishing to manipulate the structure of society.

This reminds me of danah boyd’s discussion of cognitive strengthening, filling the gaps and the challenges of the fourth estate. Also, Ben Thompson provides a useful discussion of the challenges associated with moderation, one being the human side of the process, while Tarleton Gillespie suggests that moderation is not a panacea.

Doug Belshaw provides his own response to Baron Cohen’s speech, suggesting that the issues are associated with the financial roots of platform capitalism, the need for more local moderation and the problem of vendor lock-in.

Mike Masnick pushes back on Baron Cohen’s argument that social media is to blame for fake news, arguing instead that it did not take off until Fox News validated it. In addition, Masnick questions whether there really is a solution to the problem of moderation and communication.


Democracy, which depends on shared truths, is in retreat, and autocracy, which depends on shared lies, is on the march. Hate crimes are surging, as are murderous attacks on religious and ethnic minorities.

Voltaire was right, “those who can make you believe absurdities, can make you commit atrocities.” And social media lets authoritarians push absurdities to billions of people.

Freedom of speech is not freedom of reach.

The Silicon Six: Zuckerberg at Facebook; Sundar Pichai at Google; Larry Page and Sergey Brin at its parent company, Alphabet; Brin’s ex-sister-in-law, Susan Wojcicki, at YouTube; and Jack Dorsey at Twitter.

Those who deny the Holocaust aim to encourage another one.

Bookmarked Revealed: catastrophic effects of working as a Facebook moderator (the Guardian)

Some of the moderators’ stories were similar to the problems experienced in other countries. Daniel said: “Once, I found a colleague of ours checking online, looking to purchase a Taser, because he started to feel scared about others. He confessed he was really concerned about walking through the streets at night, for example, or being surrounded by foreign people.”

Alex Hern’s discussion of Facebook moderators in Berlin provides a different perspective on the world of moderation. When you hear the ridiculous number of users that platforms like Facebook have, I shudder to think of the content that needs to be processed.

Bookmarked A Framework for Moderation (Stratechery by Ben Thompson)

The question of what should be moderated, and when, is an increasingly frequent one in tech. There is no bright line, but there are ways to get closer to an answer.

Ben Thompson responds to CloudFlare’s decision to terminate service for 8chan with a look into the world of moderation. To start with, Thompson looks at Section 230 of the Communications Decency Act and the responsibility platforms have for content:

Section 230 doesn’t shield platforms from the responsibility to moderate; it in fact makes moderation possible in the first place. Nor does Section 230 require neutrality: the entire reason it exists was because true neutrality — that is, zero moderation beyond what is illegal — was undesirable to Congress.

He explains that the first responsibility lies with the content provider; however, this then flows down the line to the ISP as a backstop.

Bookmarked Here’s How Facebook Is Trying to Moderate Its Two Billion Users by Jason Koebler, Joseph Cox (Motherboard)

Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.

Jason Koebler and Joseph Cox take a deep dive into the difficulties of moderation on a platform with two billion users. They discuss Facebook’s attempts to manage everything with policy. This often creates points of confusion, but is required if it is to follow through with the goal of connecting the world. What is often overlooked in all of this is the human impact on moderators, especially with the addition of video.


Facebook has a “policy team” made up of lawyers, public relations professionals, ex-public policy wonks, and crisis management experts that makes the rules. They are enforced by roughly 7,500 human moderators, according to the company. In Facebook’s case, moderators act (or decide not to act) on content that is surfaced by artificial intelligence or by users who report posts that they believe violate the rules. Artificial intelligence is very good at identifying porn, spam, and fake accounts, but it’s still not great at identifying hate speech.

How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours.

The hardest and most time-sensitive types of content—hate speech that falls in the grey areas of Facebook’s established policies, opportunists who pop up in the wake of mass shootings, or content the media is asking about—are “escalated” to a team called Risk and Response, which works with the policy and communications teams to make tough calls
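The workflow these excerpts describe — AI and user reports feeding a human review queue, with confident AI calls in the “easy” categories handled automatically and grey-area cases escalated to Risk and Response — might be sketched like this. The queue names, categories, and confidence threshold are all illustrative assumptions on my part, not Facebook’s actual implementation:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Report:
    post_id: int
    source: str           # "ai" or "user"
    category: str         # e.g. "spam", "hate_speech"
    ai_confidence: float = 0.0

# The article notes AI is very good at spam, porn and fake accounts
# but weak on hate speech: confident AI calls in "easy" categories
# are auto-actioned, while everything else goes to a human.
AUTO_ACTION = {"spam", "nudity", "fake_account"}
GREY_AREAS = {"hate_speech"}          # tough, time-sensitive calls

moderator_queue = deque()             # ordinary 24-hour review queue
risk_and_response = deque()           # escalation team's queue

def route(report: Report) -> str:
    if (report.source == "ai"
            and report.category in AUTO_ACTION
            and report.ai_confidence > 0.99):
        return "auto_removed"
    if report.category in GREY_AREAS:
        risk_and_response.append(report)   # escalated for a tough call
        return "escalated"
    moderator_queue.append(report)
    return "queued"

print(route(Report(1, "ai", "spam", ai_confidence=0.999)))  # auto_removed
print(route(Report(2, "user", "hate_speech")))              # escalated
print(route(Report(3, "user", "nudity")))                   # queued
```

Even this toy version makes the scale problem visible: with 10 million reported posts a week, every routing rule that sends content to a human instead of the auto-action path adds directly to the moderators’ load.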

Facebook says its AI tools—many of which are trained with data from its human moderation team—detect nearly 100 percent of spam, and that 99.5 percent of terrorist-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual activity, and 86 percent of graphic violence-related removals are detected by AI, not users.

Size is the one thing Facebook isn’t willing to give up. And so Facebook’s content moderation team has been given a Sisyphean task: Fix the mess Facebook’s worldview and business model has created, without changing the worldview or business model itself.

The process of refining policies to reflect humans organically developing memes or slurs may never end. Facebook is constantly updating its internal moderation guidelines, and has pushed some—but not all—of those changes to its public rules. Whenever Facebook identifies one edge case and adds extra caveats to its internal moderation guidelines, another new one appears and slips through the net.

Facebook would not share data about moderator retention, but said it acknowledges the job is difficult and that it offers ongoing training, coaching, and resiliency and counseling resources to moderators. It says that internal surveys show that pay, offering a sense of purpose and career growth opportunities, and offering schedule flexibility are most important for moderator retention

Everyone Motherboard spoke to at Facebook has internalized the fact that perfection is impossible, and that the job can often be heartbreaking

In 2009, for example, MySpace banned content that denied the Holocaust and gave its moderators wide latitude to delete it, noting that it was an “easy” call under its hate speech policies, which prohibited content that targeted a group of people with the intention of making them “feel bad.” In contrast, Facebook’s mission has led it down the difficult road of trying to connect the entire world, which it believes necessitates allowing as much speech as possible in hopes of fostering global conversation and cooperation.

Bookmarked Content moderation is not a panacea: Logan Paul, YouTube, and what we should expect from platforms by Tarleton Gillespie (Social Media Collective)

Content moderation should be more transparent, and platforms should be more accountable, not only for what traverses their system, but the ways in which they are complicit in its production, circulation, and impact. But it also seems we are too eager to blame all things on content moderation, and to expect platforms to maintain a perfectly honed moral outlook every time we are troubled by something we find there. Acknowledging that YouTube is not a mere conduit does not imply that it is exclusively responsible for everything available there.

Tarleton Gillespie unpacks the recent calls for more moderation on YouTube. One problem that he highlights is that the intent behind the content being created is not consistent:

Incidents like the exploitative videos of children, or the misleading amateur cartoons, take advantage of this system. They live amidst this enormous range of videos, some subset of which YouTube must remove. Some come from users who don’t know or care about the rules, or find what they’re making perfectly acceptable. Others are deliberately designed to slip past moderators, either by going unnoticed or by walking right up to but not across the community guidelines. They sometimes require hard decisions about speech, community, norms, and the right to intervene.

He also discusses the difference between television and YouTube, questioning what it might mean to have such expectations:

MTV was in a structurally different position than YouTube. We expect MTV to be accountable for a number of reasons: they had the opportunity to review the episode before broadcasting it; they employed Kutcher and his team, affording them specific power to impose standards; and they chose to hand him the megaphone in the first place. While YouTube also affords Logan Paul a way to reach millions, and he and YouTube share advertising revenue from popular videos, these offers are in principle made to all YouTube users. YouTube is a distribution platform, not a distribution bottleneck — or it is a bottleneck of a very different shape. This does not mean we cannot or should not hold YouTube accountable. We could decide as a society that we want YouTube to meet exactly the same responsibilities as MTV, or more. But we must take into account that these structural differences change not only what YouTube can do, but how and why we can expect it of them.

So what we critics may be implying is that YouTube should be responsible to distinguish the insensitive versions from the sensitive ones. Again, this sounds more like the kinds of expectations we had for television networks — which is fine if that’s what we want, but we should admit that this would be asking much more from YouTube than we might think.

One of the problems associated with moderation is the rewards behind such content:

If video makers are rewarded based on the number of views, whether that reward is financial or just reputational, it stands to reason that some videomakers will look for ways to increase those numbers, including going bigger. But it is not clear that metrics of popularity necessarily or only lead to being ever more outrageous, and there’s nothing about this tactic that is unique to social media. Media scholars have long noted that being outrageous is one tactic producers use to cut through the clutter and grab viewers, whether it’s blaring newspaper headlines, trashy daytime talk shows, or sexualized pop star performances. That is hardly unique to YouTube. And YouTube videomakers are pursuing a number of strategies to seek popularity and the rewards therein, outrageousness being just one. Many more seem to depend on repetition, building a sense of community or following, interacting with individual subscribers, and the attempt to be first.