📑 YouTube’s Plot to Silence Conspiracy Theories

Bookmarked YouTube’s Plot to Silence Conspiracy Theories (WIRED)

From flat-earthers to QAnon to Covid quackery, the video giant is awash in misinformation. Can AI keep the lunatic fringe from going viral?

Clive Thompson takes a dive into the world of conspiracy theories and how YouTube’s desire for growth helped spread them. Thompson provides a behind-the-scenes perspective on how the algorithm team is trying to address the problem while still increasing growth and connecting people with shared interests.

They developed a set of about three dozen questions designed to help a human decide whether content moved significantly in the direction of those banned areas, but didn’t quite get there.

These questions were, in essence, the wireframe of the human judgment that would become the AI’s smarts. These hidden inner workings were listed on Rohe’s screen. They allowed me to take notes but wouldn’t give me a copy to take away.

One question asks whether a video appears to “encourage harmful or risky behavior to others” or to viewers themselves. To help narrow down what type of content constitutes “harmful or risky behavior,” there is a set of check boxes pointing out various well-known self-harms YouTube has grappled with—like “pro ana” videos that encourage anorexic behaviors, or graphic images of self-harm.

“If you start by just asking, ‘Is this harmful misinformation?’ then everybody has a different definition of what’s harmful,” Goodrow said. “But then you say, ‘OK, let’s try to move it more into the concrete, specific realm by saying, is it about self-harm? What kinds of harm is it?’ Then you tend to get higher agreement and better results.” There’s also an open-ended box that an evaluator can write in to explain their thinking.

Another question asks the evaluators to determine whether a video is “intolerant of a group” based on race, religion, sexual orientation, gender, national origin, or veteran status. But there’s a supplementary question: “Is the video satire?” YouTube’s policies prohibit hate speech and spreading lies about ethnic groups, for example, but they can permit content that mocks that behavior by mimicking it.
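The article does not spell out how these evaluator answers feed the model, but as a rough sketch of the general idea, answers to concrete questions can be aggregated into training labels for a “borderline content” classifier. The field names, the satire exemption and the averaging below are my own assumptions for illustration, not YouTube’s actual schema.

```python
# Hypothetical sketch of turning a structured evaluation rubric into a
# training label. Field names and scoring are assumptions for illustration,
# not YouTube's actual schema.
from dataclasses import dataclass


@dataclass
class Evaluation:
    video_id: str
    encourages_harmful_behavior: bool  # e.g. "pro ana" content, graphic self-harm
    intolerant_of_group: bool          # race, religion, sexual orientation, etc.
    is_satire: bool                    # supplementary question; satire can exempt mimicry
    notes: str = ""                    # open-ended box for the evaluator's reasoning


def borderline_score(evals: list[Evaluation]) -> float:
    """Aggregate several evaluators' answers into a single 0-1 label.

    Averaging across raters smooths out individual disagreement; the point of
    the concrete sub-questions is to push that agreement higher in the first place.
    """
    if not evals:
        return 0.0
    flags = []
    for e in evals:
        intolerant = e.intolerant_of_group and not e.is_satire
        flags.append(1.0 if (e.encourages_harmful_behavior or intolerant) else 0.0)
    return sum(flags) / len(flags)


# A video flagged by two of three evaluators gets a label of ~0.67, which
# could then feed a supervised classifier for borderline content.
ratings = [
    Evaluation("abc123", True, False, False),
    Evaluation("abc123", False, True, False, notes="mocks a group, not satire"),
    Evaluation("abc123", False, False, False),
]
print(borderline_score(ratings))  # 0.666...
```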

Although research suggests these changes have had some success, COVID conspiracies are pushing back. One of the challenges is that it is not just YouTube’s recommendations that spread borderline content; networking and shout-outs between creators do too.

This old-fashioned spread—a mix of organic link-sharing and astroturfed, bot-propelled promotion—is powerful and, say observers, may sideline any changes to YouTube’s recommendation system. It also suggests that users are adapting and that the recommendation system may be less important, for good and ill, to the spread of misinformation today. In a study for the think tank Data & Society, the researcher Becca Lewis mapped out the galaxy of right-wing commentators on YouTube who routinely spread borderline material. Many of those creators, she says, have built their often massive audiences not only through YouTube recommendations but also via networking. In their videos they’ll give shout-outs to one another and hype each other’s work, much as YouTubers all enthusiastically promoted Millie Weaver’s fabricated musings.
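Lewis’s mapping is essentially network analysis. As a loose illustration only (my own made-up channels and edges, not her data or methodology), cross-promotion can be modelled as a directed graph, where a channel’s in-links approximate how much promotion it receives from peers, independent of anything the recommendation algorithm does.

```python
# Loose illustration of mapping shout-outs as a directed graph. Channel
# names and edges are invented; this is not the Data & Society dataset.
import networkx as nx

# Each edge means "channel A gave a shout-out to channel B" in a video.
shout_outs = [
    ("channel_a", "channel_b"),
    ("channel_a", "channel_c"),
    ("channel_b", "channel_c"),
    ("channel_d", "channel_c"),
]

g = nx.DiGraph(shout_outs)

# In-degree approximates how much promotion a channel receives from peers.
for channel, in_links in sorted(g.in_degree, key=lambda x: -x[1]):
    print(channel, in_links)
```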

I like how Cory Doctorow captures this problem:

I am increasingly convinced that the problem isn’t that Youtube is unsuited to moderating the video choices of a billion users – it’s that no one is suited to this challenge.

It would seem that successful networks nurture bad ideas as well as good ones.
