Liked Facebook - Trust us! by Daniel Goldsmith (View from Ascraeus)
Facebook - sure, we may have sold your most intimate data to the Russkies, installed a cryptofascist in the whitehouse, engendered genocide in Myanmar and the slaughter of hundreds of innocent people across the developing world, and (just this last week) got caught leaking user data of at least 50,000,000 people, but you should totally allow our always-on microphone and camera into your home! Trust us!
Liked Opinion | A Wise Man Leaves Facebook (nytimes.com)
“Social media is in a pre-Newtonian moment, where we all understand that it works, but not how it works,” Mr. Systrom told me, comparing this moment in the tech world to the time before man could explain gravity. “There are certain rules that govern it and we have to make it our priority to understand the rules, or we cannot control it.”
Liked Facebook Security Breach Exposes Accounts of 50 Million Users (nytimes.com)
The attack added to the company’s woes as it contends with fallout from its role in a Russian disinformation campaign.
It is hard to know what to make of a breach involving 50 million users when Facebook reportedly has 2.2 billion users. The disconcerting thing is that they took down postings about the incident:

Users who posted breaking stories about the breach from The Guardian, The Associated Press and other outlets were prompted with a notice that their posts had been taken down. So many people were posting the stories, they looked like suspicious activity to the systems that Facebook uses to block abuse of its network.

“We removed this post because it looked like spam to us,” the notice said.

Bookmarked It’s time to break up Facebook by Nilay Patel (The Verge)
"Start by breaking off WhatsApp and Instagram."
Nilay Patel explores the idea of reimagining antitrust laws. At the moment there is too much grey area for lawyers to argue over when it comes to changes in price. Tim Wu and Hal Singer suggest that we need to think of antitrust from the perspective of competition, not just cost. This is something that has been said about Google as much as Facebook. Cory Doctorow has also written about the problems with big tech.
Bookmarked Here's How Facebook Is Trying to Moderate Its Two Billion Users by Jason Koebler and Joseph Cox (Motherboard)
Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.
Jason Koebler and Joseph Cox take a deep dive into the difficulties of moderation on a platform with two billion users. They discuss Facebook’s attempts to manage everything with policy, an approach that often creates points of confusion but is required if the company is to follow through on its goal of connecting the world. What is often overlooked in all of this is the toll on the human moderators, especially with the addition of video.

Marginalia

Facebook has a “policy team” made up of lawyers, public relations professionals, ex-public policy wonks, and crisis management experts that makes the rules. They are enforced by roughly 7,500 human moderators, according to the company. In Facebook’s case, moderators act (or decide not to act) on content that is surfaced by artificial intelligence or by users who report posts that they believe violate the rules. Artificial intelligence is very good at identifying porn, spam, and fake accounts, but it’s still not great at identifying hate speech.

How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours.

The hardest and most time-sensitive types of content—hate speech that falls in the grey areas of Facebook’s established policies, opportunists who pop up in the wake of mass shootings, or content the media is asking about—are “escalated” to a team called Risk and Response, which works with the policy and communications teams to make tough calls

Facebook says its AI tools—many of which are trained with data from its human moderation team—detect nearly 100 percent of spam, and that 99.5 percent of terrorist-related removals, 98.5 percent of fake accounts, 96 percent of adult nudity and sexual activity, and 86 percent of graphic violence-related removals are detected by AI, not users.

Size is the one thing Facebook isn’t willing to give up. And so Facebook’s content moderation team has been given a Sisyphean task: Fix the mess Facebook’s worldview and business model has created, without changing the worldview or business model itself.

The process of refining policies to reflect humans organically developing memes or slurs may never end. Facebook is constantly updating its internal moderation guidelines, and has pushed some—but not all—of those changes to its public rules. Whenever Facebook identifies one edge case and adds extra caveats to its internal moderation guidelines, another new one appears and slips through the net.

Facebook would not share data about moderator retention, but said it acknowledges the job is difficult and that it offers ongoing training, coaching, and resiliency and counseling resources to moderators. It says that internal surveys show that pay, offering a sense of purpose and career growth opportunities, and offering schedule flexibility are most important for moderator retention

Everyone Motherboard spoke to at Facebook has internalized the fact that perfection is impossible, and that the job can often be heartbreaking

In 2009, for example, MySpace banned content that denied the Holocaust and gave its moderators wide latitude to delete it, noting that it was an “easy” call under its hate speech policies, which prohibited content that targeted a group of people with the intention of making them “feel bad.” In contrast, Facebook’s mission has led it down the difficult road of trying to connect the entire world, which it believes necessitates allowing as much speech as possible in hopes of fostering global conversation and cooperation.

Liked Back to the Blog (Dan Cohen)
It is psychological gravity, not technical inertia, however, that is the bigger antagonist of the open web. Human beings are social animals and centralized social media like Twitter and Facebook provide a powerful sense of ambient humanity—that feeling that “others are here”—that is often missing when one writes on one’s own site. Facebook has a whole team of Ph.D.s in social psychology finding ways to increase that feeling of ambient humanity and thus increase your usage of their service.
Bookmarked Mark Zuckerberg Is Doubly Wrong About Holocaust Denial by Yair Rosenberg (The Atlantic)
Truly tackling the problem of hateful misinformation online requires rejecting the false choice between leaving it alone or censoring it outright. The real solution is one that has not been entertained by either Zuckerberg or his critics: counter-programming hateful or misleading speech with better speech.
Yair Rosenberg touches on the dangers of simply suppressing disinformation. He explains that the only way to respond is to correct it. This continues some of the conversation associated with danah boyd’s keynote at SXSW.

via HEWN by Audrey Watters

Bookmarked Cory Doctorow: Zuck’s Empire of Oily Rags (Locus Online)
For 20 years, privacy advocates
Cory Doctorow provides a commentary on the current state of affairs involving Facebook and Cambridge Analytica. Rather than blame the citizens of the web, he argues that the fault lies with the mechanics in the garage and the corruption they have engaged in. The question that seems to remain: if this is so and we still want our car fixed, where do we go?

Marginalia

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. source

The comparison between Cambridge Analytica (and big data in general) with the stage mentalist is intriguing. I am left wondering about the disappointment and disbelief in the truth. Sometimes there is a part of us that oddly wants to be mesmerised and to believe.


It’s fashionable to treat the dysfunctions of social media as the result of the naivete of early technologists, who failed to foresee these outcomes. The truth is that the ability to build Facebook-like services is relatively common. What was rare was the moral recklessness necessary to go through with it. source

Facebook and Cambridge Analytica raise the question: just because we can, does that mean we should?


Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters. source

In relation to the question of mind-control versus corruption, I wonder where the difference lies. Does corruption involve some element of ‘mind-control’ to convince somebody that this is the answer?