As it stands, machine intelligence functions as an extension of corporations and power.
And that’s why all the stories are interlinked: from Wall Street to venture capital; from ridiculous startups to the Uber/Lyft model of burning VC money until (the company hopes) it becomes a monopoly; from stagnation in wages to automation in the workplace.
Machine intelligence isn’t only an extension of power, and it doesn’t even have to be mostly that. But it is mostly that where we are.
That’s a story much bigger than Zuckerberg, Dorsey, Schmidt, Sandberg, Brin or who-have-you. It’s also a story of Wall Street and the increasing financialization of the world; it’s a story of what people are calling neoliberalism, which has been underway for decades. It is also a technical story: of machine learning and data surveillance, and of our current inability to deal with the implications of the whole technological stack as it is composed: hardware and firmware mostly manufactured in China; software everywhere that I’ve previously compared to building skyscrapers on swampy land; our fundamentally insecure designs. Perhaps more importantly, it is a story of our lack of functioning, sustainable alternatives that respect us, rather than act as extensions of their true owners.
The show did indeed take a turn for the worse, but the reasons for that downturn go much deeper than the usual suspects that have been identified (new and inferior writers, a shortened season, too many plot holes). It’s not that these explanations are incorrect; they’re just superficial. In fact, the souring of Game of Thrones exposes a fundamental shortcoming of our storytelling culture in general: we don’t really know how to tell sociological stories.
The overly personal mode of storytelling or analysis leaves us bereft of a deeper comprehension of events and history. Understanding Hitler’s personality alone will not tell us much about the rise of fascism, for example. Not that his personality didn’t matter, but a different demagogue would probably have appeared to take his place in Germany between the two bloody world wars of the 20th century. Hence, the answer to “would you kill baby Hitler?,” sometimes presented as an ethical time-travel challenge, should be “no,” because it would very likely not matter much. It is not a true dilemma.
Tufekci explains that this is the same reason we have problems talking about historic technological transitions.
In my own area of research and writing, the impact of digital technology and machine intelligence on society, I encounter this obstacle all the time. There are a significant number of stories, books, narratives and journalistic accounts that focus on the personalities of key players such as Mark Zuckerberg, Sheryl Sandberg, Jack Dorsey and Jeff Bezos. Of course, their personalities matter, but only in the context of business models, technological advances, the political environment, (lack of) meaningful regulation, the existing economic and political forces that fuel wealth inequality and lack of accountability for powerful actors, geopolitical dynamics, societal characteristics and more.
Maybe this is a part of what Douglas Rushkoff touches on in his criticism of storytelling.
On one panel, Ressa; Emily Bell, of the Tow Center; and Zeynep Tufekci, a techno-sociologist who writes for The New York Times and Wired, discussed the overwhelming effect of junk information on our public sphere, and the role of social media platforms in disseminating it. Tufekci argued that, in the 21st century, a surfeit of information, rather than its absence, poses the biggest problem. “When I was growing up in Turkey, the way censorship occurred was there was one TV channel and they wouldn’t show you stuff. That was it,” she said. “Currently, in my conceptualization, the way censorship occurs is by information glut. It’s not that the relevant information isn’t out there. But it is buried in so much information of suspect credibility that it doesn’t mean anything.” Tufekci cited the frenzied reporting, during the 2016 election, on WikiLeaks’s dump of hacked Democratic Party emails—much of which lacked crucial context—as a malign example of the trend. “I don’t think traditional journalism has caught up on this,” she said.
What is to be done? Designing phones and other devices to be more privacy-protected would be a start, and government regulation of the collection and flow of data would help slow things down. But this is not the complete solution. We also need to start passing laws that directly regulate the use of computational inference: What will we allow to be inferred, and under what conditions, and subject to what kinds of accountability, disclosure, controls and penalties for misuse?
Social media platforms have made some changes to tackle hate speech and violent behaviour, but they could choose to do more. They could set higher standards for removing offensive video and messages.
Free speech is unimaginable without the right to dissent — but commentators, opinion writers and politicians also have choices to make in the example they set.
In the end, though, it’s on all of us — in the news sources we rely on, the social networks we join, and what we choose to watch and share.
The act of throwing out huge amounts of content, most of it ironic, low-quality trolling, for the purpose of provoking an emotional reaction in less Internet-savvy viewers.
Zeynep Tufekci backed this stance on Twitter:
Don’t do this. We know how to cover terrible news like this, without doing it on the killer’s terms. Don’t participate in the snuff film he directed: instead give us the crucial news coverage we need. https://t.co/jhDXdxaa9W
— zeynep tufekci (@zeynep) March 15, 2019
It used to make sense to believe something until it was debunked; now, it makes sense to assume certain claims are fake—unless they are verified.
The web’s founders fully expected some form of digital payment to be integral to its functioning. But nearly three decades later, we’re still waiting.
For all the talk of disruption, today’s internet is still young and hugely underinnovated. While it’s difficult to predict all the details—that’s the point of disruption!—I have little doubt that it’s technically possible to build a digital infrastructure that rewards creativity at many scales and protects our privacy. Bitcoin is not the answer, for a variety of reasons, but a blockchain scheme, along with a mixture of more conventional systems and cryptographic tools, might play a part. Whatever the solution is, we just need a combination of vision, smart regulation, and true innovation to advance it.
Right now, we’re stuck where the automobile industry was when cars were still “horseless carriages,” wagon-wheeled monstrosities with high centers of gravity and buggy seats. We’re still letting an older technology—credit cards, designed for in-person transactions, with high fees and financial surveillance baked in—determine the shape of a new technological paradigm. As a result, that paradigm has become twisted and monopolized by its biggest players. This is one of the modern internet’s greatest errors; it’s past time that we encounter “402 Payment Required” for real.
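For readers who haven’t met it: HTTP has reserved status code 402 (“Payment Required”) since its early specifications, but the code was left “for future use” and almost no server actually sends it. A minimal sketch of a server that does answer with it (the port number and response wording here are invented for illustration):

```python
# Toy demo: an HTTP server whose every GET answers with the
# long-dormant 402 "Payment Required" status code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PaywallHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Payment required: the web still lacks a native payment layer.\n"
        self.send_response(402, "Payment Required")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_one(port=8402):
    # Handle a single request, then return (handy for a quick local test).
    with HTTPServer(("127.0.0.1", port), PaywallHandler) as srv:
        srv.handle_request()
```

A browser pointed at it will simply show an error page, which is rather the point: the status code exists, but the payment machinery it anticipated never arrived.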
We all suffer when platforms, their users, and governments fall for the tactics of attention-gamers.
Thank you too for the shoutout. It has definitely sparked some interesting conversation. I read a post today about mindfulness apps, yet it overlooked the collection of data that accompanies their use. We are asked to be conscious of our breathing, yet ignore the data that we share on a daily basis.
Our next book club reading has been decided! After a furious round of polling, the winner is…
…Zeynep Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest.
It’s not speech per se that allows democracies to function, but the ability to agree—eventually, at least some of the time—on what is true, what is important and what serves the public good. This doesn’t mean everyone must agree on every fact, or that our priorities are necessarily uniform. But democracy can’t operate completely unmoored from a common ground, and certainly not in a sea of distractions.
To understand how digital technologies went from instruments for spreading democracy to weapons for attacking it, you have to look beyond the technologies themselves.
Russia did not instigate the moves that have reduced Americans’ trust in health authorities, environmental agencies, and other regulators. Russia did not create the revolving door between Congress and the lobbying firms that employ ex-politicians at handsome salaries. Russia did not defund higher education in the United States. Russia did not create the global network of tax havens in which big corporations and the rich can pile up enormous wealth while basic government services get cut.
Instead we need to:
Figure out how our institutions, our checks and balances, and our societal safeguards should function in the 21st century.
Just because you’re a successful tech mogul doesn’t mean you know how to rescue kids trapped underground.
Heard about @Elonmusk's rescue "submarine"? The cave-diver who masterminded the Thai cave rescue called it a "PR stunt"—that was the politest thing he said. You might be wondering: well, he tried to help. Let me explain with this thread and this NYT piece. https://t.co/ihoqDd8lMf pic.twitter.com/MWicaJKaA6
— zeynep tufekci (@zeynep) July 15, 2018
At a minimum, Facebook has long needed an ombudsman’s office with real teeth and power: an institution within the company that can act as a check on its worst impulses and to protect its users. And it needs a lot more employees whose task is to keep the platform healthier. But what would truly be disruptive and innovative would be for Facebook to alter its business model. Such a change could come from within, or it could be driven by regulations on data retention and opaque, surveillance-based targeting—regulations that would make such practices less profitable or even forbidden.
But the backlash wouldn’t die down. Attempting to respond to the growing outrage, Facebook announced changes. “It’s Time to Make Our Privacy Tools Easier to Find”, the company announced without a hint of irony—or any other kind of hint—that Zuckerberg had promised to do just that in the “coming few weeks” eight full years ago. On the company blog, Facebook’s chief privacy officer wrote that instead of being “spread across nearly 20 different screens” (why were they ever spread all over the place?), the controls would now finally be in one place.
Sadly, this has nothing to do with users or community:
As far as I can tell, not once in his apology tour was Zuckerberg asked what on earth he means when he refers to Facebook’s 2 billion-plus users as “a community” or “the Facebook community.” A community is a set of people with reciprocal rights, powers, and responsibilities. If Facebook really were a community, Zuckerberg would not be able to make so many statements about unilateral decisions he has made—often, as he boasts in many interviews, in defiance of Facebook’s shareholders and various factions of the company’s workforce. Zuckerberg’s decisions are final, since he controls all the voting stock in Facebook, and always will until he decides not to—it’s just the way he has structured the company.
Tim Wu argues that we need to replace Facebook with a trustworthy platform not driven by surveillance and advertising:
If today’s privacy scandals lead us merely to install Facebook as a regulated monopolist, insulated from competition, we will have failed completely. The world does not need an established church of social media.
In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.
It’s not that YouTube radicalizes politics specifically. It radicalizes everything, and politics just gets swept along in the slurry of zomg.
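The escalation dynamic in Tufekci’s restaurant analogy can be caricatured as a toy feedback loop. All the numbers and the `simulate_drift` function below are invented for illustration; this is not a model of YouTube’s actual recommender, just the shape of the argument: serve something a notch more intense, let taste adapt toward it, repeat.

```python
# Toy model (illustrative only): a recommender that always serves content
# slightly more "intense" than the viewer's current taste, and a viewer
# whose taste drifts toward whatever is served.
def simulate_drift(steps=20, taste=1.0, push=1.15, adapt=0.5):
    history = [taste]
    for _ in range(steps):
        served = taste * push              # recommender escalates a notch
        taste += adapt * (served - taste)  # taste adjusts toward the served item
        history.append(taste)
    return history

history = simulate_drift()
# Each step compounds: taste only ever ratchets upward,
# even though no single recommendation looks like a big jump.
```

With these made-up parameters, taste grows a few percent per step, so after twenty rounds the viewer’s baseline has multiplied several times over while every individual step looked incremental. That is the whole point of the analogy.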