Listened Digital dystopia: tech slavery and the death of privacy – podcast by Jordan Erica Webber from the Guardian
Jordan Erica Webber explores whether our privacy has been compromised by the tech giants whose business models depend on harvesting and monetising our data. We speak to cyborg rights activist Aral Balkan; the executive director of UK charity Privacy International Gus Hosein; and to Kevin Kelly, founding executive editor of Wired magazine and author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.
In the first episode of our four-part miniseries, Jordan Erica Webber asks whether our digital selves are owned by tech firms in a new form of slavery. One of the interesting points made was that in the past, people were often private in public spaces, whereas today this has been reversed: we are public in private places.
Bookmarked Google Maps’s Moat (Justin O’Beirne)
Google has gathered so much data, in so many areas, that it’s now crunching it together and creating features that Apple can’t make—surrounding Google Maps with a moat of time
Justin O’Beirne discusses the addition of ‘Areas of Interests’ to Google Maps. He wonders if others, such as Apple, can possibly keep up. The challenge is that these AOIs aren’t collected—they’re created. And Apple appears to be missing the ingredients to create AOIs at the same quality, coverage, and scale as Google.

O'Beirne's table demonstrating the difference between Google and Apple

Google is in fact making data out of data:

Google’s buildings are byproducts of its Satellite/Aerial imagery. And some of Google’s places are byproducts of its Street View imagery.

For a different take on Google Earth’s 3D imagery, watch this video from Nat and Friends:

Reply to Chris Betcher and Location Tracking

I am wondering if this is the way of the future, Chris? Are we coming to a time when insurance companies, car manufacturers or platforms collect our data whether we like it or not? It is baked into the Maps API infrastructure. Given the way that data is shared, I worry that some of these companies no longer even need our explicit permission. Take for example the recent analysis of tracking on Android:

The tracker allows marketers to use machine learning to discover personas, uses cross-device ID, and even uses behavioral analysis to guess when a user is sleeping, and a probabilistic matching algorithm to match identities across devices.

What is disconcerting is that it may not be the application designed for location which provides a company with location information.
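To make the quoted description more concrete, here is a minimal, purely illustrative sketch of how probabilistic cross-device matching might work. None of this is any vendor's actual code; the signals and weights are assumptions chosen to show the idea that ambient data (network, timezone, activity hours) can link devices without explicit permission:

```python
# Hypothetical sketch of probabilistic identity matching across devices.
# A tracker compares incidental signals from two device profiles and
# scores the likelihood that they belong to the same person.

def match_score(a: dict, b: dict) -> float:
    """Score (0.0-1.0) that two device profiles belong to one person."""
    score = 0.0
    if a["ip"] == b["ip"]:            # same household network
        score += 0.5
    if a["timezone"] == b["timezone"]:
        score += 0.1
    # Overlapping hours of activity (as might be inferred from app usage,
    # or from when the user is *not* active, i.e. sleeping) add weight.
    overlap = len(set(a["active_hours"]) & set(b["active_hours"]))
    score += 0.05 * overlap
    return min(score, 1.0)

phone = {"ip": "203.0.113.7", "timezone": "UTC+10",
         "active_hours": [7, 8, 19, 20, 21]}
laptop = {"ip": "203.0.113.7", "timezone": "UTC+10",
          "active_hours": [9, 19, 20, 21, 22]}

if match_score(phone, laptop) > 0.7:
    print("Likely the same user across devices")
```

Note that nothing here requires a location permission: every input is a byproduct of ordinary app use, which is exactly the concern.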

Questions for Data

Audrey Watters writes down a series of questions to consider when thinking about data:

Is this meaningful data? Are “test scores” or “grades” meaningful units of measurement, for example? What can we truly know based on this data?
Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate?
What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “learning”. How much is science, and how much is marketing?
Whose data is this? Who owns it? Who controls it? Who gets to see it? Is this data shared or sold? Is there informed consent?
Are people being compelled to surrender their data? Are people being profiled based on this data?
Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals?
Who designs the algorithms? What sorts of biases do these algorithms encode?
How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others?
Who decides what behaviors constitute “a good student” or “a good teacher” or “a good education”? source

Continuing this conversation, Jim Groom suggests that the key question is:

How do we get anyone to not only acknowledge this process of extraction and monetization (because I think folks have), but to actually feel empowered enough to even care? source

Speaking about assemblages, Ian Guest posits that:

When data is viewed in different ways, with different machines, different knowledge may be produced. source

Benjamin Doxtdater makes the link between power and data:

The operation of power continues to evolve when Fitbits and Facebook track our data points, much like a schoolmaster tracks our attendance and grades. source

Kin Lane provides a cautionary tale about privacy and security violations via APIs, suggesting:

Make sure we are asking the hard questions about the security and privacy of data and content we are running through machine learning APIs. Make sure we are thinking deeply about what data and content sets we are running through the machine learning APIs, and reducing any unnecessary exposure of personal data, content, and media. source

Emily Talmage questions the intent behind the platform economy and the desire for correlations that detach value from the human face:

For whatever reason – maybe because they are too far away from actual children – investors and their policy-makers don’t seem to see the wickedness of reducing a human child in all his wonder and complexity to a matrix of skills, each rated 1, 2, 3 or 4. source

Yael Grauer documents how researchers at Yale Privacy Lab and French nonprofit Exodus Privacy have uncovered the proliferation of tracking software on smartphones, finding that weather, flashlight, ride-sharing, and dating apps, among others, are infested with dozens of different types of trackers collecting vast amounts of information to better target advertising.

“The real question for the companies is, what is their motivation for having multiple trackers?” asked O’Brien. source

Ben Williamson collects together a number of critical questions when addressing big data in education:

How is ‘big data’ being conceptualized in relation to education?
What theories of learning underpin big data-driven educational technologies?
How are machine learning systems used in education being ‘trained’ and ‘taught’?
Who ‘owns’ educational big data?
Who can ‘afford’ educational big data?
Can educational big data provide a real-time alternative to temporally discrete assessment techniques and bureaucratic policymaking?
Is there algorithmic accountability to educational analytics?
Is student data replacing student voice?
Do teachers need ‘data literacy’?
What ethical frameworks are required for educational big data analysis and data science studies? source

Discussing personal data, Kim Jaxon asked her students to consider the platforms they frequent:

I invited our class to look closely at Google, Facebook, Snapchat, Blackboard Learn, TurnItIn, and many other platforms they frequent or are asked to use, and to think critically about the collection and control of their data. Borrowing from Morris and Stommel’s work, we are asking: Who collects data? Who owns it? What do they do with it? Who profits or benefits? What is left out of the results: what is hidden?

Nicholas Carr wonders if we are data mines or data factories:

If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics.