Bookmarked More on the mechanics of GDPR (Open Educational Thinkering)
Note: I'm writing this post on my personal blog as I'm still learning about GDPR. This is me thinking out loud, rather than making official Moodle pronouncements. 'Enjoyment' and 'compliance-focused courses' are rarely uttered in the same breath. I have, however, enjoyed my second week of learning from Futurelearn's…
Doug Belshaw breaks down a number of points associated with the GDPR. On TIDE, he also makes the point that this will set a precedent for the collection of data moving forward and will therefore have an influence on everyone. Eylan Ezekiel also provided a useful discussion a few months ago.
Bookmarked Fitness tracking app Strava gives away location of secret US army bases by Alex Hern (the Guardian)
Data about exercise routes shared online by soldiers can be used to pinpoint overseas facilities
Alex Hern reports that Strava data inadvertently revealed a number of supposed military secrets. In response, Bill Fitzgerald provided some interesting commentary on Twitter, as did Arvind Narayanan in a series of tweets.
Listened Digital dystopia: tech slavery and the death of privacy – podcast by Jordan Erica Webber from the Guardian
Jordan Erica Webber explores whether our privacy has been compromised by the tech giants whose business models depend on harvesting and monetising our data. We speak to cyborg rights activist Aral Balkan; the executive director of UK charity Privacy International Gus Hosein; and to Kevin Kelly, founding executive editor of Wired magazine and author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.
In the first episode of the four-part miniseries, Jordan Erica Webber asks whether our digital selves are owned by tech firms in a new form of slavery. One of the interesting points made is that in the past people were often private in public spaces, whereas today this has been reversed: we are public in private places.
Bookmarked Google Maps’s Moat (Justin O’Beirne)
Google has gathered so much data, in so many areas, that it’s now crunching it together and creating features that Apple can’t make—surrounding Google Maps with a moat of time
Justin O’Beirne discusses the addition of ‘Areas of Interest’ to Google Maps. He wonders whether others, such as Apple, can possibly keep up. The challenge is that these AOIs aren’t collected—they’re created, and Apple appears to be missing the ingredients to create AOIs at the same quality, coverage, and scale as Google.

O'Beirne's table demonstrating the difference between Google and Apple

Google is in fact making data out of data:

Google’s buildings are byproducts of its Satellite/Aerial imagery. And some of Google’s places are byproducts of its Street View imagery.
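
To make the idea of ‘data out of data’ concrete, here is a minimal sketch of how second-order data might be derived from first-order data. Everything in it is an assumption for illustration (the coordinates, the grid size, the density threshold); it is not Google’s actual pipeline, just the general pattern of turning one dataset (detected places) into another (areas of interest):

```python
import math
from collections import Counter

# First-order data: (lat, lon) of places already detected from imagery.
# These coordinates are invented for the sketch.
places = [
    (37.7755, -122.4195), (37.7756, -122.4196), (37.7754, -122.4194),
    (37.7902, -122.4012), (37.7615, -122.4355),
]

CELL = 0.001      # grid cell size in degrees (roughly 100 m)
MIN_PLACES = 3    # how dense a cell must be to count as an "area of interest"

def to_cell(lat, lon):
    """Snap a coordinate to its containing grid cell."""
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# Second-order data: count places per cell, keep only the dense cells.
density = Counter(to_cell(lat, lon) for lat, lon in places)
areas_of_interest = [cell for cell, count in density.items() if count >= MIN_PLACES]

print(areas_of_interest)  # one dense cell: data made out of data
```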

For a different take on Google Earth’s 3D imagery, watch this video from Nat and Friends:

https://youtu.be/suo_aUTUpps

Reply to Chris Betcher and Location Tracking

I am wondering if this is the way of the future, Chris? Are we coming to a time when insurance companies, car manufacturers and platforms collect our data whether we like it or not, because it is baked into infrastructure like the Maps API? Given the way that data is shared, I worry that some of these companies may not even need our explicit permission anymore. Take, for example, the recent analysis of tracking on Android:

The tracker allows marketers to use machine learning to discover personas, uses cross-device ID, and even uses behavioral analysis to guess when a user is sleeping, and a probabilistic matching algorithm to match identities across devices.

What is disconcerting is that the application providing a company with location information may not even be one designed around location.
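
As a way of unpacking the ‘probabilistic matching’ mentioned in that quote, here is a toy sketch of cross-device identity matching. The features, weights, and threshold are all invented assumptions for illustration, not any vendor’s actual algorithm; the point is simply that no single signal identifies you, the combination does:

```python
# Illustrative only: a toy probabilistic cross-device matcher.
# Feature names and weights are invented for this sketch.

def match_score(device_a: dict, device_b: dict) -> float:
    """Return a 0..1 score that two devices belong to the same user."""
    weights = {
        "home_ip": 0.4,         # same home network most evenings
        "geo_cluster": 0.3,     # overlapping frequent-location clusters
        "active_hours": 0.2,    # similar waking/sleeping pattern
        "browser_locale": 0.1,  # same language/timezone settings
    }
    return sum(
        w for feature, w in weights.items()
        if device_a.get(feature) == device_b.get(feature)
    )

phone = {"home_ip": "203.0.113.7", "geo_cluster": "cbd-west",
         "active_hours": "07-23", "browser_locale": "en-AU/Sydney"}
laptop = {"home_ip": "203.0.113.7", "geo_cluster": "cbd-west",
          "active_hours": "07-23", "browser_locale": "en-AU/Sydney"}

if match_score(phone, laptop) >= 0.7:  # arbitrary threshold
    print("likely the same user across devices")
```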

📓 Questions for Data

Audrey Watters writes down a series of questions to consider when thinking about data:

Is this meaningful data? Are “test scores” or “grades” meaningful units of measurement, for example? What can we truly know based on this data?
Are our measurements accurate? Is our analysis, based on the data that we’ve collected, accurate?
What sorts of assumptions are we making when we collect and analyze this data? Assumptions about bodies, for example. Assumptions about what to count. Assumptions and value judgments about “learning”.
How much is science, and how much is marketing?
Whose data is this? Who owns it? Who controls it? Who gets to see it?
Is this data shared or sold? Is there informed consent?
Are people being compelled to surrender their data? Are people being profiled based on this data?
Are decisions being made about them based on this data? Are those decisions transparent? Are they done via algorithms – predictive modeling, for example, that tries to determine some future behavior based on past signals?
Who designs the algorithms? What sorts of biases do these algorithms encode?
How does the collection and analysis of data shape behavior? Does it incentivize certain activities and discourage others?
Who decides what behaviors constitute “a good student” or “a good teacher” or “a good education”? source

Continuing this conversation, Jim Groom suggests that the key question is:

How do we get anyone to not only acknowledge this process of extraction and monetization (because I think folks have), but to actually feel empowered enough to even care? source

Speaking about assemblages, Ian Guest posits:

When data is viewed in different ways, with different machines, different knowledge may be produced. source

Benjamin Doxtdator makes the link between power and data:

The operation of power continues to evolve when Fitbits and Facebook track our data points, much like a schoolmaster tracks our attendance and grades. source

Kin Lane provides a cautionary tale of privacy and security violations via APIs, in which he suggests:

Make sure we are asking the hard questions about the security and privacy of data and content we are running through machine learning APIs. Make sure we are thinking deeply about what data and content sets we are running through the machine learning APIs, and reducing any unnecessary exposure of personal data, content, and media. source
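
In practice, one way to follow Lane’s advice about reducing unnecessary exposure is to redact obvious identifiers before text ever leaves your own infrastructure. A minimal sketch follows; the regex patterns are deliberately crude, and the final `ml_api.analyse` call is a hypothetical stand-in for whatever third-party service is in use:

```python
import re

# Crude, illustrative redaction patterns; a real system would need far
# more care (names, addresses, identifiers specific to your context).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace recognisable personal identifiers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

record = "Contact Jo on jo.bloggs@example.com or +61 2 9999 1234."
safe = redact(record)
print(safe)  # "Contact Jo on [email removed] or [phone removed]."

# Only the redacted text would then be sent off-site, e.g.:
# ml_api.analyse(safe)   # hypothetical client call
```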

Emily Talmage questions the intent behind the platform economy and its desire for correlations that detach values from the human face:

For whatever reason – maybe because they are too far away from actual children – investors and their policy-makers don’t seem to see the wickedness of reducing a human child in all his wonder and complexity to a matrix of skills, each rated 1, 2, 3 or 4. [source](https://emilytalmage.com/2017/07/31/how-data-is-destroying-our-schools/)

Yael Grauer documents how researchers at Yale Privacy Lab and the French nonprofit Exodus Privacy have uncovered the proliferation of tracking software on smartphones, finding that weather, flashlight, ride-sharing, and dating apps, among others, are infested with dozens of different types of trackers collecting vast amounts of information to better target advertising.

“The real question for the companies is, what is their motivation for having multiple trackers?” asked O’Brien. source

Ben Williamson collects a number of critical questions for addressing big data in education:

How is ‘big data’ being conceptualized in relation to education?
What theories of learning underpin big data-driven educational technologies?
How are machine learning systems used in education being ‘trained’ and ‘taught’?
Who ‘owns’ educational big data?
Who can ‘afford’ educational big data?
Can educational big data provide a real-time alternative to temporally discrete assessment techniques and bureaucratic policymaking?
Is there algorithmic accountability to educational analytics?
Is student data replacing student voice?
Do teachers need ‘data literacy’?
What ethical frameworks are required for educational big data analysis and data science studies? source

Discussing personal data, Kim Jaxon asked her students to consider the platforms they frequent:

I invited our class to look closely at Google, Facebook, Snapchat, Blackboard Learn, TurnItIn, and many other platforms they frequent or are asked to use, and to think critically about the collection and control of their data. Borrowing from Morris and Stommel’s work, we are asking: Who collects data? Who owns it? What do they do with it? Who profits or benefits? What is left out of the results: what is hidden?

Nicholas Carr wonders if we are data mines or data factories:

If I am a data mine, then I am essentially a chunk of real estate, and control over my data becomes a matter of ownership. Who owns me (as a site of valuable data), and what happens to the economic value of the data extracted from me? Should I be my own owner — the sole proprietor of my data mine and its wealth? Should I be nationalized, my little mine becoming part of some sort of public collective? Or should ownership rights be transferred to a set of corporations that can efficiently aggregate the raw material from my mine (and everyone else’s) and transform it into products and services that are useful to me? The questions raised here are questions of politics and economics. source

Chris Gilliard poses some questions about the racial bias built into the surveillance state:

What would it look like to be constantly coded as different in a hyper-surveilled society — one where there was large-scale deployment of surveillant technologies with persistent “digital epidermalization” writing identity on to every body within the scope of its gaze? I’m thinking of a not too distant future where not only businesses and law enforcement constantly deploy this technology, as with recent developments in China, but also where citizens going about their day use it as well, wearing some version of Google Glass or Snapchat Spectacles to avoid interpersonal “friction” and identify the “others” who do or don’t belong in a space at a glance. What if Permit Patty or Pool Patrol Paul had immediate, real-time access to technologies that “legitimized” black bodies in a particular space?

Reflecting on Microsoft’s attempts to game Hacker News, Kicks Condor poses a few questions about data and algorithms:

does gaming the algorithm undermine the algorithm? Or is it the point of the algorithm? I’m asking all of you out there—is the algorithm designed to continue feeding us the same narrative that we are already upvoting? Or can the upvotes trend away?