Bookmarked Opinion | It’s Time to Panic About Privacy (nytimes.com)
We claim to want it, companies claim to provide it, but we all just accept that, well, you have no privacy online.
This interactive post from the New York Times is a useful provocation to think about privacy, data and the internet of things. For more resources on the topic, read Chris Croft’s spring clean and Ian O’Byrne’s series on digital hygiene.
Replied to Data Obfuscation and Facial Recognition Faceoff: Possible #Netnarr Field Guide Topics (CogDogBlog)
Do the assignments you assign? I blabbed about this recently for the Ontario Extend mOOC I was facilitating, so it’s also appropriate for the Networked Narratives course I co-teach with Mia Z…
Alan, I wonder about obfuscating data via a mobile browser? I am also interested in the way that our participation in things such as facial recognition is, ironically, what makes them possible. For example, I was reading about how DNA testing is partly dependent on past tests, while Turnitin works because institutions allow us to turn our work into billions for others.
Liked How DNA ancestry testing can change our ideas of who we are (The Conversation)
The bigger picture that’s emerging from DNA ancestry testing is that we’ve underestimated the extent of mixing between ancestral groups throughout human history. Looking at the pie chart might give you the impression that there are discrete borders within you and boundaries between your different ancestries, but as Aeromexico so eloquently put it, “there are no borders within us”.
Bookmarked Vast amounts of data about our children are being harvested via apps used by schools. This is what is being collected and stored (AARE)
A major problem with creating reports like this is that they only judge students on a small number of behaviours that ‘count’. They ignore, and even deter, diversity. For example, teachers have to identify behaviours they want students to exhibit so they can monitor them using ClassDojo. Default options include working hard, on-task, and displaying grit. This list has to be limited to a number of behaviours that is manageable by the teacher to track. The selected behaviours end up being the ones that count, others are ignored, thus promoting conformity.
Jamie Manolev, Anna Sullivan and Roger Slee explore the sensitive data collected on students, teachers and schools by educational apps. The authors document some of these points:

This data includes

  • First and last names
  • Student usernames
  • Passwords
  • Students’ age
  • School names
  • School addresses
  • Photographs, videos, documents, drawings, or audio files
  • Student class attendance data
  • Feedback points
  • IP addresses
  • Browser details
  • Clicks
  • Referring URLs
  • Time spent on site
  • Page views
  • Teacher-parent messages

Moreover, ClassDojo says it ‘may also obtain information, including personal information, from third-party sources to update or supplement the information you provided or we collected automatically’.

This reminds me of Ben Williamson’s point about ClassDojo that sensitivity is produced over time:

The ‘sensitive information’ contained in ClassDojo is the behavioural record built up from teachers tapping reward points into the app.

I think it needs to be noted that although there is a focus on ‘wellbeing’, the affordances of the application can be used in different ways. For example, Bianca Hewes has used it to monitor 21st century learning.

Bookmarked Suggestion Box History: The Small Data Before Big Data by Ernie Smith (Tedium: The Dull Side of the Internet.)
How the suggestion box, once a simple tool for giving feedback, played a role in the weirder and darker data-hungry present for many companies.
Ernie Smith provides a dive into the world of the suggestion box. This seems in contrast to Megan Ward’s investigation of feedback and computational thinking. Smith seems to capture both worlds; however, it is unclear whether they are actually the same or whether feedback is in fact different for different people.
Liked Time Magazine: Data, Privacy, Politics and the Mess We Are In (Kevin's Meandering Mind)
Roger McNamee lays out some major topics and areas of concern where Facebook may be a threat to a civil and civic society:
  • Democracy (see, election interference)
  • Privacy (see, data surveillance of every click and view and share by Facebook)
  • Data (see, sale of data to third-party vendors)
  • Regulation (see, not any to speak of)
  • Humanization (see, or lack thereof)
  • Addiction (see, the world around you)
  • Children (see, bullying and alarm bells about the brain)
Bookmarked Why Data Is Never Raw (The New Atlantis)
“Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.” “Raw” carries a sense of natural or untouched, while “cooked” suggests the result of cognitive processes. But data is always the product of cognitive, cultural, and institutional processes that determine what to collect and how to collect it. In this sense, “raw data” is indeed a contradiction in terms. In the ordinary use of the term “raw data,” “raw” signifies that no processing was performed following data collection, but the term obscures the various forms of processing that necessarily occur before data collection. (Summary via Tom Woodward)
Sometimes I wonder if I write just so I can be glad when I find my thought more clearly articulated by somebody else. As I wrote elsewhere, data ain’t data; it is never raw, it always involves some sort of bias and interpretation.

Assumptions inevitably find their way into the data and color the conclusions drawn from it. Moreover, they reflect the beliefs of those who collect the data. As economist Ronald Coase famously remarked, “If you torture the data enough, nature will always confess.” And journalist Lena Groeger, in a 2017 ProPublica story on the biases that visual designers inscribe into their work, soundly noted that “data doesn’t speak for itself — it echoes its collectors.”