📑 Think You’re Discreet Online? Think Again

What is to be done? Designing phones and other devices to be more privacy-protected would be start, and government regulation of the collection and flow of data would help slow things down. But this is not the complete solution. We also need to start passing laws that directly regulate the use of computational inference: What will we allow to be inferred, and under what conditions, and subject to what kinds of accountability, disclosure, controls and penalties for misuse?
Zeynep Tufekci explains that through computational inference, the purchase of data and the creation of shadow profiles, companies know more about us than what we may be explicitly sharing online.
Ben Thompson unpacks the world of data and shadow profiles derived by platforms such as Google and Facebook, and discusses some of the issues with this.
This is interesting to read in light of Facebook's release of a tool for viewing the data it has collected.
Continuing the conversation about forgetting and ethics, Antony Funnell speaks with Kate Eichhorn and Kate Mannell about digital forgetting.
Eichhorn, the author of The End of Forgetting, discusses the long and complicated histories that children now accumulate and the challenges these pose for identity. She explains that our ability to control what is forgotten has diminished in the age of social media. Although new solutions may allow us to connect, this also creates its own problems and consequences, such as the calcification of polarised politics. Eichhorn would like to say things are going to change, but she argues that there is little incentive for big tech to do so. Although young people are becoming more cynical, there may be resistance, but little hope for a return to an equitable, utopian web.
Kate Mannell explores the idea of instilling a sense of ethics through the form of a Hippocratic oath. Some of the problems with this are that there are many versions of the oath, it does not resolve the systemic problems, and it is hard to swear an oath of no harm when it is not even clear what harms are actually at play. In the end, it risks being a soft form of self-regulation.
I found Eichhorn’s comments about resistance interesting when thinking about my engagement with the IndieWeb and A Domain of One’s Own. I guess sometimes all we have is hope. Mannell’s point about doing no harm when it is not even clear what harms are at play reminds me of Zeynep Tufekci’s discussion of shadow profiles, the complications of inherited datasets and the challenges of the next machine age. In education, these issues surface in artificial intelligence and facial recognition.