For those who want to use this structure to create their own Privacy Postcards, I have created a skeleton structure on GitHub. Please feel free to clone it, copy it, modify it, and make it your own.
Say no to defaults. A clickable guide to fixing the complicated privacy settings from Facebook, Google, Amazon, Microsoft and Apple.
The controversial and distinctive yellow bicycles operated by Singaporean company oBike will soon disappear as quickly as they appeared.
It will be interesting to see how competitors respond and what – if any – changes they make.
In the furor that followed, Facebook’s leaders said that the kind of access exploited by Cambridge in 2014 was cut off by the next year, when Facebook prohibited developers from collecting information from users’ friends. But the company officials did not disclose that Facebook had exempted the makers of cellphones, tablets and other hardware from such restrictions.
1. The Performative, Public Self
2. The Quantified – or Articulated – Self
3. The Participatory Self
4. The Asynchronous Self
5. The PolySocial – or Augmented Reality – Self
6. The Neo-Liberal, Branded Self
I think there is a reasoned response to technopanic. Perhaps a sense of technoagency is necessary. Now more than ever, faster than ever, technology is driving change. The future is an unknown, and that scares us. However, we can overcome these fears and use these new technologies to better equip and steer ourselves in a positive direction.
Although this was designed as a case of ‘what if’, it is a reminder of what could happen. It therefore provides a useful provocation, especially in light of Cambridge Analytica and GDPR. O’Byrne suggests that this is an opportunity to take ownership of our ledger, something in part captured by the #IndieWeb.
I agree with the thinking about this ledger, but do not agree with how it is situated in the video. I see an opportunity for the individual to determine what information comes into the ledger, and how it is displayed. As an example, each of the arrows pointing in to the ledger could be a stream of information from your website, Twitter feed, Strava running app, or any other metrics you’d like to add. Each of these would come in with its own read/write access and sharing settings from the originating app/program/service. As the individual, you’d be in control of dictating what you present, and how you present this information in your ledger.
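To make this concrete, here is a minimal sketch of how such owner-controlled streams might be modelled. Everything in it (the LedgerStream class, the access and shared fields) is hypothetical, assumed for illustration rather than drawn from the video or any existing service:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LedgerStream:
    """One incoming stream of information, configured by the owner."""
    source: str    # e.g. "personal website", "Twitter feed", "Strava"
    access: str    # what the originating service may do: "read" or "read-write"
    shared: bool   # whether entries from this stream are displayed publicly

# The individual decides which streams feed the ledger and on what terms.
my_ledger: List[LedgerStream] = [
    LedgerStream(source="personal website", access="read-write", shared=True),
    LedgerStream(source="Twitter feed", access="read", shared=True),
    LedgerStream(source="Strava running app", access="read", shared=False),
]

def public_streams(streams: List[LedgerStream]) -> List[LedgerStream]:
    """Only streams the owner has chosen to share are presented."""
    return [s for s in streams if s.shared]

print([s.source for s in public_streams(my_ledger)])
```

The point the sketch is meant to highlight is simply that access and sharing settings live with the owner’s configuration, not with the originating service.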
Interestingly, Douglas Rushkoff made the case in a recent episode of Team Human for including less, not more, on the ledger:
At the national level, however, the story is different. What NAPLAN is good for, and indeed what it was originally designed for, is to provide a national snapshot of student ability, and to conduct comparisons between different groups (for example, students with a language background other than English and students from English-speaking backgrounds) on a national level.
This is important data to have. It tells us where support and resources are most needed. But we could collect the data we need by using a rigorous sampling method, where a smaller number of children (a sample) are tested, rather than having every student in every school sit tests every few years. This is a move that would be a lot more cost effective, both financially and in terms of other costs to our education system.
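As a toy illustration of the sampling idea (all numbers here are invented, not drawn from NAPLAN), drawing a simple random sample of schools might look like this:

```python
import random

# Invented numbers for illustration: a population of schools and a
# much smaller random sample drawn from it.
all_schools = [f"school_{i}" for i in range(10_000)]
sample = random.sample(all_schools, k=500)  # test roughly 5% of schools

# Tests would then be administered only within the sampled schools,
# with results weighted up to estimate the national picture.
print(f"Testing {len(sample)} of {len(all_schools)} schools")
```

A rigorous design would stratify by the groups of interest, but the saving comes from the same place: testing a sample rather than the whole population.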
The history of anthropology tells us that categorizing people falls short of understanding them. Colonial practices were all about describing and categorizing, and ultimately about controlling and exploiting. It was in service of empire, and anthropology facilitated that work.
It shouldn’t any more, and it doesn’t have to now.
You don’t need to compile a typology of students or staff. You need to engage with them.
I want to draw a line between quiz-type testing that offers people an opportunity to profile themselves and the problems inherent in reducing knowledge work to a list of skills. And I also want to draw attention to the risks to which we expose our students and staff if we use these “profiles” to predict, limit, or otherwise determine what might be possible for them in the future.
Lanclos suggests that we need to go beyond the judgments inherent in metaphors and deficit models, and instead start with context:
We need to start with people’s practices, and recognize their practices as effective for them in certain contexts.
And then ask them questions. Ask them what they want to do. Don’t give them categories; labels are barriers. Who they are isn’t what they can do.
Please, let’s not profile people.
When you are asking your students and staff questions, perhaps it should not be in a survey. When you are trying to figure out how to help people, why not assume that the resources you provide should be available to all, not just those with “identifiable need”?
The reason deficit models persist is not a pedagogical one, it’s a political one.
She closes with the remark:
When we ask students questions, it shouldn’t be in a survey.
This reminds me of coaching and the fluidity of the conversation. It also touches on my concern with emotional intelligence as a conversational tool.
There is also a recording of this presentation:
US investigators recently tracked down a suspect in a 40-year-old murder case after uploading DNA to a genealogy website. Jordan Erica Webber weighs up the pros of finding ancestors against the cons of selling privacy.
Maggie Koerth-Baker discusses changes in the data landscape, arguing that we need to stop seeing privacy as a ‘personal’ thing:
Experts say these examples show that we need to think about online privacy less as a personal issue and more as a systemic one. Our digital commons is set up to encourage companies and governments to violate your privacy. If you live in a swamp and an alligator attacks you, do you blame yourself for being a slow swimmer? Or do you blame the swamp for forcing you to hang out with alligators?
The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming.
While Google says “digital wellness” is now part of the company’s ethos, not once during the Google I/O keynote did anyone mention “privacy.”