During our research, we also found ourselves reflecting on the unique position of the school as an institution tasked not only with educating its students but also with managing their personal data. Couldn’t one then argue that, since the school is a microcosm of the wider society, the school’s own data protection regime could be explained to children as a deliberate pedagogical strategy? Rather than something quietly managed by the GDPR compliance officer and conveyed as a matter of administrative necessity to parents, the school’s approach to data protection could be explained to students so they could learn about the management of data that is important to them (their grades, attendance, special needs, mental health, biometrics).
The ACCC has recently released its final Digital Platforms Inquiry report, which raises important points that I believe should have led the recent debate when the Victorian education minister, James Merlino, announced a ban on smartphones in Victorian schools.
This report, coupled with a major project on human rights and technology by the Australian Human Rights Commission and the Artificial Intelligence: Australia’s Ethics Framework (A Discussion Paper) by the CSIRO, provides a collective warning about the vast amounts of personal data being collected and the implications of that collection.
A key question to building any software in the modern age is: “In the wrong hands, who could this harm?”
Decades ago, software seemed harmless. In 2019, when facial recognition is used to deport refugees and data provided by online services has been used to jail journalists, understanding who you’re building for, and who your software could harm, is vital. These are ideas that need to be incorporated not just into the strategies of our companies and the design processes of our product managers, but into the daily development processes of our engineers. These are questions that need to be asked over and over again.
If you want to start at the beginning and understand why even a date is such a hotbed of debate, misunderstanding, inconsistency and irregularity, then go read D is for Dangerous. If you consider yourself to be someone who is part of the data industry, you might find this a little light, so move on. But first: if you think you know that Samuel Morse died on 04/02/72, you might want to dip in and check your facts.
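The date ambiguity is easy to demonstrate yourself. As a quick illustrative sketch (my own example, not from the original post), the same string parses to two different dates depending on whether you assume a US month-first or a day-first convention, and the two-digit year adds a third trap:

```python
from datetime import datetime

s = "04/02/72"

# US convention: month/day/year -> 2 April
us = datetime.strptime(s, "%m/%d/%y")

# Day-first convention (much of the world): 4 February
day_first = datetime.strptime(s, "%d/%m/%y")

print(us.date())         # 1972-04-02
print(day_first.date())  # 1972-02-04
```

Note that Python’s `%y` directive resolves 72 to 1972, while Morse actually died on 2 April 1872 – so the two-digit year is wrong in every reading, which is rather the point.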
A is for Articulate provides a little history of how we came to understand the building blocks of the world we live in. Data is not the central theme but is a necessary part of the series because it connects to and provides some context for part 4.
Since 2006, the world has suffered (and I do mean ‘suffered’) through a series of analogies as people have attempted to describe data as the ‘new ___’. T is for Terminating Analogies kills off the oil, soil, water and music analogies. Data is not the new anything … it just IS, which I get to in …
Part 4 – A is for Another Way Of Looking At Data – offers a new way of thinking about data (no spoilers), but it does start to explain why Data ‘Lakes’, ‘Warehouses’, ‘Mountains’ and ‘Farms’ are probably the wrong way of approaching the challenge, let alone the thinking!
Imagine if every single person on the planet had their own dashboard that allowed them to indicate their needs, desires and wants, and flag them so that anyone who felt they could satisfy those needs, desires and wants could respond with an offer: human-readable terms of the contract, pricing, expected timelines, etc. (Source)
This reminds me of James Bridle’s discussion of metaphors in New Dark Age. This is also a topic that Kin Lane has been exploring lately, reflecting on surplus, ownership, the emotional trap and what goes unseen.
I wanted to take a moment to understand some of the surplus data that is generated from me just reading my RSS feeds for about an hour in my Feedly web application:
- Subscribe To – Every time I subscribe to an RSS feed, this information is added to my profile for use later.
- How Long – How much time I put into cultivating feeds is a default part of surplus data being generated.
- Click and Read – Everything I click on and read adds a layer of behavioral surplus to be extracted.
- Tag and Organize – Everything I tag and organize shares my approach to taxonomy and understanding.
- Share With Others – The tags I turn into feeds and share continue painting a picture of what matters.
When you take these behavioral data points and multiply them by a couple thousand feeds, and hundreds of thousands of individual blog posts, GitHub updates, and Tweets that I subscribe to via my Feedly, it can paint a pretty relevant, real-time portrait of what Kin Lane is thinking about.
Putting a price-tag on your data to solve privacy abuses is like giving a bullied kid extra lunch-money so that he won’t go hungry. Until you do something about the bullies, you can shower the kid in lunch money and all it will do is enrich the bullies.
Leaders need to take a hard look at what gets in the way of promoting women in their organizations. Clearly, the unconscious bias that women don’t belong in senior level positions plays a big role. It’s imperative that organizations change the way they make hiring and promotion decisions and ensure that eligible women are given serious consideration. Those making those decisions need to pause and ask, “Are we succumbing to unconscious bias? Are we automatically giving the nod to a man when there’s an equally competent woman?” And, as our data on confidence shows, there’s a need for organizations to give more encouragement to women. Leaders can assure them of their competence and encourage them to seek promotions earlier in their careers.
This relates to the argument made by Caroline Criado Perez, author of the book Invisible Women: Data Bias in a World Designed for Men, that women are often absent from the data.
As organisations gather huge stockpiles of data, they seem to grow increasingly tightfisted with their data and insights. They’ve found a gold mine – why share? The problem with this line of reasoning is that it quickly dead-ends in a world where the only conceivable use of data is as zero-sum competitive advantage: “I know something you don’t.”
Not really. But there are ways to cover your tracks online and skirt urban surveillance.
Lee Tien advocates a certain degree of self-protection. He views these measures as a kind of digital hygiene—the “equivalent of washing your hands when you go to the bathroom,” or getting a flu shot. But he stresses that they’re only a partial prophylactic: “Nothing that will make you immune from the problem.”
This is something that I tried to capture in my post on being.
We are losing the spaces we share across socioeconomic strata. Slowly, but surely, we are building the means for an everyday urbanite to exist solely in their physical and digital class lanes. It used to be the rich, and then everyone else. Now in every realm of daily consumer life, we are able to efficiently separate ourselves into a publicly visible delineation of who belongs where.
We lost the lunch line. We lost the coffee cart. We’re losing the commute. Innovation has bestowed upon us an entire homescreen worth of transportation options that allow us to congest the roads and never brush elbows with those taking the subway. Meanwhile, the crumbling of the subways isn’t felt by an ever-growing number of the somewhat well-to-do.
Classrooms are just too complicated for research ever to tell teachers what to do. Teachers need to know about research, to be sure, so that they can make smarter decisions about where to invest their time. But teachers and school leaders need to become critical consumers of research – using research evidence where it is available and relevant, but also recognising that there are many things teachers need to make decisions about where there is no research evidence, and realising that sometimes the research that is available may not be applicable in a particular context.
What this means, I think, is that those who call for “evidence-based education” are missing the point. Evidence is important, of course, but what is more important is that we need to build teacher expertise and professionalism so that teachers can make better judgments about when, and how, to use research.
If we accept that every teacher needs to improve, not because they are not good enough, but because they can be even better, professional development becomes welcome – it is just the way we become better.
I am, however, reminded here of Jon Andrews’ adage, “I’m interested in development, not improvement.”
Learning analytics give us data on student behaviors, but they do not provide explanations for these behaviors, so they do not tell the whole story of the human being, what motivates them, what barriers/obstacles stand in their way. They do not get to the root of lack of engagement.
Learning analytics focus on observable and quantifiable behaviors that are easy for the LMS to collect: when someone logged in, how long they stayed, which tool they used. If analytics tell me someone watched the same video four times, we don’t know for sure whether they literally sat and watched it four times from beginning to end (though some systems do record how many minutes were watched), or whether they tried again because their internet connection was bad, because they were distracted by something happening at home, or because of something else they were doing on their computer at the same time… we just don’t know. We can know that someone has not submitted their last two assignments. But we don’t know why they didn’t submit them, what kind of barriers they were facing, or how to motivate them in future.
Google says it doesn’t use your Gmail to show you ads and promises it “does not sell your personal information, which includes your Gmail and Google Account information,” and does “not share your personal information with advertisers, unless you have asked us to.”
But, for reasons that still aren’t clear, it’s pulling that information out of your Gmail and dumping it into a “Purchases” page most people don’t seem to know exists. Even if it’s not being used for ads, there’s no clear reason why Google would need to track years of purchases and make it hard to delete that information.
Cooperation and regret are noted, but they don’t excuse the conduct. At some point in their lives, the people who helped Mark Zuckerberg, Jack Dorsey, and Travis Kalanick execute their schemes may regret what they’ve done; they may even cooperate in undoing it. But, like the judge said—it won’t excuse the conduct.
Today, each new device we purchase is a conscious decision to share an intimate piece of ourselves with a company whose goals may not align with our own. This exchange represents a fundamental shift in our relationship with technology and the companies that produce it. Adoption is no longer an ephemeral transaction of money for goods. It’s a permanent choice of personal exposure for convenience—and not just while you use the product. If a product fails, or a company folds, or you just stop using it, the data you provided can live on in perpetuity. This new dynamic is the Faustian bargain of a connected life, and it changes the value equation involved in choosing to adopt the next.