At the national level, however, the story is different. What NAPLAN is good for, and indeed what it was originally designed for, is to provide a national snapshot of student ability and to conduct comparisons between different groups (for example, students with a language background other than English and students from English-speaking backgrounds) at a national level.
This is important data to have. It tells us where support and resources are particularly needed. But we could collect the data we need by using a rigorous sampling method, where a smaller number of children (a sample) are tested rather than having every student in every school sit tests every few years. This is a move that would be far more cost effective, both financially and in terms of other costs to our education system.
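A minimal sketch of the sampling logic described above: rather than testing a whole population, test a random sample and estimate the national mean with a confidence interval. The population, sample size, and score distribution here are invented purely for illustration.

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical population of student scores (invented for illustration).
population = [random.gauss(500, 100) for _ in range(100_000)]

# Instead of testing everyone, test a random sample.
sample = random.sample(population, 2_000)

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))

# 95% confidence interval for the estimated national mean.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"Estimated national mean: {mean:.1f} (95% CI {low:.1f} to {high:.1f})")
```

The point is that a sample of 2,000 already pins down the national average to within a few score points, which is plenty for the "snapshot and group comparison" purpose, at a fraction of the cost of universal testing.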
The history of anthropology tells us that categorizing people is no substitute for understanding them. Colonial practices were all about describing, categorizing and, ultimately, controlling and exploiting. That work was in service of empire, and anthropology facilitated it.
It shouldn’t any more, and it doesn’t have to now.
You don’t need to compile a typology of students or staff. You need to engage with them.
I want to draw a line from quiz-type testing that offers people an opportunity to profile themselves to the problems inherent in reducing knowledge work to a list of skills. I also want to draw attention to the risks to which we expose our students and staff if we use these “profiles” to predict, limit, or otherwise determine what might be possible for them in the future.
Lanclos suggests that we need to go beyond the inherent judgments contained within metaphors and deficit models, and instead start with context:
We need to start with people’s practices, and recognize their practice as effective for them in certain contexts.
And then ask them questions. Ask them what they want to do. Don’t give them categories; labels are barriers. Who they are isn’t what they can do.
Please, let’s not profile people.
When you are asking your students and staff questions, perhaps it should not be in a survey. When you are trying to figure out how to help people, why not assume that the resources you provide should be seen as available to all, not just the ones with “identifiable need”?
The reason deficit models persist is not pedagogical; it’s political.
She closes with the remark:
When we ask students questions, it shouldn’t be in a survey.
This reminds me of coaching and the fluidity of the conversation. It also touches on my concern with emotional intelligence as a conversational tool.
There is also a recording of this presentation:
US investigators recently tracked down the suspect in a 40-year-old murder case after uploading DNA to a genealogy website. Jordan Erica Webber weighs the pros of finding ancestors against the cons of selling privacy.
Maggie Koerth-Baker discusses changes in data, arguing that we need to stop seeing privacy as a ‘personal’ thing:
Experts say these examples show that we need to think about online privacy less as a personal issue and more as a systemic one. Our digital commons is set up to encourage companies and governments to violate your privacy. If you live in a swamp and an alligator attacks you, do you blame yourself for being a slow swimmer? Or do you blame the swamp for forcing you to hang out with alligators?
The shift of data ownership from the private to the public sector may well succeed in reducing the economic power of Silicon Valley, but what it would also do is reinforce and indeed institutionalize Silicon Valley’s computationalist ideology, with its foundational, Taylorist belief that, at a personal and collective level, humanity can and should be optimized through better programming.
While Google says “digital wellness” is now part of the company’s ethos, not once during the Google I/O keynote did anyone mention “privacy.”
The first question is, are the algorithms that we deploy going to improve the human processes that they are replacing? Far too often we have algorithms that are thrown in with the assumptions that they’re going to work perfectly, because after all they’re algorithms, but they actually end up working much worse than the system that they’re replacing. For example in Australia they implemented an algorithm that sent a bunch of ridiculously threatening letters to people saying that they had defrauded the Australian Government. That’s a great example where they actually just never tested it to make sure it worked.
The second question is to ask, for whom is the algorithm failing? We need to be asking, “Does it fail more often for women than for men? Does it fail more often for minorities than for whites? Does it fail more often for old people than for young people?” Every single class should get a question and an answer. The big example I have for this one is the facial recognition software that the MIT Media Lab found worked much better for white men than black women. That is a no-brainer test that every single facial recognition software company should have done and it’s embarrassing that they didn’t do it.
The third category of question is simply, is this working for society? Are we tracking the mistakes of the system? Are we inputting these mistakes back into the algorithm so that it’ll work better? Is it causing some other third unintended consequence? Is it destroying democracy? Is it making people worse off?
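The second question above, "for whom is the algorithm failing?", amounts to disaggregating an error rate by group rather than reporting a single overall number. A minimal sketch of that audit, with invented records and invented group names purely for illustration:

```python
from collections import defaultdict

# Invented audit records: (group, prediction_was_correct).
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Report the error rate per group, not just overall.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%}")
```

Here the overall error rate (37.5%) hides the fact that one group fails at twice the rate of the other, which is exactly the kind of "no-brainer test" the quoted passage argues every vendor should run.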
ClassDojo has been dealing with privacy concerns since its inception, and it has well-rehearsed responses. Its reply to The Times was: ‘No part of our mission requires the collection of sensitive information, so we don’t collect any. … We don’t ask for or receive any other information [such as] gender, no email, no phone number, no home address.’ But this possibly misses the point. The ‘sensitive information’ contained in ClassDojo is the behavioural record built up from teachers tapping reward points into the app.
Williamson does, however, close with a warning that, with GDPR coming in, ‘data danger’ is quickly becoming its own genre:
So on a general level, the case for evidence-based practice has a definite value. But let’s not over-extend this general appeal, because we also have plenty of experience of seeing good research turn into zealous advocacy with dubious intent and consequence. The current over-extensions of the empirical appeal have led paradigmatic warriors to push the authority of their work well beyond its actual capacity to inform educational practice. Here, let me name two forms of this over-extension.
Simply ask ‘effect on what?’ and you have a clear idea of just how limited such meta-analyses actually are.
With regard to RCTs, he states:
By definition, RCTs cannot tell us what the effect of an innovation will be simply because that innovation has to already be in place to do an RCT at all. And to be firm on the methodology, we don’t need just one RCT per innovation, but several – so that meta-analyses can be conducted based on replication studies.
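To make concrete what "meta-analyses based on replication studies" involves, here is a minimal sketch of fixed-effect (inverse-variance) pooling of several trials' effect sizes. The effect sizes and variances are invented for illustration; real meta-analysis also involves heterogeneity checks this sketch omits.

```python
import math

# Invented (effect_size, variance) pairs from hypothetical replication RCTs.
studies = [(0.30, 0.02), (0.45, 0.05), (0.20, 0.01)]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise studies count for more.
weights = [1 / v for _, v in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} ± {1.96 * se:.2f}")
```

Note that the pooled estimate sits closest to the most precise study, which is the whole point of the weighting, and also a reminder that a meta-analytic "effect" is only as meaningful as the outcomes the underlying trials measured.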
Another issue is that research shows what has happened, not what will happen. This is not to say no to evidence, but rather a call to be sensible about what we think we can learn from it.
What it can do is provide a solid basis of knowledge for teachers to know and use in their own professional judgements about what is the best thing to do with their students on any given day. It might help convince schools and teachers to give up on historical practices and debates we are pretty confident won’t work. But what will work depends entirely on the innovation, professional judgement and, as Paul Brock once put it, nous of all educators.
This is not easy. This is not normal. This is a bit challenging as I’m forcing myself to redirect the streams that the social networks have made super simple for me (and others) to use over time. This is not easy as general users are conditioned to the sorts of signals, environments, and features that are rolled out over time. What I’m trying to do here will not make sense to most people who I interact with. This will confuse and possibly anger some of my followers. This may also cause many users to unfollow me, or (better yet) the algorithms on the social networks will just filter me out of the discussions.
In reality, Facebook is designed to allow its partners to violate its users’ privacy, so the fact that Cambridge Analytica got caught with its hand in 80 million of our cookie-jars is an indication of how incompetent they were (they were the easiest to detect, in part because of their public boasting about their wrongdoing), and that means there are much worse scammers who are using Facebook to steal our data in ways that makes CA look like the petty grifters they are.
There is probably little doubt that the analysis of data will play an increasing role in teacher recruitment. I am sure that among the companies involved in the development of such platforms there are many good people with solid beliefs and values, individuals who will want to see these systems used in conjunction with personal connections, interviews, and relationships. In other words, in very humane ways, using the algorithm as a guide, not a decision-maker, and this is where biometric data may prove initially attractive. The question, of course, with all “data-driven” initiatives lies not so much with the intent or even the veracity of the data collected, but with how it is used. Data can too easily become the decision-making tool of lazy convenience and ends up being used in ways never intended. When I consider my teaching colleagues, I recoil at the prospect of viewing them as data points. Someone needs to shout stop.
We should stop treating tests like moral agents that can define the future. I agree with David Rutkowski’s point about agency: perhaps we’d be well advised to think about what is enabled, and what we are excused from doing, when we cede our agency to tests, and to ask whether we really breathe a sigh of relief that our responsibility can be explained away. The desire for a testing regime is a symptom, not a cause, and it seems to me that if you better understand those individual and collective desires at work, you may understand why it is that reconciliACTION and social justice remain distractable.
Google is already buttoning up its data policies in anticipation of Europe’s General Data Protection Regulation, or GDPR, which kicks in next month. The company restricted the number of third-party companies that can serve and track ads through its advertising exchange and on YouTube. Google is also requiring publishers to get user consent for targeted ads to comply with GDPR.
Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.
Have You Heard discusses the rise of the “data boyz,” the quantitative methodologists who increasingly determine what counts–and what doesn’t–in education research. Special guest: UC Berkeley economist Jesse Rothstein.
Some interesting points are made about the Mafia pact that silences critique.
If we want a full and comprehensive debate about the role of data in our lives, we need to first appreciate that the analysis and use of our data is not restricted to the types of figures that we have been reading about in these recent stories – it is deeply embedded in the structures in which we live.
Mining tweets? Just provide your search terms, enter a couple commands at the terminal, and voilà, … instant dashboard!
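The "couple of commands" style of dashboard boils down to counting term frequencies in a pile of tweets. A minimal sketch of that counting step; the tweets below are invented, and in practice the list would come from a file or API export:

```python
import re
from collections import Counter

# Hypothetical tweet export (invented lines for illustration).
tweets = [
    "Loving the new tools #edtech #data",
    "Big questions about #data and privacy",
    "More on #edtech today",
]

# Count hashtag frequencies -- the kind of instant summary a short
# terminal pipeline produces.
tags = Counter(tag.lower() for t in tweets for tag in re.findall(r"#\w+", t))
for tag, count in tags.most_common():
    print(f"{count:>3} {tag}")
```

That the whole "dashboard" fits in a dozen lines is, of course, part of the point being made here about how frictionless this kind of mining has become.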
In simple terms, datafication can be said to refer to ways of seeing, understanding and engaging with the world through digital data. This definition draws attention to how data makes things visible, knowable, and explainable, and thus amenable to some form of action or intervention. However, to be a bit more specific, there are at least ten ways of defining datafication.
This is a good introduction to his book Big Data in Education.
Facebook has been designed to be an information-gathering engine in order to more effectively sell personalized advertising. Its algorithm also attempts to deeply understand your interests in order to “optimize for engagement”: keep you using the site, and therefore viewing those personalized ads, for as long as possible. Its users access Facebook for 50 minutes a day.
In order to gather the most information it can, Facebook has been engineered to be the world’s most efficient peer pressure engine. Users on the platform are constantly being persuaded to stay; those who try and leave report being relentlessly emailed with personalized, emotional content to try and get them to come back.
Tantek Çelik explains this in the IndieWeb Chat:
The big reveal (IMO) of the FB/CA disclosures is that nothing you post to FB is actually “private”, in practice it is silently shared with random apps (that you happen to use your FB ID to sign into), which then are sharing it with other orgs via acquisition or just outright selling your data.
It is time to support parents and teachers to ask critical questions about ClassDojo. As the owners and controllers of a vast global database of children’s behavioural information and a global social media site for schools, its entrepreneurial founders need to be more transparent about what they intend to do with that data, how they intend to generate income from it, and how they want ClassDojo to play a part in interactions between children.