Tag: Testing
So what are you going to do about it? Lock the child up in a box? Or in other words, regularly test them to the point they begin to hate and fear school because of all the stress it causes them? Testing is a bit like uprooting a plant every other week to check on how much it has grown. The quantum indeterminacy of education is that you can either regularly test children, or you can stand back and let them grow. We need to think outside the box. Assessment can be done without stress, because there are many alternatives to testing – and there’s more than one way to skin a cat.
Pisa, Whitehall and Ofsted are obsessed with maths not because algebra is the key to happiness, or geometry to great riches, but because it is easy to score globally. Bereft of an ideal of a good education, government, and especially central government, likes anything that yields mass data. It holds the key to control, to the regime of rewards and penalties that underpins modern administration and its funding.
At the national level, however, the story is different. What NAPLAN is good for, and indeed what it was originally designed for, is providing a national snapshot of student ability and enabling comparisons between different groups (for example, students with a language background other than English and students from English-speaking backgrounds) at a national level.
This is important data to have. It tells us where support and resources are particularly needed. But we could collect the data we need by using a rigorous sampling method, where a smaller number of children (a sample) is tested, rather than having every student in every school sit tests every few years. This is a move that would be far more cost effective, both financially and in terms of the other costs to our education system.
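The sampling argument can be made concrete with a toy simulation. All the numbers below (population size, sample size, score scale) are invented for illustration, not NAPLAN's actual parameters:

```python
import random
import statistics

random.seed(42)

# Illustrative population: 300,000 students, each with a "true" score
# on a roughly 500-point scale (numbers are made up for this sketch).
population = [random.gauss(500, 70) for _ in range(300_000)]

# Census approach: test everyone.
census_mean = statistics.mean(population)

# Sample approach: test a random sample of 5,000 students.
sample = random.sample(population, 5_000)
sample_mean = statistics.mean(sample)
sample_sd = statistics.stdev(sample)

# 95% confidence interval for the national mean, from the sample alone.
margin = 1.96 * sample_sd / (5_000 ** 0.5)

print(f"census mean: {census_mean:.1f}")
print(f"sample mean: {sample_mean:.1f} ± {margin:.1f}")
```

Testing under two per cent of students recovers the national picture to within a couple of points; a stratified sample (by state, by language background) would tighten the group-level estimates further.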
We should stop treating tests like moral agents that can define the future. I agree with David Rutkowski’s point about agency: perhaps we’d be well advised to think about what is enabled, and what we no longer have to do, when we cede our agency to tests, and to ask whether we really breathe a sigh of relief that it is a responsibility we can explain away. The desire for a testing regime is a symptom, not a cause, and it seems to me that if you better understand the individual and collective desires at work, you may understand why reconciliACTION and social justice remain distractions.
📓 Measuring NAPLAN Performance
The tests provide parents and schools with an understanding of how individual students are performing at the time of tests.
This is such a hard thing to communicate. It is easy to read ‘performing’ as some sort of exact science, such as Johnny got 33 out of 40 in the recent test on whatever. The problem, though, is that NAPLAN is not ‘exact’, either at the time or as a measurement of growth. This is highlighted by Richard Olsen in his look at its limitations:
In practice, NAPLAN relative growth is so unreliable that I cannot believe it is a suitable measure, and I would personally discourage anyone from using it. The narrow range of questions that define average growth, compounded by the error inherent in NAPLAN’s testing method, makes it an extremely unreliable measure.
I think that Margaret Wu captures this best when she explains:
In summary, we would say that a NAPLAN test only provides an indicative level of the performance of a student: whether the student is struggling, on track, or performing above average. The NAPLAN tests do not provide fine grading of students by their performance levels because of the large uncertainties associated with the ability measures.
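Wu’s point about ‘large uncertainties’ can be illustrated with a small simulation. The error size below is invented for the sketch (it is not NAPLAN’s published standard error), but it shows what happens when measurement error is comparable to the gap between two students:

```python
import random

random.seed(0)

# Two students whose true abilities differ by 20 points on a 500-ish scale.
true_a, true_b = 490, 510
error_sd = 25  # illustrative measurement error, comparable to the gap

# Simulate 10,000 test sittings for each student and count how often
# the weaker student outscores the stronger one.
trials = 10_000
reversals = sum(
    random.gauss(true_a, error_sd) > random.gauss(true_b, error_sd)
    for _ in range(trials)
)

print(f"rank reversed in {100 * reversals / trials:.0f}% of sittings")
```

With these numbers the weaker student outscores the stronger one in roughly a quarter to a third of sittings, which is exactly why a single test can only place a student in a broad band, not produce a fine-grained ranking.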
I’d ask anyone who is criticizing PBL in the classroom to talk to the teachers and students who have had this opportunity. I’d ask them to look at what students are creating, making, and building during this time. I’d ask them to talk to the parents about their students’ attitude towards learning.
I give two answers to the question above:
Try it for a day and see what happens. Start small and build from there.
Teach through the project, instead of using the project as an “end-of-unit” assessment that takes more time than a multiple choice test. When kids learn during the project, the time constraint goes away.
Cameron Malcher speaks with Marten Koomen about his research into the process by which large-scale tests like PISA and NAPLAN affect school management and curriculum.

Marten Koomen frames the conversation around a discussion of collectivism, neoliberalism and skepticism. For collectivists, school is the responsibility of the state, whereas neoliberals consider it another product to be consumed; skepticism, meanwhile, without effective governance, ends in tragedy. Our current climate is very much a response to neoliberalism. However:
We are all part collectivist, part individualist-neoliberal and part skeptic, so to identify with one corner is disingenuous.
The key question that Koomen tries to address is: How did Victoria go from a state that was a leader in content knowledge and democratic values to the launch of a content-free platform driven by the terror of performativity? As he explains,
They had this idea of the net, but no idea of the content … a complete infatuation with the technology.
Discussing PISA, Koomen provides some background to computer-based testing and the ‘Koomen Model’. The model involved providing schools with standardized devices to ensure the consistency of data; it ultimately failed under pressure.
In part, Koomen’s model tells us something about the data and what it can and cannot tell us. There are groups out there that want the outcomes without the content or context. Koomen returns again and again to the difference between entity realism and constructivism:
Entity Realism = things are real
Constructivism = things are agreed upon
Realists ignore context, as it is not mapped back to a central curriculum. This also allows for the insult of the human spirit through the comparison of outcomes, ratios and market results. For example, NAPLAN uses Item Response Theory, a format that does not allow any direct recall of or reference to learning and development. This leads to the situation where a student can ‘improve’ yet remain on the same score. Margaret Wu explains this in her chapter in National Testing in Schools, while Sam Sellar, Greg Thompson and David Rutkowski elaborate on it in The Global Education Race.
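Item Response Theory can be hard to picture. A minimal sketch of the one-parameter (Rasch) model, the textbook starting point for IRT rather than NAPLAN’s exact operational model, shows how ability and item difficulty sit on one scale, and why the same raw score on different test forms implies different scale scores:

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: probability of answering one item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def expected_raw_score(ability: float, items: list[float]) -> float:
    """Expected number-correct for a student on a set of items."""
    return sum(p_correct(ability, d) for d in items)

# Two hypothetical test forms: same length, different difficulty.
easy_form = [-1.0, -0.5, 0.0]
hard_form = [0.5, 1.0, 1.5]

# The same ability yields different expected raw scores on each form,
# so a student can answer more items correctly on an easier form while
# their estimated ability (the scale score) stays the same.
theta = 0.5
print(expected_raw_score(theta, easy_form))  # higher
print(expected_raw_score(theta, hard_form))  # lower
```

This is the sense in which the scale score reports an abstract ability estimate rather than a direct record of what was learned: the mapping from raw answers to the scale depends on the items, not only on the student.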
For Koomen our decline in these scales comes back to a focus on the market:
Neoliberalism considers content as self-evident, real, axiomatic, socially constructed and marketable, in a way that supports the status quo.
This leads to conversations with students about points on a scale, rather than aspects of context and development. For example, it is easier in the media to talk about a change in ratios or job rates than about the collapse of the car industry and its impact on the state. This allows for the rise of education conferences based around data with little reference to the local context.
The answer Koomen closes with is to work together through associations to make systemic change.
Main Features: ACARA’s Acting Director, Curriculum, discusses the new literacy and numeracy progressions, their relationship to curriculum, and intended applications in teaching and assessment practices; Annabel Astbury outlines the ABC’s new education initiative.
Regular Features: Off Campus, …

00:00.000 Opening Credits
01:31.214 Intro
01:55.324 Off Campus – Dan Haesler
12:48.141 Education in the News
20:44.068 ABC Education – Annabel Astbury
28:50.180 Feature Introduction
30:52.440 Interview – Hilary Dixon
59:28.218 Announcements
1:01:52.482 Quote & Sign Off
In this edition of the TER Podcast, Cameron Malcher interviews Hilary Dixon about the new Literacy and Numeracy Progressions released earlier this year by ACARA. As well as discussing what the progressions are, the interview provides critical context on their creation and where they might sit within the wider debate around NAPLAN and back-to-basics curriculum.
Ben Williamson provides a (very partial) overview of some of the key features of SSES, raising a few headline points:
SSES extends international-large scale assessment beyond cognitive skills to the measurement of personality and social-emotional skills
SSES will deliver a direct assessment instrument modelled on psychological personality tests
SSES enacts a psychological five-factor model of personality traits for the assessment of students, adopting a psychometric realist assumption that personality test data capture the whole range of cross-cultural human behaviour and emotions in discrete quantifiable categories
SSES extends the reach of datafication of education beyond school walls into the surveillance of home contexts and family life, treating them as a ‘home learning environment’ to be assessed on how it enables or impedes students’ development of valuable socio-emotional skills
SSES normalizes computer-based assessment in schools, with students required to produce direct survey data while also being measured through indirect assessments provided by teachers, parents and leaders
SSES produces increasingly fine-grained, detailed data on students’ behaviours and activities at school and at home that can be used for targeted intervention based on analyses performed at a distance by an international contractor
SSES involves linking data across different datasets, with direct assessment data, indirect assessments, school administrative data, and process metadata generated during assessment as multiple sources for both large-scale macro-analysis and fine-grained micro-analytics, with potential for linking data from other OECD assessments such as PISA
SSES uses digital signals such as response times and keystrokes, captured as process metadata in software log files, as sources for stealth assessment based on assumptions about their correlation with specific social-emotional skills
SSES promotes a therapeutic role for education systems and schools, by identifying ‘success’ factors in SELS provision and encouraging policymakers to develop targeted intervention where such success factors are not evident
SSES treats students’ personalities as malleable, and social-emotional skills as learnable, seeking to produce policy-relevant psychometric knowledge for policymakers to design interventions to target student personalities
SSES exemplifies how policy-relevant knowledge is produced by networks of influential international organizations, connected discursively and organizationally to think tanks, government departments and outsourced contractors
SSES represents a psycho-economic hybridization of psychological and psychometric concepts and personality measurement practices with economic logics relating to the management of labour market behaviours and human resources
The idea that the solution to the nefarious effects of constant high-stakes measurement is to bring in more high-stakes measurement – albeit of a different thing – is palpably insane. It is further evidence, if we needed any, that we have surrendered our profession to a cultish scientism whose mantra is measurement.