A thought-provoking conversation the other day with @YuniSantosa and @MKPolly, in relation to various workshops we will lead in the new year, highlighted the following questions that assessment-capable teachers and learners might ask of ourselves.
Whilst the HSC has been in continuous review for decades, it now needs refurbishment. In doing so, we need to keep the best of what it offers and replace what needs to go with new metrics that offer a far more complete picture of each young adult’s knowledge, understanding, skills, capabilities and dispositions, and how they are applied.
As I have said, what the HSC is and what it needs to be are two very different things.
Greg, this seems to be the wicked problem of our time. It has been interesting to see various universities form connections with schools, such as Templestowe and Swinburne University. The problem is that the status quo still seems to be based on scores and ranking.
Intrigued by the University of Melbourne’s ‘New Metrics’ program. They have a bit of history with exploring new areas of assessment through the ATC21S program (the white paper can be found here); however, I am not sure what really came of that work.
If the questions on your assessment can be Googled AND you are worried about cheating, then you have written a bad assessment.
Bill Ferriter suggests that before you worry about how you are going to assess learning online, you need to address the question of what you are assessing for.
We need to know the level of rigor of the essential standard that we are assessing before we can write a question that will generate reliable information on student mastery.
We need to decide on the kinds of things that students should know and be able to do if they have mastered the essential standard that we are assessing.
We need to write and then deliver a small handful (3-5) of questions for each essential standard that we are assessing.
We need to think through the common misconceptions that we are likely to see in student responses to our questions.
For any constructed response questions or performance assessments, we need to decide together what “mastery” will look like in student responses.
That might include developing exemplars of different levels of student performance or creating shared scoring rubrics.
If the focus is multiple-choice questions, Ferriter uses MasteryConnect, while if it is about demonstrations, he uses Flipgrid. Although there are many other options out there, these work within his context. As he explains:
Your goal is to find tools that:
Have little to no learning curve for you or your students.
Aren’t blocked by your district’s firewall.
Fit into your budget — or the budget of your school.
Ferriter closes with a reflection on how he deals with the threat of students cheating. Firstly, he makes a concerted effort to lower the stakes on his classroom assessments by making them smaller and providing students with the opportunity to repeat where needed. In addition to this, he suggests that if the answer is in fact Google-able then maybe it is actually just poor assessment.
Your piece about cheating reminds me of an experience I had in Year 10 Science when we had an open-book test. I remember Ms. Hé not paying too much attention to our chatter during tests. We would turn and talk with classmates to get the answers. The funny thing was that it did not really make a difference. I cannot remember what grade I got, but I know it was not great. I think it clearly highlighted the lack of care I had for the subject. Cheating made little difference. In hindsight, I wonder if that was in fact her strategy; I am not sure. It was a useful lesson to learn.
If a nation agreed to classrooms consistently developing an environment of Assessment for Learning, where there are open and transparent activities designed for students and teachers to track, feed back and reflect on strengths, weaknesses and gaps in knowledge and skills as part of the learning, then maybe this “AfL record” could be what formed the final record of achievement for a student. This record would have been visible and moderated all along as it developed, with the student, teacher and school agreed on what it reported about the learner.
If we had no exams, and exiting school were centred on students’, teachers’, schools’ and parents’ involvement in a national system of learning progress and transparent dialogue, teachers could return to a focus on learning and progress, and not preparation for the divisive and alien environment of exam silence.
With an eye towards reforming assessment practices, Jon Dron compiles a list of principles associated with assessment:
The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
Assessment without feedback (teacher, peer, machine, self) is judgement, not assessment, and is pointless.
Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
Feedback should only ever relate to what has been done, never the doer.
No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. use of exams to encourage revision, grades as goals, marks for participation, etc.).
Students should be able to choose how, when and on what they are assessed.
Where possible, students should participate in the assessment of themselves and others.
Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
Assessment is a way to show learners that we care about their learning.
He elaborates on these further in regards to credentials and objective quizzes. Dron believes that students should have autonomy when it comes to assessment, and that the best model for this is the creation of a portfolio of evidence.
A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment … It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.
This is a useful provocation in regards to assessment and feedback. It is also interesting to think about in regards to things like open badges.
The question regarding retakes isn’t simply, “Should students get a second chance?” Rather, it is, “How can we use assessments to help students improve?” If we incentivize success on the first assessment by planning enticing enrichment activities and guide students in correcting the learning errors identified on that assessment, we’re much more likely to realize Benjamin Bloom’s dream of having all students, ALL students learn well.
Thomas Guskey responds to concerns raised around offering students the opportunity to retake tests and assessment.
To bring improvement, Bloom stressed formative assessments must be followed by high-quality, corrective instruction designed to remedy whatever learning errors the assessments identified. Unlike reteaching, which typically involves simply repeating the original instruction, correctives present concepts in new ways and engage students in different learning experiences.
He explains that concerns about time and coverage can be overcome by using a corrective process, that this is what real life is like (e.g. a surgeon or a pilot), and points to the everyday reality of mastery and fair grades (e.g. a driver’s licence).
I guess it raises the question, what is the point of feedback, if students are not given the opportunity to act upon it?
Specific and detailed criteria with examples can raise the bar and reduce the likelihood of students handing in C-R-A-P, but they can also limit the format, creativity and extension of learning that could be possible if we left things more open, and provided more choice.
Damian Gordon collates an extensive list of alternative assessment ideas. There has been a lot written about the tools to use in association with online learning, but less in regards to the various assessment practices.
The report proposed reducing more than 170 senior-level courses to a “limited set of rigorous, high-quality, advanced courses”. Vocational and academic subjects would slowly be brought closer so that eventually every course would mix theory and application.
HSC students would also have to complete a single major project, which would allow the development and assessment of skills such as gathering and analysing, as well as so-called general capabilities such as team work and communication.
FYI – the NSW curriculum debate over 'crowded curriculum' versus 'where students are' pertains to two educational assessment models (domain sampling & continuum), described by Reckase here. Masters is a proponent of the continuum model (Rasch). https://t.co/7bwHrTw823
Grading is good at ‘encouraging people’ to do complicated tasks that are often represented by memorization, obedience and linear thinking. If those are our actual goals. If our goals are complex and include things like creativity… we’re looking to support intrinsic motivation. Grades don’t support intrinsic motivation.
I like having personal conversations with students and developing TAGs – Targeted Areas of Growth. What are the one or two criteria a student should focus on when improving their writing? Never try to get an 8-year-old writer to address six different indicators of quality at once. I don’t think adult writers should undertake such an endeavor.
For older students in primary schools and secondary schools, marks for each subject will be rounded off and presented as a whole number, without decimal points – to reduce the focus on academic scores. Parents will continue to receive information about their child’s progress in school during parent-teacher meetings.
Is this that different from Australia? I find that there is a lot of confusion about what schools do and are required to do when it comes to assessment and reporting. This is something discussed in Episode 139 of the TER Podcast.
This experience reminded me that our young people learn this addiction to grades from us, the adults. They don’t desire a mark or a grade for the mud pie they make and proudly display when they are 3. They don’t want to be given a piece of paper with an A on it when they learn to ride a bike. We make this unnatural framework for their learning, and often all it does is create anxiety, perfectionism, conflict, competition and, worst of all, not great learners.
One of the most powerful elements of the MTC design to date is the input they received from colleges in advance of launching the initiative. In discussion with directors of admissions and college presidents, Scott and his team found a receptive audience “if you can give us something that we can initially scan in two minutes”. It is also more than serendipitous that this effort was launched the same year that dozens of colleges and universities signed on to the “Turning the Tide” manifesto that refocuses college admissions on depth, interest, and passion, and away from multiple advanced placement courses, grade point average, and shallow community service experiences.
I think that it is something that Templestowe College has touched on in the development of alternative pathways to higher education. There is also a PYP primary school near me that has mapped out the various learnings and marks them off; I don’t see that as being any different.
What is “competency”? Who decides? How is it different from current assessment decisions? (Is it?)
According to Will Richardson if the focus of ‘mastery’ is about better teaching then we are still missing the point.
The other thing to consider is the place of ‘grades’ in US schools. How prevalent are ‘grades’ in Australia? I am not against mastery or any such intervention; I am just mindful of it being seen as the solution.
Art tells us that educational assessment simply produces symbols that are at best a pale reflection of a preconceived reality. These symbols can be distorted and exploited, until one day their utility will diminish, and a new dawn will emerge.
Being in a role that supports the implementation of biannual reporting, it is an intriguing question. What I find most interesting is how little schools are actually mandated to do. Even though they need to provide judgements (for some things) twice a year and feedback to parents twice a year (which can be in person), it sometimes feels as if we have bought into some myth that we must provide written reports and that parents want them. Even worse, everyone has a belief as to how they must look.
It has been good to see some of the schools that I have spoken to really strip back some elements, especially in regards to specialists. It always amazes me the amount of time spent by a teacher who would potentially see the children for an hour a week.
It will be interesting to see if Gonski 2.0 brings any changes, but I guess that is your point about solutions being pushed on schools. I also look forward to reading ACER’s research into the area and the general guidelines that they put forward.