I propose that we think of assessment as occurring on two dimensions. The first dimension (let’s set this on a horizontal continuum) is the degree of evaluation in which we engage. At the far end of this continuum (we’ll place it on the right), we are highly evaluative, desiring scores and measures that quantify outcomes in a fairly precise way. Here, we judge work against clearly defined criteria that we apply to see just how close to the mark a student gets. Such evaluation can produce ranks and comparisons. At the other end of this continuum (we’ll place it on the left), we might seek to understand students where they are, making sense of their actions and responding through our grounded interpretation. Here, rather than coming with predetermined criteria, we open ourselves to the possibilities and variations in both learning styles and outcomes that a close examination of our students’ learning might provide.
“With this map of the terrain in hand, we can begin to place our various assessment practices in the appropriate quadrant.”
The second dimension (let’s set this on a vertical continuum) is the extent to which our assessments are integrated into our instruction and part of the ongoing learning of the classroom. At one end (we’ll place it at the top) we have assessment that is highly embedded in our teaching and students’ learning. That means that we don’t stop or pause our instruction in order to assess but instead embed it as a regular part of our practice. At the other end of the continuum (placed at the bottom) we have assessment that is set apart from instruction and student learning. Here, we declare a formal end to our instruction and move into a deliberate assessment phase that we hope will reveal something about students’ learning. A basic graph of these two dimensions produces four quadrants that we might use to map the terrain of assessment (see Figure 1).
A thought-provoking conversation the other day with @YuniSantosa and @MKPolly, in relation to various workshops we will lead in the new year, highlighted the following questions that assessment-capable teachers and learners might ask themselves:
I am intrigued by the University of Melbourne’s ‘New Metrics’ program. They have a bit of history with exploring new areas for assessment through the ATC21S program (the whitepaper can be found here); however, I am not sure what really came of that work.
If the questions on your assessment can be Googled AND you are worried about cheating, then you have written a bad assessment.
- We need to know the level of rigor of the essential standard that we are assessing before we can write a question that will generate reliable information on student mastery.
- We need to decide on the kinds of things that students should know and be able to do if they have mastered the essential standard that we are assessing.
- We need to write and then deliver a small handful (3-5) of questions for each essential standard that we are assessing.
- We need to think through the common misconceptions that we are likely to see in student responses to our questions.
- For any constructed response questions or performance assessments, we need to decide together what “mastery” will look like in student responses.
- That might include developing exemplars of different levels of student performance or creating shared scoring rubrics.
If the focus is multiple-choice questions, Ferriter uses MasteryConnect, while if it is about demonstrations, he uses Flipgrid. Although there are many others, these work within his context. As he explains:
Your goal is to find tools that:
- Have little to no learning curve for you or your students.
- Aren’t blocked by your district’s firewall.
- Fit into your budget — or the budget of your school.
Ferriter closes with a reflection on how he deals with the threat of students cheating. Firstly, he makes a concerted effort to lower the stakes on his classroom assessments by making them smaller and providing students the opportunity to repeat where needed. In addition to this, he suggests that if the answer is in fact Google-able then maybe it is actually just a poor assessment.
Your piece about cheating reminds me of an experience I had in Year 10 Science when we had an open-book test. I remember Ms. Hé not paying too much attention to our chatter during tests. We would turn and talk with classmates to get the answers. The funny thing was that it did not really make a difference. I cannot remember what grade I got, but I know it was not great. I think it clearly highlighted the lack of care I had for the subject. Cheating made little difference. In hindsight, I wonder if that was in fact her strategy; I am not sure. It was a useful lesson to learn.
If a nation agreed to classrooms consistently developing an environment of Assessment for Learning, where open and transparent activities are designed for students and teachers to track, give feedback on, and reflect upon strengths, weaknesses and gaps in knowledge and skills as part of the learning, then maybe this “AfL record” could form the final record of achievement for a student. This record would have been visible and moderated all along as it developed, with the student, teacher and school agreeing on what it reported about the learner.
If we had no exams, and exiting school were centred on students’, teachers’, schools’ and parents’ involvement in a national system of learning progress and transparent dialogue, teachers could return to a focus on learning and progress, not preparation for the divisive and alien environment of exam silence.
- The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
- Assessment without feedback (teacher, peer, machine, self) is judgement, not assessment; it is pointless.
- Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
- Feedback should only ever relate to what has been done, never the doer.
- No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
- Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
- Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. use of exams to encourage revision, grades as goals, marks for participation, etc.).
- Students should be able to choose how, when and on what they are assessed.
- Where possible, students should participate in the assessment of themselves and others.
- Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
- Assessment is a way to show learners that we care about their learning.
He elaborates on these further in regard to credentials and objective quizzes. Dron believes that students should have autonomy when it comes to assessment, and that the best model for this is the creation of a portfolio of evidence.
A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment … It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.
The question regarding retakes isn’t simply, “Should students get a second chance?” Rather, it is, “How can we use assessments to help students improve?” If we incentivize success on the first assessment by planning enticing enrichment activities and guide students in correcting the learning errors identified on that assessment, we’re much more likely to realize Benjamin Bloom’s dream of having all students, ALL students learn well.
To bring improvement, Bloom stressed formative assessments must be followed by high-quality, corrective instruction designed to remedy whatever learning errors the assessments identified. Unlike reteaching, which typically involves simply repeating the original instruction, correctives present concepts in new ways and engage students in different learning experiences.
He explains that concerns about time and coverage can be overcome by using a corrective process, that this is what real life is like (e.g. surgeons, pilots), and points to the everyday reality of mastery and fair grades (e.g. a driver’s license).
I guess it raises the question: what is the point of feedback if students are not given the opportunity to act upon it?
Specific and detailed criteria with examples can raise the bar and reduce the likelihood of students handing in C-R-A-P, but they can also limit the format, creativity and extension of learning that could be possible if we left things more open, and provided more choice.
If we assume that students will get support, have access to their notes, and cannot be fully supervised, how does that affect our assessment practices, and how should it?
via Stephen Downes
This report is the result of an experts meeting exploring assessment in universities and colleges and how technology could be used to help address some of the problems and opportunities.
Grading is good at ‘encouraging people’ to do complicated tasks that are often represented by memorization, obedience and linear thinking. If those are our actual goals. If our goals are complex and include things like creativity… we’re looking to support intrinsic motivation. Grades don’t support intrinsic motivation.
One of the most powerful elements of the MTC design to date is the input they received from colleges in advance of launching the initiative. In discussion with directors of admissions and college presidents, Scott and his team found a receptive audience “if you can give us something that we can initially scan in two minutes”. It is also more than serendipitous that this effort was launched the same year that dozens of colleges and universities signed on to the “Turning the Tide” manifesto that refocuses college admissions on depth, interest, and passion, and away from multiple advanced placement courses, grade point average, and shallow community service experiences.
I also remember Scott Looney talking on the Modern Learners podcast:
I think that it is something that Templestowe College has touched on in the development of alternative pathways to higher education. There is also a PYP primary school near me that has mapped out the various learnings and marks them off; I don’t see that as any different.
I still think though Audrey Watters sums it up best when she asks:
What is “competency”? Who decides? How is it different from current assessment decisions? (Is it?)
According to Will Richardson, if the focus of ‘mastery’ is about better teaching, then we are still missing the point.
The other thing to consider is the place of ‘grades’ in US schools. How prevalent are ‘grades’ in Australia? I am not against mastery or any such intervention, I am just mindful of it being seen as the solution.
Art tells us that educational assessment simply produces symbols that are at best a pale reflection of a preconceived reality. These symbols can be distorted and exploited, until one day their utility will diminish, and a new dawn will emerge.