Bookmarked Mapping Assessment by Ron Ritchhart (ronritchhart.com)

I propose that we think of assessment as occurring on two dimensions. The first dimension (let’s set this on a horizontal continuum) is the degree of evaluation in which we engage. At the far end of this continuum (we’ll place it on the right), we are highly evaluative, desiring scores and measures that quantify outcomes in a fairly precise way. Here, we judge work against clearly defined criteria that we apply to see just how close to the mark a student gets. Such evaluation can produce ranks and comparisons. At the other end of this continuum (we’ll place it on the left), we might seek to understand students where they are, making sense of their actions and responding through our grounded interpretation. Here, rather than coming with predetermined criteria, we open ourselves to the possibilities and variations in both learning styles and outcomes that a close examination of our students’ learning might provide.

“With this map of the terrain in hand, we can begin to place our various assessment practices in the appropriate quadrant.”
The second dimension (let’s set this on a vertical continuum) is the extent to which our assessments are integrated into our instruction and part of the ongoing learning of the classroom. At one end (we’ll place it at the top) we have assessment that is highly embedded in our teaching and students’ learning. That means that we don’t stop or pause our instruction in order to assess but instead embed it as a regular part of our practice. At the other end of the continuum (placed at the bottom) we have assessment that is set apart from instruction and student learning. Here, we declare a formal end to our instruction and move into a deliberate assessment phase that we hope will reveal something about students’ learning. A basic graph of these two dimensions produces four quadrants that we might use to map the terrain of assessment (see Figure 1).

Ron Ritchhart provides a model for mapping assessment based on two dimensions: integration and evaluation. He provides examples for each of the quadrants, including providing feedback on performance (Quadrant A), checking for understanding and misconceptions (Quadrant B), examining teachers’ documentation of learning (Quadrant C) and formal summative assessments (Quadrant D). In the end, the purpose of the map is ‘to know where we are, and where we might go or want to be’.
Replied to The HSC – what it is and what it needs to be. by gregmiller68 (gregmiller68.com)

Whilst the HSC has been in continuous review for decades, it now needs refurbishment. In doing so, we need to keep the best of what it offers and replace what needs to go with new metrics which offer a far more complete picture of each young adult’s knowledge, understanding, skills, capabilities and dispositions, and how they are applied.

As I have said, what the HSC is and what it needs to be are two very different things.

Greg, this seems to be the wicked problem of our time. It has been interesting to see various universities form connections with schools, such as Templestowe and Swinburne University. The problem is that the status quo still seems to be based on scores and ranking.

I am intrigued by the University of Melbourne’s ‘New Metrics’ program. They have a bit of history with exploring new areas of assessment through the ATC21S program (the whitepaper can be found here); however, I am not sure what really came of that work.

Bookmarked Remote Teaching Tip: Assessments in an Online Environment by Bill Ferriter (blog.williamferriter.com)

if the questions on your assessment can be Googled AND you are worried about cheating, then you have written a bad assessment.

Bill Ferriter suggests that before you worry about how you are going to assess learning online, you need to address the question of what you are assessing for.

  • We need to know the level of rigor of the essential standard that we are assessing before we can write a question that will generate reliable information on student mastery.
  • We need to decide on the kinds of things that students should know and be able to do if they have mastered the essential standard that we are assessing.
  • We need to write and then deliver a small handful (3-5) of questions for each essential standard that we are assessing.
  • We need to think through the common misconceptions that we are likely to see in student responses to our questions.
  • For any constructed response questions or performance assessments, we need to decide together what “mastery” will look like in student responses.
  • That might include developing exemplars of different levels of student performance or creating shared scoring rubrics.

If the focus is multiple-choice questions, Ferriter uses MasteryConnect, while if it is about demonstrations, he uses Flipgrid. Although there are many other options out there, these work within his context. As he explains:

Your goal is to find tools that:

  • Have little to no learning curve for you or your students.
  • Aren’t blocked by your district’s firewall.
  • Fit into your budget — or the budget of your school.

Ferriter closes with a reflection on how he deals with the threat of students cheating. Firstly, he makes a concerted effort to lower the stakes on his classroom assessments by making them smaller and providing students the opportunity to repeat them where needed. In addition to this, he suggests that if the answer is in fact Google-able, then maybe it is actually just a poor assessment.

Your piece about cheating reminds me of an experience I had in Year 10 Science when we had an open-book test. I remember Ms. Hé not paying too much attention to our chatter during tests. We would turn and talk with classmates to get the answer. The funny thing was that it did not really make a difference. I cannot remember what grade I got, but I know it was not great. I think it clearly highlighted the lack of care I had for the subject. Cheating made little difference. In hindsight, I wonder if that was in fact her strategy; I am not sure. It was a useful lesson to learn.

Liked Visible Learning could end exams (EDUWELLS)

If a nation agreed to classrooms consistently developing an environment of Assessment for Learning where there are open and transparent activities designed for students and teacher to track, feedback and reflect on strengths, weaknesses and gaps in knowledge and skills as part of the learning, then maybe this “AFL record” could be what formed the final record of achievement for a student. This record would have been visible and moderated all along as it developed with the student, teacher and school agreed in what it reported about the learner.

If we had no exams and exiting school was centred on students’, teachers’, schools’ and parents’ involvement in a national system of learning progress and transparent dialogue, teachers could return to a focus on learning and progress and not preparation for the divisive and alien environment of exam silence.

Bookmarked
With an eye towards reforming assessment practices, Jon Dron compiles a list of principles associated with assessment:

  • The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
  • Assessment without feedback (teacher, peer, machine, self) is judgement, not assessment, and is pointless.
  • Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
  • Feedback should only ever relate to what has been done, never the doer.
  • No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
  • Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
  • Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. use of exams to encourage revision, grades as goals, marks for participation, etc.).
  • Students should be able to choose how, when and on what they are assessed.
  • Where possible, students should participate in the assessment of themselves and others.
  • Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
  • Assessment is a way to show learners that we care about their learning.

He elaborates on these further with regard to credentials and objective quizzes. Dron believes that students should have autonomy when it comes to assessment and that the best model for this is the creation of a portfolio of evidence.

A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment … It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.

This is a useful provocation with regard to assessment and feedback. It is also interesting to think about in relation to things like Open Badges.

Replied to Feedback on the Capabilities for a Changing World. by gregmiller68 (gregmiller68.com)

Our next challenge is to turn an improving ‘back end’ tracking tool into a more interactive and intuitive online experience for students and parents which engages them more than twice a year.

Thank you, Greg, for continuing to share the journey of your school. I am really intrigued as to how well the students are able to speak to this data.
Bookmarked Why Should We Allow Students to Retake Assessments? by Peter DeWitt (blogs.edweek.org)

The question regarding retakes isn’t simply, “Should students get a second chance?” Rather, it is, “How can we use assessments to help students improve?” If we incentivize success on the first assessment by planning enticing enrichment activities and guide students in correcting the learning errors identified on that assessment, we’re much more likely to realize Benjamin Bloom’s dream of having all students, ALL students learn well.

Thomas Guskey responds to concerns raised around offering students the opportunity to retake tests and assessments.

To bring improvement, Bloom stressed formative assessments must be followed by high-quality, corrective instruction designed to remedy whatever learning errors the assessments identified. Unlike reteaching, which typically involves simply repeating the original instruction, correctives present concepts in new ways and engage students in different learning experiences.

He explains that concerns about time and coverage can be overcome by using a corrective process, that second chances are what real life is like (e.g. surgeons, pilots), and that mastery and fair grades are an everyday reality (e.g. a driver’s license).

I guess it raises the question: what is the point of feedback if students are not given the opportunity to act upon it?

Bookmarked The Hitch-hiker’s Guide to Alternative Assessment (damiantgordon.com)
Damian Gordon collates an extensive list of alternative assessment ideas. There has been a lot written about the tools to use in association with online learning, but less about the various assessment practices.

Along with Bianca Hewes’ discussion of Project Based Learning and Pernille Ripp’s Choose Your Own Adventure, this guide is useful in helping us rethink the options.

via Stephen Downes

Replied to Sweeping changes to HSC and syllabus proposed by government review (The Sydney Morning Herald)

The report proposed reducing more than 170 senior-level courses to a “limited set of rigorous, high-quality, advanced courses”. Vocational and academic subjects would slowly be brought closer so that eventually every course would mix theory and application.

HSC students would also have to complete a single major project, which would allow the development and assessment of skills such as gathering and analysing, as well as so-called general capabilities such as team work and communication.

It is interesting to consider the proposed changes in the NSW Curriculum Review Interim Report against other curriculum frameworks, like New Zealand’s. It also reminds me of a comment someone once made to me that curriculum is the best guess for tomorrow. I was also intrigued by Marten Koomen’s take, especially his highlighting of Masters’ preference for Rasch over Reckase. It makes me rethink the use of the ‘crowded curriculum’.
Replied to

Thank you, Marten, for the link to this. It is intriguing to think how the models we build upon can morph into the natural way of being, as if there are no other alternatives.
Liked Connecting assessment goals to our education practices – a historical perspective by Dave Cormier (davecormier.com)

Grading is good at ‘encouraging people’ to do complicated tasks that are often represented by memorization, obedience and linear thinking. If those are our actual goals. If our goals are complex and include things like creativity… we’re looking to support intrinsic motivation. Grades don’t support intrinsic motivation.

Replied to Criterion vs Holistic Rubrics? #EDU407Sum19 by Greg McVerry (quickthoughts.jgregorymcverry.com)

I like having personal conversations with students and developing TAGs, or Targeted Areas of Growth. What are the one or two criteria a student should focus on when improving writing? Never try to get an 8-year-old writer to address six different indicators of quality at once. I don’t think adult writers should undertake such an endeavor.

Greg, this reminds me of Bianca Hewes’ ‘two medals and a mission’ approach to providing feedback.
Replied to Singapore abolishes school exam rankings, says learning is not competition (Citi Newsroom)

For older students in primary schools and secondary schools, marks for each subject will be rounded off and presented as a whole number, without decimal points – to reduce the focus on academic scores. Parents will continue to receive information about their child’s progress in school during parent-teacher meetings.

Is this that different from Australia? I find that there is a lot of confusion about what schools do and are required to do when it comes to assessment and reporting. This is something discussed in Episode 139 of the TER Podcast.
Replied to When will the ‘grade addiction’ end? Probs never. (Bianca Hewes)

This experience reminded me that our young people learn this addiction to grades from us, the adults. They don’t desire a mark or a grade for the mud pie they make and proudly display when they are 3. They don’t want to be given a piece of paper with an A on it when they learn to ride a bike. We make this unnatural framework for their learning, and often all it does is create anxiety, perfectionism, conflict, competition and, worst of all, not great learners.

I have always loved your use of ‘Medals and Missions’ to support students in becoming more self-determined learners. It also reminds me of a post from Bernard Bull arguing that letter grades are the enemy of authentic and humane learning.
Replied to
I first heard of the Mastery Transcript Consortium via Grant Lichtman’s blog:

One of the most powerful elements of the MTC design to date is the input they received from colleges in advance of launching the initiative. In discussion with directors of admissions and college presidents, Scott and his team found a receptive audience “if you can give us something that we can initially scan in two minutes”. It is also more than serendipitous that this effort was launched the same year that dozens of colleges and universities signed on to the “Turning the Tide” manifesto that refocuses college admissions on depth, interest, and passion, and away from multiple advanced placement courses, grade point average, and shallow community service experiences.

I also remember Scott Looney talking on the Modern Learners podcast.

For me it picks up on what Todd Rose discusses in his book The End of Average, as well as some of what is being attempted in the Open Badges space.

I think that it is something that Templestowe College has touched on in the development of alternative pathways to higher education. There is also a PYP primary school near me that has mapped out the various learnings and marks them off. Is that any different?

I still think, though, that Audrey Watters sums it up best when she asks:

What is “competency”? Who decides? How is it different from current assessment decisions? (Is it?)

According to Will Richardson, if the focus of ‘mastery’ is about better teaching, then we are still missing the point.

The other thing to consider is the place of ‘grades’ in US schools. How prevalent are ‘grades’ in Australia? I am not against mastery or any such intervention; I am just mindful of it being seen as the solution.

Liked Modern Art, and the Art of Educational Assessment by Marten Koomen (Tulip Education Research Blog)

Art tells us that educational assessment simply produces symbols that are at best a pale reflection of a preconceived reality. These symbols can be distorted and exploited, until one day their utility will diminish, and a new dawn will emerge.