Tread Carefully Into the Rocky Realm of Rubrics
Although rubrics have some instructional value, they should never serve as the sole, high-stakes, summative declaration of student achievement.
Still, using rubrics is a frequent practice among schools trying to demonstrate how they measure learning – so what’s a teacher to do? Both banning their use across the grade levels and declaring them assessment’s Holy Grail are shortsighted.
There’s a cautious path in between these approaches, one that accounts for their weaknesses while embracing their merits, turning them into constructive tools for the differentiated classroom.
Let’s take a look at the cautions, as recognizing those limitations will inform how we design rubrics more effectively.
Concerns and cautions
The moment we use a rubric or list of evaluative criteria to declare to students what is and is not acceptable performance with any given course content or skills, we’ve limited students to the bounds of our own imaginations.
The message is clear: There is one, correct way to do the task, and achieving the teacher’s version of it is paramount. Such an approach does not cultivate creativity or autonomy.
In addition, creating a rubric mandated for use across the grade level or department in an attempt to codify and report students’ complex and creative endeavors gives teachers a false sense of certainty.
Using common rubrics across classrooms also requires us to negotiate with colleagues in our departments what we will accept as evidence of performance at each level of the rubric. This can be a difficult process, especially if teachers lack experience with the give-and-take of negotiating a hierarchy of content and then revising staunchly held opinions.
Which descriptors will we use for each of our rubric’s labels: Satisfactory? Secure? Sophisticated? Adequate? Exemplary? Proficient? Masterful? Emergent? Developing? Excellent? Poor? Decent? Completed all that was asked of them?
How do we define each of these terms when it comes to knowing the interactions within a particular ecosystem or appreciating the power of narrative?
If we’re too generic in our chosen terms, the descriptors don’t provide meaningful feedback. Students, parents, and colleagues will be confused, interpreting levels of progress in so many ways as to make the rubric’s use frustrating and unhelpful.
If we’re too specific in our descriptors and their labels, we will overly limit growth and veer toward the prescriptive. Students may be labeled successful when all they did was follow someone else’s recipe and prove they were compliant – the antithesis of real learning and growth.
It’s frightening to negotiate these things with one another. We make ourselves vulnerable when we do: We’re revealing the extent of our personal knowledge to respected colleagues, working with an imperfect language, and we may come up lacking.
In addition, we may sometimes have a rocky relationship with our colleagues and not want to add tension to a tenuous peace. It’s easier in such cases to be less candid and accept “you do your thing, and I’ll do mine.” When that happens, assessment’s consistency and integrity evaporate.
Should we use rubrics at all?
I believe we have a responsibility to teach what’s important to the next generation’s success. We can’t respond with a laissez-faire or do-whatever-you-want attitude.
Let’s aim to incorporate clear principles of rubric design that get us closer to having positive instructional impact with diverse, innovative learners and farther from didactic declarations that thwart their—and our—enterprise.
If we’re collaborating, begin by taking some time with colleagues to design a rubric for a routine task that has nothing to do with school. You can use symbols, scale designations, and evidence descriptors, among other measures of quality. Here are some suggested tasks that might work:
- Ordering a pizza
- Telling a joke
- Explaining a religious ritual
- Properly washing the dishes or loading the dishwasher
- Tying a shoe
- Drawing a circle
Let’s consider the last example: What are the qualities of a well-drawn circle? Is it just a shape that satisfies mathematical definitions for a circle, or is there something aesthetic in its portrayal? How do we draw a circle? Should we aim for efficiency, accuracy, flair, or all of these?
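One rough sketch of a three-level scale for the circle task, offered purely as an illustration and not a definitive rubric, might look like this:

- 3 – Accomplished: The figure is closed, every point sits roughly the same distance from an identifiable center, and the line is smooth and deliberate.
- 2 – Developing: The figure is closed but noticeably lopsided or oval, or the line wavers and overlaps itself.
- 1 – Beginning: The figure is open or not yet recognizable as a circle.

Even in a sketch this simple, the judgment calls pile up: How much wobble still counts as “roughly the same distance”? Does flair matter, or only accuracy?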
Now come to consensus on each element in each descriptor with your grade-level colleagues. It’s eye-opening. Notice how easy it is to become bogged down in the details. To create a good rubric, we have to ask these questions:
- What does the task require?
- What constitutes proficiency in each level of performance?
- Which steps are more important than others?
- Are the criteria clear to the person performing the task?
- Does the rubric provide clear and useful feedback?
Getting everything clear and accurate can be overwhelming. We will undoubtedly miss some criteria, overemphasize other qualities, and fail to account for students whose knowledge doesn’t fit neatly within the rubric’s frame.
Using the previous example, we may develop a rubric that kills students’ interest in drawing circles or any geometric figure ever again! Every school year as we design our units of study, we need to reevaluate our rubrics to see that they still hold up, and when they don’t, adjust them.
Consider the following guiding questions as you design and reexamine rubrics for differentiated classes:
- Does the rubric account for everything we want to assess?
- Is a rubric the best way to assess this product?
- Is the rubric tiered for this student group’s readiness level?
- Is the rubric clearly written so anyone doing a cold reading of it will understand what is expected of the student?
- Can a student understand the content yet score poorly on the rubric? If so, why – and how can we change the rubric to make sure that doesn’t happen?
- Can a student understand very little content yet score well on the rubric? If so, how can we change it so that it doesn’t happen?
- What are the benefits to us as teachers of this topic that prompt us to create a rubric for our students?
- What are the benefits to students when they create their own rubrics and the criteria against which their products will be assessed?
- How do the elements of this rubric support differentiated instruction?
- Which steps did we take to make the rubric?
- What should we do differently the next time we use this rubric?
- After completing one of the rubrics, what tips would we give first-time rubric creators?
4 practical tips for building a helpful rubric
You’ll find many more tips in the 2nd edition of Fair Isn’t Always Equal, but here are four practical ways to improve the rubrics you build.
Use fewer levels. Three, four, or five levels are enough. The fewer the levels, the higher the scoring reliability: two teachers are more likely to place the same piece of work at the same level. Imagine how absurd it would be to write evidence descriptors for every level of a 100-point scale.
Don’t let reports of compliance distort reports of learning. Helpful rubrics are not reports of what students did. They are descriptions of what students learned. Double-check that the rubric doesn’t merely state what students completed but instead describes where they are in relation to the learning goals. For example, if the rubric is supposed to report content proficiency or mastery, we shouldn’t see descriptors like these: “Completed all parts of the report.” “Submitted everything on time.” “Put a nice cover on the project.” “Followed directions.”
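By contrast, a learning-focused descriptor might read something like this (an illustration, not from the chapter): “Explains the interactions within the ecosystem and supports each claim with observed evidence.” The difference is that the statement describes what the student knows and can do, not what the student turned in.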
Reference the same domain all the way through the rubric or scale. Rubrics and scales should focus on clear communication, so let’s not muddy the waters. If one descriptor addresses a student’s strategic thinking, every level of the rubric should describe a degree of strategic thinking. It’s not helpful when one level of performance describes whether certain portions of the project were completed while another describes only the degree to which the student demonstrated strategic thinking; those are not degrees of the same domain.
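To illustrate, one way parallel levels in a strategic-thinking domain might read (again, a sketch rather than a prescribed set): “Selects an efficient strategy and justifies the choice” at the top level, “Selects a workable strategy but cannot explain why it works” in the middle, and “Relies on trial and error without an identifiable strategy” at the bottom. Each level is a degree of the same thing, so movement between levels is meaningful.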
Avoid using the terms “average,” “above average,” or “below average” in the descriptor at any level. These terms all speak to how the student is performing in relation to others. If we claim to be evidence-based in our assessments, we report student performance in relation to the lesson’s goals, the standards, or outcomes: Can Jake identify persuasive techniques used by the advertising company? Can Nora use and interpret a Punnett square? Is Tricia using proper weight-lifting technique? Can Omar change the mood of his artwork by adjusting his technique? It’s not very helpful to hear that a student’s work is above average when “average” doesn’t identify specific content and skill targets.
Improving a rubric’s instructional impact
Again, you’ll find more suggestions in the book. Here are three to prompt fresh thinking.
Ask students to design the evaluative criteria and rubric themselves. Let students examine exemplar work with a partner, searching for the qualities of excellence. Then ask them, as a class, to design the rubric to be used for the project under way. Once the class agrees on an acceptable rubric, ask them to apply it to the assessment of another exemplar (i.e., a completed project from the past) to see whether it holds up. Help them to adjust the wording and criteria of the rubric as needed. This process dramatically increases students’ success.
Test-drive the rubrics you create on real student work before giving them to students. We may find places where descriptors were so generalized that students could interpret them in a dozen different ways, far from the true evaluative criteria, and we’d have to accept their responses, because technically they satisfied the criteria. By test-driving rubrics, we also find elements we forgot to include in the criteria, so we can add them to the mix, and we find some elements that really aren’t that important, so we remove them.
Provide exemplars for each level of performance. Students and parents need a clear understanding of what constitutes each level of performance. There should be no surprises. As assessment expert Rick Stiggins is fond of saying, “Students can hit targets they can see and that stand still for them” (Stiggins et al. 2004, 57).
The key is transparency. One way to see whether we’re accomplishing this is to ask students to analyze their final product in light of the standard of excellence cited at the top of the rubric scale and then make a prediction about what their final evaluation will be. Their predictions should come very close to what is actually recorded.
A final tip
Rubrics clarify both teacher and student thinking, and they provide helpful mentoring for students as they analyze and reflect on their work. There are cautions in their use, however, which effective teachers take time to investigate.
As you explore rubrics more deeply and begin to refine your creation process, here’s a final tip. Occasionally, exchange student work with another teacher and, using the rubric attached to the work, assess the students in each other’s classes.
This is a fairly objective assessment, undistorted by our familiarity with our own students. This blind assessment activity has helped me to clarify my thinking as I compare my own evaluations of student work with those of my colleagues. I can provide better feedback to students as a result.
Adapted by the editors from Chapter 9 of Fair Isn’t Always Equal, 2nd Edition (2018).
__________________________
Rick Wormeli is one of the most sought-after middle grades teaching experts in America. He has spent the past 38 years teaching math, science, English, physical education, health, and history, as well as coaching teachers and principals. Rick is a columnist for AMLE Magazine and a contributor to ASCD’s Educational Leadership, and has presented in all fifty states and around the world. He was among the first educators to become a National Board Certified Teacher, in 1995.
Rick’s latest book from Stenhouse is a new edition of the bestselling Fair Isn’t Always Equal: Assessment and Grading in the Differentiated Classroom. Visit his website and follow him on Twitter @rickwormeli2.