How to Design Effective Rubrics

Rubrics are effective assessment tools when they are constructed around four main criteria: validity, reliability, fairness, and efficiency. A valid and reliable rubric grades only the work presented (reducing the influence of instructor biases), so that anyone using the rubric would assign the same grade (Felder and Brent 2016). Fairness makes the grading transparent: students have access to the rubric from the beginning of the assessment. Efficiency is evident when students receive detailed, timely feedback from the rubric after grading (Felder and Brent 2016).

Because the most informative rubrics for student learning are analytical rubrics (Brookhart 2013), the five steps below explain how to construct one.

 

Five Steps to Design Effective Rubrics

1 Decide What Students Should Accomplish

The first step in designing a rubric is determining the content, skills, or tasks you want students to accomplish by completing an assessment (Wormeli 2006). Thus, two main questions need to be answered:

  1. What do students need to know or be able to do?
  2. How will the instructor know when students know it or can do it?

Another way to think about this is to decide which of the course's learning objectives are being evaluated by this assessment (Allen and Tanner 2006, Wormeli 2006). (More information on learning objectives can be found at Teaching@UNL.) Most projects and similar assessments involve more than one area of content or skill, so most rubrics assess more than one learning objective. For example, a project may require students to research a topic (content knowledge objective) using digital literacy skills (research objective) and present their findings (communication objective). Therefore, it is important to think through all the tasks or skills students must complete during an assessment to meet the learning objectives. It is also helpful to review example rubrics for the specific discipline or task, at the appropriate grade level, to aid in preparing a list of the tasks and activities essential to meeting the learning objectives (Allen and Tanner 2006).
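
For instructors who like to work from a checklist, this alignment exercise can be captured in a simple structure before any rubric is built. The following is a minimal sketch in Python; the tasks and objective names are hypothetical examples, not a prescribed list:

    # A hypothetical mapping from assessment tasks to the learning
    # objectives they give evidence for (step 1). Any objective the
    # assessment never touches signals a misalignment to fix before
    # building the rubric.
    tasks_to_objectives = {
        "Research a topic using library databases": "content knowledge",
        "Evaluate the credibility of digital sources": "research skills",
        "Present findings in a ten-minute talk": "communication",
    }

    course_objectives = {"content knowledge", "research skills", "communication"}

    uncovered = course_objectives - set(tasks_to_objectives.values())
    print("Objectives not covered:", uncovered or "none")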

2 Identify 3-10 Criteria

Once the learning objectives are set and a list of essential student tasks has been compiled and aligned to them, the next step is to determine the number of criteria for the rubric. Most rubrics have at least three criteria and fewer than a dozen. Remember that as more criteria are added to a rubric, students' cognitive load increases, making it harder for them to keep all the assessment requirements in mind (Allen and Tanner 2006, Wolf et al. 2008). Thus, 3-10 criteria are usually recommended: if an assessment has fewer than three criteria, a different format (e.g., a grade sheet) can convey the grading expectations, and if a rubric has more than ten, some criteria can be consolidated into a single larger category (Wolf et al. 2008). Once the number of criteria is established, the final step for this aspect of the rubric is creating a descriptive title for each criterion and deciding whether some criteria will be weighted and thus have more influence on the assessment grade. With this done, the left-hand criteria column of the rubric can be filled in (Table 1); a short sketch after the table illustrates how the weights affect a final score.

Table 1. Structure of a rubric with three criteria (Content Knowledge, Research Skills, and Presentation Skills) and five performance levels (Mastery, Proficient, Apprentice, Novice, Absent). Note that only three performance levels are filled in for the “Research Skills” criterion.

Criteria | Mastery | Proficient | Apprentice | Novice | Absent
Content Knowledge (weight = 3) | Details for highest performance level | Details for meeting the criteria | Details for mid-performance level | Details for lowest performance level | No work turned in for project
Research Skills (weight = 1) | Details for highest performance level | (left blank) | Details for mid-performance level | (left blank) | No work turned in for project
Presentation Skills (weight = 2) | Details for highest performance level | Details for meeting the criteria | Details for mid-performance level | Details for lowest performance level | No presentation given
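
As a concrete illustration of weighting, the sketch below computes a weighted score from the three criteria and weights in Table 1. The point value assigned to each performance level (4 down to 0) is an assumption for the example, not part of the table:

    # Weighted rubric scoring (step 2), using the criteria and weights
    # from Table 1. The points per performance level are hypothetical.
    LEVEL_POINTS = {"Mastery": 4, "Proficient": 3, "Apprentice": 2,
                    "Novice": 1, "Absent": 0}
    CRITERION_WEIGHTS = {"Content Knowledge": 3,
                         "Research Skills": 1,
                         "Presentation Skills": 2}

    def weighted_score(levels_earned):
        """Return the fraction of available points, given the level earned per criterion."""
        earned = sum(CRITERION_WEIGHTS[c] * LEVEL_POINTS[lvl]
                     for c, lvl in levels_earned.items())
        possible = sum(w * max(LEVEL_POINTS.values())
                       for w in CRITERION_WEIGHTS.values())
        return earned / possible

    # Example: strong content, weak research, solid presentation.
    print(weighted_score({"Content Knowledge": "Mastery",
                          "Research Skills": "Novice",
                          "Presentation Skills": "Proficient"}))  # 19/24, about 0.79

Because Content Knowledge carries a weight of 3, dropping one performance level there costs three times as many points as the same drop in Research Skills.
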
3 Choose Performance Level Labels

The third aspect of rubric design is the levels of performance and the label for each level. It is recommended to have 3-6 performance levels in a rubric (Allen and Tanner 2006, Wormeli 2006, Wolf et al. 2008). The key to determining the number of levels is how easily adjacent levels can be distinguished (Allen and Tanner 2006). Can the difference between a “3” and a “4” in student performance be readily seen on a five-level rubric? If not, only four levels should be used for all criteria. If most criteria can easily be differentiated across five levels but one criterion is difficult to discern, two levels can be left blank for that criterion (see the “Research Skills” criterion in Table 1). Note that having fewer levels makes constructing the rubric faster but may result in ambiguous expectations and difficulty providing feedback to students.

Once the number of performance levels is set, assign each level a name that indicates the level of performance. When creating the naming system, use terms that are not subjective, overly negative, or judgmental (e.g., “Excellent”, “Good”, and “Bad”; Allen and Tanner 2006, Stevens and Levi 2013), and make sure the terms share the same part of speech (all nouns, all “-ing” verbs, all adjectives, etc.; Wormeli 2006). Examples of performance level naming systems include:

  • Exemplary, Competent, Not yet competent
  • Proficient, Intermediate, Novice
  • Strong, Satisfactory, Not yet satisfactory
  • Exceeds Expectations, Meets Expectations, Below Expectations
  • Proficient, Capable, Adequate, Limited
  • Exemplary, Proficient, Acceptable, Unacceptable
  • Mastery, Proficient, Apprentice, Novice, Absent

Additionally, the order of the levels needs to be determined: some rubrics are designed to increase in proficiency across the levels (lowest, middle, highest performance), and others start with the highest performance level and move toward the lowest (highest, middle, lowest performance).
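
Whichever naming system and order are chosen, the labels sit on a single ordered scale. A minimal sketch, using the five-level label set from Table 1:

    from enum import IntEnum

    # The five-level label set from Table 1, encoded so levels compare
    # by rank (step 3). Whether the rubric lists them highest-first or
    # lowest-first is purely a layout choice; the order is the same.
    class Level(IntEnum):
        ABSENT = 0
        NOVICE = 1
        APPRENTICE = 2
        PROFICIENT = 3
        MASTERY = 4

    highest_first = sorted(Level, reverse=True)   # Mastery ... Absent
    assert Level.PROFICIENT > Level.NOVICE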

4 Describe Performance Details

The next step in developing a rubric is to fill in the details for each performance level of each criterion. It is advised to begin with the requirements for the highest performance level (what constitutes mastery of the criterion), then fill out the lowest performance level (what shows little or no understanding), before filling in the remaining levels (Wormeli 2006, Stevens and Levi 2013). When writing the descriptions, avoid subjective language (basic, competent, incomplete, poorly, flawed, etc.) unless these terms are explicitly defined for students. What tangible measures separate poor performance from adequate performance? If the instructor cannot answer this question explicitly, students will have difficulty interpreting the rubric. The details need to be objective, clear, and non-overlapping between performance levels (Wolf et al. 2008). For example, a grammar criterion in a writing assessment would be difficult to understand or grade if the language at the mastery level were “Excellent use of grammar” instead of “Only one or two grammatical errors are present in the paper”.
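
The grammar example shows why objective wording pays off: once each level is tied to a countable measure, assigning a level becomes mechanical. In the sketch below, the mastery threshold comes from the example above, while the remaining error-count cutoffs are illustrative assumptions:

    # A grammar criterion defined by countable thresholds (step 4).
    # "Only one or two grammatical errors" comes from the example in
    # the text; the other cutoffs are hypothetical.
    def grammar_level(error_count):
        if error_count <= 2:
            return "Mastery"        # one or two errors (or none)
        elif error_count <= 5:
            return "Proficient"
        elif error_count <= 10:
            return "Apprentice"
        else:
            return "Novice"

    print(grammar_level(1))   # Mastery
    print(grammar_level(7))   # Apprentice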

5 Test and Evaluate the Rubric

It is essential to evaluate how well a rubric works for grading and for providing feedback to students. If possible, test the rubric on previous student work to see how well it functions for grading before giving it to students (Wormeli 2006). After using the rubric in a class, evaluate how well students met the criteria and how easy the rubric was to use in grading (Allen and Tanner 2006). If a specific criterion has consistently low grades, determine whether its language was too subjective or confusing for students. This can be done by asking students to critique the rubric or by surveying them about the overall assessment; alternatively, the instructor can ask a colleague or instructional designer for feedback on the rubric. If more than one instructor uses the rubric, determine whether all of them are seeing lower grades on certain criteria. Analyzing the grades can often show where students are failing to understand the content, the assessment format, or the requirements; a minimal sketch of one such analysis follows this paragraph.
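
The sketch assumes scores were recorded per criterion on a 0-4 scale; the scores and the cutoff for a “low” average are hypothetical:

    # Flag criteria with low class-wide averages (step 5); a criterion
    # that scores low across many students may be worded too
    # subjectively or confusingly. All numbers here are hypothetical.
    grades = [  # one dict per student: criterion -> points earned (0-4)
        {"Content Knowledge": 4, "Research Skills": 1, "Presentation Skills": 3},
        {"Content Knowledge": 3, "Research Skills": 2, "Presentation Skills": 4},
        {"Content Knowledge": 4, "Research Skills": 1, "Presentation Skills": 3},
    ]

    for criterion in grades[0]:
        avg = sum(g[criterion] for g in grades) / len(grades)
        if avg < 2.0:  # arbitrary cutoff for "low"
            print(f"Review the wording of '{criterion}' (class average {avg:.1f})")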

Next, look at how well the rubric reflects the work students turned in (Allen and Tanner 2006, Wormeli 2006). Does the grade based on the rubric match what the instructor would expect for the student's assignment, or does the rubric leave some students with a higher or lower grade? If the latter, determine which aspect of the rubric has to be “fudged” to reach the correct grade and update the problematic criteria. Alternatively, the instructor may find that the criteria themselves are sound but that some aspects of the assessment are undervalued or overvalued in the rubric (Allen and Tanner 2006). For example, if the main learning objective is content but 40% of the assessment grade comes from writing skills, the rubric may need to be re-weighted so that the content criteria have a stronger influence on the grade than the writing criteria.

Finally, analyze how well the rubric worked for grading the assessment overall. If the instructor had to modify the interpretation of the rubric while grading, the performance levels or the number of criteria may need to be edited to better align with the learning objectives and the evidence shown in the assessment (Allen and Tanner 2006). For example, if the rubric has only three performance levels but the instructor often had to give partial credit on a criterion, the rubric may need more levels. If instead a specific criterion is hard to grade, or adjacent performance levels are hard to distinguish, this may indicate that too much is being assessed in that criterion (which should then be divided into two or more criteria) or that the criterion is poorly written and needs more detail. Reflecting on the effectiveness of a rubric each time it is used helps ensure it remains well designed and accurately represents student learning.

Rubric Examples & Resources

UNCW College of Arts & Science “Scoring Rubrics” contains links to discipline-specific rubrics designed by faculty from many institutions. Most of these rubrics are downloadable Word files that could be edited for use in courses.

Syracuse University “Examples of Rubrics” also has rubrics by discipline with some as downloadable Word files that could be edited for use in courses.

University of Illinois – Springfield has PDF files of different types of rubrics on its “Rubric Examples” page. These rubrics cover many different types of tasks (presenting, participation, critical thinking, etc.) from a variety of institutions.

If you are building a rubric in Canvas, the rubric guide in Canvas 101 provides detailed information including video instructions: Using Rubrics: Canvas 101 (unl.edu)


Allen, D., and K. Tanner (2006). Rubrics: tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE – Life Sciences Education 5: 197-203.

Brookhart, S. M. (2013). How to Create and Use Rubrics for Formative Assessment and Grading. ASCD, Alexandria, VA, USA.

Felder, R. M., and R. Brent (2016). Teaching and Learning STEM: A Practical Guide. Jossey-Bass, San Francisco, CA, USA.

Stevens, D. D., and A. J. Levi (2013). Introduction to Rubrics: an assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing, Sterling, VA, USA.

Wolf, K., M. Connelly, and A. Komara (2008). A tale of two rubrics: improving teaching and learning across the content areas through assessment. Journal of Effective Teaching 8: 21-32.

Wormeli, R. (2006). Fair isn’t always equal: assessing and grading in the differentiated classroom. Stenhouse Publishers, Portland, ME, USA.


This page was authored by Michele Larson and last updated September 15, 2022

 
