Introduction to Rubrics

A rubric is an assessment tool that communicates performance expectations to students. Essentially, a rubric divides an assessment into smaller parts (criteria) and then provides details for the different levels of performance possible for each part (Stevens and Levi 2013). Because rubrics are used to assess performance-based activities, they provide a method for grading a wide range of assessments, including discipline-specific skills (playing an instrument, using a microscope, repairing a transmission, executing a specific dance technique, etc.), student-created products (written reports, constructed objects, works of art, concept maps, models, etc.), or specific student behaviors (presentation skills, peer review of student writing, discussions, group evaluations, etc.; Brookhart 2013, Stevens and Levi 2013).

What is a Rubric?

Rubrics are constructed as a matrix (table) with the different levels of performance explained for each specific criterion within the matrix (Table 1). A rubric differs from a grading sheet in that the rubric provides details for each performance level for each criterion (Allen and Tanner 2006, Felder and Brent 2016) instead of just stating the criteria with point designations (total points possible for each criterion). Most rubrics are designed with the criteria for an assessment as rows and the different performance levels as columns (Table 1).

Table 1. Structure of a rubric with three different criteria and three levels of performance.
Criteria | Excellent | Average | Limited
Criterion #1 | Details for Criterion #1 at the highest performance level | Details for Criterion #1 at the mid-performance level | Details for Criterion #1 at the lowest performance level
Criterion #2 | Details for Criterion #2 at the highest performance level | Details for Criterion #2 at the mid-performance level | Details for Criterion #2 at the lowest performance level
Criterion #3 | Details for Criterion #3 at the highest performance level | Details for Criterion #3 at the mid-performance level | Details for Criterion #3 at the lowest performance level
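For instructors who track grades programmatically, the matrix in Table 1 maps naturally onto a simple data structure: one entry per criterion, each holding a point value for every performance level. The sketch below is a minimal illustration in Python; the criterion names, level labels, and point values are hypothetical examples, not drawn from any published rubric.

```python
# Minimal sketch of an analytical rubric as a data structure.
# Criterion names, level labels, and point values are hypothetical.

rubric = {
    "Thesis clarity":  {"Excellent": 5, "Average": 3, "Limited": 1},
    "Use of evidence": {"Excellent": 5, "Average": 3, "Limited": 1},
    "Organization":    {"Excellent": 5, "Average": 3, "Limited": 1},
}

def score(rubric, selections):
    """Sum the points for the performance level chosen for each criterion."""
    return sum(rubric[criterion][level] for criterion, level in selections.items())

# One student's assessed levels, one per criterion (rows of Table 1):
selections = {
    "Thesis clarity": "Excellent",
    "Use of evidence": "Average",
    "Organization": "Average",
}

total = score(rubric, selections)  # 5 + 3 + 3 = 11
```

In practice each cell would also carry the descriptive details shown in Table 1; the point values alone are what distinguish a rubric used this way from a simple grading sheet.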

There are two main types of rubrics: holistic and analytical. Holistic rubrics provide criteria and performance levels but use generic statements for each level regardless of the criterion being described (Allen and Tanner 2006, Wormeli 2006). For example, all criteria in the “Excellent” performance level would be “Demonstrated mastery of the skill” or “Shows deep understanding of the concept without any errors”. Alternatively, an analytical rubric provides specific statements describing how each criterion is met at each performance level (Allen and Tanner 2006, Wormeli 2006). For instance, “Demonstrated ability to create a microscope slide and use a microscope to draw different types of bacteria” or “Explains the main factors that culminated in the start of World War I with details (names, dates, etc.) and referencing at least two sources” would be possible statements for the “Excellent” performance level of different rubrics. Holistic rubrics are faster to create and score, but they do not explicitly communicate what information is necessary to meet the criteria and therefore lack the level of detail needed for rubrics to be helpful to students (Brookhart 2013). Therefore, it is recommended that educators use analytical rubrics for most assessments, especially formative assessments.

In addition to the basic layout and type of rubric (analytical vs. holistic), there are specific steps to building a well-designed rubric that are explained on the webpage “Designing Effective Rubrics.”

How are Rubrics Used?

Rubrics can be used for a variety of purposes in a course. A rubric can aid in the development of an assessment, since working through the steps of designing a rubric allows for deeper thought on the requirements and alignment of different aspects of the assessment (Brookhart 2013). Rubrics can also be provided to students as a guide for planning and creating a project. In this way, students receive the rubric prior to completing an assessment to better explain its expectations (and the rubric may or may not be used in the grading process; Allen and Tanner 2006, Brookhart 2013). Often, rubrics streamline grading by providing a framework that allows for quicker scoring of student work while simultaneously providing feedback to students (Allen and Tanner 2006, Stevens and Levi 2013, Felder and Brent 2016, Francis 2018). When more than one instructor is teaching a course, the use of rubrics allows for standardization of grading (Allen and Tanner 2006), but only when the instructors use calibration practices to ensure the rubric is being applied consistently (Feldman 2019). Rubrics can also be used to track student improvement when used repeatedly for similar assignments or when students are developing a skill (Allen and Tanner 2006, Stevens and Levi 2013). Additionally, rubrics can be used to encourage self-reflection and self-regulation in students (Allen and Tanner 2006, Stevens and Levi 2013, Panadero and Romero 2014, Zhao et al. 2021). Overall, rubrics can increase the clarity and transparency of grading for students (Francis 2018), especially when the assessment criteria are well aligned to the learning objectives (Burton 2015).

What are the Benefits of Using Rubrics?

Well-designed rubrics can benefit students by reducing aspects of the “hidden curriculum” that many students experience (especially first-generation students or students from historically marginalized groups), making assessment expectations clearer (Allen and Tanner 2006, Wolf et al. 2008, Stevens and Levi 2013). Rubrics have been shown to increase student performance on assessments; however, increased scores were often found only when students were required to use the rubric prior to completing the assessment (Felder and Brent 2016, Francis 2018). For example, when researchers provided students in different sections of the same course with different levels of engagement with a rubric (no-rubric control group, rubric provided, rubric explained in class, rubric explained in class plus access to additional resources), results showed no difference in scores between the no-rubric and rubric-provided groups, indicating that simply giving students a rubric does not positively influence their grades on the assessment (Francis 2018). Yet, when the rubric was explicitly explained during a class session (regardless of the availability of additional resources), significant improvements were seen in these students’ scores on the assessment (Francis 2018). In a separate study on self-assessment, researchers found that students with access to a rubric (compared to the control group) reported higher levels of self-regulation, increased performance on the assessment, and higher self-accuracy (self-assessment scores better matched the earned grade; Panadero and Romero 2014). Thus, instructors must provide students with opportunities to meaningfully engage with the rubric for it to benefit student performance.

One method to achieve the level of engagement needed for rubrics to be helpful is to create assignments that require students to assess an example or conduct a peer review using the rubric (Francis 2018). Alternatively, involving students in creating the rubric can also provide the necessary level of engagement (Panadero and Romero 2014, Francis 2018). In fact, student-instructor generated rubrics (usually created during a whole-class activity) increased student engagement with the rubric while also reducing resistance to using the rubric for grading and promoting the development of skills including negotiation, self-regulation, and self-reflection (Zhao et al. 2021). It is also noteworthy that peer-generated rubrics for grading group dynamics or group projects are often highly aligned with what an instructor would have created for students to use (Zhao et al. 2021).

Students are not the only ones to benefit from rubrics. Rubrics can assist instructors by reducing the time required to grade assignments, providing timely feedback to students, and allowing for more consistency in grading (Allen and Tanner 2006, Stevens and Levi 2013, Felder and Brent 2016, Francis 2018). Designing a rubric can help instructors develop a better assessment and provides a framework for grading when multiple instructors are teaching different sections of a course (Allen and Tanner 2006, Brookhart 2013). Thus, rubrics can benefit instructors once they are constructed and aligned to the learning objectives and assessment.

What Challenges can Occur when Using Rubrics?

Not all educators find rubrics to be useful for their courses. Many find that the time and effort needed to develop a well-designed rubric does not result in higher performance from students, or that the rubric does not accurately reflect students’ actual understanding of the topic being assessed (Wormeli 2006, Felder and Brent 2016). Although some pre-made rubrics are available, most analytical rubrics (those that best assist students) require either substantial adaptation of these pre-existing rubrics or construction of new rubrics (Allen and Tanner 2006). These rubrics also require updating and re-evaluation to determine whether the criteria and descriptions for each performance level accurately reflect student learning (Allen and Tanner 2006, Wormeli 2006). Thus, rubrics can require a large investment of time from instructors and may not result in increased learning by students.

Others find that using rubrics (and grades in general) in courses causes unintended negative consequences that undermine learning (Kohn 2006). When students are graded (especially using a rubric), they are less inclined to learn the material deeply (instead doing just the minimum to meet the requirements) or to take risks (including creativity and innovation), and they often lose interest in learning and adopt a more fixed mindset (Kohn 2006). Grading in general impedes critical thinking (Nieminen 2020), reduces student motivation (Schinske and Tanner 2014, Chamberlin et al. 2018, Schwab et al. 2018), and widens inequities (Feldman 2019, Link and Guskey 2019). By taking something subjective (assessment) and making it more quantitative (using rubrics, standardized tests, and similar approaches), instructors rationalize and legitimize the grades they give students rather than gaining any real understanding of what students are learning (Kohn 2006). Additionally, the underlying reason for student improvement when rubrics are provided may not reflect increased “pedagogical value” in the rubric but instead relate to reductions in student anxiety and stress (due to having a better idea of assessment requirements), allowing students to focus and create better assignments (Francis 2018). If this is the case, then any method that allows students to gain a better understanding of the assessment requirements will improve learning, and rubrics may not be the best method. Thus, it is up to the instructor to determine the best method to explicitly convey assessment criteria and expectations while also providing students with timely feedback on assessments.

References

Allen, D. and K. Tanner (2006). Rubrics: Tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE – Life Sciences Education 5: 197-203.

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD, Alexandria, VA, USA.

Burton, K. (2015). Continuing my journey on designing and refining criterion-referenced assessment rubrics. Journal of Learning Design 8: 1-13.

Chamberlin, K., M. Yasue, and I. A. Chiang (2018). The impact of grades on student motivation. Active Learning in Higher Education DOI: https://doi.org/10.1177/1469787418819728

Felder, R. M., and R. Brent (2016). Teaching and Learning STEM: A practical guide. Jossey-Bass, San Francisco, CA, USA.

Feldman, J. (2019). Grading for Equity: What it is, why it matters, and how it can transform schools and classrooms. Corwin, Thousand Oaks, CA, USA.

Francis, J. E. (2018). Linking rubrics and academic performance: an engagement theory perspective. Journal of University Teaching and Learning Practice 15(1): http://ro.uow.edu.au/jutlp/vol15/iss1/3

Kohn, A. (2006). The trouble with rubrics. English Journal 95:1-5.

Link, L. J., and T. R. Guskey (2019). How traditional grading contributes to student inequities and how to fix it. Educational, School, and Counseling Psychology Faculty Publications 53: https://uknowledge.uky.edu/edp_facpub/53

Nieminen, J. H. (2020). Disrupting the power relationships of grading in higher education through summative self-assessment. Teaching in Higher Education DOI: https://doi.org/10.1080/13562517.2020.1753687

Panadero, E. and M. Romero (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice 21: DOI: https://doi.org/10.1080/0969594X.2013.877872

Schwab, K., B. Moseley, and D. Dustin (2018). Grading grades as a measure of student learning. SCHOLE: A Journal of Leisure Studies and Recreation Education 33: 87-95.

Schinske, J., and K. Tanner (2014). Teaching more by grading less (or differently). CBE – Life Sciences Education 13: 159-166.

Stevens, D. D., and A. J. Levi (2013). Introduction to Rubrics: an assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing, Sterling, VA, USA.

Wolf, K., M. Connelly, and A. Komara (2008). A tale of two rubrics: improving teaching and learning across the content areas through assessment. Journal of Effective Teaching 8: 21-32.

Wormeli, R. (2006). Fair isn’t always equal: assessing and grading in the differentiated classroom. Stenhouse Publishers, Portland, ME, USA.

Zhao, K., J. Zhou, and P. Dawson (2021). Using student-instructor co-constructed rubrics in signature assessment for business students: benefits and challenges. Assessment in Education: Principles, Policy & Practice 21: DOI: https://doi.org/10.1080/0969594X.2021.1908225

This page was authored by Michele Larson and last updated September 15, 2022
