Recap: Rubrics Brown Bag

By Maura MacPhee

Please see the reference and notes (below) from our April 14 brown bag session. In this brown bag, we began the discussion of rubrics: should we be working together as faculty to create consistent, well-designed rubrics to guide students in their courses and across the curriculum? There was particular interest in clinical evaluation tools and rubrics. We will be holding a May workshop to begin this work: identifying core practice competencies and measurable indicators for rubrics or checklists, beginning with our Term 1 students. Stay tuned for the May workshop announcement…

Rubrics

Shipman, D., Roa, M., Hooten, J., & Wang, Z.J. (2012). Using the analytic rubric as an evaluation tool in nursing education: the positive and the negative. Nurse Education Today, 32, 246-249.

“The desire to sufficiently evaluate students’ achievement has created the passion for utilizing rubrics.” (p. 246)

“For the past 30 years there has been overwhelming evidence that new graduate nurses are not prepared to enter the workforce… An effective evaluation process begins in education with reliable tools ensuring nursing students are competent and safe to enter into practice.” (p. 246)

“A rubric is an assessment tool that uses clearly defined evaluation criteria and proficiency levels to gauge student achievement of those criteria.” (Montgomery, 2000, p. 235, as cited in Shipman et al., 2012, p. 248)

The History:

  • Since the 1960s
  • Secondary and higher education
  • One study found that 94% of 300 student papers were not graded consistently
  • The ‘first’ 5-point rubric:
    • Relevance of ideas
    • Organization
    • Style
    • Mechanics/grammar
    • Wording
  • The analytic rubric:
    • Concise performance criteria
    • Rating scale
    • Descriptions of expected performance at each rating level

 

  • Well-designed analytic rubrics:
    • Minimize inconsistency in grading
    • Delineate instructor expectations for grading criteria
    • Increase grading efficiency and the quality of feedback
    • Identify specific areas where students are having difficulty
    • Equalize “understanding” for all students, no matter their background
    • Produce more concrete, reliable grading guidelines for faculty
    • Promote student self-assessment
    • Provide an open, transparent assessment process
    • Directly link assessment to expected learning outcomes
    • Prevent career-altering decisions made without concrete, documented evidence
    • Capture ‘hard-to-assess’ concepts such as critical thinking, communication, and collaboration

 

  • Rubrics have been used for:
    • Clinical evaluation
    • Simulation/lab
    • Oral and written assignments
    • Online discussion forums and blogs*

*One 2015 study found that students have different learning outcomes from peer-shared postings vs. private postings: students take more intellectual risks and make greater gains with online peer-shared postings, and show more personal insight and introspection with private postings.

  • There are pitfalls to avoid:
    • When graders know the students, subjectivity/bias may result
    • Faculty often hold mental conceptions of different types of students, and these mental representations can bias their use of rubrics
    • Rubrics rarely include categories for student effort, yet graders like to reward conscientious students for effort
    • Poorly designed rubrics are typically rigid and inflexible, with overly narrow criteria
    • Criteria and levels are at the instructor’s discretion: instructors may differ in the criteria they use*
    • Instructors may have inconsistent ‘rules’ for transforming rubric levels into grades
    • Faculty often fail to have regular conversations about grading fairness, reliability, and validity**

*Faculty need to agree on consistent criteria for rubrics used in an undergraduate program. “As more instructors use the rubric tool, an increase in inter-rater reliability may be evident.” (p. 248)

**Reliability = everybody grades the same way; validity = grades truly reflect student learning/competence.

NOTES:

  • There are many great rubrics available online to use/adapt. See examples (handout).
  • Lasater (2007, 2011) is known for her clinical judgment rubric, used in practice and with simulation.

QUESTION: Should we create rubrics for PeP competencies to standardize clinical performance evaluation in lab and clinicals?
