This challenge explores the potential of generative AI tools—specifically Microsoft Copilot—to assist educators in developing clear, structured, and objective-aligned rubrics for student assessments.
By using AI to draft initial rubric structures, educators can save time while still ensuring clarity in expectations and fairness in evaluation. This activity emphasizes the partnership between AI support and human judgment, encouraging careful review and customization of AI-generated content to meet pedagogical goals.
Challenge
Use Microsoft Copilot to generate a rubric for assessing student work. This activity showcases how AI can support structured assessment design while reinforcing the critical role of educator oversight to ensure the rubric is effective, fair, and meaningful for students.
Instructions
- Define Your Assessment Context
  - Choose a specific type of student work (e.g., essays, lab reports, presentations, blog posts, code submissions).
  - Identify the academic level (e.g., high school, first-year university, graduate level).
- Provide a Clear Prompt to Copilot
  - Specify the AI’s role (e.g., “Act as a university professor specializing in science communication.”).
  - Include details on the rubric’s structure, such as:
    - The number of criteria (e.g., clarity, organization, accuracy, creativity).
    - The number of performance levels (e.g., “Excellent,” “Good,” “Fair,” “Needs Improvement”).
    - Descriptions of what constitutes each level of performance.
- Generate and Review the Rubric
  - Ask Copilot to generate a rubric based on your prompt.
  - Review the output, assessing its clarity, fairness, and alignment with your learning objectives.
  - Make any necessary refinements to improve specificity and usability.
Reflect and Share
What does this challenge reveal about the role of AI in teaching and learning? Try it out and share what you discovered in the comment box below, whether it’s your final product, a reflection, or a surprising insight.
Example
Rubric Prompt
“Act as a university professor specializing in science communication. Create a rubric to assess third-year students’ blog posts about a citizen science project. The rubric should include:
- Specific criteria such as clarity, accuracy, engagement, and critical analysis.
- A scale with four performance levels (‘Excellent,’ ‘Good,’ ‘Fair,’ and ‘Needs Improvement’).
- Descriptions of what constitutes each level of performance for every criterion.”
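If you would rather experiment with this prompt in a script than through Copilot’s chat interface, the minimal sketch below sends the same request through the OpenAI Python SDK. This is an illustrative assumption rather than Copilot’s own API: the model name, the API key handling, and the instruction to return a Markdown table are placeholders you would adapt to your environment, and the draft it returns still needs the same educator review described above.

```python
# A minimal sketch, assuming the OpenAI Python SDK as a stand-in for Copilot's chat interface.
# The model name and the OPENAI_API_KEY environment variable are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rubric_prompt = (
    "Act as a university professor specializing in science communication. "
    "Create a rubric to assess third-year students' blog posts about a citizen science project. "
    "The rubric should include: specific criteria such as clarity, accuracy, engagement, and "
    "critical analysis; a scale with four performance levels ('Excellent', 'Good', 'Fair', "
    "'Needs Improvement'); and descriptions of what constitutes each level of performance "
    "for every criterion. Format the rubric as a Markdown table."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": rubric_prompt}],
)

# Print the AI-drafted rubric; review and refine it before classroom use.
print(response.choices[0].message.content)
```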