Help me understand this. Recently, I’ve come across several examples of a rubric being used in a way I don’t quite understand. The rubrics all had the standard four levels of proficiency for the criteria being assessed, and the levels moved in typical fashion with one being inadequate and four being exceptional. That’s all fairly standard practice as I understand it.
But here’s where I get confused.
The rubric then took each criterion score and multiplied it by 3. So a student who earned a “3” on any criterion would effectively receive a 9 out of 12 in that category. It seems the instructor used the multiplier so that, in the end, the entire assignment was worth more points. I’ve seen scores multiplied by four or even five as well.
I’m not sure why someone would do this.
If you stop to consider it, once you introduce a multiplier to a rubric, you essentially remove the opportunity for a student to earn points at meaningful intervals. If a multiplier of five is used, a single step down from exceptional to proficient costs the student 5 points, which is often half a letter grade. I’m not sure I understand why an assessment would be given if a student can’t earn each point along the way.
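To make the interval problem concrete, here’s a quick sketch (the numbers are illustrative, assuming a four-level rubric scored 1–4 with a multiplier of five):

```python
# Attainable scores on one criterion of a four-level rubric (levels 1-4)
# when each level score is multiplied by 5.
multiplier = 5
levels = [1, 2, 3, 4]
attainable = [level * multiplier for level in levels]
print(attainable)  # [5, 10, 15, 20]
# Out of a 20-point maximum, only these four values can ever be earned:
# a score of 16, 17, 18, or 19 is impossible. One step down from the
# top level costs a full 5 points.
```

The multiplier doesn’t add any new information about the student’s performance; it just stretches the same four judgments across a wider point range, leaving gaps no one can land in.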
I know there’s a bigger topic here regarding points and grades and all that, but for this post, I’m just trying to understand the practice of inflating (or weighting, depending on what you want to call it) an assessment.
The same applies to quizzes, tests, or any assignment that is multiplied to make it worth more. I often see teachers administer a test with 30 questions and then grade it out of 100. That means every question a student gets wrong costs about 3.3 points. And I’ll admit, I certainly did this when I was a teacher. Looking back, I’m just not sure why I did it.
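The same arithmetic applies to the test example (again, just an illustrative sketch):

```python
# A 30-question test graded out of 100 points: each question is worth
# 100/30 points, so every wrong answer costs about 3.33 points.
questions = 30
total_points = 100
cost_per_question = total_points / questions
print(round(cost_per_question, 2))  # 3.33
# Missing just 3 questions drops the score by 10 points, which on a
# typical 10-point grading scale is a full letter grade.
```

As with the rubric multiplier, scores can only land at multiples of 3.33; a student can earn a 90 or a 93.3, but never a 91 or 92.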
Anyone out there good enough at math to help me understand this? And can you explain why we want to do this to students?