# Funny Math

Help me understand this. Recently, I’ve come across several examples of a rubric being used in a way I don’t quite understand. The rubrics all had the standard four levels of proficiency for the criteria being assessed, and the levels moved in typical fashion with one being inadequate and four being exceptional. That’s all fairly standard practice as I understand it.

But here’s where I get confused.

The rubric then took each criterion score and multiplied it by 3. So a student who earned a "3" on a criterion would effectively receive a 9 out of 12 in that category. It seems the instructor used the multiplier so that, in the end, the entire assignment was worth more points. I've seen scores multiplied by four or even five as well.

I’m not sure why someone would do this.

If you stop to consider it, once you introduce a multiplier to a rubric, you essentially remove the opportunity for a student to earn points at intermediate values. If a multiplier of five is used, a single step down from exceptional to proficient costs the student 5 points, which is often half a letter grade. I'm not sure I understand why an assessment would be given if a student can't earn each point along the way.

I know there’s a bigger topic here regarding points and grades and all that, but for this post, I’m just trying to understand the practice of inflating, or weighting, depending on how you want to name it, an assessment.

The same applies here to quizzes or tests or even assignments that are multiplied to make them worth more. I often see teachers administer a test that has 30 questions, then grade the test out of 100. That means every question a student gets wrong costs them about 3.3 points. And I'll admit, I certainly did this when I was a teacher. Looking back, I'm just not sure why I did it.
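A quick sketch to make that arithmetic concrete (the 30-question test and 100-point scale are the figures from the paragraph above; the student missing 4 questions is just an invented example):

```python
# Scaling a 30-question test to a 100-point grade.
questions = 30
points_per_question = 100 / questions  # cost of each wrong answer

# A hypothetical student who misses 4 questions:
wrong = 4
score = 100 - wrong * points_per_question

print(round(points_per_question, 2))  # 3.33
print(round(score, 1))                # 86.7
```

So a single wrong answer swings the grade by a third of a letter-grade step, even though the student only "lost" one point on the raw test.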

Anyone out there good enough at math to help me understand this? And can you explain why we want to do this to students?

## 6 Comments

## Mike Anderson

October 7, 2009

Maybe the teacher was trying to make the assignment worth a specific percentage… for example, if they wanted the assignment to be worth 30 percent and there were 3 different criteria, they might multiply the levels by 2.5 so that the maximum score equalled 30.

3 criteria x 4 levels = 12 (perfect on all 3 criteria)

12 * 2.5 multiplier = 30 points (30% of the grade)

The problem with this is that it assumes that the criteria are all of equal value.
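Mike's point can be sketched in a few lines. The criterion names and the per-criterion weights below are made up for illustration; the flat 2.5 multiplier is the one from his example:

```python
# A student's earned levels on a 4-level rubric (hypothetical criteria).
levels_earned = {"Ideas": 4, "Organization": 3, "Conventions": 2}

# Flat multiplier: every criterion counts the same.
flat_total = sum(score * 2.5 for score in levels_earned.values())

# Per-criterion weights: "Ideas" counts more than "Conventions".
# Weights chosen so the maximum is still 30 (4*4 + 4*2.5 + 4*1 = 30).
weights = {"Ideas": 4.0, "Organization": 2.5, "Conventions": 1.0}
weighted_total = sum(levels_earned[c] * weights[c] for c in levels_earned)

print(flat_total)      # 22.5 out of 30
print(weighted_total)  # 25.5 out of 30
```

Same rubric levels, same 30-point ceiling, different grade, which is exactly the hidden assumption a single flat multiplier makes.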

## Elizabeth Lyon

October 7, 2009

You pose a great question, Ben, one which underscores the fact that rubrics and point-based grades are two very different animals. Each rubric score describes a level of proficiency. Its job is to make each of those levels clear, distinct, and understandable.

No one can convince me that the “100 point” scores are as accurate. Could anyone come up with 100 different levels of quality?! Does a student who gets an 88 really understand something significantly more than one who gets an 87?

So glad that you’ve started this conversation!

## Hank Thiele

October 8, 2009

This is done either to make the assignment fit into a percentage grading system or to "weight" the assignment enough that it reflects the amount of work involved. There is a big difference between the ability to understand a concept and a grade, which is why grades often don't reflect knowledge (they usually correlate with attendance).

## LeeAnn

October 8, 2009

I think teachers don't spend (or don't have) the time to really think about translating a rubric score to a grade. I discovered that some of my colleagues were adding up the score points and grading out of the total, so that if a 4-point rubric had 5 criteria, they would make it worth 20 points. They didn't realize that if a student scored a 2 on each criterion, the descriptor might read "developing," but the number grade would be 10 out of 20: 50%, an F!
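LeeAnn's mismatch is easy to verify; this is just her 4-point, 5-criterion example worked out:

```python
# A 4-point rubric with 5 criteria, summed and graded out of the total.
criteria = 5
max_level = 4
level_earned = 2  # "developing" on every criterion

points = criteria * level_earned   # 10
possible = criteria * max_level    # 20
percent = 100 * points / possible

print(percent)  # 50.0 -- "developing" on the rubric, an F on the grade scale
```

A score meant to describe a student partway along a proficiency scale lands squarely in failing territory once it's treated as a percentage.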

In my perfect world, grades wouldn’t exist. Teachers would use descriptive feedback to help students get to mastery on authentic, valuable assignments.

## John McCullough

October 8, 2009

Great questions, Ben. I can think of two reasons that I use a multiplier.

One reason I multiply is to give extra weight to specific criteria. So I might weight "Content" differently than "Professionalism" for a specific assignment.

The other reason is for the students. I've tried this with rubrics that have 5 levels and 3 criteria, i.e. 15 points per assignment. I've said, "Class, for the entire semester, there are 6 major projects, which means there are 90 points possible… for the entire semester." Students freak. If I multiply, they get happy point totals like 300 or 600 or 900 or whatever makes them feel better. They think, incorrectly, that 90 is somehow different from 900.

You are correct that multipliers make scores jump in fixed intervals. However, in my mind (as old and tired as it is), 3 out of 4 is exactly the same as 75 out of 100. The interval is still exactly the same in terms of percentage.

Finally, to confuse the issue completely, sometimes I give fractional points and fractional multipliers. 🙂

My 2¢.

~ John

## kris jacobson

October 8, 2009

I think the above comments that refer to the psychology of the larger numbers (900 makes it seem like I have more opportunities to "get" points than 90, though this is obviously flawed logic) are spot on.

I suspect it just comes down to a combination of that psychology of the larger number being "better" and the general use of a decimal system in our culture. Students and teachers often seem to like scores that add up to 100. They probably can't explain why; it just seems "right."

From my own perspective (as a non-classroom teacher, specifically a librarian), I see nothing inherently wrong with using multipliers; the example the first poster gave with the 2.5 multiplier makes sense to me. At the end of the day, what's the harm, unless you're doing something goofy like the example where the rubric translates badly into a percentage grade?

But maybe I’m missing something….