Friday, August 3, 2012

Should we apply the d-factor differently?

It seems like forever since my last blog post. Between getting married, visiting Mickey Mouse, facilitating online professional development and online content development... well it's been a busy summer. And down time on a plane is as good a time as any to revisit my blog. So here goes...

In our province we have standardized tests at the high school level called Public Examinations. These exams are worth 50% of a student's overall grade in the courses that administer a Public Exam. In these courses the overall provincial average is calculated, as is the overall school average. If there is a large discrepancy between the two, then what is called a d-factor is applied to bring the grades that the school submitted more in line with what its students achieved on the Public Exam provincially. It is designed to bring fairness in situations where evaluation at a school may have been too rigid or too flexible... hard or soft marking... covering outcomes... or not covering them. The principle behind the d-factor is sound and I agree with its implementation. However, I feel it may need to be applied differently... or more individually. Let me see if I can clarify what I mean.

In a larger school you have the potential for many teachers to be teaching the same Public Exam course. We all know, as professional educators, that even though the evaluation instruments may be the same or similar within a particular school, there is, and always will be, some subjectivity and discrepancy between teachers and how they evaluate.

In addition, teachers will apply their professional judgement to individual circumstances, sometimes altering the evaluation scheme or plan from one student to another.

Both situations above are examples to illustrate that there are many things that hide beneath the surface of the overall average result that a school submits for comparison to the Public Exam overall average result.

There is the potential that the overall average result a teacher sends in for his or her students is close to the overall average Public Exam result, yet those grades may still be d-factored (marked up or down) because of the discrepancy in the grades submitted by the other teachers in that school.

Looking at the case on a more individual basis, a student may have had a school grade similar or close to the result he or she scored on the Public Exam, yet he or she may be d-factored (marked up or down) based on the performance of the rest of the students in his or her class (if the school is small) or the school population (if the school is large).

There are many other permutations and combinations of scenarios that I could list here... but I think these are enough to make my point... grades may have the d-factor applied when it is not needed.

So... what is my idea on changing the d-factor? Simple... calculate the d-factor on an individual student basis and apply it as such.

Every year after the Public Examinations are all scored, the Department of Education compares the overall average grade submitted by all teachers in the province against the overall average Public Exam mark... a statistically valid comparison, and obviously no d-factor is applied within that difference. Now, instead of calculating a d-factor on a school basis, calculate it for every student individually. If a teacher's evaluation has been too rigid or hard, the individualized d-factor will still bring the class up overall, as intended. The same goes for evaluation that may have been lenient or soft: the d-factor will bring the results down and more in line with where they should be.
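The per-student idea above can be sketched in code. To be clear, the Department's actual d-factor formula isn't given in this post, so everything here is an assumption: a simple additive adjustment, a hypothetical TOLERANCE threshold, and the choice to move an out-of-line school grade halfway toward the exam score. It is only meant to show how the two approaches treat the same class differently.

```python
# Hypothetical tolerance (in percentage points) before any adjustment
# is applied to an individual student. Not an official value.
TOLERANCE = 5.0

def school_d_factor(school_grades, exam_scores):
    """Assumed current approach: one adjustment for the whole school,
    equal to the gap between the two class averages, applied to everyone."""
    diff = (sum(exam_scores) / len(exam_scores)
            - sum(school_grades) / len(school_grades))
    return [grade + diff for grade in school_grades]

def individual_d_factor(school_grades, exam_scores):
    """Proposed approach: adjust each student only when his or her own
    school grade is out of line with his or her own exam score.
    Here the grade is moved halfway toward the exam score (an assumption)."""
    adjusted = []
    for grade, score in zip(school_grades, exam_scores):
        gap = score - grade
        if abs(gap) > TOLERANCE:
            adjusted.append(grade + gap / 2)
        else:
            adjusted.append(grade)  # grade already matches the exam result
    return adjusted
```

With school grades of 80, 70, 60 and exam scores of 78, 55, 62, the school-wide version lowers everyone by five points, even the two students whose exam results matched their school grades; the individual version only adjusts the middle student, whose school grade and exam score were fifteen points apart.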

Calculating and applying the d-factor on an individual basis will avoid d-factoring the students whose school results are comparable to what they scored on their Public Exam. In addition, it provides protection or immunity to a student whose performance on the Public Examination is comparable to what he or she did throughout the year, even if the other students in the class were to drop significantly.

So... how hard is this to implement? I'm not sure but it seems like a good task for a good programmer and a fast processor...

1 comment:

Anonymous said...

I strongly agree with you Richard. It's unfair to penalize stronger students based on the performance of others. The technology exists to efficiently carry out these calculations on a per-student basis -- why aren't we taking advantage of it?