This last week was our week for reporting. It's the time when we collect all our grades, average them, put them on a report and add some comments. It's a necessary exercise, but I'm not sure it provides teachers with the most useful information when it comes to increasing student achievement.
A report card does not show a teacher where a student went wrong, or what a student needs extra help and remediation with. If it doesn't, that raises the question... what does?
In a regular classroom a teacher gets that information almost daily through things like homework checks or even "blank" looks or expressions. In the virtual world, collecting data from those sources is a bit more problematic.
For those of you who follow my blog, you know that we have been using Desire2Learn (D2L) and ExamView to create unique, randomized assessment items for summative evaluation. Because they are uploaded inside our Learning Management System (LMS), we have been able to see analytic data provided by the LMS for every question we have given this year. From this data we can now formulate a final exam review plan that focuses on the areas students have struggled with throughout the year. That would have been much harder with typical pencil-and-paper assessment items, where extracting that information is not nearly as easy. Now we can. Very useful!
This is just one of many examples of how we can, and will, use analytic data from summative assessment. The problem is that we have not been as successful as we should be at identifying the outcomes students are struggling with before the summative evaluation occurs. The reason, I think, is that we have not been using the tools available to us for formative assessment. Here is my plan for changing that.
To begin, ExamView and D2L need to be used for formative assessment in the same way we presently use them for summative assessment. To do that we need to create shorter assessment items and administer them much more frequently. For example, a simple multiple choice quiz of 4-5 questions at the end of each section will provide more information than one quiz at the end of a chapter/unit, and struggles can be identified early in the unit/chapter rather than just before the test is written. By creating a sufficiently large bank of questions from which to build question sets, and by randomizing the answer choices within each question, these short items will be valid enough to draw accurate conclusions from.
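For readers curious what "drawing from a bank and randomizing choices" amounts to, here is a minimal sketch in Python. The bank structure and function names are purely illustrative assumptions for this post; they are not how ExamView or D2L actually store questions.

```python
import random

# Hypothetical question bank: each entry has a prompt, a list of answer
# choices, and the index of the correct choice. This data model is an
# assumption for illustration only.
BANK = [
    {"prompt": f"Question {i}", "choices": ["A", "B", "C", "D"], "answer": 0}
    for i in range(1, 21)  # a 20-question bank
]

def build_quiz(bank, n_questions=5, seed=None):
    """Draw n_questions at random from the bank and shuffle each
    question's answer choices, so no two students are likely to
    see the same quiz in the same order."""
    rng = random.Random(seed)
    quiz = []
    for q in rng.sample(bank, n_questions):          # random question subset
        choices = q["choices"][:]
        rng.shuffle(choices)                         # randomize choice order
        correct = q["choices"][q["answer"]]
        quiz.append({
            "prompt": q["prompt"],
            "choices": choices,
            "answer": choices.index(correct),        # where the key landed
        })
    return quiz

# Each call (each student) gets a different 5-question quiz.
quiz = build_quiz(BANK, n_questions=5)
```

With a large enough bank, the chance of two students receiving identical quizzes becomes small, which is the whole point: copied answers stop being useful, so the results reflect what each student actually knows.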
Furthermore, I would advocate very short constructed response assessments (1-2 questions) given throughout the unit/chapter for the higher level questions. Again, these would be created from a bank and randomized to increase validity. Simply put, no accurate conclusions can be drawn from work that can be easily copied or plagiarized.
These formative assessment items can be graded (most of them automatically by the LMS). Students can see how well they are performing throughout a unit/chapter, and the results can be added to the gradebook automatically. That makes formative assessment worth something, increasing the likelihood of completion. And that is what we really want students to do... practice... so we can help them through the learning process.