Over the past two weeks, I have been preparing for a faculty workshop on using assessment tools in Sakai. While drafting a section on discussion forums, I scoured the internet looking for an exemplary discussion board rubric to share with workshop attendees. Either my expectations are too high, my internet search skills are lacking, or such an exemplar doesn’t exist. It seemed like the perfect opportunity to create a rubric that reflected what I valued.
The first thing I noticed during my search was that most rubrics placed a disproportionate value on what I term "process requirements." Evaluation criteria often included timeliness of responses (i.e., when students were posting and replying), quantity or frequency of posts and responses (how many times students were posting), originality (how different a student's post was from classmates'), and mechanics (grammar and spelling). While these procedural requirements are important, I believe they shouldn't hold the same weight as the actual content of a post. And yet, rubric criteria often weighted mechanics equally with critical thinking. It seems to me that the central goal of an assessment is to demonstrate mastery of learning objectives more than mastery of secondary processes. In short, what you say is more important than how you say it.
I can already hear the arguments against this philosophy. However, I believe the key to avoiding disappointment is setting firm and clear expectations. Students need to be explicitly told what their posts should look like. Let students know that you expect them to contribute with a specific frequency, timeliness, or tone. If it becomes an issue, address the problem with individuals or with the class as a whole. My point is simply this: focus the graded assessment largely on the learning objective while coaching students on the process.
I have linked here to my generic online discussion rubric for you to use or improve. I would love to hear your feedback on this topic in the comment section below. Do you agree or disagree with this philosophy of grading online discussions? How would you improve this discussion forum rubric for future releases?
Originally posted 9/25/2013 in the PLU Instructional Technologies Blog