Evaluating Online Professional Development
I’ve been musing for a while over how to revise the online professional development offerings in our district. We’re getting close to the point where we can start implementing changes that ensure meaningful learning, so I’ve been studying different aspects of evaluation, namely the basics of how to conduct one and collect data.
A goal-based evaluation would be ideal for our online inservice. This type of evaluation measures efficiency (the timeliness with which the learning is conducted), effectiveness (whether the participants actually learned the material from the instruction), and impact (how their behavior changes long-term). There is value in both qualitative and quantitative measurement tools, and the data we gather should include both.
It’s also important to understand the stakeholders involved. I would like to tie our online inservice to curricular standards, particularly if any online learning is extended to students and not just employees. We already allow our teachers to earn state CACTUS credit through our inservice portal, but without proper assessments the credit given does not demonstrate actual learning.
It’s strange that we have overlooked evaluation in so much of our online professional development. It seems obvious now. We should set clear goals and objectives, outlining what we wish to accomplish, and evaluation should occur every step of the way through both formative and summative assessments. Self-directed courses should be kept to a minimum, since it can be more difficult to collect formative assessment data in that setting. In instructor-led courses, the instructor can observe how learners interact with the material and take notes.

I tend to favor project-oriented learning, so I don’t necessarily prefer quiz-based summative assessments. Instead, participants could build final projects that demonstrate all the material learned in the online class, assessed through a rubric. Another assessment, perhaps conducted through observation alone, should also provide a means to determine the impact of the training one, three, or six months down the road. Has the material been applied to the participant’s instructional practices? Has their behavior changed? For example, if they participated in introductory blog training, are they now actively using their blog for instructional purposes and parent outreach?
What I’m still unclear about is exactly how to construct these assessments. I struggle with deciding how to frame the questions in an evaluation and knowing what to ask. I would like to focus on this area and get some practice writing questions that lead to clear process descriptions and goal statements.