I’m getting close to wrapping up my reading of a rather interesting and insightful book: The ABCs of Evaluation (Boulmetis & Dutwin, 2005). It’s been an eye-opener for me, and has caused me to rethink how a lot of our professional development programs are evaluated.
I’ve been writing a report about the survey results we collected last month at BrainBlast, our district’s annual technology conference. It’s a fantastic event that we put on every summer. Up to 300 teachers and administrators attend the conference, participate in hands-on workshops, and win cool prizes. And each year we try to get good feedback on how the conference went by encouraging everyone to take a survey.
I started my report on BrainBlast 2010’s survey back in August, without realizing that what I was writing was an evaluation report. I’ve since realized I made quite a few mistakes in my methodology, and I’ll probably need to start again from square one. For one, my evaluation was based largely on data that was improperly quantified. The survey collected ordinal data: we asked each participant to rate their courses as Poor, Fair, Good, or Outstanding. But then I converted these ratings to numerical quantities — 1 for Poor, 2 for Fair, 3 for Good, 4 for Outstanding — and treated them as if they were interval data, even though the distances between levels are not necessarily equal. A lot of my report was based on this faulty assumption, and I made the same mistake a couple of years ago in my comments about the BrainBlast 2008 survey. As a result, I’ll need to reassess the data we collected this year.
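To make the distinction concrete, here’s a minimal sketch of how ordinal ratings like ours could be summarized without assuming equal intervals between levels. The sample ratings are hypothetical, not actual BrainBlast data; the point is that frequencies and the median respect the ordering of the categories, whereas averaging the numeric codes quietly assumes the Poor-to-Fair gap equals the Good-to-Outstanding gap.

```python
from collections import Counter
from statistics import median_low

# The ordinal scale, in order from worst to best.
LEVELS = ["Poor", "Fair", "Good", "Outstanding"]

# Hypothetical sample of survey responses (not real conference data).
ratings = ["Good", "Outstanding", "Fair", "Good", "Good", "Poor", "Outstanding"]

# Frequency counts preserve the ordinal information without
# assuming anything about the distances between levels.
counts = Counter(ratings)
for level in LEVELS:
    print(f"{level}: {counts[level]}")

# The median is a valid summary statistic for ordinal data
# (unlike the mean, which requires equal intervals to be meaningful).
ranks = sorted(LEVELS.index(r) for r in ratings)
print("Median rating:", LEVELS[median_low(ranks)])
```

Reporting “the median rating was Good, with 3 of 7 respondents at Good and 2 at Outstanding” stays true to what an ordinal scale can actually support.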
In general, the survey we administered wasn’t really comprehensive or designed with a full-scale evaluation in mind, but I’ll do the best I can with it. Boulmetis & Dutwin (2005) outline a good format for writing evaluation reports, consisting of sections for a summary, evaluation purpose, program description, background, evaluation design, results, interpretation and discussion of the results, and recommendations. I think this is a good model to follow for any evaluation. At the very least it will be good practice for me as I hone my evaluation skills, and next year I’ll make sure I play a more active role in how we evaluate BrainBlast. Putting on BrainBlast is a significant financial investment for the district, albeit a very worthwhile one, so it’s important that we make the most of this valuable form of professional development.
References
Boulmetis, J., & Dutwin, P. (2005). The ABCs of evaluation. San Francisco, CA: John Wiley & Sons.