We solve problems every day. Sometimes the problem is simply figuring out where you left your car keys. Other times, it’s determining how best to meet the unique needs of your students. Often we solve these problems without really thinking through the process. We just mentally “connect the dots” and arrive at conclusions. The conclusions may not necessarily be the best ones, and we may not always explore all the options at hand. In most cases, for mundane tasks, a deeper analysis isn’t really necessary. When you can’t find your car keys in one location, you naturally move on to another until they turn up. But some situations warrant a more in-depth analysis of the problem.
Analyzing a problem is the intermediary step between recognizing the problem and arriving at a solution; it involves collecting data and forming decision-making strategies. Defining clear goals and objectives is important, too. At BrainBlast 2010 last summer, we had all the attendees complete a survey on the final day of the conference. The goal was to collect data to improve the quality of instruction at future conferences, and we gathered some valuable information to that end through a combination of Likert-scale questions and prompts for constructive criticism. With these data, we can make the decisions necessary to improve future BrainBlasts and avoid repeating past mistakes.*
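To give a concrete sense of what that survey analysis looks like, here’s a minimal sketch of tallying one Likert-scale question in Python. The response values below are hypothetical placeholders, not actual BrainBlast data; the real analysis would load responses from the survey tool’s export rather than a hard-coded list.

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to one Likert-scale question
# (1 = strongly disagree ... 5 = strongly agree).
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

def summarize(values):
    """Return the mean rating and a count of each rating."""
    return mean(values), Counter(values)

avg, counts = summarize(responses)
print(f"Mean rating: {avg:.2f}")
for rating in range(1, 6):
    print(f"{rating}: {'#' * counts.get(rating, 0)}")  # crude text histogram
```

Even a summary this simple makes it easy to spot which sessions rated well and which ones need rethinking before the next conference.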
An evaluator must assess aspects such as the needs of the program and of its users. It’s also important to be aware of the different data-collection tools at one’s disposal. My own forthcoming Moodle evaluation will be largely interview-based, with some backend data collection assessing general academic performance averages and usage of online course activities (see the sketch below). Interviews in particular are useful formative evaluation tools. It’s important not to neglect formative evaluation during a program, as it can reveal scenarios, ideas, and possible avenues for improvement before the program concludes.
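For the backend side of that Moodle evaluation, the plan amounts to aggregating activity counts from Moodle’s log. Here’s a minimal sketch, assuming the legacy `mdl_log` table (with `course`, `module`, and `userid` columns) that Moodle shipped with at the time; the database driver and exact schema are assumptions, so treat this as an outline rather than production code.

```python
import sqlite3  # stand-in; a real Moodle install would use MySQLdb or psycopg2

# Assumed schema: Moodle's legacy mdl_log table with columns
# course, module, action, userid, time. Verify against your install.
def activity_usage(conn, course_id):
    """Count logged hits and distinct users per activity module in a course."""
    cur = conn.execute(
        """
        SELECT module,
               COUNT(*)               AS hits,
               COUNT(DISTINCT userid) AS users
        FROM mdl_log
        WHERE course = ?
        GROUP BY module
        ORDER BY hits DESC
        """,
        (course_id,),
    )
    return cur.fetchall()
```

Paired with course grade averages, counts like these can suggest whether heavily used online activities line up with stronger academic performance, which is exactly the kind of question the interviews can then probe.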
Objectivity is important when analyzing a problem as well. After all, an evaluation that isn’t objective and free of bias is worthless. While an evaluator likely can’t cast their biases aside completely, especially when offering professional recommendations, it’s important that they make every effort to do so. Detailing the steps of the analysis and the efforts made to collect objective data goes a long way, too.
* An analysis of the BrainBlast 2010 survey results will be posted in a few days. (Update: survey results are now available.)