5.4: Long-Range Planning

“Long-range planning that focuses on the organization as a whole is strategic planning…. Long-range is usually defined as a future period of about three to five years or longer. During strategic planning, managers are trying to decide in the present what must be done to ensure organizational success in the future” (Certo et al., 1990, p. 168).
SMETS candidates demonstrate formal efforts to address the future of this highly dynamic field, including the systematic review and implementation of current SMET developments and innovations.

An Evaluation of Moodle as an Online Classroom Management System


I’ve finished an evaluation of how our district uses Moodle and identified areas where we can improve. You can read the report below.

Over the next few weeks, I’ll be releasing some follow-ups to this evaluation. Addendums, if you will. I was originally planning to supply personal assessments of each participating online course in the initial report, using a rubric for hybrid course design. After some thought I scratched this idea, because much of the rubric falls outside the scope of the district’s objectives for using Moodle, which were included in the report from the outset.
Instead, I will be delivering a brief mini-report to each of the teachers who participated in the survey (there were 17), with suggestions and recommendations drawn from a combination of the rubric and the survey assessments, tailored specifically to their courses and instruction.
There are a few things I’ve learned that I’ll have to remember for the next time we do an evaluation like this.

  1. Don’t underestimate your turn-out. I was expecting 100, maybe 200 survey participants tops. What I did not expect was receiving 800 survey submissions that I had to comb through and analyze. There was some duplicate data that had to be cleaned up as well, since a few of you got a little “click-happy” when submitting your surveys. I also had to run some queries on the survey database to figure out what some of the courses reported in the student survey actually were, since apparently students don’t always call their courses by the names we have in our system (a rough sketch of that kind of cleanup appears after this list). Since this was a graduate school project, there was a clear deadline that I had to meet, and it was more than a little difficult to manage all that data in the amount of time I had. I should have anticipated that our Moodle-using teachers, being the awesome group they are, would actively encourage all their students to participate in this project. I will not make that mistake again.
  2. Keep qualitative and quantitative data in their rightful place. Although I provided graphs with ratings on each of the courses, I had to remind myself that these were not strict numerical rankings, since they were converted directly from Likert-scaled questions. The degrees between “Strongly Disagree” and “Strongly Agree” are not necessarily the same in everyone’s mind, so when discussing results, one has to look at qualitative differences and speak in those terms. For example, if the collaboration “ratings” for two courses are 3.3 and 3.5 respectively, one can’t necessarily say that people “agreed more” with Course 2 than Course 1. There may be tendencies that invite that assumption, but one shouldn’t rely on the numerical data alone, and brazenly declaring as much would be hasty. If the difference had been large, say 1.5 vs. 4.5, then there would likely be some room for stronger comparisons (see the second sketch after this list for one way to compare Likert responses without treating them as plain numbers).
  3. Always remember the objectives. When analyzing the data, it’s easy to get sidetracked by other interesting, but ultimately irrelevant, data. For example, a number of students and teachers complained about Moodle going down. This was a problem a while ago, and it caused some major grief. However, it was not relevant to any of the objectives for the evaluation, even though it was tempting to explain/comment/defend this area. I take it personally when people criticize my servers! (Just kidding.)
  4. If the questions aren’t right, the whole evaluation will falter. Even though I had the objectives in mind when I designed the survey, I still found it difficult to know which were the right questions to ask. I’ve been assured that this aspect gets easier the more you do it. In the end, there was some extraneous “fluff” that I simply did not use or report on, because it wasn’t relevant (e.g., “Do the online activities provide fewer/more/the same opportunities to learn the subject matter?”). My initial inclination was to include this in the Delivery section, but when I finally looked at the finished responses, I realized it didn’t really fit anywhere and wasn’t relevant to any of the objectives.
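
On the duplicate cleanup mentioned in item 1: here is a minimal sketch of what that sort of pass can look like in Python with pandas. The file name, column names, and alias mapping below are invented for illustration; they are not the actual survey export.

```python
import pandas as pd

# Hypothetical export and column names, used only for illustration.
responses = pd.read_csv("student_survey_export.csv")

# Drop exact duplicate submissions (the "click-happy" ones), keeping the first copy.
responses = responses.drop_duplicates(
    subset=["student_id", "reported_course", "submitted_at"], keep="first"
)

# Map the free-text course names students typed to the official names in our system.
# A mapping like this still has to be built by hand after querying the course database.
course_aliases = {
    "Mr. Smith's English": "English 9",
    "Chem": "Chemistry I",
}
responses["course"] = (
    responses["reported_course"].map(course_aliases)
    .fillna(responses["reported_course"])
)
```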

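On item 2: one way to back up that caution is to treat Likert responses as ordinal data and use a rank-based comparison rather than comparing means. The sketch below uses SciPy's Mann-Whitney U test on made-up response data; it is not how the report's graphs were produced, just an illustration.

```python
from scipy.stats import mannwhitneyu

# Made-up responses on a 1-5 Likert scale (1 = Strongly Disagree, 5 = Strongly Agree).
course_1 = [3, 3, 4, 2, 4, 3, 4, 3]  # mean = 3.25, roughly the "3.3" case
course_2 = [4, 3, 4, 3, 4, 3, 4, 3]  # mean = 3.50, roughly the "3.5" case

# Rank-based comparison: asks whether one course's ratings tend to be higher,
# without assuming the distance between scale points is the same for everyone.
stat, p_value = mannwhitneyu(course_1, course_2, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")

# A large p-value here means the 3.3 vs. 3.5 gap is not evidence that
# respondents "agreed more" with Course 2.
```
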
Right now, being the lowly web manager that I am — and I only use the term “web manager” because “webmaster” is so 1990s; I don’t actually “manage” anyone — I don’t get many opportunities to do projects like this. But I’m aspiring to do more as a future educational technologist. Evaluations are more than just big formal projects … they underscore every aspect of what we as educators do. Teachers would do well to perform evaluations of their own instructional practices. When we add new programs or processes in our district, evaluations should accompany them. And as I complained in my mini-evaluation of BrainBlast 2010 a few days ago, all too often the survey data we gather just aren’t properly analyzed and used. We have some of the brightest minds in the state of Utah working in our Technical Services Department, but often we just perform “mental evaluations” and make judgments about the direction things should go, when we would do well to formalize the process, gather sufficient feedback, and use it to make informed decisions.

Technology Use Planning Presentation


Below are some presentation slides I coauthored with two of my classmates. The topic is designing a Technology Use Plan.
