5.2: Criterion-Referenced Measurement

“Criterion-referenced measurement involves techniques for determining learner mastery of pre-specified content” (Seels & Richey, 1994, p. 56). SMETS candidates utilize criterion-referenced performance indicators in the assessment of instruction and SMETS projects.

An Evaluation of Moodle as an Online Classroom Management System


I’ve finished an evaluation of how our district uses Moodle and of the areas where we can improve. You can read the report below.

Over the next few weeks, I’ll be releasing some follow-ups to this evaluation. Addendums, if you will. I was originally planning to supply personal assessments of each participating online course in the initial report, using a rubric for hybrid course design. After some thought, I scrapped this idea because much of the rubric falls outside the scope of the district’s objectives for using Moodle, which were included in the report from the outset.
Instead, I will be delivering a brief mini-report to each of the teachers who participated in the survey (there were 17), with suggestions and recommendations drawn from a combination of the rubric and the survey assessments, tailored specifically to their courses and instruction.
There are a few things I’ve learned that I’ll have to remember for the next time we do an evaluation like this.

  1. Don’t underestimate your turnout. I was expecting 100, maybe 200 survey participants tops. What I did not expect was receiving 800 survey submissions that I had to comb through and analyze. There were some duplicate data that had to be cleaned up as well, since a few of you got a little “click-happy” when submitting your surveys. I also had to run some queries on the survey database to figure out what some of the courses reported in the student survey actually were, since apparently students don’t always call their courses by the names we have in our system (there’s a rough sketch of that cleanup process after this list). Since this was a graduate school project, there was a clear deadline I had to meet, and it was more than a little difficult to manage all that data in the time I had. I should have anticipated that our Moodle-using teachers, being the awesome group they are, would actively encourage all their students to participate in this project. I will not make that mistake again.
  2. Keep qualitative and quantitative data in their rightful place. Although I provided graphs with ratings on each of the courses, I had to remind myself that these were not strict numerical rankings, since they were converted directly from Likert-scale questions. The degrees between “Strongly Disagree” and “Strongly Agree” are not necessarily the same in everyone’s mind, so when discussing results, one has to look at qualitative differences and speak in those terms. For example, if the collaboration “ratings” for two courses are 3.3 and 3.5 respectively, one can’t necessarily say that people “agreed more” with Course 2 than with Course 1 (see the second sketch after this list). There may be tendencies that invite that assumption, but one shouldn’t rely on the numerical data alone; declaring such a thing outright would be hasty. If the difference had been larger, say 1.5 vs. 4.5, there would likely be room for stronger comparisons.
  3. Always remember the objectives. When analyzing the data, it’s easy to get sidetracked by data that are interesting but ultimately irrelevant. For example, a number of students and teachers complained about Moodle going down. This was a problem a while ago, and it caused some major grief. However, it was not relevant to any of the objectives for the evaluation, even though it was tempting to explain/comment/defend this area. I take it personally when people criticize my servers! (Just kidding.)
  4. If the questions aren’t right, the whole evaluation will falter. Even though I had the objectives in mind when I designed the survey, I still found it difficult to know which were the right questions to ask. I’ve been assured that this aspect gets easier the more you do it. In the end, there was some extraneous “fluff” that I simply did not use or report on because it wasn’t relevant (e.g., “Do the online activities provide fewer/more/the same opportunities to learn the subject matter?”). My initial inclination was to include this question in the Delivery section, but when I finally looked at the finished responses, I realized it didn’t really fit anywhere and wasn’t relevant to any of the objectives.
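
Here is a minimal sketch of the cleanup described in item 1, assuming the survey submissions can be exported to CSV. The file names and column names (student_id, course_name, and the course-name lookup table) are illustrative assumptions for this sketch, not the actual export format.

```python
import pandas as pd

# Hypothetical export of the raw student survey submissions.
# Column names are assumptions; the real survey export may differ.
raw = pd.read_csv("student_survey_export.csv")

# 1. Drop duplicate submissions from "click-happy" respondents:
#    keep the first response per (student, course) pair.
deduped = raw.drop_duplicates(subset=["student_id", "course_name"], keep="first")

# 2. Map the free-text course names students typed in to the official
#    titles in the district system, using a hand-built lookup table
#    (columns: reported_name, official_name).
name_map = pd.read_csv("course_name_lookup.csv")
cleaned = deduped.merge(name_map, left_on="course_name",
                        right_on="reported_name", how="left")

# Anything without a match still needs a manual query against the course database.
unmatched = cleaned[cleaned["official_name"].isna()]
print(f"{len(unmatched)} submissions still need manual course matching")
```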

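And here is the kind of comparison item 2 warns about, using made-up Likert responses (1 = Strongly Disagree through 5 = Strongly Agree). Two courses can land at means of roughly 3.3 and 3.5 with very different response patterns, which is why a rank-based test on the ordinal data is a more defensible check than comparing means directly.

```python
from scipy.stats import mannwhitneyu

# Hypothetical Likert responses for the collaboration question in two
# courses. These values are invented purely to illustrate the point.
course_1 = [2, 3, 3, 4, 4, 3, 4, 3, 4, 3]   # mean = 3.3, clustered near "Agree"
course_2 = [1, 5, 5, 2, 5, 1, 5, 4, 2, 5]   # mean = 3.5, but polarized

mean_1 = sum(course_1) / len(course_1)
mean_2 = sum(course_2) / len(course_2)
print(f"Course 1 mean: {mean_1:.1f}, Course 2 mean: {mean_2:.1f}")

# Treating the data as ordinal, a rank-based test is more defensible
# than declaring that Course 2 respondents "agreed more."
stat, p = mannwhitneyu(course_1, course_2, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```
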
Right now, being the lowly web manager that I am — and I only use the term “web manager” because “webmaster” is so 1990s; I don’t actually “manage” anyone — I don’t get many opportunities to do projects like this. But I’m aspiring to do more as a future educational technologist. Evaluations are more than just big formal projects … they underscore every aspect of what we as educators do. Teachers would do well to perform evaluations of their own instructional practices. When we add new programs or processes in our district, evaluations should accompany them. And as I complained in my mini-evaluation of BrainBlast 2010 a few days ago, all too often the survey data we gather just aren’t properly analyzed and used. We have some of the brightest minds in the state of Utah working in our Technical Services Department, but often we just perform “mental evaluations” and make judgments of the direction things should go, when we would do well to formalize the process, gather sufficient feedback, and use it to make informed decisions.

Digital Parent: Facing Facebook


A few months ago, I co-founded a project called Digital Parent with a few other educators across North America. The goal of Digital Parent is to deliver technology workshops for parents. The basic idea is to help parents better understand technology and to provide training that will help them see the benefits of educational technology, as well as technology tools relevant to their personal lives. The project is still in its formative stages, and although we’ve been on hiatus for a while, I’m hoping that with this new instructional project I’ve designed we can get things moving again.
My original role in Digital Parent was simply to provide technical support. However, since I’ve been learning quite a bit about instructional design, I plan on taking the initiative and helping the team adopt organized models for developing and assessing future workshops.
Below is the instructional design document for “Facing Facebook,” a workshop to help parents better understand how children use Facebook, and how to talk to their kids about the service. The document is a little long (it was written for a graduate school class), and I feel it could use some trimming so it only addresses the basic needs of any Digital Parent instructor who downloads and uses it.

I’ve realized that Digital Parent will need both formative and summative evaluations built into the process, which will be a daunting yet important task, since the workshops will be delivered as downloadable “modules” and the instructional designers will likely never see the instruction put into practice. We can still hold our own one-to-one and small-group evaluations, but any field trials will likely consist of an actual instructor presenting the content to an actual group of interested parents, without the presence or knowledge of the instructional design team. Every workshop module should have a summative assessment for all participants (teachers and students), accessible on the Digital Parent web site, that is automatically reported back to the Digital Parent team. This will allow us to keep a careful watch over the effectiveness of our instructional design projects and to revise and improve our work.
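
To make the automatic reporting a little more concrete, here is a rough sketch of how a workshop module might send one participant’s summative assessment back to the team. The endpoint URL, payload shape, and function name are purely hypothetical assumptions; nothing like this exists on the Digital Parent site yet.

```python
import json
from urllib.request import Request, urlopen

def report_results(module_id: str, responses: dict, endpoint: str) -> int:
    """Send one participant's summative assessment responses back to the
    Digital Parent team. The endpoint and payload are assumptions for this
    sketch, not an existing API."""
    payload = json.dumps({"module": module_id, "responses": responses}).encode("utf-8")
    request = Request(endpoint, data=payload,
                      headers={"Content-Type": "application/json"})
    with urlopen(request) as reply:
        return reply.status  # e.g. 200 if the submission was recorded

# Example: a parent finishing the "Facing Facebook" module.
# report_results("facing-facebook", {"q1": 4, "q2": 5},
#                "https://example.org/digitalparent/api/assessments")
```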
