I am often asked by training managers to evaluate their eLearning initiatives and, specifically, their types of eLearning courseware. At first glance it may seem like a simple task, but once you begin to scratch the surface, you find there are many nuances that go into the evaluation. This leads me to begin by asking a few questions: What is the purpose of the evaluation? Who are the key stakeholders? How extensive should the evaluation be? Occasionally I am confronted with the “rapid evaluation” type, but in other cases I must develop a clear and concise set of criteria on which to base my findings.

First off, eLearning courseware and eLearning initiatives cannot always be evaluated in isolation from one another. In fact, it is sometimes difficult to separate the quality of the overall eLearning program from individual course efforts. Some organizations have purposely created varying levels of course quality to accommodate stakeholders who want rapid delivery versus those who want more immersive interactivity within their courses. Here I must ask: for what purpose is the eLearning developed?

Finding a good balance between creating an eLearning course rapidly and creating one with quality in mind is an ongoing tension that many of us in the eLearning community struggle with. And it raises the question of what “quality” even means. To determine this, I believe we must first define three things: the type of eCourse we are examining, the quality of the eCourse on its own, and finally, how the eCourse fits in with the entire eLearning initiative.

Defining the type of eCourse to be evaluated
When I talk about the type of eCourse to be evaluated, I am referring to three general categories: self-paced tutorials similar to those offered by SkillSoft, higher-education courses, and experiential, immersive learning simulations. For the sake of this discussion, I am going to look at what makes a “quality” self-paced tutorial.

Defining eCourse quality
When developing the basis for my evaluations, I derive some of my factors from the ASTD Certification Institute’s E-Learning Courseware Certification Standards, which were released in the early part of the century. I do not believe the ASTD quality certification program still exists; I assume that interest in evaluating “quality” succumbed to the “Rapid Development Craze” of the last decade. What ASTD did was break down its evaluation process into clusters of standards: Interface Standards, Compatibility Standards, Production Quality Standards, and Instructional Design Standards.

  • Interface Standards
    • Is the overall course template visually appealing? This is admittedly a matter of judgment, but I believe it is still a relevant factor.
    • Is the template’s navigation user friendly?
    • Can learners tell where they are in the course and how much work remains before completion?
    • Is the template SCORM compliant? (A minimal sketch of the SCORM handshake follows this list.)
    • Is all template text clear and easily understood?
  • Level of Interactivity (this replaces ASTD’s Compatibility Standards; more on that later…)
    • Interactivity can be measured in many different ways, and in my opinion, it comes down to this question: is the learner required to take an action?
    • Relevant actions:
      • Reflecting on how the learned concept can be applied to their job,
      • Writing in a learning journal,
      • And/or performing an exercise that requires dragging and dropping objects into the correct categories to demonstrate knowledge.
    • The basis of relevant interactivity comes down to whether the learner is engaged in doing something or in connecting concepts and information to applied practice. The more the learner does and connects, the more successful the course is (remember Bloom?). In the genre of the self-paced tutorial, the learner should be doing or connecting at least once every five pages for the course to be considered a quality one; a 40-page tutorial, for example, should include at least eight such interactions. Adjust the rubric as you wish.
  • Production Quality Standards
    • Production quality is similar to Interface Standards, but it concerns the functionality of the interface more than its usability. Consider some of the following elements:
      • Character art and other illustrations are professional and aesthetically pleasing,
      • Text and graphics are appropriate and functional for varying audiences,
      • Audio and visual items are practical and remain usable across different computers and bandwidth limitations,
      • And visual consistency is maintained between different areas of the eCourse.
  • Instructional Design Standards
    • This cluster focuses on whether the instruction is built around achieving defined learning outcomes, and how well it does so.
    • Are learning objectives clear to the learner?
    • Is there a pre-assessment?
    • Is there a post-learning assessment?
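
Since I keep citing SCORM compliance as an interface criterion, here is what the check looks like in practice: a minimal sketch, in TypeScript, of the SCORM 1.2 runtime handshake a conformant template must perform. The interface methods and the cmi element name come from the SCORM 1.2 specification; the bookmark value is a hypothetical example.

```typescript
// Minimal SCORM 1.2 runtime handshake (a sketch, not production code).
// A SCORM 1.2-conformant course must locate the "API" object the LMS
// exposes on an ancestor (or opener) window and call LMSInitialize
// before reporting any tracking data.

interface Scorm12Api {
  LMSInitialize(arg: ""): string;              // returns "true" or "false"
  LMSFinish(arg: ""): string;
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: ""): string;
  LMSGetLastError(): string;
}

// Standard API-discovery walk: climb parent frames, then try the opener.
function findApi(win: Window): Scorm12Api | null {
  let w: Window = win;
  let hops = 0;
  while (!(w as any).API && w.parent && w.parent !== w && hops < 7) {
    w = w.parent;
    hops++;
  }
  if (!(w as any).API && win.opener) {
    return findApi(win.opener as Window);
  }
  return ((w as any).API as Scorm12Api) || null;
}

const api = findApi(window);
if (api && api.LMSInitialize("") === "true") {
  // The template can now track progress, e.g. bookmark the current page.
  api.LMSSetValue("cmi.core.lesson_location", "page-3"); // hypothetical bookmark
  api.LMSCommit("");
}
```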

Three of the four clusters developed by the ASTD Certification Institute are still relevant today; the exception is the Compatibility Standards. My reasoning for removing compatibility from this list and replacing it with Level of Interactivity is that a great deal has been done to quash compatibility issues. With new web standards and growing attention to consistency across platforms, many of the old browser incompatibilities and the like are quickly becoming a thing of the past. Of course, one could argue that Google Chrome or Safari might not render something properly, but for the most part these instances have either been mitigated or are declining at a rate high enough to relieve most of the problems when developing eCourses. Basically, I evaluate whether the eCourses work in Microsoft Internet Explorer, and if they do, we can satisfactorily move on.
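
As a quick illustration of why compatibility has become the easiest cluster to satisfy, a reviewer can sanity-check a course’s baseline requirements with a few lines of feature detection rather than per-browser workarounds. This is a generic sketch; the three features checked are merely illustrative assumptions about what a self-paced tutorial might need.

```typescript
// Quick compatibility smoke test (sketch): detect the features a typical
// self-paced tutorial relies on instead of sniffing browser names.
function checkCourseCompatibility(): string[] {
  const missing: string[] = [];

  // Audio narration without plugins
  if (!document.createElement("audio").canPlayType) {
    missing.push("HTML5 audio");
  }
  // Native drag-and-drop for sorting exercises
  if (!("draggable" in document.createElement("div"))) {
    missing.push("drag and drop");
  }
  // Persisting the learner's bookmark between sessions
  if (typeof window.localStorage === "undefined") {
    missing.push("localStorage");
  }
  return missing; // an empty array means the course should run as designed
}

const gaps = checkCourseCompatibility();
if (gaps.length > 0) {
  console.warn("Course may degrade in this browser; missing: " + gaps.join(", "));
}
```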

Defining how eCourses fit into the entire eLearning initiative
It is important to look at the eLearning initiative as a whole and assess how the self-paced tutorial fits in with it before giving any final evaluation. For instance, if a Learning Management System is used, how is it enhancing the learning experience? Are there discussion forums that urge learners to think about and connect the concepts they reviewed? Does the LMS offer remediation to help learners understand the concepts they had trouble with? Understanding how these and other aspects can enhance the learning experience matters most when the eCourses themselves integrate with these tools.
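
To make the remediation question concrete, here is a hedged sketch of how a course might report a post-assessment score through the SCORM 1.2 data model so the LMS records the attempt and a struggling learner can be routed back to review material. The 80% threshold and the launchReviewModule helper are hypothetical; only the cmi element names and status values come from the SCORM 1.2 specification. It reuses the Scorm12Api interface from the earlier handshake sketch.

```typescript
// Sketch: report a post-assessment score through the SCORM 1.2 data model
// and branch to review material when the learner misses the cutoff.

const PASSING_SCORE = 80; // hypothetical mastery threshold

// Hypothetical course helper: jump back to the pages covering missed concepts.
function launchReviewModule(): void {
  window.location.hash = "#review"; // placeholder navigation
}

function reportAssessment(api: Scorm12Api, rawScore: number): void {
  api.LMSSetValue("cmi.core.score.min", "0");
  api.LMSSetValue("cmi.core.score.max", "100");
  api.LMSSetValue("cmi.core.score.raw", String(rawScore));

  if (rawScore >= PASSING_SCORE) {
    api.LMSSetValue("cmi.core.lesson_status", "passed");
  } else {
    api.LMSSetValue("cmi.core.lesson_status", "failed");
    launchReviewModule();
  }
  api.LMSCommit(""); // persist the attempt so the LMS can act on it
}
```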

When evaluating eCourses and eLearning in general, it is essential to take a step back and view the entire structure as a whole, as well as examine it in close detail. Often the whole is greater than the sum of its parts, but for this to be true, the parts must be able to hold their own within the grand scheme of things.