E-learning
has not kept pace with the development of increasingly rich IP-based delivery
platforms because the e-learning experience is far too often puerile,
boring, and of unknown or doubtful effectiveness.
Developers
don’t seem to be aware of how people learn, for they continue
to use mostly flawed models.
Corporations
are more interested in throughput and low unit cost, so solid measures
of effectiveness are infrequently developed or applied.
The
available platform drives the instructional strategy, which may
not be appropriate to the learning style of trainees or to the learning
objectives.
The
cost of development is high, so bad (cheap) programs drive out the
good ones in the absence of any commitment to measure effectiveness.
Effective
e-learning experiences for complex competencies are rarely scalable.
Why
does the situation persist, when so many knowledgeable people have sat
through a course they know to be bad? Habit, and perhaps low expectations
by trainees—we don’t expect to find the courses stimulating
or engaging, so we don’t complain too much when they are pretty
much like the boring lectures we used to sit through.
A flawed model of cost-effectiveness
At
a moment when higher education has become increasingly convinced that
the standard classroom lecture is not a particularly effective way of
teaching, how ironic that many of those responsible for e-learning say
the ultimate goal is to mimic the classroom experience as much as possible.
Perhaps that’s one indication that e-learning is no longer an unproven
cutting-edge experiment, but has moved into the mainstream. A few years
ago, only a minuscule percentage of corporate training was technology-based,
but in the year 2000, that figure had risen to 24 percent, which compares
impressively with the 57 percent delivered through traditional classroom
methods. There are other signs that corporate training looks to higher
education for systems of learning management and measurement. The “Carnegie units” model of
counting noses (one person in one course for one term) is a standard component
of ROI calculations and, while no school system or college would ever
mention ROI publicly, they do employ all kinds of ratios to determine
“efficiency.”
It
is difficult not to conclude, however, that there is relatively less emphasis
on outcomes measurement in corporate training, certainly in comparison
with higher education, where it is intense; my experience over more than
30 years in the corporate world suggests that most businesses give more
weight to anecdotal accounts than to efforts to measure outcomes rigorously.
Where there is an effort, it seems to be directed toward measuring the
cost side of the dyad, especially where training staff can claim substantial
cost savings. The trade press is replete with articles quoting training
managers boasting of how many hundreds of thousands of dollars (or more)
they expect to save with e-learning, generally through less travel, fewer
hours lost to training, and lower staff costs. Years ago, ROI never came
up in discussions of corporate training budgets, primarily because the
knowledge/skill level of the workforce was regarded as an intangible asset
that did not show up on the balance sheet. That may still be the case,
but the telecommunications and systems infrastructure necessary to deliver
e-learning does appear on the balance sheet, so ROI has become
a tool of the trade in training departments. “In tough economic
times, you have to demonstrate the ROI of an e-learning project back to
the business sponsors,” said an HR director at a major firm quoted
in Online Learning.
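To make the arithmetic behind such claims concrete, here is a minimal sketch of the cost-side calculation those training managers are quoting; every figure is invented for illustration, and note that nothing in it measures whether anyone learned anything.

```python
# A minimal sketch of the cost-side ROI calculation described above;
# every figure is invented for illustration. Nothing in this arithmetic
# measures whether anyone actually learned anything.
travel_savings       = 250_000   # less travel to central training sites
productivity_savings = 180_000   # fewer hours lost to training
staff_savings        = 120_000   # fewer instructors and coordinators

infrastructure_cost  = 300_000   # servers, bandwidth, licenses on the balance sheet
development_cost     = 150_000   # course development or purchase

total_savings = travel_savings + productivity_savings + staff_savings
total_cost = infrastructure_cost + development_cost

roi = (total_savings - total_cost) / total_cost
print(f"Savings: ${total_savings:,}  Cost: ${total_cost:,}  ROI: {roi:.0%}")
```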
Why development of standards is a distraction
In
addition to the emphasis on cost savings, there is another dimension that
has received considerable attention in e-learning circles—the development
of standards such as SCORM (Sharable Content Object Reference Model)
and IMS (Instructional Management Systems). These are not standards that
treat learning outcomes, but instead deal with tagging, coding, and indexing
Learning Objects to facilitate reuse of digitized training materials.
Some have likened that effort to “rearranging the deckchairs on
the Titanic,” but that is perhaps harsher than necessary. The emphasis
on adoption of standards is clear: “Implementation of SCORM specifications
can help learning technology to become reusable, interoperable, stable,
and accessible.” Who would be opposed to standards? Except that nothing
in any of those standards focuses attention on the effectiveness of the
Learning Objects. Indeed, the term Learning Objects itself ought
to cause some unease. An LO (Learning Object) is defined as a “discrete
small chunk that can be used alone or dynamically assembled to provide
just enough and just-in-time learning. Learning Objects can also enable
learners to select the training that is most relevant for them, perhaps
even in a media format that matches their preferred learning style (auditory,
visual, etc.).” A Learning Object is, thus, a thing that has physical
dimensions (type, number of megabytes) that can be measured; it can be
tagged and indexed for future use. No one knows, however, whether that
LO has ever resulted in anyone learning anything or subsequently demonstrating
any competency.
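To make the point concrete, here is a hypothetical sketch (not drawn from any actual SCORM schema) of what a tagged Learning Object record captures: physical and indexing attributes that are easy to measure, with nothing about whether anyone learned from it.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    """A hypothetical Learning Object record: everything here describes the
    asset itself; nothing describes whether anyone learned from it."""
    identifier: str
    title: str
    media_type: str          # e.g. "text/html", "video/mp4"
    size_megabytes: float    # a physical dimension that is easy to measure
    keywords: list[str] = field(default_factory=list)
    duration_minutes: int = 0

    def matches(self, query: str) -> bool:
        # Tagging and indexing make retrieval and reuse straightforward.
        return query.lower() in (kw.lower() for kw in self.keywords)

# The catalog can be searched and reassembled, but note what is absent:
# no post-test results, no completion data, no measure of competency gained.
catalog = [
    LearningObject("lo-001", "Configuring a firewall", "text/html", 0.4,
                   ["security", "firewall"]),
    LearningObject("lo-002", "Firewall demo (video)", "video/mp4", 42.0,
                   ["security", "firewall"], duration_minutes=6),
]
relevant = [lo for lo in catalog if lo.matches("firewall")]
```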
We
know that learning doesn’t happen in discrete chunks. An acquaintance
at the University of Colorado once said, “We have to cross the line
between ignorance and insight many times before we truly understand.”
We get it, then lose it, then kind of get it again, then find out we don’t
quite have it right, and ultimately, after struggling to master the concept,
we have it. Learning often appears a little ragged, and does not generally
come in nicely packaged objects, no matter how systematically tagged.
Efforts to measure outcomes are difficult enough, but to substitute for
those efforts a set of standards which tag and index inputs seems to me
to be mistaken.
The lack of emphasis on outcomes
When
the e-learning industry attempts to quantify content elements, the concern
is misplaced; it diverts attention from the more important issue of measuring
effectiveness: under what conditions does e-learning work?
The drive for standards, originating in the joint efforts of the aviation
industry and the Department of Defense, appears to be part of an attempt
to make e-learning programs more acceptable to IT departments, who are
reluctant to consider anything that involves audio, video, and other features
with bandwidth or security issues. The next step, presumably, will be
to measure the mean number of megabytes in a Learning Object, so IT can
estimate how much additional capacity they will need to add in order to
teach the sales force how to sell the company’s new gizmo.
It
appears that this push for standards really has little to do with measurable
learning outcomes. The move toward standards arose, we are told, because
of “complaints about previous generations of e-learning products
[which] range from integration issues and interoperability concerns to
bandwidth and scalability problems.” Those complaints did not, it
appears, come from trainees, who often found the training dull, rigid,
and not related to their work, but rarely complained about interoperability
and integration issues.
In
the absence of a sustained emphasis on measurable outcomes, there is little
incentive to value anything but “throughput” and low unit
cost. But dropout rates (defined as failure to complete a course) for
e-learning are much higher (70 percent) than for standard instruction
in four-year colleges (about 15 percent). Although three-fourths of corporations
use course completion as a measure of effectiveness, some vendors and
training executives seek to downplay completion rates as a significant
measure of success. Community colleges, on the other hand, pay close attention
to course completion rates and consider them a most significant indicator
(though not the only one) of their success.
Some
concerned voices within the industry have been raised. Elliott Masie,
responding to an InformationWeek question about the scariest question
[regarding e-learning], said: “Does it work? If I invite 50 people into
a session, is there learning? If it’s well-structured, there’s
the right content, we’ve taken care of who we invite, and there’s
a payoff at the end, they’ll probably learn as well as [they would]
in the classroom—which isn’t very well.” Still, Masie
is among those leading the push for adoption and dissemination of standards,
so he apparently sees no inherent contradiction between the centrality
of learning effectiveness to the long-range success of e-learning and
the drive for interoperability. Indeed, he specifically notes that “all
the work on standards and specifications will play a similarly critical
role in causing the ‘take-off’ of the learning industry, [but]
they do not, in and of themselves, look after ensuring the quality or
effectiveness of learning.”
The
fact is, e-learning has become well established and will only grow, whether
there are standards or not. The cost savings are too great to ignore,
regardless of the lack of measurable outcomes, and e-learning has made
available to people in remote locations a variety of courses they would
not otherwise have had access to; if their motivation was high and their
perseverance strong, I have little doubt that many of them learned. So
we are not talking about the survival of e-learning. But we may be talking
about a degrading of quality if we are content to measure only the cost
savings, the compliance with standards, and the number of Learning Objects
dispensed. Clearly, we should be under no illusions about effectiveness
if the failure-to-complete rate remains at 70 percent.
What can be done?
Let’s
begin with the learning experience. If that is not engaging, only the
most highly motivated (or those under duress) are likely to complete the
course. How would the typical trainee describe the typical e-learning
experience? Boring is the first word that comes to mind, whether the instructional
strategy is reading text, watching a streaming video of the average instructor,
or following an audio-over-PowerPoint presentation. The developer’s
attitude seems to be similar to my high school biology teacher’s,
who often reminded us, “If you’re smiling, you’re not
learning.”
Some
may call it a masochistic tendency, but I have an irresistible urge to
examine e-learning courses whenever I get a chance. Not to complete the
course, but to sample it and see how the designer engages my interest,
allows me to move through the material, tests my understanding, reinforces
appropriate responses and my ability to apply the learning, and corrects
my mistakes. I like to see if that designer has made any attempt to adapt
to people with different learning styles or perhaps with a different purpose.
So I examine the free demonstration courses offered online whenever possible,
expecting that purveyors would put their best foot forward and show content
that was interesting and well designed. But it is not so. I hope those
demos are their throwaway material—dogs they couldn’t get
anyone to register for—because if they are representative of the
rest of their curriculum, a lot of customers are being taken.
At
the heart of the problem lie several factors beyond the unwillingness
to insist on measurable outcomes: 1) available technology is driving the
instructional strategy, 2) developers don’t know anything about
how people learn, and 3) a desire to produce courses at the lowest unit
cost leads to cutting corners and/or to repurposing of material that wasn’t
very good to begin with. Absent the chance to network with peers, students
find e-learning technologies to be very unforgiving. Let’s examine
the first of those factors.
Technology is not an e-Learning strategy
The
need to calculate the ROI for a training initiative should lead to an
insistence on definition of an e-learning strategy, which is a very good
thing. But the strategic statements I’ve seen are driven by technology,
not by corporate objectives. The infrastructure (largely network bandwidth
and telecommunications capability) is the strategy in some of those statements.
To me, that’s backward. Begin with the organization’s objectives,
extract the competencies required to attain those objectives, examine
the constraints (time, distance, trainee’s experience, corporate
culture, etc.), and then you can begin to outline the kind of learning
experiences that will be necessary to develop those competencies. Only
at that point (or when describing the constraints) do you consider the
technology and whether its capabilities and limitations are congruent
with the learning experiences necessary to achieve the outcome.
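Purely as an illustration of that ordering, a rough sketch follows; the function names and data shapes are invented, and each “step” is really an analysis exercise rather than a computation. The point is simply that the platform decision comes last, against the constraints.

```python
# An illustrative sketch of the objective-first sequence described above;
# the function bodies are stubs. The ordering is the point: technology is
# considered only at the end, against the constraints.
from typing import List, Tuple

def extract_required_competencies(objectives: List[str]) -> List[str]:
    return [f"competency required to achieve: {o}" for o in objectives]

def outline_learning_experiences(competencies: List[str],
                                 constraints: List[str]) -> List[str]:
    return [f"experience that develops '{c}' within constraints {constraints}"
            for c in competencies]

def select_platform(experiences: List[str], constraints: List[str]) -> str:
    # Only now does technology enter, and only insofar as its capabilities
    # are congruent with the experiences outlined above.
    return "platform whose capabilities match the outlined experiences"

def derive_strategy(objectives: List[str],
                    constraints: List[str]) -> Tuple[List[str], str]:
    competencies = extract_required_competencies(objectives)
    experiences = outline_learning_experiences(competencies, constraints)
    return experiences, select_platform(experiences, constraints)

experiences, platform = derive_strategy(
    ["reduce installation time for the new product line"],
    ["field-based trainees", "limited bandwidth at remote sites"])
```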
Because
there is not an established track record for the effectiveness dimension
of e-learning, we might examine the extent to which the available programs
and the enabling technologies rely on established models of how adults
learn. There are two dominant learning models that, consciously or not,
are employed in IP-based learning systems: Presentation and Programmed.
Presentation
models range from streaming audio and video to PowerPoint programs
that have been repurposed and sent over platforms such as PlaceWare. This
is the traditional learning model, used for centuries. Sometimes called
the “information transmission” model or, more skeptically,
“the-sage-on-a-stage,” it assumes that most people can learn
the content through aural and visual means. At its worst, it is simply
a talking head, or a voice over a slide show. Frank Zvi, President of
the webcasting vendor Interwise, makes it seem very simple: “If
you’re an enterprise, human resources can use [streaming audio,
video, and data] to have the CEO talk to everyone in [broadcast] mode,
and at the same time also talk to specific groups in [seminar] mode.”
At
best, the speaker may be excellent and the graphics, video clips, and
other visual aids add materially to the listener’s understanding.
Presentation models were one-way until recently, when live, interactive
videoconferencing became available, if somewhat unreliable. Still,
there are doubts.
We
teachers—perhaps all human beings—are in the grip of an astonishing
delusion. We think that we can take a picture, a structure, a working
model of something constructed in our minds out of long experience and
familiarity, and by turning that model into a string of words, transplant
it whole into the mind of someone else. Perhaps once in a thousand times,
when the explanation is extraordinarily good, and the listener extraordinarily
experienced and skillful at turning wordstrings into nonverbal reality
. . . the process may work, and some real meaning may be communicated.
Most of the time, explaining does not increase understanding, and may
even lessen it.
The
other dominant model, programmed instruction/tutorials,
is particularly popular for asynchronous learning. Now frequently referred
to as “traditional (!) CBT,” this model underlies most of the courses
available on the Internet. The developer essentially chops
the content into manageable chunks of text (perhaps augmented by audio/video
clips and graphics), and lets the trainee work through the screens at
her own pace. There are frequent questions interspersed with the instruction,
and immediate feedback. Some programs offer remediation for wrong answers,
but most simply ask the trainee to try the question again. Tutorials can
be individualized (by means of a pretest or self-inventory) but few offer
contingent tracks based on the trainee’s profile. Many of the capabilities
are entirely consistent with basic learning theory, but the content is
mostly text and is frequently criticized as boring and puerile. The IP-based
platforms did not (until very recently) build in opportunities to interact
with other learners or to ask questions of the instructor. One feature
of this model is that the instruction was often built around quantifiable
learning objectives, which were usually measured in some kind of post-test.
That doesn’t meet Kirkpatrick’s Level Four criterion, but
it’s more than most presentation models build in.
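For readers who want the model spelled out, here is a minimal, hypothetical sketch of that programmed-instruction loop; the chunks, questions, and remediation text are placeholders, and the “remediation” policy is the simple retry-with-a-hint most such courses offer.

```python
# A minimal sketch of the programmed-instruction/tutorial loop described
# above; content, questions, and remediation text are placeholders.
chunks = [
    {"text": "A firewall filters traffic according to a rule set.",
     "question": "What does a firewall filter?",
     "answer": "traffic",
     "remediation": "Re-read the chunk: the firewall acts on network traffic."},
    {"text": "An IDS watches traffic and raises alerts on suspicious patterns.",
     "question": "What does an IDS raise when it sees a suspicious pattern?",
     "answer": "alert",
     "remediation": "Hint: it does not block anything itself; it notifies."},
]

def run_tutorial(chunks, max_attempts=2):
    score = 0
    for chunk in chunks:                      # trainee works at her own pace
        print(chunk["text"])                  # present one manageable chunk
        for attempt in range(max_attempts):
            response = input(chunk["question"] + " ")
            if chunk["answer"] in response.lower():
                print("Correct.")             # immediate reinforcement
                score += 1
                break
            # Most programs just say "try again"; a few remediate:
            print(chunk["remediation"])
    print(f"Post-test proxy: {score}/{len(chunks)} chunks answered correctly")

if __name__ == "__main__":
    run_tutorial(chunks)   # interactive; run from a terminal
```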
There
are other instructional models that have occasionally been used with IP
technologies, including what might be called the apprenticeship/coaching
model. Combined with case studies, projects, or simulations, it offers
exceptional potential for learning complex competencies. Unfortunately,
they are rarely employed, presumably because of the development cost and
the fact that case studies and projects are not particularly scalable.
An excellent example of the use of the project model is Unext.com’s
Cardean University course on Promotion and Principles of Marketing. Each
unit is structured around a project, which the trainee has to complete
(e.g., preparation of a brand marketing plan), and offers readings,
data, competitive information, etc.; it encourages interaction by means
of e-mail with other students and includes video/audio clips and rapid
feedback from the course’s instructor. [A demonstration course is
available at www.cardean.edu.]
From
my own experience, the case study/simulation model can offer an exceptionally
rich learning experience. Working with a network security firm whose objective
was to teach network field engineers how to configure a security system,
my company designed a series of increasingly complex networks, represented
by detailed network architecture diagrams, which trainees would have to
protect from a variety of viruses, Trojan horses, denial of service (DoS)
attacks, and other hacks by means of firewalls, VPNs, Intrusion Detection
Systems, honeypots, and DMZs, appropriately placed and configured. Trainees
had access to explanations of what the various security devices are, how
they work, and how to configure them for several levels of protection
appropriate to different kinds of clients. They were asked to design a
security system for a client by inserting symbols into the network
architecture diagram and identifying key configuration items. If they got
it right, the DoS was foiled, viruses were kept out, and no data was
compromised. If they got it wrong, they could “watch” hackers get in and
destroy data or use the site to launch DoS attacks on other sites. They
were then given a text-based briefing on the vulnerability and nature of
the hack, the IDS records of the sequence and operations involved, and
another chance to reconfigure the security system.
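A greatly simplified sketch of that feedback loop might look like the following; the real course evaluated placements against detailed network diagrams, which is reduced here to matching required protections per attack scenario, with the briefing text standing in for the “watch the hacker get in” sequence.

```python
# A simplified, hypothetical sketch of the simulation's feedback loop.
# Each scenario lists the protections that must be in place to foil it.
SCENARIOS = [
    {"attack": "denial of service",          "required": {"firewall", "IDS"}},
    {"attack": "virus-laden email",          "required": {"firewall"}},
    {"attack": "probe of public web server", "required": {"DMZ", "IDS"}},
]

def evaluate_design(placed_devices: set[str]) -> list[str]:
    """Return one briefing line per scenario: either 'foiled' or a note on
    how the hack succeeded and what was missing (the cue to reconfigure)."""
    briefing = []
    for s in SCENARIOS:
        missing = s["required"] - placed_devices
        if not missing:
            briefing.append(f"{s['attack']}: foiled, no data compromised")
        else:
            briefing.append(
                f"{s['attack']}: succeeded -- missing {', '.join(sorted(missing))};"
                " review the device explanations and reconfigure")
    return briefing

# First attempt omits the IDS; the trainee reads the briefing and retries.
for line in evaluate_design({"firewall", "DMZ"}):
    print(line)
for line in evaluate_design({"firewall", "DMZ", "IDS"}):
    print(line)
```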
There
are also hybrid models in use in higher education and
corporate training that combine e-learning with classroom or lab sessions;
my experience suggests these can be particularly productive, assuming
the learning model for each part has been carefully thought through. There
is no advantage to a hybrid delivery system, however, if both e-learning
and classroom use a lecture/presentation strategy. Community colleges
have employed IP technologies to make the lecture and lab sessions more
intense and better focused by assuring that students are well prepared
for them, then using e-mail and chat to respond to questions and reinforce
the experience. Toshiba’s Telecommunications division was using
this model six years ago to cut the lab time on digital key telephone
systems by half while improving every measure of competency, including
reductions in helpdesk calls and time to install.
Why we’re missing the real potential of IP technologies
Obviously,
there can be no such thing as a generic e-learning model because the range
of potential instructional strategies and learning models is significantly,
but not entirely, dependent on the capabilities of the delivery platform.
Vendors with a repository of content that has been repurposed for the
Internet favor the Programmed instruction model; vendors with a rich media/streaming
video platform favor the presentation (sage-on-a-stage) model. Those with
collaborative tools haven’t done much yet, so no convention has
emerged (in what little I’ve seen). Most common are the electronic
page-turners that are often PowerPoint programs or texts reformatted into
HTML; they don’t give any evidence that the developer has thought
much about how people learn. How else could one create 400 courses overnight,
as some firms claim? The delivery medium drives the learning model, not
the other way around.
Given
the rich videoconferencing-plus-collaboration platforms that are emerging
(Polycom, Tandberg), we still have a chance to show how the Internet can
enhance the learning experience and not merely extend traditional models
to wider audiences. There is the potential now to develop models that
are highly suitable for a wide variety of learners and objectives, so
let us examine what is known about how adults learn.
Matching technology to adult learning styles
Let’s
consider the broad conclusions about adult learning that have emerged
in recent years. Earlier generalizations that informed much of the best
practices of CBT remain largely valid (self-paced, individualized tracks,
frequent practice, immediate reinforcement, emphasis on outcomes), but
Howard Gardner’s work, Multiple Intelligences, stimulated
a lot of rethinking and research into learning styles. Among the most
suggestive conclusions to emerge from that work are these:
People
have different learning styles. Only 30 percent of adults say they
learn best by listening; another 30 percent report they’d
prefer to learn by reading and reflection.
The
subjective difficulty of the material (i.e., for that trainee) affects
the learning style, as does gender (sometimes) and perhaps (ethnic)
culture in certain areas.
On
complex topics/judgment issues, people need to get comfortable,
to mess around with the topic before they can understand it; understanding
does not necessarily flow in a linear manner from breaking the task/object
into simpler component parts.
Learning
is often a gradual process that happens through a series of shaping
activities, which are not always instructor initiated. This is sometimes
called tacit learning. The coaching process recognizes this, and
so do many lab courses where we expect student skills will develop
over the semester without explicit focus on those skills.
Learning
communities work; there is a social as well as cognitive dimension
to learning. Students transform the information they get from instructors
and texts into meaningful knowledge through conversations, arguments,
lunches, discussion groups, and other real-world activities. “Bull
sessions actually do have a lot of value.”
Capabilities of IP-based platforms
Now
let’s consider the capabilities of the current and developing IP-based
platforms:
Sharing
& collaboration, messaging & chat systems, such as Groove
and eRoom, hold exceptional promise for individual/group tutoring, as
well as for building learning communities. They have low bandwidth and
processing requirements, but high potential for many learning tasks, both
synchronous and asynchronous. This capability enables tutorial and presentation
models, of course, but may be particularly suited to those built around
case studies and projects.
Presentation
systems: Streaming audio & video (live and canned), including
WebEx and HorizonLive, bring multimedia to multiple points at low cost.
There are relatively modest bandwidth/processing requirements compared
to conferencing systems, but the communications are essentially one-way,
so, in the absence of other capabilities, this technology is locked into
a presentation model. It is widely used for both synchronous and asynchronous
presentations.
Conferencing
systems: Live, real-time audio & video conferencing, like
Polycom and Tandberg, offers an enriched classroom experience, plus the
power of collaborative tools. High bandwidth/processing requirements and
other issues related to security mean this technology is not for the
casual user. The systems are not yet robust enough to move into the mainstream,
but close. There is a significant cost savings over ISDN-based systems,
as well as considerable improvement compared to the uneven quality of
that older technology. Conferencing systems offer potential for using
a variety of learning models, but they are largely intended for synchronous
learning. An e-learning strategy with access to this capability might
choose to offer a significant amount of instruction by means of other
IP-based technologies, then periodically use multipoint video-conferencing
to, say, review a case or project, asking the team to defend it in the
face of questions from other trainees or the instructor. But I fear that
once a robust IP-based conferencing system is in place, the tendency will
be to emphasize the sage-on-a-stage learning model because it will be
cheaper and faster to develop.
With
those capabilities, developers can create more effective learning
experiences by building communities of online learners who can
share experiences, questions, and tentative solutions and generally noodle
with a task until they’ve solved it. They can question the instructor,
instead of just listen to him. Technology can offer alternative and complementary
ways of approaching a topic: read, listen, observe, discuss, reflect,
construct. Simulations may be inexpensively done, supplemented by Instant
Messaging and e-mail among the trainees.
Do
we need to use all the capabilities of e-learning technology for every
training task in the curriculum? No. Some cognitive skills can be learned
with a minimal Internet platform, although pace, practice, feedback, and
remediation are probably necessary if you are to reach an 80-80 standard
(80 percent of trainees score 80 percent or better on the post-test).
The effectiveness of the course is less dependent upon the enabling technology
than on the skill with which the developer uses the available technology
to construct learning experiences appropriate to the trainee and to the
topic. But many firms are likely to be reluctant to embrace one platform
for one set of tasks and a different one for other instruction, so the
availability of a delivery platform is likely to continue to drive the
learning model unless management is unusually sophisticated.
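For clarity, here is a minimal sketch of checking the 80-80 standard mentioned above against a set of post-test scores; the scores are invented for illustration.

```python
# A minimal sketch of the 80-80 standard: at least 80 percent of trainees
# score 80 percent or better on the post-test. Scores below are invented.
post_test_scores = [92, 85, 78, 88, 95, 70, 83, 90, 81, 66]

def meets_80_80(scores, passing_score=80, required_fraction=0.80):
    passing = sum(1 for s in scores if s >= passing_score)
    fraction = passing / len(scores)
    return fraction >= required_fraction, fraction

ok, fraction = meets_80_80(post_test_scores)
print(f"{fraction:.0%} of trainees scored 80 or better -> "
      f"{'meets' if ok else 'does not meet'} the 80-80 standard")
```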
Conclusion
Increasingly
rich delivery platforms are available, at a fraction of the cost of just
a few years ago, but a trainee’s e-learning experiences are mired
in a technology that’s not much advanced from the teaching machines
of the early 1960s. Developers don’t seem to be aware of how people
learn, or if they are, they nevertheless continue to use mostly flawed
models of adult learning. For those vendors, that business model may be
cost-effective in the short term. Corporations are giddy about the savings
the P&L statement is showing, but the hangover will come when they
realize that costs have been saved at the expense of competencies.
The
technology platform is driving the instructional strategy, warping our
focus, which should be on creating an engaging learning experience that
reliably contributes to the organization’s objectives. We are going
to have to accept the fact that the cost of development of good e-learning
courses is high (should that really come as a surprise to anyone?), and
that the effectiveness of those courses has to be measured as carefully
as one measures cost savings. Only then can e-learning realize its potential.
What is the outlook?
Mixed.
For many learning tasks that are not too complex (and especially if motivation
is high), e-learning will do the job at least as well as before, more cheaply,
and with more people getting more training in a convenient manner. For
that, we should be grateful.
For
more complex skills, such as designing and configuring a network security
system, we’ll have the illusion of learning because we have our
headcounts, class hours, and certificates awarded, but competencies on
the job will be marginal until experience gradually brings the
more highly motivated people up to a level that could have been achieved
with the application of better learning models. Dropout rates for e-learning
will continue to be considerably higher than those for traditional instruction.
Educational technology has long been seen as promising, but has rarely
lived up to the promises. Not because it wasn’t effective, but because
it was cumbersome, boring, and did not adapt to the way people wanted
to learn. The e-learning industry is in danger of repeating that cycle.
Frank
L. Greenagel is Managing Director of Guided Learning Strategies; he can
be reached at flg@guidedlearning.com.