by Stephen C. Ehrmann
Some college administrators and faculty members act as though
the Web has magical educational powers. "Use it and outcomes
will improve (even if no change is made in the processes or
structures of learning)!" But major improvements in educational
results are far more likely when the Web is used in ways that
enable significant change in who can learn, what they learn
(educational goals), and/or what they do when learning. Using
the Web to support a distance learning program ought to be
accompanied by a reexamination of educational goals, a fresh
look at students (including new kinds of students), and a
rethinking of how staff and students are supported. For many
colleges and universities, however, those amount to "sea changes."
Unfortunately, errors and ambushes are much more
likely when educators and their institutions grope their way
into such unfamiliar territory. Unable to use many of their
well-honed skills and their intuition, they find themselves far
more likely to make mistakes; it can amount to flying blind.
Some of these mistakes are repeated time and again as new
generations of innovators appear on the scene; others are new
(Ehrmann, in press).
Many faculty members, administrators, and
legislators respond to the risk of "blindness" and ambush by
avoiding the danger. These people may use or recommend the Web,
but they explicitly or implicitly restrict its use to familiar
approaches. Unchanged practices unfortunately offer little
additional benefit for students even though the use of
technology may increase the cost of education. For example, if
students formerly read three articles on paper and now read
three similar articles on the Web, one may not detect much
difference
in what they learn. The failure to reexamine the fundamentals is
a major reason why three decades of promises about educational
uses of computing have led to frustration, both inside and
outside of institutions.
We need more grassroots diagnostic studies of
Web-enabled efforts in order to make programmatic
improvements in the processes and outcomes of education.
- "Evaluation" has many correct but conflicting definitions.
By "evaluation" we mean any inquiry into whether and how an
educational process produces certain results.
- By "diagnostic" we mean that the goal is to provide
information that will immediately help instructors and
administrators improve their programs. (This kind of
evaluation is also sometimes called "formative" or "action
research."). Centralized, systemic research is also necessary
but is not sufficient.
- By "grassroots" we mean that the practitioners themselves
should do at least some of the studies in order to make sure
that the studies produce the information that those
individuals need and want most.
Flying Blind
The TLT Group has conducted more than 70 evaluation workshops
for faculty and
administrators. Even among those interested enough to attend
such workshops, few have prior experience in doing studies
themselves. Nor have most participants ever heard of even one
study (grassroots or national) that produced usable, useful
findings. This situation is understandable. For centuries,
educators and educational institutions could thrive while doing
things more or less as they always had. Twenty years or more
could pass from the first appearance of an innovation until even
half of all colleges and universities had implemented the new
technique: time enough to learn by osmosis. In those days, one
could be a fine faculty member without ever having seen or done
research on the shape of one's classrooms, one's instructional
materials, or the fine points of lecturing. The great teachers
spent decades honing their reactions so that the tiniest quiver
of action or inaction in a classroom or in a student's essay
would help them interpret what was happening, anticipate what
was about to happen, and make informed decisions about what to
do next. Experienced teachers were capable of simultaneously
planning months ahead and altering their plans on the fly as
circumstances changed. Administrators had comparable skills for
looking into their crystal balls.
But widespread reliance on technology driven by
Moore's Law destroyed that relative stability. Moore's Law
states that computer chips double in power every eighteen months
or so. If that steadily growing computer power is used to make
periodic, qualitative changes in educational practice and
structure, the result can be continual turbulence, lurking
problems, and hidden opportunities. The old instincts are not
worth as much anymore. Suddenly we are all groping in the dark,
worrying about the consequences of our next use of technology,
and guessing what might go wrong next.
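To see what that pace implies, consider the arithmetic of repeated doubling. The short sketch below simply compounds the eighteen-month doubling period cited above; the code and its numbers are illustrative only.

```python
# Illustrative only: compounding the eighteen-month doubling
# period for chip power cited above.
DOUBLING_PERIOD_YEARS = 1.5

def relative_power(years: float) -> float:
    """Computing power relative to today, `years` years from now."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 6, 9, 15):
    print(f"After {years:2d} years: ~{relative_power(years):,.0f}x today's power")
```

Over a single fifteen-year stretch of a teaching career, that is roughly a thousandfold change in the underlying machinery; instincts tuned to a stable classroom cannot keep up.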
Imagine, for example, that you are teaching a
course that uses the Web. You have redesigned the course in ways
that depend on students using the Web to collaborate on homework
projects, even though you have never asked students to do much
work together on homework before. Now, two weeks into the term,
it is hard to know for sure, but you fear that students are not
collaborating online as much or as well as you had hoped. The
course's schedule and success might be in jeopardy. Or maybe
everything is OK. Is there really a problem? If so, why?
- Do some students have misconceptions about how to use
threaded conferencing?
- Do some students believe that collaborative learning is a
waste of time?
- Do some students fear that if they work together they will
be labeled as cheaters?
- Are your assignments so easy for a student to do alone
that it is not worth his or her effort to collaborate?
- Did some students not take the training in how to use
e-mail and computer conferencing?
- Are some students buckling under the load of e-mail coming
their way?
These and literally dozens of other barriers could
hinder collaboration online. But which few are actually
hindering your students? Unless you can find out quickly, the
class may never achieve what you hoped it would.
Educators Need Help Doing Their Own Studies
Asking such questions is not "rocket science,"
but many educators need a little help knowing what questions to
ask and how to interpret the answers. There is no need for
educators to start from scratch in designing and carrying out
such an inquiry.
You could field a survey, for example: ask
students some pointed questions about how they are, and are not,
working together online. Focus your attention on the problems
that would otherwise stay invisible but that occur often enough
to be worth checking for.
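As an illustration, here is a minimal sketch of how the responses to such a survey might be tallied to surface the most common barriers. The survey items and response data are hypothetical stand-ins, not part of any actual Flashlight instrument.

```python
from collections import Counter

# Hypothetical responses: each student checks every barrier that
# applies. (These mirror the questions raised earlier; real survey
# items would first be tested for ambiguity and bias.)
responses = [
    {"unsure how to use threaded conferencing", "overwhelmed by e-mail volume"},
    {"thinks collaborative learning wastes time"},
    {"unsure how to use threaded conferencing", "fears being labeled a cheater"},
    {"skipped the e-mail/conferencing training"},
]

# Tally how many students report each barrier, most common first.
tally = Counter(barrier for answer in responses for barrier in answer)
for barrier, count in tally.most_common():
    print(f"{count}/{len(responses)} students: {barrier}")
```

Even a crude tally like this tells an instructor which one or two barriers to address first, rather than guessing.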
Such a survey, though seemingly simple, is hard to design
well. As any pollster will tell you, writing an unambiguous,
unbiased question is difficult, and few educators have enough
experience to recognize the common hidden problems and
opportunities worth asking about. Finally, because it is so hard
to do a good study, and because in the past few educators needed
to do them, such studies rarely get done.
Recently, that situation has begun to change. The
non-profit Flashlight Program, which I direct, helps educators
evaluate
their own educational uses of technology. Forty-nine
institutions around the world are members of our growing
evaluative network, and approximately 250 others have also made
some use of our evaluative tools. For example, Flashlight Online
helps users gather data from currently enrolled students about
how the Web and other technologies are actually being used.
Another example: our cost analysis handbook helps educators
study the use of time, money, and other resources in
technology-intensive programs. To help educators learn to use
these tools, we offer training both face-to-face and online.
Flashlight's emphasis has been on general-purpose
approaches to evaluation of programs that depend on technology;
however, there is a weakness in our approach. To ensure
that each educator and institution can tailor a unique study to
its own local circumstances and needs, we sacrifice the option
of sharing survey data across institutions, and we sacrifice
time: each educator and institution must design its studies from
scratch. If those studies turn out to be much alike, that design
time is wasted; and if people choose not to spend the time at
all, an opportunity to gather useful data is lost. There is no
such thing as a predesigned, general-purpose study. But it is
possible to design targeted "turnkey" study packages that can be
used by many educators who share the same "need to know" about a
particular facet of Web use in education.
Nationally (and by that term, we mean to include
not only the federal government but also national funders, state
legislatures, and institutions), we need to do several things.
Recommendations
We need research on the educational possibilities
and hazards of the most promising educational changes that can
be supported with the Web. If we study enough similar efforts to
make programmatic changes using the Web, we can uncover some of
the most common hidden problems and opportunities. Those
findings could guide the creation of easy-to-use diagnostic
tools to help educators quickly discover whether any of those
problems or opportunities are present locally.
This research can also gather insights into how
to respond to these issues. For example, what if you want your
students to collaborate online but it turns out that 40% of your
students believe that collaborative learning is an inferior,
inefficient way to learn? What options does an instructor have
in such a situation? Appropriate research would gather a library
of responses to some of these problems and opportunities.
We must develop the specialized investigative
tools that educators need to study and improve their own
instructional programs. Among the elements likely to be
important for any such study package are:
- standard surveys with normed data from other users of that
same survey,
- suggested interview questions,
- methods for analyzing other relevant data such as student
comments in a computer conference,
- background articles to help local investigators interpret
their findings and decide what to do next,
- online conversational opportunities with other educators
who are using the same package, and
- reports and case studies from other educators who have
used these tools to study their own programs.
Some of these tools will be comparatively general-purpose,
but most should focus on specific educational improvements that
depend heavily on Web use.
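To make the first of those elements concrete, consider how normed survey data might let a local instructor see how far a class departs from its peers. The sketch below is a hypothetical illustration; the rates, names, and benchmark logic are invented for the example, not drawn from any actual Flashlight instrument.

```python
from statistics import mean

# Hypothetical norms: fraction of students at peer institutions
# (answering the identical survey item) who report collaborating
# online at least weekly.
peer_rates = [0.62, 0.55, 0.71, 0.58, 0.66]

# Your own class's result on the same item.
local_rate = 0.41

benchmark = mean(peer_rates)
gap = local_rate - benchmark
print(f"Benchmark across peer users: {benchmark:.0%}")
print(f"Local result: {local_rate:.0%} ({gap:+.0%} vs. benchmark)")
# A large negative gap flags a barrier worth diagnosing locally.
```

The point of the norms is interpretive: a 41% collaboration rate means little on its own, but a result 21 points below comparable classes is a signal worth investigating.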
Practitioners will require some training to use
these study packages, even though the study tools should be
designed to be relatively simple and easy to use. We suggest a
mix of train-the-trainer, face-to-face, and online methods. This
training ought to be organized on a national or even
international basis, but some must be provided locally, since
part of it needs to happen face-to-face.
Training will be the most expensive element of
the program because the number of educators who will need at
least a little training is enormous. The cost of the
alternative, however, is even higher: allowing instructors to
continue to fly blind, with the result that they fail to make
meaningful use of this very expensive technology.
Paying for this program will not be easy, despite
its cost-effectiveness and the need. Many institutions are
starting from almost zero—no skilled trainers to help their
faculty members and staff, no budget, no tradition of work in
the area. One strategy calls for banding together with other
institutions and sharing the work and investment. The Flashlight
Program began in this manner. Institutions that are members of
the same consortium or system could take similar measures,
agreeing on a shared plan to develop or acquire training
materials or evaluation tools. They could create a shared R&D
agenda. One member might create study designs for engineering educators who
want to safely expand the role of design in the curriculum and
who need help in monitoring the risks of faculty and student
burnout. Another institutional member might be responsible for
devising evaluative tools for monitoring and improving
intercultural interaction online. As a reviewer of this article
pointed out, it would also be useful to get the involvement of
experts from relevant professional associations such as the
American Evaluation Association (AEA) and the Association for
Educational Communications and Technology (AECT).
A Long-Lived and Productive Investment
To make widespread grassroots diagnostic
evaluation possible, we need to invest in evaluative tools and
focused training. Fortunately, such an investment is likely to
be far more long-lived and efficient than an investment in a
piece of computer hardware. Generations of technology may make
one another obsolete with almost absurd rapidity, but
educational activities change far more sedately. A study package
to help educators detect and deal with barriers to online
collaboration could have been developed a quarter century ago
(when PLATO, a system for creating and administering
computer-aided instruction, began using e-mail and
conferencing), because few barriers to online collaboration are
specific to the details of one generation of hardware, software,
or telecommunications. Thus, a 1970s study package on diagnosing
the problems that block online collaboration could be used with
only slight modification today.
Unfortunately, no such package was developed for
use with PLATO. We have paid the price in failed courses and
lost opportunities ever since. However, if we do invest in such
packages today, they should be useful with only minor
modifications in many disciplines, on many levels of education,
in many cultural contexts, and for many years. A little money
can go a long way when invested wisely. Over the coming years we
could gradually build quite an extensive system of evaluative
tools and training for educators.
Summary
The Web is of little instructional use unless it
makes possible ambitious (and thus risky) changes in the
organization, content, and support of instructional programs.
Most educators are reluctant to make such changes in part
because they sense hidden dangers. Local studies could help make
such initiatives safer by revealing those dangers (and some
hidden opportunities as well) in time for local educators to fix
the problems (or seize the opportunities).
We urge public sector funders to invest in the
development of study packages and training that could help
hundreds of thousands of educators avoid flying blind. Because
these study packages and this training will focus on
programmatic issues rather than just on the particulars of
today's technology, they should have a useful life measured in
decades.
Editor's Note: This article is adapted from the author's
e-testimony to the
Web-Based Education Commission, August 15, 2000.
Reference
Ehrmann, S. C. (in press). Technology and educational
revolution: Ending the cycle of failure. Liberal Education.
Retrieved 26 September 2000 from the World Wide Web:
http://www.tltgroup.org/resources/V_Cycle_of_Failure.html