This text is adapted from an article of the same title by Stephen C. Ehrmann that originally appeared in Change: The Magazine of Higher Learning, XXVII:2 (March/April 1995), pp. 20-27. Copyright of the original article is owned by the Helen Dwight Reid Educational Foundation. Copyright of this version is owned by Stephen C. Ehrmann.

Table of Contents
- Bad Questions about the Higher Education Machine
- If You're Headed in the Wrong Direction, Technology Won't Help You Get to the Right Place
- The Medium Isn't the Message
- Computer-Based Tutorials Are Valuable But...
- Worldware: Software That Wasn't Designed for Instruction Can Be Valuable for Teaching and Learning
- Strategies Matter Most
- Tools for Evaluating Strategy: The Flashlight Program
- Reading List
"I've
got two pieces of bad
news about the
experimental English
composition course where
students used computer
conferencing. The first
bad news is that, over
the course of the
semester, the
experimental group
showed no progress in
their ability to compose
an essay. The second
piece of bad news is
that the control group,
taught by traditional
methods, showed no
progress either."
-
Paraphrased from a talk
by Roxanne Hiltz
reporting on an early
use of computer
conferencing
I've been involved with innovation in higher education--its funding, its evaluation, and research about it--for twenty years, especially innovations having to do with computing, video and telecommunications. During that time I've often been asked: "What do computers teach best?" "Does video encourage passive learning?" And "Is it cheaper to teach with telecommunications?" I don't have answers to those questions. I don't think they can be answered in any reliable, valid way. It takes just as much effort to answer a useless question as a useful one. The quest for useful information about technology begins with an exacting search for the right questions. This essay discusses some useless questions, a few useful ones (and the findings that have resulted), and one type of question that ought to be asked next about our uses of computing, video and telecommunications for learning.
1. BAD QUESTIONS ABOUT THE HIGHER EDUCATION MACHINE
The first group of useless questions seeks universal answers about the comparative teaching effectiveness and costs of technology. These evaluative questions are phrased like this: "Do computers do a better job of teaching English composition than traditional methods?"
Think about it. That question assumes that education operates something like a machine, and that each college is a slightly different version of the same ideal machine. Questions like these use the phrase "traditional methods" to represent some widely practiced method that presumably has predictable, acceptable results. If technology performs better than traditional methods, such questions imply, everyone should use it. A neat picture, but "traditional methods" doesn't define the higher education that I know and love, nor is it the higher education that research reveals.
Postsecondary learning is not usually so well-structured, uniform or stable that one can compare an innovation against traditional processes without specifying in explicit detail just what those processes are. And by specifying in detail what "traditional" means (what materials, what methods, what motives), you limit your study to a very small and temporary universe.
Organizationally, our institutions don't behave like machines, either. Cohen and March did a classic study of presidential decision making some years back, coining the term "organized anarchy" to describe how our institutions function. The term describes any institution, they said, which, like the typical college or university, has:
- problematic goals (it appears to operate on a variety of inconsistent and ill-defined preferences),
- unclear technology, i.e., methods (although the organization manages to survive and, where relevant, produce, it does not understand its own processes), and
- fluid participation in decision making (the boundaries of the organization appear to be uncertain and changing).
Sound like a machine being fine-tuned toward a Platonic ideal of efficiency? To me it sounds not only like what colleges are (and ought to be) but also like what college courses are (and ought to be). Unfortunately this means one can't ask, "How well is this technology-based approach working, relative to the norm?" since there usually isn't a norm.
It also seems useless to search for global generalizations about the costs of technology relative to traditional methods. Howard Bowen, a noted economist of higher education, found that institutions of higher education each raise all the money they can, spend all they get, and spend it in ways that relate closely to the way they spent the money last year. His 1980 study found little relationship in patterns of spending even among institutions that appeared on the surface quite similar. They spent rather different amounts per student, and they spent each dollar differently. Bowen found no way to state rationally what it ought to cost to educate a student properly. Tougher economic times may have forced some convergence in costs among institutions. But we still have no rational way of describing what traditional education should cost per student.
Platonic ideals aside, it's also difficult to determine what education does cost. Prices and accounting methods vary by institution and situation. Services that are inexpensive for some institutions are quite expensive for others. Complicating the cost question still further is the rapid and not always predictable change in technology prices and performance.
None of this suggests that we should ignore issues of cost in looking at new investments in technology. But caution flags should go up whenever you hear someone say the nation can teach English composition more cheaply if it uses technology X, be that technology old or new.
2. IF YOU'RE HEADED IN THE WRONG DIRECTION, TECHNOLOGY WON'T HELP YOU GET TO THE RIGHT PLACE
Questions can also be useless if we fail to ask them. Many advocates of technology want to improve current teaching. But too often they fail to ask whether traditional education has been teaching the right content. They seek to change the means of education but don't first ask hard questions about its objectives. What makes me uneasy about the content goals of undergraduate education is grades, and what research tells us about them.
Any undergraduate can tell you that grades are the key to interpreting the mysteries of higher education. Faculty give you high grades when you learn what they value, right? We tell students repeatedly: study hard, get good grades, and you will learn what you need in order to do better in life.
But is that true? Let's assume that the curriculum teaches knowledge, skills and wisdom that are of advantage to graduates. We'll also assume that faculty members are grading rationally. And although higher education has many goals, not all of them professional or vocational, at least some of them are meant to foster later success in the workplace (e.g., salaries, chances of winning a Nobel Prize, etc.). In that case, research ought to reveal a positive correlation between cumulative grade point average and work outcomes. In other words, your A graduates should have learned enough to do better in their work life than your C graduates. (I'll use "graduate" to denote anyone who has completed a course of study, whether or not the person receives a degree.) In contrast, if the curriculum were irrelevant to work outcomes (or if grading were random), then the correlation would be zero. It wouldn't matter how efficiently we taught the wrong stuff, or whether we used technology to teach it three times as well; the correlation between GPA and life outcomes would still be zero.
In 1991 Pascarella and Terenzini synthesized all the research they could find bearing on higher learning. Going to college and graduating pays off in many ways, they found. Choice of major makes a difference in life outcomes. All that is good news. But while Pascarella and Terenzini discovered many studies finding a tiny positive correlation between grades and work achievement after graduation, the correlation is so small (about 1-2% of the total variation) as to be meaningless for the individual student.
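To see just how small that is, here is a back-of-the-envelope calculation (my own illustration, not one from Pascarella and Terenzini): the share of variation explained is the square of the correlation coefficient, so a 1-2% share implies a correlation of only about 0.10 to 0.14.

```latex
% Explained variation is r^2, the square of the correlation coefficient.
% If GPA accounts for 1--2% of the variation in work outcomes, then
r^2 \approx 0.01 \;\text{to}\; 0.02
\quad\Longrightarrow\quad
r \approx \sqrt{0.01} \;\text{to}\; \sqrt{0.02} \approx 0.10 \;\text{to}\; 0.14
% In regression terms, a graduate one standard deviation above the mean
% GPA is predicted to be only about a tenth of a standard deviation above
% the mean in work outcomes -- far too weak to matter for an individual.
```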
Why do grades not predict how well our graduates perform? Is it because we are not even trying to teach them certain knowledge, skills and wisdom that they need? Or does the problem lie in the way that faculty assess learning?

Are Students Being Taught the Right Stuff?
One possibility is that the curriculum is failing to focus on the knowledge, skills and wisdom that graduates need. For example, some studies of GPA and work outcomes focus just on MBA graduates and their success in their first jobs (e.g., starting salaries, likelihood of promotion, etc.). Findings about MBA graduates by Crooks and by Livingston are consistent with Pascarella and Terenzini's: little relationship between GPAs for business school grads and their work achievement. Perhaps the reason for the tiny relationship is that there are important skills that the curriculum fails to teach or reward. That's the implicit message of The Competent Manager by Richard Boyatzis, a classic work published in 1982. The volume summarizes many empirical studies of the cognitive skills of effective managers. Each study compared the patterns of thinking of superlative managers to those of average managers.
Boyatzis found that the cognitive skills of highly successful managers didn't seem to bear much relationship to what business schools were teaching. For example, one of the key skills is the ability to shape and achieve goals by working through coalitions of peers. The habits of thought and action needed to be a good coalition builder must be developed over many courses and extracurricular activities. Do today's business schools do that, so much so that their highest GPAs are usually earned by students who are best at organizing teams?
Boyatzis' findings have broader significance. Skills of working with people and in organizations are important for just about every graduate, not just business school types. Most forms of work, citizenship and even family life require such skills, knowledge and wisdom.
If you study your own graduates and find that there is no apparent difference in the fate of those who got A's and those who got C's, perhaps it is because your program is not teaching the right stuff.
Or Is Grading the Problem?
A second way to account for Pascarella and Terenzini's finding is to infer that grading is irrational. Let's assume that most faculty members have no idea what their students think or have learned. By this argument, the students who learn the most may be as likely to get a C as an A. One of the most devastating studies in support of that notion is embodied in a video. "A Private Universe" opens in Harvard Yard during Commencement in the late 1980s. Twenty-two graduating seniors, faculty and alumni were asked one of two questions: "Why is it warmer in the summer than in the winter?" or "Why does the moon seem to have a different shape each night?" Only two of them answered their question correctly. Yet they should have learned about both these phenomena repeatedly while still in school.
The scene then shifts to a good high school nearby. We see ninth graders answering those same two questions incorrectly, in the same ways the Harvard seniors did. The ninth graders are interviewed before they're taught the material that year, and then again right afterward. The instruction looks good. But the teacher does not seem to be learning anything about what students believe about these phenomena, despite the fact that she repeatedly asks them canned questions and gets canned answers back. The videotaped interviews show that the students' preexisting theories remained invisible to the teacher, and often untouched by instruction.
"A
Private Universe"
is not the only study
that shows that students
can get A's without
truly understanding the
material or being able
to apply it. When
faculty don't understand
what students believe,
know and can do, they
are unlikely to teach or
to grade appropriately.
So we have two pieces of bad news. We're probably failing to teach the right stuff, but even if we were trying to teach the right stuff, many instructors wouldn't notice whether their students were learning it or not.
I'm not suggesting that we rush out and faddishly transform our curricula. But I do believe that most institutions of higher education are facing a Triple Challenge of outcomes, accessibility, and costs. If not now, then in the next few years, they will find it increasingly difficult to offer a modern, effective academic program that reaches and retains the students they should be serving, for a price that those students and their benefactors can afford. For many institutions, these three issues of outcomes, accessibility, and costs pose real threats to their reputation and well-being. I see no evidence that most institutions will be able to meet this Triple Challenge without substantial use of computers, video and telecommunications. (In fact, this Triple Challenge is one reason why technology has been rising to the top of budgets and presidential agendas for the last few years. One can no longer afford to ignore technology and still maintain institutional health.) However, if we rush out and buy new technologies without first asking hard questions about appropriate educational goals, the results are likely to be disappointing and wasteful.
3. THE MEDIUM ISN'T THE MESSAGE
Several decades ago, as educators began to think seriously about using the new technology of the day for teaching, you'd hear things like "television will ruin learning" and "computers will revolutionize instruction." (Twenty-five hundred years earlier in Greece you'd have heard the same debate about the written word and its impact on dialogue-based education.) In other words, they were asking whether a technology could teach without specifying anything about the teaching methods involved.
Richard Clark responded to that type of assertion by arguing, in effect, that the medium is not the message. Communications media and other technologies are so flexible that they do not dictate methods of teaching and learning. All the benefits attributed by previous research to "computers" or "video," Clark asserted, could be explained by the teaching methods they supported. Research, Clark said, should focus on specific teaching-learning methods, not on questions of media. Clark's studies provoked a blaze of responses because he seemed to be saying that technology was irrelevant. A good set of these attacks, with rejoinders by Clark, can be found in two recent issues of Educational Technology Research and Development, cited in the reading list at the close of this essay.
Robert Kozma argues, for example, that a particular technology is not irrelevant: any given technology may be well or poorly suited to support a specific teaching-learning method. There may indeed be a choice of technologies for carrying out a particular teaching task, he argues, but it isn't necessarily a large choice. There are several tools that can be used to turn a screw, but most tools can't do it, and some that can are better for the job than others. Kozma suggests that we do research on which technologies are best for supporting the best methods of teaching and learning.
I agree with both of them. Clark's message is the more important, however. Too many observers assume that if they know what the hardware is (computers, seminar rooms), they know whether student learning will occur. They assume that if faculty get this hardware, they will easily, automatically, and quickly change their teaching tactics and course materials to take advantage of it. Thus technology budgets usually include almost no money for helping faculty and staff upgrade their instructional programs.
As for useful research, we have both the Clark and the Kozma agendas before us:

- to study which teaching-learning strategies are best (especially those that would not even be feasible without the newer technologies), and
- to study which technologies are best for supporting those strategies.
4. COMPUTER-BASED TUTORIALS ARE VALUABLE BUT...

At this point it may seem like all the research and evaluation are useless. It's time to turn to some questions that have yielded important information.

Since the 1960s the popular image of the computer revolution has rested on individualized computer-assisted instruction. This type of software teaches by offering some text or multimedia instruction, asking the student questions, and providing feedback and new instructional material based on the student's answers. Each student moves through the materials in a different way, and at a different rate. James Kulik and his colleagues at the University of Michigan have summarized the vast research about such software. They reanalyzed data from large numbers of small studies in order to draw more general conclusions. Their basic finding: this method results in a substantial improvement in learning outcomes and speed, perhaps around 20% or more on average. Such instruction works best, of course, in content areas where the computer can tell the difference between a student's right answer and wrong answer, e.g., in mathematics or grammar exercises. Few other teaching methods have demonstrated such consistently strong results as this type of self-paced instruction.
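For readers curious about the mechanics of such a reanalysis, the basic logic can be sketched as follows. This is a generic fixed-effect meta-analysis formula, offered only as an illustration of the approach; I am not claiming it reproduces Kulik's exact procedure.

```latex
% Each study i yields an effect size d_i (a standardized difference
% between the computer-assisted group and its comparison group) with
% sampling variance v_i. A fixed-effect meta-analysis pools the studies
% by weighting each one by its precision, so larger, more precise
% studies count for more:
\bar{d} \;=\; \frac{\sum_i w_i\, d_i}{\sum_i w_i},
\qquad w_i = \frac{1}{v_i}
% The pooled estimate is then read as a general conclusion about the
% method, rather than a finding about any single course or study.
```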
The news is not all good, however.

Studies such as those analyzed by Kulik and his colleagues have focused purely on the educational value of software, not on factors influencing its viability. Unfortunately, even the best computer-assisted instruction of this type has often not found a substantial number of users in higher education. Software intended for educational use often fades away, its revolutionary promise unfulfilled.
A group of us led by Paul Morris created a casebook that analyzed twenty pieces of software developed in the 1980s and early 1990s. These software packages had already demonstrated not only value (educational power, as evidenced by evaluations and awards) but also viability (extensive use over many years). If software is not widely used by many faculty over many years, it is unlikely to foster lasting, national improvement in the way one or more courses are taught. We wanted to understand why a few software packages had proven viable while so many others had not.
Perhaps our most important finding was that it usually takes years for curricular software to be developed and then to become widely accepted. There are many reasons for this. Support services are often under-funded, so faculty can't be certain that the basic hardware and software will be consistently available and in working order. Changing a course involves shifts to unfamiliar materials, creation of new types of assignments, and invention of new ways to assess student learning. It's almost impossible for an isolated faculty member to find the time and resources to do all these things, and to take all these risks. Few institutions provide the resources and rewards for faculty to take such risks. For these and other reasons, the pace of curricular change is slow.
The more revolutionary the software, the longer and more arduous the task of getting a critical mass of users. For large pieces of curricular software, the journey from conception to wide use might take ten years or more.
Unfortunately, long before most curricular software found such wide use, computer operating systems and interfaces had changed. Instead of looking revolutionary, the software began looking obsolete. Use, instead of growing, began to decline. The lack of obvious returns discouraged funders and publishers from investing in the creation of version 2.0. The original developers had often lost interest, too. Faculty knew that making uninteresting upgrades would win them few rewards. Thus many valuable curricular software packages died without ever fulfilling their promise.
We did find a few small families of curricular software that found a niche. However, many of these packages gained use because they were inexpensive to develop (and thus inexpensive to update regularly) and familiar. They got into use by being comfortable, not by making instructional waves. Hardly the stuff of revolution.
That doesn't mean that software isn't used for learning. Ironically, while software designed for learning has had a hard time finding a postsecondary market, most software used for learning was not designed for that purpose.
5. WORLDWARE: SOFTWARE THAT WASN'T DESIGNED FOR INSTRUCTION CAN BE VALUABLE FOR TEACHING AND LEARNING

Worldware is the name we gave such software. Worldware is developed for purposes other than instruction but is also used for teaching and learning. Word processors are worldware. So are computer-aided design packages. So are electronic mail and the Internet.
Worldware packages are educationally valuable because they enable several important facets of instructional improvement. For example, online libraries and molecular modeling software can support experiential learning. Electronic mail, conferencing systems and voice mail can support collaborative learning by non-residential students.
Worldware packages are viable for many reasons. They are in instructional demand because students know they need to learn to use them and to think with them. Faculty are already familiar with them from their own work. Vendors have a large enough market to earn the money for continual upgrades and relatively good product support. New versions of worldware are usually compatible with old files. Thus faculty can gradually update and transform their courses, year after year, without last year's assignment becoming obsolete.
For reasons like these, worldware has often proven to have great educational potential (value) and wide use over a long period of time (viability). Has that educational potential been realized in improved learning outcomes? There is no substitute for each faculty member asking that question about his or her own students. Here are two such studies.
Karen Smith pioneered what is now an increasingly common application of electronic mail--as an important element in teaching foreign languages. Students of Spanish at the University of Arizona were told to write to one another using a form of electronic mail called computer conferencing. The faculty suggested some topics, e.g., the film the class had just seen, or reviews for upcoming quizzes. Other topics came from the learners, e.g., an upcoming party and one student's existential angst. Some of these e-mail conversations were private. Conversation in the public conferences was graded, but only for fluency of expression, not for content or grammar.
I met the first cohort of students taking this course. I've never seen a group, before or since, so excited about their course's use of technology. In part they were pleased because computer conferencing was more accessible than a language lab; they could participate from any computer at any time. More important, as several put it, "I'm using Spanish for the first time." And they didn't need to feel self-conscious about speaking quickly or with a good accent. All they needed to do was take the time to interpret what had been said (i.e., written) to them and then decide how to express their replies.
Surprisingly, Smith's study showed that, relative to a class taught using a traditional language laboratory, the oral performance of these students excelled. In the slower-paced, more anonymous world of the computer conference, they were speaking Spanish with a purpose, and learning to express themselves. The evaluation proved that worldware had been used in a way that opened a new dimension of learning for these students.
Another of my favorite evaluations of teaching tactics was never published. The faculty member was simply interested in seeing whether his use of technology was improving his students' learning. Bob Gross, a professor of biology at Dartmouth College, was an early user of personal computers to create animations. In the late 1980s, he became impatient about a bottleneck in his teaching. It was taking him two class hours to teach about a complex series of interactions in biochemistry--"48 blackboards' worth," as he put it. He would draw the molecules, talk, erase some, draw some, and talk some more. Gross wanted to speed up the process and make it more effective. In several weeks of work with an undergraduate student, he used worldware to create an animation that enabled him to teach the same material in half an hour. The students could also study the computer-based animation outside class, frame by frame if need be. "I was initially disappointed," he told me the day I visited him at Dartmouth, some months afterward. "There was very little excitement or discussion when I showed it in class. But later, when I gave them my regular exam on the subject, they did better than any previous class."
These two studies show that each faculty member can do his or her own research, asking these kinds of questions about what students are learning. That's what Schneps and others have shown is so important: know thy own students and what they are learning. Without asking hard questions about learning, technology remains an unguided missile.
6. STRATEGIES MATTER MOST

Studies by individual faculty of their own students and their own teaching methods and resources are necessary. But such studies are not enough. I suggest the following hypothesis:
Education can affect the lives of its graduates when they have mastered large, coherent bodies of knowledge, skill and wisdom. Such coherent patterns of learning usually must accumulate over a series of courses and extracurricular experiences. Thus, to make visible improvements in learning outcomes using technology, use that technology to enable large-scale changes in the methods and resources of learning. That usually requires hardware and software that faculty and students use repeatedly, with increasing sophistication and power. Single pieces of software, used for only a few hours, are unlikely to have much effect on graduates' lives or the cost-effectiveness of education (unless that single piece of software is somehow used to foster a much larger pattern of improved teaching).
Thus far few educators, evaluators and researchers have paid much attention to educational strategies for using technologies. Too often they've been victims of "rapture of the technologies." Mesmerized, they focus on individual pieces of software and hardware, individual assignments and, occasionally, individual courses. [Enrolling more adult learners has been a more powerful motive to change strategies, and to study those strategies. For a fine strategic evaluation of seven institutional projects to transform whole degree programs, I suggest Markwood and Johnstone's study, New Pathways to a Degree: Technology Opens the College.]
Few educators are thinking much about educational strategies for using technology to improve learning outcomes. Does that mean we're not employing such strategies yet? Quite the contrary. Here's an example.
Back in 1987, Raymond J. Lewis and I were looking for faculty members who had at least two years of teaching in an environment where students had unfettered access to personal computing.
One place we visited was Reed College in Portland, Oregon, where the current seniors had four years of easy access to Macintosh computers. I talked to faculty members from eight departments, asking what they liked about teaching in this environment.
Surprisingly, there was one thing that all of them had noticed. As two of them put it, "I'm no longer embarrassed to ask the student to do it over again." Because computer-based documents and projects are mechanically easier to revise, their students pressed to get a second chance to improve their work and their grade. Gradually the texture of the curriculum in each course was changing: toward projects developed in stages--plan, draft, conversation, another draft, final version. Each stage of work was marked by rethinking, and by learning. We called this strategy Doing It Again, Thoughtfully (DIATing).
I also asked a couple of seniors if they thought their education had been influenced by their use of computers. One of them replied that he'd learned that it's not one's first draft or thought that matters, but the final version. In what course had he learned that, I asked. He replied that it had been over a series of courses. Similarly, several faculty members and the director of the writing program independently suggested that the most tangible impact of computer availability would be at the capstone of the curriculum, in the intellectual tightness and coherence of bachelor's theses.
The day at Reed had a surprise ending. When Ray and I sat down with several of the College's educational and technology leaders, they were astonished by what we'd heard that day. The growth of DIATing had been an ecological change, not directed centrally. They hadn't known that their technology was being used in that way or with those kinds of outcomes. That's because their institutional strategy was the sum of large numbers of independent actions by many faculty members and students across the college.
From this story (and my other experiences with educational uses of information technology), I'd suggest three lessons:
1) Technology can enable important changes in curriculum, even when it has no curricular content itself. Worldware can be used, for example, to provoke active learning through work on complex projects, rethinking of assumptions, and discussion.

2) What matters most are educational strategies for using technology, strategies that can influence the student's total course of study.

3) If such strategies emerge from independent choices made by faculty members and students, the cumulative effect can be significant and yet still remain invisible. (Unfortunately, the converse can also be true. We may be convinced that we have implemented a new strategy of teaching across the curriculum, and yet be kidding ourselves.) As usual, there is no substitute for opening our eyes and looking.
Ordinarily what matters most is:

- not the technology per se but how it is used,
- not so much what happens in the moments when the student is using the technology, but more how those uses promote larger improvements in the fabric of the student's education, and
- not so much what we can discover about the average truth for education at all institutions but more what we can learn about our own degree programs and our own students.
7. TOOLS FOR EVALUATING STRATEGY: THE FLASHLIGHT PROGRAM

How can departments and institutions study their educational strategies for using technologies? A faculty member can't do this alone by looking at just one course. As we saw in the DIATing example from Reed, a strategy is a pattern of teaching and learning that extends over many courses. Only a college, university or department has the range of responsibility and resources to study strategy.
The Annenberg/CPB Project is taking some steps to make it easier for educators to obey the commandment--know thy students and what they are learning. January 1995 saw the birth of the Flashlight Program. It's an effort to develop and share evaluation procedures. Colleges and universities will be able to use these procedures to assess their educational strategies for using technology. The founding organizations included Annenberg/CPB, the Western Interstate Commission for Higher Education (WICHE), Indiana University-Purdue University at Indianapolis (IUPUI), the University of Maine at Augusta, the Maricopa Community Colleges, the Rochester Institute of Technology, and Washington State University. (In 1996, Flashlight was moved to the American Association for Higher Education, and in 1998 it became part of the new non-profit Teaching, Learning, and Technology Group.)
In a previous planning phase, supported by the Fund for the Improvement of Postsecondary Education (FIPSE), Flashlight identified the educational strategies that participating institutions most needed to study. Developing good evaluation procedures is expensive. We wanted our procedures to be widely used and important, so we focused them on educational strategies for using technology that are widely used and important.
As its name indicates, Flashlight's evaluative procedures will not answer all questions that an institution might have. Nor will it be easy or inexpensive to ask these evaluative questions. We do hope that the answers will prove unusually useful for transforming teaching and setting policy.
READING LIST

1982 - Boyatzis, Richard, The Competent Manager, New York: Wiley.

1980 - Bowen, Howard R., The Costs of Higher Education: How Much Do Colleges and Universities Spend per Student and How Much Should They Spend?, San Francisco: Jossey-Bass.

1974 - Cohen, Michael D., and James G. March, Leadership and Ambiguity: The American College President, New York: McGraw-Hill.

1977 - Crooks, Lois, "Personal Factors Related to the Careers of MBAs," Findings, IV:1, Princeton, NJ: Educational Testing Service.

1994 - Educational Technology Research and Development, XLII:2-3, Washington, DC: Association for Educational Communications and Technology.

1991 - Kulik, Chen-Lin C., and James A. Kulik, "Effectiveness of Computer-Based Instruction: An Updated Analysis," Computers in Human Behavior, VII:1-2, pp. 75-94.

1971 - Livingston, J. Sterling, "The Myth of the Well-Educated Manager," Harvard Business Review, no. 71108, January.

1994 - Markwood, Richard A., and Sally M. Johnstone (eds.), New Pathways to a Degree: Technology Opens the College and New Pathways to a Degree: Seven Technology Stories, Boulder, CO: Western Interstate Commission for Higher Education.

1994 - Morris, Paul, Stephen C. Ehrmann, Randi Goldsmith, Kevin Howat, and Vijay Kumar, Valuable, Viable Software in Education: Cases and Analysis, New York: Primis Division of McGraw-Hill.

1991 - Pascarella, Ernest T., and Patrick T. Terenzini, How College Affects Students: Findings and Insights from Twenty Years of Research, San Francisco: Jossey-Bass.

1987 - Schneps, Matthew H. (Producer, Director), "A Private Universe" [film], Washington, DC: The Annenberg/CPB Project.

1990 - Smith, Karen L., "Collaborative and Interactive Writing for Increasing Communication Skills," Hispania, LXXIII:1, pp. 77-87.
Note: This essay was also adapted as a chapter of the same name, written by the author, for the second edition of the Technology Costing Methodology Handbook, published by the Western Cooperative for Educational Technology (forthcoming).