“Transformative Assessment” (perhaps better entitled “Assessment for Transformation”) is a
label that EDUCAUSE’s
National Learning
Infrastructure
Initiative (NLII), the
Coalition for Networked
Information (CNI), and
The TLT Group’s
Flashlight Program have
applied to a
joint project.
Our aim: help
institutions use data to
document, guide, and
accelerate their efforts
to make major
technology-intensive
improvements in teaching
and learning. Part I of
this essay describes the
background of this idea.
Part II suggests some research you or your colleagues could do to help define what assessment can, and cannot, do to aid transformation.
Note
on authorship:
Throughout this essay,
when I say “we,” I
am acknowledging that we
on the project team have
been working on these
issues together.
However, I am the author
of this particular paper
and do not mean to imply
that all of my
colleagues necessarily
agree with any
particular point in this
essay.
Last
year, I wrote a brief
paper describing
three institutions where
such uses of assessment
seemed to be underway.
It was a quick
effort – one brief
interview per
institution.
The research was
done as part of this
joint project, to
prepare for a workshop
in Denver last winter
and an online workshop
run during Spring 2002.
This coming June, NLII, CNI, and Flashlight
will run a second
workshop on
Transformative
Assessment and again
will offer an online
workshop.
As part of our
preparations, I’ve
been thinking about
Transformative
Assessment and how
little we know about its
possibilities and
limitations.
It’s
easy enough to describe
the ideas behind
“transformative
assessment”:
Transformation
usually isn’t called
“Transformation.”
For example,
if the institution is
working to
internationalize the
curriculum (using the
Web, international
partner institutions,
faculty development,
streaming video,
expansion of parts of
the library’s
collection, etc.), the
effort is likely to be
called
“internationalizing
the curriculum,” not
“transformation.”
If, to choose
another example, a
department is
collaborating with three
other institutions to
create a joint degree
program, their effort is
also unlikely to be
tagged locally as
“transformation.”
When our project
refers to a local effort
as “transformative,”
we mean that the effort
involves qualitative
change in both outcomes
and the activities
producing those
outcomes, that the
change is relatively
large scale, and that
the change usually
involves more attention
to learners, more active
learning, more
collaboration, and a
wider range of resources
than before.
Assessment
as Guidance: We see
data gathering as far
more than a way of
deciding after the fact
whether an innovation
has succeeded or failed.
We primarily see
assessment as a way of
guiding and accelerating
the improvement, while
controlling costs, risk,
and stress on staff.
To achieve those
goals, the assessment
needs to be planned and
budgeted as an integral
part of the improvement
effort.
When technology
is a pivotal part of a
change effort, the need
for guidance from data
is even more urgent.
Because of the pace of
change and the expense of such technologies and
connectivity, the stakes
and risks are even
higher than normal.
Gaining
Leverage from Assessment:
Assessment (also known
as program evaluation or feedback)
can be pivotal in
several different ways:
1. The data can help
sustain institutional
and outside attention on
the improvement effort,
if reports are issued
regularly and raise
issues that must be
discussed and resolved. That’s important
because higher education
suffers from attention
deficit disorder.
Change is often
powered partly by the
“excitement of the
new,” so, as the
months and years go by
and the effort is no
longer new, energy
wanes. Unfortunately, it
can easily take 5-10
years to change a
program or institution
enough to visibly
influence outcomes or
costs. Because
institutions rarely pay
attention to one issue
that long, most such
efforts die before
achieving real results.
Technology has
made it more important,
and more difficult, to
sustain attention long
enough for improvements
to succeed.
Periodic
evaluation ought to be
able to help if the
studies are
well-designed.
2. Data can be used
to provide early
warning before
problems grow large
enough to sink the
initiative. For example,
imagine a pilot program
using a new web
authoring package. The
early users are few in
number, pioneering by
nature, and easy for a
small tech staff to
support. But a study
might find that each
faculty member
nonetheless requires a
great deal of help from
support staff. The study might then predict that, when the program grows larger and involves more mainstream faculty, the burdens on the support staff and the faculty would become insupportable. (A back-of-the-envelope projection of this kind is sketched after the list of options below.)
Solution options include:
- shift to a less demanding form of the innovation;
- invent ways for a small support staff to help that many faculty efficiently;
- increase support budgets and hire staff so that, when the program expands, the institution can respond;
- abandon this particular innovation before people are too bruised; or
- go ahead and hope the study is wrong.
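To make that early-warning arithmetic concrete, here is a minimal, purely hypothetical sketch in Python of the kind of projection such a study might inform. The faculty counts, hours of help per faculty member, and support-staff capacity are invented for illustration, not drawn from any actual study.

# Hypothetical projection of support-staff load as a pilot program scales up.
# All numbers are illustrative assumptions, not findings from a real study.

def projected_support_hours(n_faculty: int, hours_per_faculty: float) -> float:
    """Total support hours needed per term for a given number of faculty."""
    return n_faculty * hours_per_faculty

# Pilot phase: a dozen pioneering faculty, each needing ~10 hours of help per term.
pilot_load = projected_support_hours(12, 10.0)

# Mainstream phase: 200 faculty; mainstream adopters are assumed to need
# somewhat more help than pioneers (say 15 hours each per term).
mainstream_load = projected_support_hours(200, 15.0)

# Capacity of the current support staff: 3 people, ~400 support hours each per term.
staff_capacity = 3 * 400

print(f"Pilot load:         {pilot_load:.0f} hours/term")
print(f"Projected load:     {mainstream_load:.0f} hours/term")
print(f"Current capacity:   {staff_capacity} hours/term")
print(f"Shortfall at scale: {mainstream_load - staff_capacity:.0f} hours/term")

Even with these invented numbers, the point of the early warning is visible: a pilot that feels comfortable at a dozen faculty can imply a support shortfall of well over a thousand hours per term once the program reaches the mainstream.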
3.
The data can be
used to attract more
resources from
outside the institution.
This can be a win-win
situation if the studies
are well designed.
Well-designed
studies can produce
documentation of success
when the program is
going well, and evidence
of the kinds of
resources needed to
solve problems if there
are indeed problems.
Importance of Alignment: The
data gathering process
should help illuminate
the change program, as
we’ve said. We often
refer to this as
“alignment” between
the two. Equally
important, assessment
needs to align with the
other aspects of the
institution’s
operation that are
supporting the change.
Those relationships
should usually be
two-way.
- For example, data would be gathered about faculty development for our example program of internationalizing the curriculum. Meanwhile,
some of the faculty
development would need
to educate faculty and
staff in how to gather
and interpret data about
the unfolding progress
of internationalizing
the curriculum. For
example, faculty might
be trained in gathering
data to help diagnose
sources of conflict
online among students of
different cultures.
- Another
example of this two-way
alignment: data would
need to be gathered
about the budget
investments the
institution is making in
the change.
Meanwhile, the
budget makers would need
to remember that funding
for the data gathering
and its staff is a priority if the overall
program
(internationalizing the
curriculum) is to move
forward.
For a
more detailed
description of the role
of data-gathering in
guiding, accelerating,
and reducing the risks
of major educational and
organizational
initiatives, see this
longer essay originally
published in EDUCAUSE
Review and other
resources
being collected by the
Transformative
Assessment Project.
What
I’ve just described is
what we’d like to
imagine
transformative
assessment can be.
But
what does today’s
reality look like, at
best? That’s where the
research comes in. As
usual in these columns,
I’m imagining that
you, the reader, are
considering writing a
grant proposal to do
research or are a
student seeking a
dissertation topic.
What examples
could you discover of
real-world use of data
in such transformative,
technology-enabled
initiatives in colleges
and universities?
What might you
learn from their
experience? Answering
such questions is an
exercise in ‘applied
history.’
Here are some
slightly more focused
queries:
- Is there
any pattern in the kinds
of issues for which data
played a role in
charting, guiding, and
accelerating (or
blocking) such an
improvement effort (at
one or more educational
institutions)?
- What sorts of coalitions of interests and resources
were behind the more
successful uses of data?
- Did the
role of technology in
the reform raise special
concerns about the use
of data? For example,
some people tend to
denigrate data gathering
(wrongly, in my view)
because they imagine
that technological
change will render
findings obsolete before
they can be used.
(I think that the right kinds of study can be useful without being especially vulnerable to this concern.)
- Can we
learn anything useful
from the factors that
supported or hindered
such efforts to use
data? While some issues
are likely to be
relatively generic, it
could be useful to know
whether some issues are
specific to the nature
of such changes, the
attitudes and
assumptions around the
use of information
technology, the role of
corporate vendors, etc.
- Were the efforts to gather data seen mainly as part of a larger-scale effort to
boost assessment at the
institution? Or as part
and parcel of the
improvement effort? For
example, at a
hypothetical institution
using technology as part
of its effort to
internationalize the
curriculum, was the
effort to use data
backed mainly by the
assessment program at
the institution? Mainly
by the advocates of
internationalization and
their funders? And/or by
other parties?
- What can other
advocates of data use
learn from these
experiences about how to
organize and sustain
multi-year efforts to
use data to guide an
improvement effort?
- What
unanticipated
opportunities and
problems did the
advocates of data use
encounter?
- In retrospect, did the
results of data use
justify the investment
in gathering the data?
If you
find these ideas
provocative and would
like to chat about them,
or related issues, please
let me know.