By "Transformative assessment” we mean the fostering
of educational improvement through 1) the systemic use of inquiry:
studies of courses (often by the instructors who teach them), studies of
majors, studies of libraries and other services, studies at the institutional
level – many of which focus on the same issues, and 2) the alignment of such
studies with mission, planning, budgeting, development and other key
institutional processes.
Transformative assessment can potentially play several constructive roles in the improvement process.
That's the definition, but reality has a habit of being more interesting, if not as neat. A few faculty used computing to improve learning as early as the 1960s, and e-mail made its appearance in educational programs as early as the PLATO system in the 1970s. But only in the last
decade have many institutions had enough infrastructure, enough trained faculty,
and enough interested students to justify thinking about technology playing a
role in programmatic and institutional improvement efforts.
Most have attained that level only in the last five years. The last
decade has also seen a considerable increase in societal interest in assessment.
The two shifts are not unrelated. Technology
adds to both the cost and the uncertainty of improvement efforts.
Assessment can help to decrease risks while providing some guidance for
efforts to make better use of time and money.
In recent weeks, I have begun a series of interviews with
people at institutions whose assessment of educational uses of technology seemed
likely to be playing a transformative role. These sketches are an early report from the front.
I spoke with Chuck Dziuban, Professor and Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida (UCF). Distributed learning has been the focus of improvement work there, including assessment.
SCE: What was the foundation of your studies to advance the
development of distributed learning at UCF?
CD: Our evaluations had to align with our institutional culture: we are a large metropolitan research university with a real commitment to serving our region while also building programs of national promise in selected areas. We also decided as early as 1996 to warehouse large amounts of data,
because we didn't know which data would ultimately be important. We knew that
data don't equal information. Data need to be translated into forms
decision-makers can use.
SCE: Can you describe some of your findings?
CD: Early on we discovered that a large majority of students
who took fully online courses also took courses on campus and spent time here.
That led UCF to develop "M" courses: reduced-seat-time courses that use the Web for part of the instruction. M courses helped us use our classroom space more efficiently. In a later set of studies, we documented that M courses have equal or superior success rates and comparable or lower withdrawal rates compared with similar traditional face-to-face courses. We compared success rates in W courses (fully online), M courses (Web instruction replaces a portion of face-to-face instruction), E courses (Web use, but no reduction in seat time), and courses with no use of IT.
We're also looking at the cognitive style of students who choose W courses. They tend to be dependent learners with high academic profiles who need some kind of approval from authority figures; independent learners tend not to be drawn to W courses, and we don't yet know why.
SCE: I gather UCF is helping faculty to do studies of their
own courses, studies that often align with research done at the institutional
level.
CD: We’re trying to develop a collage of related studies.
But we make no judgment on the projects faculty are doing; they don’t have to
line up with ours.
SCE: Are there instances where findings related to one
another, or where studies fed off one another?
CD: Communication patterns in online courses. Individual faculty members have found that interaction is better in both quality and quantity; that comes from several individual studies. Our office is now going to look at communication patterns at the institutional level.
Another example: individual faculty have studied the impact of gender and ethnicity – are there differential impacts of these courses on the success rates of different types of students? One thing we've found consistently is that fully online (W) courses enroll disproportionately more women, that this is not an artifact of discipline, and that the women succeed at higher rates than the men. Why? Maybe women are more able to take advantage of the medium by collaborating – we don't yet know. These are the kinds of things we're going to look at next.
SCE: When faculty do studies of their courses, does it help
in promotion and tenure decisions, or is it a distraction that can pull them
away from the things that are really rewarded?
CD: It depends on the department and the college. It helps in
some, and not just in the College of Education at UCF, where research on pedagogy is highly valued.
SCE: Any studies done at the level of the department or the
major?
CD: Our first attempt will be to look at the outcomes of our web-based nursing program. When they enter, the students have already finished their clinical component; we provide the courses. We're working with our nursing program to evaluate the students' job performance. It's a complex and difficult study. Our next such study will focus on our Educational Media Master's program, and we hope the two studies will work together to at least some degree, because some of the issues are similar.
SCE: Have your findings influenced faculty support?
CD: When faculty started teaching in M and W courses, they
encountered all kinds of problems. M
courses typically have one class meeting a week instead of three. We got
feedback from M courses indicating that students had real IT problems. So our
Course Development and Web Services department developed a CD-ROM – the
Pegasus Disc – to answer the majority of the questions and do some automatic
debugging.
SCE: What is the focus of this transformative assessment effort?
CD: We began this by calling it a distributed learning impact evaluation. A criticism we got a couple of years ago was that we were spending resources to evaluate distributed learning but that no such effort was being made to evaluate the "traditional" program. The provost responded that this was a good point and changed our goal to the evaluation of teaching effectiveness. So our mission now is to help all faculty.
SCE: Institutions are easily distracted. How has UCF
maintained the focus for five years?
CD: Our President, John Hitt, is very strong on the idea of strategic planning. All units develop plans, and they all relate to five broad institutional goals. That's how we get this cross-validity of initiatives. The Provost, Gary Whitehouse, has been a strong supporter, too. Also, our Vice Provost for Information Technologies and Resources, Joel Hartman, has assembled a wonderful infrastructure. That's not the only thing that helps us maintain focus, but it's been really important. We have reached the point now where our students and faculty are enthusiastically asking to go online.
SCE: What sort of financial commitment has the institution
made to this kind of assessment? How big is it?
CD: We have an adequate unit. We're funded internally each year. We have two full-time faculty members and two graduate student assistants, so we can provide support for all faculty who ask. We're able to work with about 40 faculty a year. We've helped produce countless conference presentations by faculty, and about 10 faculty articles have come out of the work so far.
SCE: What's been frustrating?
CD: It's been like turning an oil tanker. This is a big institution, and institutional change is a long, incremental kind of process. What's positive is that we know it's a long-term commitment, and we're committed to it in terms of funding and opportunities.
SCE: What’s next?
CD: We're de-emphasizing comparative studies that ask how good M or W courses are relative to others – we're past that. We're interested in tracking how the institution changes. What are the changes in faculty development? What are the revitalization possibilities for our faculty? Our uses of the Web seem to be energizing some faculty who had seemed burned out. I'm also very much interested in how one indexes the transformation of an institution and the infusion of technology. What are the predictors of success? For example, I suspect that the department is a very important predictor.
I spoke with Patricia Derbyshire, Coordinator of Institutional Assessment and Market Research at Mount Royal College in Calgary, Alberta, Canada.
SCE:
Patti, tell me about the TITLE Project.
PD:
TITLE stood for “Technology Integration in Teaching and Learning
Environments.” The project lasted for about 18 months, from fall 1997 to
spring 1999.
TITLE related to the College's strategic plan, but there was also a strategic plan called TIP (Technology Integration Plan) to get computers onto faculty desks and to train faculty, especially regarding the use of technology for teaching and learning. TITLE affirmed that faculty preferred to tie technology into educational ideas such as the 'seven principles of good practice,' and it gave them an opportunity to think about technology in terms of how to use it in instruction.
SCE:
What role did the TITLE studies play in moving that line of work forward?
PD: This was an important focus for administrative and strategic support early in the integration process. Up to that point, there had been no formal snapshot of what was actually going on with technology in teaching at Mount Royal. For faculty who were unsure about even starting to use IT, the findings from the TITLE studies gave them something to review and consider. There were about a dozen studies carried out as part of TITLE.
SCE: The TITLE studies impressed me because they were both useful and extremely varied. Can you give me some examples of the different kinds of studies that were done?
PD:
One type of study looked at individual courses.
Our first study of this type was a pilot in the English department involving five sections of a composition course, each taught by a different instructor, and about 100 students in all. One thing that study showed was that students came into the course with surprisingly diverse past experience with computers, and that those differences affected what happened to them in the course. Students who were new to computers and feeling intimidated often skipped the training. Ironically, students who were very experienced with computers also skipped the training. Skipping the training hurt both groups, because much of the training was specific to the course.
In
the course, students were asked to e-mail essays to one another, edit essays
they received, and then send them to the instructor. But our study showed that
many students didn't understand they were being asked to do this. The next time the course was offered, the explanation was changed, and those student complaints vanished.
Our
study also uncovered problems with the infrastructure and its reliability; those
findings went to the administration. If an institution wants faculty to rely on
technology, the infrastructure needs to be reliable, or the window of
opportunity created by initial faculty good will closes in a hurry.
When
we repeated the study of that course the next term, after our findings had been
used to change things, those concerns about training and competence had
disappeared. Now we always advise faculty to formally think about the support
that students will need, how to invite the students to use the support, and how
to engage them.
SCE:
At least one of your studies focused on courseware, didn’t it?
PD:
One focused on a CD-ROM on sports injuries, developed by our athletic therapy
program, a post-degree certificate program.
The CD-ROM enabled students to look at streaming videos to help them
learn how to assess injuries. While I was doing observations of the use of the
disc, I saw students working together at computer terminals in the college labs.
The instructor hadn’t realized how actively the students were going to
engage the material.
SCE:
Did TITLE do any studies of majors or departments?
PD:
No. Even today at Mount Royal it's still unusual to think on that scale. After close to five years, we've had our first such request, from our School for Business Insurance Program.
SCE: How
about studies of services?
PD:
We studied the student technology assistant support program (“START”)
separately. These are trained
undergraduates who help other students and faculty use technology.
We found that they were really effective in helping improve student and faculty competence and confidence, so that faculty could focus more on course content. That study helped create the basis for moving the START program into the operating budget.
SCE: What
other sorts of studies did TITLE do during the brief life of that project?
PD:
We studied three separate programs that dealt with faculty incentives and
support for using technology to improve teaching and learning.
What we learned, especially from the initial studies, was that equipment-centered training was a very superficial way to deal with instructional issues. The Academic Development Center (a teaching-and-learning excellence center) expanded from three staff in 1997 to a dozen or more in order to provide support that was more centered on teaching and learning.
SCE:
What was the relationship between TITLE and the strategic planning process at
Mount Royal?
PD:
We were constantly referring to the College’s strategic plans and to the TIP
document. The way the College’s
plan and budget were written, these technologies would be put in place, faculty
were to be trained in using them, and the outcome was supposed to be increased
integration of technology into teaching and learning. We learned quickly that we were giving faculty very little support to actually think through that process of using technology to improve a course or their instruction. Faculty needed occasions that would help them think about how to think about their own courses – a cookie-cutter approach wouldn't work.
SCE: It seems to me that Mount Royal made an unusually good effort to do assessment and line it up with institutional strategy. How did that happen there when it's so unusual elsewhere?
PD: A concern here with accountability is part of it. Also, there are people here who want assessment to be meaningful; that's made a difference. These days we're looking at full alignment of assessment across the strategic plan, and at integrating it into the work of the College rather than seeing it as an add-on. We see assessment as one of our tools for creating improvement. Also, our approach to assessment uses contemporary models that are inclusive and participatory. Interested faculty are involved in the design and refinement of the assessment at every stage; it's not done "to" them. It's a respectful approach, and it utilizes their expertise. In the end, we have a strong assessment process and relevant results that contribute to change.
SCE:
What are the promotion and tenure implications for faculty who do their own
studies? I know that, since TITLE ended, you’ve been working on a new project
at Mount Royal to redefine the end-of-course evaluation.
SCE:
It seems like the College has put substantial resources into assessment.
My respect for the work at Washington State University (WSU) was kindled by many conversations with Gary Brown, Director of its Center for Teaching, Learning and Technology (CTLT). For this essay, I interviewed Tom Henderson (Assessment Coordinator), Dennis Bennett (Information Systems Coordinator), and Carrie Myers (Graduate Staff Assistant), all associated with CTLT.
SCE: WSU has become known for
its Goals, Activities and Practices (GAPs) survey system for helping faculty get
a better understanding of what students expect in a course.
Tell me about that.
Tom Henderson: GAPs is used
mainly to improve courses developed with Course Management Systems. It’s
really a series of three surveys.
SCE: So survey #2 is asking
whether the technology is being used in ways that usually improve learning
outcomes. That makes sense. How
have GAPs findings affected programs at WSU?
TH: Our analyses helped influence the distance learning program to strongly request that all web-based courses be created through a formal course development process that involves both the critical thinking rubric WSU has developed and the GAPs analysis.
Dennis Bennett: Carrie Myers
and I are using GAPs data to predict the kinds of faculty that are most likely
to be successful teaching with technology. Our work is still in its early stages.
SCE: In looking at
transformative uses of assessment, we’re especially interested in how and why
studies align with one another and with budgeting, mission statements, faculty
development, day-to-day improvements in teaching and other elements of operation
and change. How is that alignment process working itself out at WSU?
TH: It's the absence of structure that has been most important. The people were there and interested in the issues, and there was no other institutional solution or structure getting in the way of their working together. That gave us a freer rein to do this ourselves.
Jane Sherman, our Associate Vice Provost, and Doug Baker, Vice Provost
for Academic Affairs, were key in recognizing all this.
SCE: What are the career
implications for faculty who do studies?
DB: It's becoming common for faculty to study their own courses. I just talked with a faculty member who wants to study online journaling (peer-to-peer interaction) as it relates to instructor-centered conversation. I talk to 6-12 faculty a semester who are interested in starting a study.
TH: But few colleges in the
University formally reward faculty who study their own courses.
Many administrators personally encourage this kind of work, however.
SCE: What’s been most
frustrating so far?
TH: There are few signs as yet that faculty are talking with one another about the GAPs data or using it to improve their courses. The lesson from GAPs was that gathering good data alone is insufficient for meaningful transformation. Many faculty do not know how to translate survey data on teaching and learning practices into substantial changes in pedagogy. WSU has made significant gains in this respect with a critical thinking rubric, which we are trying to integrate with the GAPs survey process. The rubric provides clear and discrete goals that can be applied, in whole or in part, to individual assignments and learning activities. As we put the rubric and GAPs together, we should be able to help faculty see what is going on in their courses and give them more direct guidance for improving those courses.
SCE: How can people find out
more about GAPs?
These glimpses of transformative assessment at three
institutions are probably a bit of a Rorschach test, but here’s what I see so
far in the inkblots.
First, “transformation” may (for some readers) set too
high a bar for the kinds of improvement we’re talking about. In the
institutional context, it might even be threatening to label the goal
"transformation."
Instead, the assessment efforts are identified with a particular kind of
improvement that has some internal priority and legitimacy (distributed
learning, using technology in teaching, aligning student and faculty goals)
rather than with “transformation.”
Second, the theme may or may not explicitly relate to technology. It does at UCF, where technology is examined in relation to access, equity, and richer forms of instruction. But at Mount Royal and WSU the focus is on better teaching-learning practices, with technology as a subtext.
Third, the term “transformative assessment” might imply that assessment by itself can cause transformation. That's the opposite of our point. To use Patti Derbyshire's words, assessment shouldn't be seen as an add-on. Chuck Dziuban sees institutional culture at UCF as a foundation for their work in assessment. Assessment is effective in fostering change only when it's closely linked to other facets of institutional life and change.
Fourth, this alignment of studies and action seems to result
from a mix of top-down and bottom-up strategies, with assessment specialists
playing a keystone role, not just by their own work but also by the way they
help others. People in the chief academic officer's office are also often mentioned. So a team approach makes
sense, especially if there is someone or some set of people on staff who can
provide continuity, support, and leadership, whether it’s overt or subtle.
Fifth, all of this is part of dramatic, relatively recent
growth in assessment activity. As
director of the Flashlight Program, I’ve visited over a hundred institutions
over the last nine years, talking with folks about the kinds of studies they
were doing, or hoped to do, about educational uses of technology. One thing
I've noticed is that the scope and number of those studies have increased. In the 1980s and early 1990s, it was rare to see a study of anything larger than an assignment or a course, and even those smaller studies were fairly rare. Like college courses, each study was usually the concern of one person.
That’s changing. One index: the number of institutions subscribing to
the Flashlight tools and services has been doubling every year, and now tops
200. Institutions such as the three
described above are at the cutting edge.
Technology isn't the only reason for the growth of attention to assessment and the role it can play in improving the academic program. But I think many observers have missed the part technology has played in that growth. The more money is invested in technology, and the more institutions try to rely on rapidly changing and comparatively fragile boxes and networks, the greater the uncertainties, risks, and costs. All that raises concerns for faculty careers and institutional accountability. So the hunger for data has grown, not for its own sake, but for the sake of our institutions, our staff, and our students.