Traditional evaluations of distance learning programs compare outcomes of the DL program with outcomes of "comparable" courses on campus. Measured outcomes often include learning (test scores), enrollment and retention, costs, and satisfaction of students and faculty. Problem: by themselves, outcome comparisons provide little guidance for program improvement. [For more on this point, read "What Outcomes Assessment Misses."]
Here are four complementary strategies for using data to improve distance learning, each with examples of Flashlight Online surveys. Cost analysis can also be used to improve programs, as the fifth button below describes. Together these five strategies of inquiry can directly guide and accelerate program improvement ("formative evaluation"). These resources were developed for, and with support from, TLT/Flashlight subscriber institutions; this page is a free sample of subscriber benefits. Click the "Subscriptions" button to the left to learn more.
- Tracking activities; benchmarking
- How well does the technology support the activity?
- Diagnosing barriers to using the technology for the activity
- Classroom/course research
- Controlling costs and stress
- Related pages
I. Tracking the kinds of teaching-learning activities that can improve outcomes
Program outcomes are determined by what students do as they learn (activities such as reading, doing homework, conversing with the faculty member, doing research, and so on). If the technology makes it possible, or easier, for students or faculty to carry out such an activity, then the technology has helped determine the outcome.
That's why Flashlight inquiries usually begin by finding out what people are actually doing. Our survey items focus on those activities most likely to improve outcomes, such as those described by the "seven principles of good practice" (click here for a page describing the seven principles and ways of using technology to implement them).
Here is a Flashlight Online survey that could be used to compare distance learning and campus courses, while simultaneously providing guidance on how each such course could be improved.
Findings and their use: Findings can be used to
- identify courses that are doing comparatively well, in order to spotlight their instructional practices and resources (a small analysis sketch follows this list);
- maintain long-term faculty, administrator, and development focus on an issue such as 'active learning' for enough years to actually make substantial improvements in performance (for more on this use of tracking, see "Using Technology to Improve Outcomes: Learning From Past Mistakes").
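As a concrete illustration of the first use above, here is a minimal sketch in Python of how survey results might be tallied to identify the courses where students report a good-practice activity most often. The column names and CSV layout are hypothetical; an actual Flashlight Online export would need to be mapped to this shape.

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical export: one row per student response, with a course ID and a
# 1-5 rating of how often the student discussed course ideas with the instructor.
def rank_courses_by_activity(path, activity_column="instructor_interaction"):
    ratings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            value = (row.get(activity_column) or "").strip()
            if value:  # skip unanswered items
                ratings[row["course_id"]].append(int(value))
    # Average each course's ratings and sort from highest to lowest.
    averages = {course: mean(values) for course, values in ratings.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for course, average in rank_courses_by_activity("activity_survey.csv"):
        print(f"{course}: average reported frequency {average:.2f}")
```

Courses near the top of such a ranking are candidates for the kind of spotlighting described above; courses near the bottom may need the design or support attention discussed in the next two sections.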
II. How well does the technology support the activity?
Another way to use data to improve program outcomes is to assess the strengths and weaknesses of the technology for carrying out the kinds of activities that determine outcomes. Each of the items in this survey is designed to provide direct guidance for program leaders and faculty support staff in helping faculty and students use technology appropriately and/or in guiding future choices of technology.
Findings and their use: Subscales (answers to several questions on similar topics, such as student-student interaction) provide clues about students' judgments of the suitability of the technology, and of the assignments for its use. Similar questions should be asked of faculty. Together the findings would help faculty and administrators decide a) which technologies were being used well, b) where better training or course design is needed, and c) where new technology may be required.
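To make the idea of a subscale concrete, here is a minimal sketch, assuming hypothetical item names and 1-5 ratings, of how several related survey items can be averaged into a single subscale score for one respondent.

```python
from statistics import mean

# Hypothetical subscale definitions: each subscale lists the survey items
# whose 1-5 ratings are averaged into a single score.
SUBSCALES = {
    "student_student_interaction": ["ss_discussion", "ss_teamwork", "ss_peer_feedback"],
    "student_faculty_interaction": ["sf_questions", "sf_feedback"],
}

def subscale_scores(response):
    """Average the answered items in each subscale for one respondent."""
    scores = {}
    for name, items in SUBSCALES.items():
        answers = [response[item] for item in items if response.get(item) is not None]
        scores[name] = mean(answers) if answers else None  # None = no items answered
    return scores

# Example: one respondent's (hypothetical) answers.
print(subscale_scores({
    "ss_discussion": 4, "ss_teamwork": 3, "ss_peer_feedback": 5,
    "sf_questions": 2, "sf_feedback": 3,
}))
```

Averaging related items this way smooths out the noise in any single question, which is what makes subscales more trustworthy clues than individual items.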
III. Diagnostic Surveys: Removing Barriers to 100% Participation and Success
Learning results from what students do.
Technology is valuable when students use it to carry out
those activities. But what if not every student
participates in key course activities? For example, we've
identified almost 50 barriers that can hinder some students
from participating in online discussion and teamwork.
Barriers range from problems in uploading files to problems
in dealing with a loafing team-mate.
This third type of survey asks each
student to describe any and all barriers that affect that
student's participation in online discussion or teamwork.
Faculty using this survey would ask only about problems
where they and the institution can help. The
attached survey includes 19 of the 45 items in this item
bank.
Findings and their use: Students are encouraged to give their names. So if one student says that finding a computer at the times she has available for study is a problem, she might be told about a local school that is open in the evenings and has a computer lab. If half the students in the course believe that they learn best when they learn alone, the faculty member might report, after each assignment and test, which group scored higher: those who worked on the team projects (or engaged in online discussion) or those who didn't.
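The last suggestion, reporting after each assignment and test how participants and non-participants scored, can be tallied with a short script along these lines. The record layout here is hypothetical; the scores would come from the instructor's own gradebook.

```python
from statistics import mean

# Hypothetical gradebook records: one entry per student per assignment, flagging
# whether the student took part in the team project or online discussion.
records = [
    {"student": "A", "participated": True,  "score": 88},
    {"student": "B", "participated": True,  "score": 92},
    {"student": "C", "participated": False, "score": 74},
    {"student": "D", "participated": False, "score": 81},
]

def compare_groups(records):
    """Report the mean score for participants and for non-participants."""
    participants = [r["score"] for r in records if r["participated"]]
    others = [r["score"] for r in records if not r["participated"]]
    return {
        "participated": mean(participants) if participants else None,
        "did_not_participate": mean(others) if others else None,
    }

print(compare_groups(records))
```

Sharing that comparison after each assignment gives skeptical students evidence about whether working alone really serves them as well as they believe.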
IV. Classroom/course research
Flashlight Online can also be used to gather information to improve courses as they unfold. (Subscribers: click here for pages on classroom assessment techniques adapted from the work of Angelo and Cross.) For example, the faculty member may ask students to think about recent assignments, both to get feedback for future teaching and to stimulate students to think about their own strengths and weaknesses.
Here's one survey that could be adapted for this purpose.
And here are a variety of other ways to improve feedback for
students and faculty. (Standard TLT
Group username and password required; to see if your
institution is a current subscriber and to get the log-in
information,
click here.)
V. Controlling costs and stress
Traditional cost studies focus on the bottom line: of two or more types of program design, which is most (or least) expensive? Here, too, Flashlight focuses on the ways that time, money, and other resources are invested in specific activities, and why. Christine Geith's chapter in the Flashlight Cost Analysis Handbook, for example, found tremendous variation in how individual faculty spend time in different modes of off- and on-campus teaching.
Dziuban and Moskal showed that one reason faculty spend more time in web-based courses at the University of Central Florida is that they like the enhanced ability to interact with students.
Brown,
Henderson and Myers found that the investment in
up-front help in instructional design at Washington State
University was paying off in both improved quality and lower
operating costs for distance learning courses.
Bottom line: if you want to gather
the kinds of data that can directly lead to program
improvement, don't just focus on the bottom line!
Return to Flashlight Evaluation Handbook Table of Contents