Think of a triad as a kind of grammar for analyzing the purposeful use of computers and other technologies and resources. It suggests the centrality of activities as an organizing point
for studies. For example, many evaluations of technology
focus on the technology. Sounds sensible. But what we're
suggesting is that, usually, it makes more sense to focus on
one or more things that people may choose to do with the
technology. For example, to study a "clicker" system, it may
make sense to focus on the activity of using feedback to
cause students to question and reshape their prior
conceptions and misconceptions (aided by clickers). To study
a use of iPods, it may make sense to focus on how students
do homework for foreign language classes (aided by iPods).
By
shifting the focus to the activity, the investigator is
freed to look at any and all factors that influence that
activity, not just the technology. A student response
system may in theory be great for helping students confront
their misconceptions, but a study of the whole activity may
reveal that necessary student conversations aren't
happening. Further study might reveal why those
conversations aren't happening. If the investigation focused only on the hardware and software, it might never have revealed why the hoped-for outcomes hadn't happened (or, if the news is great, how the outcomes got to be that good!).
The triad
structure suggests at least five sets of related questions
for an investigation of how a technology might be
contributing to a desired outcome (or failing to do so):
- Questions about the technology per se (e-mail, in this example). For example, one ought to study the availability and reliability of the technology. If the technology can't be used (for any purpose), then this triad can't work.
- Questions about the use of the technology (e-mail) for the activity (student collaboration on homework or projects). For example, do users find the technology to be a supple and effective tool for this activity, or is it harder to use, riskier, or more costly than reasonable alternatives for this activity?
- Questions about the activity per se (collaboration on homework and projects, in this example). For example, how much do teachers value teaching in this way? If they don't believe this activity is valuable, they are not likely to use technology to carry out the activity for the first time, or to do the activity better than before. Questions about the activity are also useful in comparing courses that use different (old and new) technologies for the same activity. For example, in one course that relies strictly on students meeting one another face to face outside the classroom, how much collaboration on homework is there compared with another where students also use e-mail? This question would be about how much students collaborate on their homework (without mentioning the medium).
- Questions about whether and how the activity is contributing to the outcome (in this example, improved retention). For example, do people who achieve the outcome report that they actually participated in the activity? Do they claim that the activity was valuable in achieving the outcome?
- Questions about the outcome per se (retention, in this example). For example, is retention looking good? Is there evidence that the retention, once attained, is valuable?