Have you ever consciously paused in what you've been doing
and deliberately sought to gather some evidence about what
you've been doing, in order to help you decide what to do
next? You might have done this in many ways. Before diving
into a muddy pool, you might have probed to see how deep it
was. Or perhaps you asked for a show of hands, or called
people on the phone, or did a survey, or administered a
diagnostic test (i.e., a test designed to help decide how to
help the person being tested).
Perhaps that sounds obvious to you. But ordinarily
people don't try, inquire, reflect, and then perhaps try
something else. They try, try harder, and try something
else. In fact, most of the time we all just make an
assumption and go ahead without taking a fresh look first.
We do that so often that, when asked, "Which of your
assumptions is in fact open to question?" the scary answer
is, "All of them, but if I checked all of them, I'd be
paralyzed!"
This handbook is about the exceptions - when it's
worthwhile to 'inquire' in order to improve practice and
results, and how to do it. Our focus is on using inquiry to
improve educational uses of technology.
Some educators are very good at using inquiry to improve
practice. Interestingly enough, many other educators only
try this rarely, and a few don't even understand the
concept and can't recognize inquiry when they see it. (For a
crude self-test of your understanding of the idea of using
inquiry to improve practice,
click here.)
'Inquiry-to-improve' comes under many labels, each of
which has a somewhat different (and sometimes conflicting)
definition: assessment, formative evaluation, market
research, needs analysis, classroom research, scholarship of
teaching and learning, cost modeling, or "look before you
leap." [For definitions of these terms, see our web page on
"confusors".]
The next section of
this Handbook describes how programs and institutions have
wasted tremendous amounts of time and money because of
repeated patterns of failure in the way they've invested in
educational uses of technology. One of several features of
that cycle of error: evaluation that was either missing or
aimed in the wrong direction. That section then
describes a strategy for improving outcomes that is based,
in part, on a different approach to evaluation.