"Finding a Great Evaluative Question: The Divining Rod
of Emotion"

 


Stephen C. Ehrmann, Ph.D.
Director, The Flashlight Program
Published in Assessment Update (San Francisco: Jossey-Bass) 
Volume 12, Number 6, November-December 2000, pages 5-6.

The premise of this little essay is simple. One of the most difficult and least discussed elements of evaluation is deciding what to study. Doing that requires both reason and emotion. This essay deals with the subjective side of the process.

I. Fear as Divining Rod

During a recent visit to an institution that is part of our Flashlight Network[1], I spent the better part of two days working with a team of people from several offices who were considering which educational use of technology at their institution to study.

A curious thing happened. After some conversation, one person grinned nervously. "You know what we really should study," she muttered cryptically to the others.  Everyone in the circle nodded silently, but no one volunteered to tell me what she was talking about.  

Eventually I realized what they were alluding to: a distance learning program that had been publicized extensively by the institution, a program that had helped to establish the institution's current reputation.  These folks, several of whom were involved in the operation of this program, were afraid that the program might not be as good as senior administrators were telling the world that it was.  Their anxiety was real, and realistic. Powerful interests in the institution had staked their reputations, and the institution's, on this program.  An evaluation that appeared to threaten that reputation might be squashed flat, and anyone doing such a study squashed along with it.

A divining rod is a forked stick, traditionally cut from witch hazel, that has long been used to find water underground. The trained dowser walks along, holding the divining rod in a certain way. When the rod seems to jerk downward, the dowser stops and digs for water.

Fear can be a divining rod for finding a good evaluation target. That's because evaluation is a tool for reducing important uncertainty. But most people aren't accustomed to doing evaluations to reduce uncertainty, so they hide the anxiety, sometimes even from themselves. Fortunately, some uncertainty is unjustified: evaluation can uncover facts indicating that things are better than people fear, or facts showing how to solve the underlying problem.

In short, their gut response, "Of course, we can't ask that question!" was a hint that this distance learning program's quality was dangerously uncertain – maybe good, maybe not so good – an important uncertainty that would actually make the program an ideal target for study.

So after talking about other things for a while, we returned to the unthinkable and began to think about it. "Why do you think this program might not be as good as advertised?" I asked. We brainstormed for a while. Eventually we concluded that, for at least some instructors, the heart of good teaching lies in their ability to notice, diagnose, and respond to learning difficulties experienced by individual students. When using the current technologies and routines of this distance learning system, instructors could not see students. Furthermore, their interaction with students was constrained in other ways, too. Blinded, instructors might not be teaching as well as they could on campus.

Next I asked, "How might we design a study that could improve, rather than just threaten, the performance and reputation of the program?" 

A first step would be to check how many faculty members really did believe that they had problems diagnosing student learning difficulties when using this combination of technologies. 

That study could have three possible outcomes, all useful (a sketch of how survey tallies might map onto them follows the list):

a) Almost all the instructors might say they were satisfied with their ability to notice, diagnose, and respond to problems students were having learning the material and skills in their courses. (That seemed unlikely to the participants in my meeting, but it was important to test their perception first.)

b) Some instructors might indicate that they had real problems, while others would say, and demonstrate, that they had found ways to use available technologies to learn about their students' learning difficulties. If so, further investigation might indicate that all faculty members could learn to do what a few faculty members could do now. In short, the solution might lie in analyzing ways of using the system and then designing faculty development programs.

c) Almost no faculty members might report having found a way to use the current technologies to notice, diagnose, and respond to learning difficulties. In that case, the investigation should check whether investments in additional technology might help instructors.
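To make the branching concrete, responses to such a survey could be tallied and mapped onto the three outcomes in advance. The following minimal sketch (in Python) is purely illustrative: the response categories, the 90-percent threshold, and the function name are assumptions of mine, not part of the team's actual design.

    # Illustrative only: classify faculty survey responses into the three
    # outcomes above. Categories and thresholds are assumed, not from the study.
    from collections import Counter

    def classify_outcome(responses):
        """Each response is one of:
        'satisfied'        - can already notice/diagnose/respond adequately
        'struggling'       - has real problems diagnosing difficulties
        'found_workaround' - has found ways to make the technologies work
        """
        tally = Counter(responses)
        if tally["satisfied"] / len(responses) > 0.9:
            return "a: instructors largely satisfied; the fear was unjustified"
        if tally["found_workaround"] > 0:
            return "b: some have solutions; build faculty development on them"
        return "c: no one has a solution; weigh new technology investments"

    # Example with made-up data:
    sample = ["struggling"] * 12 + ["found_workaround"] * 3 + ["satisfied"] * 5
    print(classify_outcome(sample))  # -> outcome b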

So far as I could tell, the fear in the room had melted away by this point in the conversation. Participants had realized that this study could be seen as helping the program rather than attacking it. And one key administrator was in charge of the technology. This evaluation might give that person ammunition for requesting a bigger budget. Rather than being an opponent of the study, this administrator might help champion the inquiry.


II. Excitement as Divining Rod

In a Flashlight workshop at the National Conference on Higher Education a few years ago, the director of a university library admitted, "We spend huge and increasing amounts of money on print and electronic material.  We need better data to decide whether these expenditures are really paying off in better education for students." 

Everyone else in the group functioned as consultants for him as we attempted to convert his need into an evaluation design.  After about a half hour, we had narrowed the discussion to this process:

1. Outcome: graduates who knew how to find and critically evaluate information (in print or electronic form)

2. Activity that fosters the outcome: students working on projects using primary source materials

3. Technology that supports the activity: print and electronic materials

[In the Flashlight Program, we call these three elements of a study a "triad".]  Was each cohort of new graduates more competent than the last in the outcome?  Were students indeed doing more of the activity each year? Were both technologies being used extensively in that activity? And how well was that educational process working? If it wasn't doing well, why not?
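One way to see the triad's shape at a glance is to write it down as a simple data structure. The sketch below (again in Python) is illustrative only; the class and field names are my own assumptions, not anything the Flashlight Program publishes.

    # Illustrative only: the library "triad" recorded as a data structure.
    from dataclasses import dataclass, field

    @dataclass
    class Triad:
        outcome: str          # the educational result the institution wants
        activity: str         # the practice believed to foster that outcome
        technology: str       # the tools that support the activity
        questions: list = field(default_factory=list)  # data worth collecting

    library_triad = Triad(
        outcome="graduates who can find and critically evaluate information",
        activity="student projects using primary source materials",
        technology="print and electronic materials",
        questions=[
            "Is each cohort of graduates more competent than the last?",
            "Are students doing more of the activity each year?",
            "Are both technologies being used extensively in that activity?",
        ],
    )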

We then began naming specific types of data that could shed light on whether this process – this triad – was being used more extensively each year at the institution and, if not, why not. (If the university could illuminate barriers to progress, perhaps those barriers could be lowered.)

Some of our questions had to do with the "technology" per se.  If people couldn't use the technology, the triad wouldn't work.  For example, if one were to find that graduates had a poor understanding of the library catalogue system, it would help explain why they hadn't learned to be critically literate in using print.  Ditto if they didn't have computer access or if they didn't know how to use a search engine. We quickly identified a dozen types of data that could be used to examine barriers preventing use of the technologies (library, Web).

Then we turned to the issue of whether, how, and how much undergraduates were using the technology to carry out the activity: papers and projects requiring primary sources. "How many sources did the students use?" "How often do instructors assign such papers and projects?" "If they don't assign such papers and projects, why not?" "Did they check references cited by the first reference they chose?" The questions spilled out. It was interesting, but I admit it was pretty routine.

And then came the question that shook us. "When using a Web source, how often do students click on the e-mail button to ask for more information to help them evaluate what they've just read?" People gasped, murmured, leaned forward in their chairs. For the first time we realized that the Web was not just a poor imitation of a paper library, or even just a larger equivalent of one. Paper is a one-way communication medium, but the Web can sometimes be a two-way medium! And that changes the ballgame when it comes to becoming critically literate. When using the Web, learning when and how to ask questions of authors and editors can be an important skill. None of us had ever thought of that. We realized that, like us, most faculty members and students hadn't even conceived of this competence. One "aha!" followed another. Suppose that, in this university where most people had never thought of this competence, an evaluation started asking about it once a term. Wouldn't the study itself help to increase recognition of this possibility? Wouldn't the study itself, as well as the data, help enhance this kind of learning? I'll always remember the adrenalin that shot through us that day.

III. Emotions as Divining Rod

Deciding what to study is usually difficult and it can be dull, especially at first.  Many factors conspire to keep evaluation discussions that way.  Dull is safe: it threatens no one.  "Let's get going. Let's not waste more time figuring out what to do.  Let's just do it!" 

But lack of emotional response can be a clue that you have not yet identified a good question. I was once in a small meeting when a doctoral student announced that she wanted to study "flaming" and other disasters in online courses in order to see whether such problems could be anticipated and converted into teachable moments for students. The faculty members in the room all leaned forward. I think they all had just one question in mind: "How quickly can you complete this study?!" An important question is one that yields data people realize (often unexpectedly) they must have, and as soon as possible.

It's hard to come up with a good evaluation question, difficult to figure out what data to collect.  It's inevitable that the first dozen ideas to emerge will be dull, and safe, and not worth the effort.  Keep on dowsing.  Fear and excitement offer clues that you are finally on the right track.


 

[1] The Flashlight Program for the Study and Improvement of Educational Uses of Technology is located at the non-profit Teaching, Learning and Technology Group <http://www.tltgroup.org>. Flashlight is sustained in part by 140+ institutions and corporations that subscribe to the Flashlight Network. These members receive tools and services while helping to develop and guide the Program.
