Education is Not (Just) a Machine: Uniform Impacts, Unique Uses
 

Educators know that education is not (just) a machine.  Their interpretation of that statement, however, will differ to some degree by discipline and by personal preference. What they usually mean, regardless of discipline or preference, is that things never quite happen the same way twice. No two students react in quite the same way to the 'same' teaching and, indeed, teaching is never quite the same each time. Education researchers, looking at the same kinds of data, note that no theory of education can precisely predict what will happen to an individual who is taught in a particular way. US Congressman "Tip" O'Neill remarked, "All politics is local."  The same is true for education: context is even more important than method or theory in influencing outcomes.

Beyond that kind of variation, the statement, "Education is not (just) a machine," has an additional meaning.  Any educator, any designer of programs, usually has two kinds of goals in mind: one kind that applies to all students in the same way and another kind that applies to each student individually.

Uniform Impact: Goals that Apply to All Students

What are typical examples of the kind of outcome goals that ought to be measured? “All students should learn to think critically (though perhaps to different degrees of skill).” “All students should get jobs (perhaps at different salaries).”  In other words, such goals assume that everyone is supposed to benefit in the same ways.  If that were true, it would certainly make things simpler to measure: the analyst could devise one test of the benefit (e.g., a test of critical thinking skill) and apply it to all the beneficiaries.  But what if some students are gaining in critical thinking while others are mainly improving their creativity and still others are gaining in interpersonal skills?

As those examples indicate, there are two ways to look at almost any educational program.  One perspective focuses on program benefits that are the same for everyone (“uniform impacts”), while the other focuses on benefits that are qualitatively different and somewhat unpredictable for each learner (“unique uses”) (Balestri, Ehrmann, et al., 1986; Ehrmann and Zúñiga, 1997, 2002).  This section of the chapter explains these complementary perspectives on education. The following section will use these ideas to suggest ways to assess specific types of benefits.

A. Uniform Impacts

To some degree, all students in an educational program are supposed to learn the same things.  As shown in Figure 1, such learning by two people can be represented by two parallel arrows. The length of each person’s arrow represents the amount of growth during (and sometimes after) the program.   Students usually enter a program with differing levels of knowledge, grow to differing degrees, and leave with differing levels of achievement. The uniform impact perspective assumes that the desired direction of growth is the same for all students.

In an English course, for example, uniform impact assessment might measure student understanding of subject-verb agreement, or skill in writing a five-paragraph essay, or even love of the novels of Jane Austen.  The analyst picks one or more such dimensions of learning and then assesses all learners using the same test(s).  I’ve labeled this perspective “uniform impact” because it assumes that the purpose of the program is to benefit all learners in the same, predesigned way.

B. Unique Uses

However, that same English course (or other educational activity) can also be assessed by asking how each learner benefited the most, no matter what that benefit might have been.  I’ve termed this perspective “unique uses” because it assumes that each student is a user of the program and that, as unique human beings, learners each make somewhat different and somewhat unpredictable uses of the opportunities that the program provides.

In that English course, for example, one student may fall in love with poetry, another may gain clarity in persuasive writing, a third may fall in love with literature, and a fourth may not benefit much at all.  (See Figure 2.)

Faculty members cope with this kind of diversity all the time. An instructor may give three students each an “A” but award the “A” for a different reason in each case. The only common denominator is some form of excellence or major growth that relates to the general aims of the course.  There are multiple possibilities for growth and it’s likely that different students will grow in different directions. 

Notice that uniform impact methods tend to miss a lot when benefits are better described in unique uses terms. In that English class, for example, imagine that the instructor had decided to grade all students only on poetry skills: one student would pass and the other three would fail. Or imagine that the instructor tested all students on poetry, persuasive writing, and love of literature, and passed only students who did well on all three tests: everyone would fail the course.  Meanwhile, an instructor using a unique uses approach (seeking excellence in at least one dimension of learning) would pass three of the four students.

Uniform impact and unique uses are both valid perspectives, and usually both are valid for the same program. The challenge for the analyst is to make sure that the assessment approaches are in tune with the program’s goals and performance. If, for example, the program’s goals are strongly “unique uses,” then it is inappropriate to employ only “uniform impact” measures, and vice versa.

How can unique uses benefits be assessed?  Most unique uses assessments follow these steps:

  1. Decide which students to assess. All of them? A random sample? A stratified random sample?
  2. Assess the students one at a time. Ask the student what the most important benefit(s) of the program have been for him or her. (At this point, the respondent’s statement should be treated as a hypothesis, not a proven fact.) This hypothesis about benefits can also be created or fine-tuned by asking the instructor(s), peers, or job supervisors about the program’s benefits for that student.
  3. Gather data bearing on this hypothesis. If the student said that the program helped her get a job, what data might help you decide whether to believe the assertion?  (For example, did the student really get a job? If the student said that certain skills learned in the program were important in getting the job, did the interviewer notice those skills?)  If appropriate, assess the benefit for the student (for example, if the benefit is a skill, assess how skilled the student is).
  4. If appropriate, quantify the benefit for that student. Panels of expert judges are sometimes useful for this purpose. Their expertise may come from their experience with programs of this type.  (This is exactly what teachers do when they grade essays.)
  5. Identify patterns of benefits.  Was each student completely unique? Or, more likely, did certain types of students seem to benefit in similar ways? These findings about patterns of benefit may suggest ways in which the program can be improved. For example, suppose program faculty consider “learning how to learn” to be only a minor goal of the program, but 50% of their graduates report that “learning how to learn” was the single most important benefit of taking the program. In that case, the faculty might want to put more resources into “learning how to learn” in the future. (A simple tallying sketch follows this list.)
  6. Synthesize data from the sample of students in order to evaluate the program’s success.
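For analysts who keep their coded interview results electronically, the following is a minimal sketch (in Python, using hypothetical student identifiers and benefit labels) of how parts of steps 1 and 5 might be supported: drawing a random sample of students to interview and then tallying what share of the assessed students reported each kind of benefit. The coding of each student's most important benefit (steps 2 through 4) remains a matter of human judgment; the sketch only automates the bookkeeping.

```python
from collections import Counter
import random

# Hypothetical interview records: one (student_id, most_important_benefit)
# pair per assessed student, coded by the analyst after step 4.
interview_records = [
    ("s01", "learning how to learn"),
    ("s02", "critical thinking"),
    ("s03", "learning how to learn"),
    ("s04", "persuasive writing"),
    ("s05", "learning how to learn"),
    ("s06", "got a job"),
]

def sample_students(all_ids, n, seed=0):
    """Step 1 (illustrative): draw a simple random sample of students to assess."""
    rng = random.Random(seed)
    return rng.sample(list(all_ids), min(n, len(all_ids)))

def benefit_shares(records):
    """Step 5 (illustrative): share of assessed students reporting each benefit."""
    counts = Counter(benefit for _, benefit in records)
    total = len(records)
    return {benefit: count / total for benefit, count in counts.items()}

if __name__ == "__main__":
    shares = benefit_shares(interview_records)
    for benefit, share in sorted(shares.items(), key=lambda item: -item[1]):
        print(f"{benefit}: {share:.0%} of assessed students")
    # If half the graduates name a benefit the faculty treat as minor,
    # that pattern is a candidate for reallocating program resources (step 5).
```

Run on the hypothetical records above, the tally would show "learning how to learn" reported by 50% of the assessed students, the kind of pattern that step 5 asks the analyst to notice and bring back to the program faculty.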

 
