A Taxonomy of Goals for Educational Uses (and Assessment) of Computers and the Web

Stephen C. Ehrmann, Ph.D.
July 10, 2001
Revised March 13, 2002 and May 9, 2006

Abstract: One of the barriers to successful use of technology is the assumption that the goal is to "improve" teaching and learning.  One of the barriers to successful evaluation is the assumption that its aim is to assess whether teaching and learning are now "better."

"Improve?" "Better?"  These concepts are too global and vague to be of much help in focusing either strategy or studies.  

This essay suggests some simple ideas about the kinds of educational goals that can be advanced with technology. Once the goal is focused, it also becomes easier to talk about how to assess progress in that particular direction. The article describes three sets of categories for the educational goals of technology use: a) six families of goals, b) transformational or not, and c) uniform impacts or unique uses.

I. Six Families of Goals

II. Specific Learning Gains Related to IT Use
  A. Old (traditional) Learning Goals
     1. Course of Study
     2. Learning Bottlenecks

  B. New (IT-related) Learning Outcomes

III. Goals Associated with Educational Transformation

IV. Uniform Impacts and Unique Uses

Additional References


 

 

I. Six Families of Goals

Many assessments of technology use begin, and end, with goals: that is, with the assessment of outcomes and the comparison of those outcomes with desired goals.  If your interest lies in assessment for improvement, looking only at goals and outcomes is often inadequate.  Imagine, for example, starting a new activity as a novice and getting feedback only (and then only occasionally) on the ultimate outcome of the activity.  Try teaching someone to ride a bike if all you know is how often they have fallen, or try teaching someone to hit a baseball a long way if all you know is how far some of their previous hits have traveled.  The same difficulties occur when one tries to help someone improve an educational program while providing only data on outcomes. Outcomes data alone are not enough, and sometimes are not even necessary. Data on the activities of the program, and on the resources (including technology) used to carry out those activities, are often even more useful.

Nonetheless it makes sense to begin this discussion of assessment with goals and outcomes.

For evaluators: remember that a focused study need not assess every possible outcome; instead, begin by taking the time to figure out which few outcomes are so important that it is worth spending the time and money to assess them.  Caveat: assessing outcomes such as these can be crucial for improving a program, but this kind of data is usually not sufficient and sometimes not even necessary; for an explanation of some of the limits of using only outcomes data to improve programs, see "What Outcomes Assessment Misses."

There are at least six families of goals for educational investments in technology, each of which must be assessed in a manner different from the other five.  Your study may well have to deal with more than one of these families of outcomes.  The first four families are usually outcomes of educational uses of technology, while the last two are not results of teaching and learning.

 

The six goal families, with notes on assessing each:

1. Improving learning outcomes ('old' goals).*  Seeking improvement on existing learning goals.  For example, are students learning calculus better when we provide Web-based quizzes online than when we use traditional quizzes?  See below for more discussion of the kinds of 'old' outcomes technology use is most likely to improve.

How to assess: Compare test scores (and we mean "test" in the broadest sense of that word) between two different program designs (e.g., before and after; an experimental approach and its best competitor).

2. New goals for learning that are often directly dependent on computer use: e.g., computer science, new forms of graphic arts, geographic information systems, analysis of large databases, and other skills and knowledge that derive directly from computer use.

How to assess: Assessment may focus on measures of the value of the new goal.  Is it worth pursuing?

3. Extending access to education.*

How to assess: Examine enrollment and retention, as a whole and for particular groups of interest.

4. Improving efficiency.*

- Micro (e.g., a person uses a computer spreadsheet instead of a paper spreadsheet because it's easier and/or more capable)

- Macro (e.g., if we adopt this course management system, how does it affect the costs associated with developing and teaching courses in our distance learning program?)

How to assess:

- Micro: study or survey the use of resources by individuals to do particular tasks; this may be complemented by studies of hidden costs (e.g., staff needed to help them learn the skill and to cope with problems).

- Macro: a similar study, but at the program or institutional level.

5. Attracting and retaining faculty, staff, and students by providing them with the technology they want.  For example, the department may invest in equipment that a faculty candidate expects as a condition of employment.  Dormitories may be wired in order to attract students who might otherwise attend a more 'wired' institution.

How to assess: Test these hypotheses by seeing who comes (or stays), who doesn't, and why.  One reasonably efficient test: interview people who leave or who turn you down.  Give them an open-ended, unprompted opportunity to explain their decision.  Then ask them about a number of potential factors: on a scale of 1 to 5, how important was each of these to your decision?  Make the technology in question one of those factors.  (A minimal sketch of tabulating such ratings follows this table.)

6. The fear of losing face or place.  This factor is similar to #5 above but is more institutional in nature: the hope that the institution will increase its status and clout if technology investments are made, or the fear that it will lose status or clout if the investments are not made.

How to assess: Measure the institution's standing and seek the reasons why people rate it as they do.  Is there a relationship between changes in status and clout and the corresponding investments in technology?  A national study on this topic might be even better than a local evaluation.

* The activities that produce this goal can be studied with tool kits available from the Flashlight Program.
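To make the exit-interview suggestion for goal 5 a bit more concrete, here is a minimal sketch, in Python, of how such 1-to-5 importance ratings might be tabulated.  The factor names and ratings are invented for illustration; they are not drawn from any Flashlight instrument.

    # Minimal sketch: tabulating 1-to-5 importance ratings from exit interviews.
    # Factor names and ratings below are hypothetical, for illustration only.
    from statistics import mean

    # Each respondent rates how important each factor was to their decision (1-5).
    responses = [
        {"salary": 5, "location": 3, "campus technology": 4},
        {"salary": 4, "location": 5, "campus technology": 2},
        {"salary": 3, "location": 2, "campus technology": 5},
    ]

    for factor in responses[0]:
        ratings = [r[factor] for r in responses]
        print(f"{factor}: mean importance {mean(ratings):.1f} (n={len(ratings)})")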

Hint:  Don't confuse outcomes with the means of assessing them.  Some people make the mistake of saying that "higher test scores" is the outcome, when the outcome should instead be described as, for example, "students have mastered the use of 'plot' as a tool for analyzing a novel" or "students learned what I wanted them to learn in my course"; the test scores are only evidence that the outcome was achieved.

 

II. What Specific Learning Gains Might Be Most Important (And Most Likely) When Technology is Put to Good Use?

If you're trying to gather evidence about benefits of educational uses of technology, it's useful to look for big gains in outcomes.  So which gains are likely to be the biggest and most important?

What follows is a bit more detail about two of the outcome areas summarized above: old learning goals and new learning goals.

A. "Old Goals" of Special Relevance to IT Use

"Old goals" (the first row in the table above) are traditional educational outcomes.  The 'seven principles' of good practice in undergraduate education suggest that almost any goal can potentially be advanced by technology if technology is used to support better teaching-learning activities such as active learning, student-faculty interaction, and the like. For more on this argument, see a later document in this module, "Seven Principles."

One way to divide goals of interest:

- Learning outcomes for graduates

- Learning bottlenecks.

The former focuses on major skills and bodies of knowledge: the kinds of things that one might observe in use by graduates, and that are valued by them, their supervisors, and their peers.  The latter is likely to be far more specific (e.g., understanding a graph that has time as one axis; understanding the concept of a 'model').

1. Learning Outcomes for a Course of Study

If your study is of a college or university, you will probably be more interested in 'old goals' that are not discipline-specific.  It doesn't seem likely to us that all such goals for a college education would be advanced equally, or even noticeably, by advances in the use of technology.  But a few such goals seem especially linked to technology uses. If one or more of these goals are of special importance to your institution, and to technology-using faculty, it might be worthwhile to study them.

What follows is intended as a suggestive, not a comprehensive, list of such outcomes:

- Skills of inquiry: Technology offers many opportunities to develop this skill, including a wider range of tools for research and inquiry (often tools that make revision easier, and thereby open opportunities to learn by rethinking one's query), simulations, and a world of data.

- Skills of composing, designing, and other forms of creative work: 'Learning by doing', especially by working on open-ended problems and needs, can be a powerful way of understanding ideas (e.g., learning about poetry by writing poetry) and of improving ability to do the creative work itself.  By providing easier-to-use design tools, simulated design environments, tutorials, and other supports, technology has greatly expanded the curricular options for learning to design and designing to learn.

- Skills of collaboration: Aside from the new goal of online collaboration (see below), face-to-face collaboration skills can also be developed with technology, which can be used to offer students more opportunities for collaborative work.

- Ability to apply academic learning to real-world situations: Technology can help here by lowering the walls of academe to enable students to work on real-world problems, use real-world data, confer with experts from outside, and work with clients from outside.

- Learning to understand other cultures: Similar uses of technology can help students encounter people from other cultures, in those cultures, by communicating with them, getting information about the region or country where they live, and so on.  This can be especially important for students in colleges that have relatively homogeneous student bodies.

- Learning how to learn: As technology multiplies both the content and the learning modes available to each learner, the skill of knowing how to (help) teach oneself becomes even more important.

In this brief article, I'm not even going to attempt to describe how to assess each of the foregoing competences. It's not easy and, as with other skills, there are "quick and dirty" approaches that produce useful answers and more searching, highly tested approaches that are more costly to create but that produce better, more reliable measurements.

 

2. Opening Up Learning Bottlenecks

Sometimes the course of study, or some other pervasive technology-enabled activity, is not really the issue for decision-makers.  Instead they come to realize that a particular bottleneck is impeding progress, and that's where the study's attention should focus.

Learning bottlenecks, for example, are problematic topics or ideas that are:

- difficult to learn well, and

- if not learned well, a barrier to later learning.

In mathematics, the idea of an "unknown" can be a learning bottleneck, for example.  The idea may be taught year after year with limited success, and this lack of mastery of the difficult idea may make it harder to teach a variety of other topics that depend on it.  So it may be worthwhile to focus attention on whether an experimental technology-enabled activity can open this learning bottleneck. As in the previous section, I won't attempt a general-purpose discussion of "how to assess learning" here, but one thing is important to remember about learning bottlenecks: if the bottleneck has really been opened, then one should be able to measure a large "downstream" improvement in learning in later courses, or success in parallel courses that require the same skills or insights as prerequisites.  The improvement may not be manifested in overall grades for those courses. But if the learning bottleneck in course #1 is (for example) learning to understand a mathematical function over time, and if that bottleneck has been opened -- if students have gained a deep understanding of such functions -- then students should do better than expected on course #2 assignments that require that understanding.

For one such example of an observation of improved performance that correlates with opening the bottleneck of study skills, take a look at Gary Brown's Flashlight-based study at Washington State University.
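To make this "downstream" comparison a bit more concrete, here is a minimal sketch, in Python, of comparing later-course scores for students who did and did not experience the bottleneck-opening activity.  The scores, the variable names, and the choice of a simple two-sample (Welch) t-test are my own illustrative assumptions, not part of any Flashlight tool kit; a real study would also need to address selection effects and differences in prior ability.

    # Minimal sketch: comparing course #2 scores for students who did and did
    # not experience the bottleneck-opening activity in course #1.
    # Scores and group assignments are hypothetical; scipy is assumed installed.
    from statistics import mean
    from scipy import stats

    with_activity = [78, 85, 90, 72, 88, 81]     # experienced the new activity
    without_activity = [65, 70, 74, 60, 72, 68]  # comparison group

    t_stat, p_value = stats.ttest_ind(with_activity, without_activity, equal_var=False)
    print(f"mean with activity:    {mean(with_activity):.1f}")
    print(f"mean without activity: {mean(without_activity):.1f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")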

 

B. New (IT-Related) Goals for Learning Outcomes

Any discipline will have certain learning goals that are tied to professional (and thus student) use of technology (e.g., thinking about geographical questions by using data from a geographic information system).  Computer science and a number of other newer fields wouldn't even exist were it not for computers.

Some such goals transcend individual disciplines.  A college or university might wish to include such goals in a study of how technology is improving educational outcomes for all of its graduates.  Several of these new goals relate to the 'old goals' listed in the previous section, e.g.,

- Skills of inquiry in an online and multimedia environment: Finding information online using generic and topic-specific search engines and online bibliographies. Skills of validating information (e.g., knowing when and how to use a site's option to e-mail its webmaster as part of evaluating what is posted there). Skills of interpreting different media (e.g., interpreting a photograph requires skills, just as interpreting a document does).

- Skills of collaboration online, including collaboration with people from other cultures: For example, what skills do individuals and groups need to make an emotion-laden, difficult decision using asynchronous media? How does one build a working relationship online with people whose cultural backgrounds, genders, and personalities one doesn't initially know and whose faces one cannot see?

- Multimedia literacy: This topic includes not only the interpretation of multimedia (the skills of inquiry described above) but, perhaps even more important, the skills of creating multimedia.  Multimedia literacy seems to have at least two kinds of importance in a college education: a) the importance of authoring skills in life after college, and b) the educational options created by multimedia authoring.  For example, when students create a web site, that site can be visible to the public and receive responses from the public, or from selected novices or experts off campus. The value of such feedback for motivation and education can be surprising. For one example, read what happened in Linda Crider's course.  The University of Southern California has been working on infusing such multimedia literacy across the curriculum, and Flashlight has helped with evaluation, including the development of rubrics for assessing student work.  Contact your consultant for more details about this work.

- Learning how to learn online: What special skills are useful in finding and evaluating online learning options? In managing one's time in an online learning environment?

III. Goals Associated with Educational Transformation

Advancing some or all of the six goals in the table can leave the soul of an institution unchanged.  Or it can be part of a fundamental transformation of higher learning.

If you suspect that your program, institution, or system is in the early stages of such a transformation, you would be in a better position to document, fund, and guide that transformation if you were collecting data on these particular dimensions of change and the problems that will inevitably accompany it.  My views on this subject are laid out in two articles: "Access and/or Quality: Redefining Choices in the Third Revolution" and "Grand Challenges Raised by the Third Revolution: Will This Revolution Be a Good One?"  If you find those views persuasive, you may decide that several of the outcomes described above are important to monitor in order to track institutional transformation.  Several kinds of activities (e.g., where students get the resources for study; with whom students study; institutional alliances) would also be important to monitor over a period of years.  If this topic is of interest, send me e-mail.

IV.  Uniform Impacts and Unique Uses

(The following section is adapted from The Flashlight Evaluation Handbook, v. 1.0, 1997)

Up to this point, you might have assumed that we're talking about selecting one set of learning outcomes and using it to assess the progress of all students. But that need not be the case.

Two Ways of Looking at Outcomes

There are two ways to look at almost any educational program (Balestri, Ehrmann, and Associates, 1988).  Each perspective has different, almost opposite, implications for how one should evaluate.  Typically, an evaluation needs to be designed to include both.

1.     Is the educator (and evaluator) looking for pre-set learning outcomes that are qualitatively the same for all students, with the educator in a commanding role? ("uniform impact" – see figure 1), and/or

2.     Is the educator (and evaluator) looking for outcomes that are important, but qualitatively and surprisingly different from one student to the next, in part because the student's own intentions and skills play a role in what the student achieves and where the student has problems? ("unique uses" – see figure 2)

These two perspectives, "uniform impacts" and "unique uses," have both always shaped teaching and thus should shape evaluation.  Unfortunately most people, especially non-evaluators, assume that evaluation by definition pays attention only to uniform impacts.  These folks assume that evaluation always must begin by stating behavioral objectives, even before the program begins, that it always tests learners by their progress on these objectives, and so on down the line. That's only half the story: the uniform impact half.  We use the term "uniform" because these approaches assume that the goal is for all learners to learn the same things (and usually in the same ways). It's also typically assumed that the program acts upon the student: thus "impact."

Uniform impact evaluations are supposed to use validated "tests" (in the broadest sense of that term) of the desired knowledge or ability.  There's an important reason for that: such tests are especially good at detecting even relatively small gains on the desired objective (but are no good at all at detecting gains on any other objective).

In the uniform impact perspective, the educator's intentions are dominant, e.g., an English course is designed, taught, and evaluated (in part) according to whether students master certain principles of grammar that are the course objectives.  An "excellent" program is one that achieves large learning gains in this chosen direction, compared to competitive programs and despite difficulties (e.g., students who are initially not motivated).  An "excellent" program design is one that can be implemented in the same way in different courses or colleges, always with the same results.
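To illustrate what such a comparison can look like in practice, here is a minimal sketch, in Python, of comparing test scores under two program designs and estimating an effect size (Cohen's d).  The numbers are invented; a real uniform impact study would also attend to sampling, instrument validity, and rival explanations.

    # Minimal sketch: comparing test scores under two program designs and
    # estimating the effect size (Cohen's d).  All scores are hypothetical.
    from statistics import mean, stdev
    from math import sqrt

    design_a = [71, 75, 80, 68, 74, 79, 73]  # e.g., traditional quizzes
    design_b = [76, 82, 85, 74, 80, 88, 79]  # e.g., Web-based quizzes

    def cohens_d(a, b):
        """Difference in means divided by the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                         / (na + nb - 2))
        return (mean(b) - mean(a)) / pooled_sd

    print(f"mean under design A: {mean(design_a):.1f}")
    print(f"mean under design B: {mean(design_b):.1f}")
    print(f"Cohen's d: {cohens_d(design_a, design_b):.2f}")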

Table 1
Two Important Ways of Conceiving, and Evaluating, the Same Educational Program

Purpose of the program
- Uniform impact: Produce certain (observable) outcomes for all beneficiaries.
- Unique uses: Help each person learn something valuable, whatever it is.

Role of the student?
- Uniform impact: Object, who is impacted by the intervention.
- Unique uses: Subject, who makes use of the intervention (educational opportunity).

The best improvements in education?
- Uniform impact: Those improvements whose outcomes are replicable, so past excellence portends identical excellence in the future, even in different settings.
- Unique uses: Those improvements that can consistently produce excellent, creative outcomes, where different settings stimulate new kinds of excellence.

Variation among outcomes for different students
- Uniform impact: Quantitative.
- Unique uses: Qualitative and quantitative.

Most important outcomes?
- Uniform impact: The objectives that the educator used in planning the program.
- Unique uses: The outcomes that turn out to be most important for each subject.

How to assess learning outcomes
- Uniform impact: 1. Ask the educator to state the goal in advance.  2. Create assessment procedures that can measure progress toward that goal.  (This approach typically ignores or underplays achievements that don't fit these goals.)
- Unique uses: 1. Observe user and educator goals.  2. Gather data about subjects' achievements and problems.  3. Choose "connoisseurs" to assess what's happened to each learner, and then to evaluate the meaning of the collective experience.  (This approach typically ignores or underplays outcomes that don't seem 'important' even if many students achieve the same outcome.)

Implication for defining the "activity" in a triad
- Uniform impact: It is more likely that the activities can be behaviorally defined, if one assumes that, to achieve the same goal for all students, one can and should use the same process.
- Unique uses: The unique uses approach strongly suggests that different learners engage in somewhat different activities, on their way to achieving different outcomes.

Implication for defining the "technology" in a triad
- Uniform impact: Technology is relatively easy to define.
- Unique uses: A set of tools, resources, and experts ('technology') is the common element that ties together those unique users with their unique activities and unique outcomes; the users all selected from the same collection of 'technologies'.

"Did the intervention cause the outcome?" How can you tell?
- Uniform impact: Statistics, using control groups.
- Unique uses: Historical analysis of a chain of events for each subject; a control group is less important.

Quantitative and qualitative data: which method uses which data?
- Uniform impact: Use both.
- Unique uses: Use both.

In contrast, a unique uses evaluation of the same English course would attend to the most important outcomes of the course, whether or not they matched the teacher's original intention, and whether or not the important outcomes were the same from one student to the next. 

That's because a "unique uses perspective" assumes that each learner is unique, that each learner encounters the educational program, interprets it, makes choices about using it, uses it, benefits from it, and has problems with it – all in personally distinctive and sometimes unpredictable ways.

So, in the English class we referred to above, the fact that one student became fired with the desire to become a poet would still be spotlighted as an important outcome of the course, even if "all students should become poets" was not the teacher's primary intent and even if no other student was affected in that way. 

In general, unique uses evaluations begin by assessing the most important learning gains and problems for individual learners.  Each judgment is made in context, looking not only at outcomes but at how they came to happen. Afterward, one or more experts in this type of program (or "connoisseurs," to use Eliot Eisner’s term) make an interpretative judgment of those individual cases in order to evaluate the program's generative power -- its ability to produce excellence in a variety of forms.

Unique uses assessments can use 'tests' but they tend to be somewhat different from uniform impact tests.  A uniform impact assessment is designed to detect one specific type of learning (e.g., a grammar test), while a unique uses assessment is designed to reveal any of a number of types of learning (e.g., an essay on 'how I spent my summer vacation').

In the unique uses perspective, an "excellent" program is one that consistently produces varied and even surprising outcomes, year after year. An "excellent" program idea is one that generates varied and even surprising program designs and individual successes in different settings.  Shakespeare's plays are just one example of instructional materials that are excellent (by unique uses standards) precisely because of the variety of creative responses they evoke in both teachers and learners.  Words like "variety," "creativity" and "generativity" turn up in descriptions of excellence where unique uses are valued.

A unique uses evaluation is capable of uncovering large, diverse, and sometimes surprising outcomes that would not be revealed by standard accountability measures.  Unique uses information can be especially important for formative evaluations where the participants are trying to make decisions that will directly impact the program's design. Unique uses information can help to explain confusing or undesirable results gathered by “uniform impact” measures.  "Unique uses" can also be particularly relevant when the role of information technology is increasing, because I.T. is often used to "empower" students, i.e., to increase their options and give them more opportunities to study what and how they like.  For an example of an evaluation that used both uniform impact and unique uses approaches (though not with those labels), see Bruce, Peyton and Batson (1993).

Concluding Thoughts

So there are at least two dimensions to consider in clarifying the goals of IT use: what are you hoping to achieve and for whom? One form of assessment is appropriate if the direction of progress is more or less the same for each learner (or staff member, or department).  Another form of assessment is appropriate if the outcomes are likely to vary substantially, qualitatively and somewhat unpredictably from one learner (or staff member, or department) to the next.  For example, if your internal processes of discussion, debate and decision were to focus on "collaboration and community" as a long term goal for IT use, you might focus on some uniform impact outcomes that would be measured more or less the same way for all learners (skill at working in teams; allegiance to the institution as measured by alumni contributions from people of equivalent income).  Other aspects of the investigation might take on a "unique uses flavor" ("community" may also be manifested in different ways for various individuals and groups).  

The point here is not that there is one best outcome or one best way to assess an outcome. Quite the contrary: computers, video and telecommunications are so empowering and the needs of higher education are so varied that it's important to do the difficult intellectual and political work of clarifying goals and developing appropriate ways to assess them.

Additional References

Balestri, Diane P., Ehrmann, Stephen C., and Associates (1988). Ivory Towers, Silicon Basements: Learner-Centered Computing in Postsecondary Education. McKinney, TX: Academic Computing.

Balestri, Diane P., Ehrmann, Stephen C., and Ferguson, David (1992). Learning to Design, Designing to Learn: Using Technology to Transform the Curriculum. New York: Taylor & Francis.

Bruce, Bertram, Peyton, Joy, and Batson, Trent (Eds.) (1993). Network-Based Classrooms: Promises and Realities. New York: Cambridge University Press.

Ehrmann, Stephen C. (2000). "Finding a Great Evaluative Question: The Divining Rod of Emotion." Assessment Update (San Francisco: Jossey-Bass), Vol. 12, No. 6, November-December, pp. 5-6.

Eisner, Eliot W. (1979). The Educational Imagination. New York: Macmillan.

  
