
Computer-Intensive Academic Programs:
An Implementation-Evaluation Process
That Does Not Focus on Technology

A TLT/Flashlight Resource

The Problem

Many educational programs and institutions have recently installed extensive networks, required all students to own computers, or taken other steps toward universal computer use.  Let's call this the objective of "ubiquitous" computing: everyone who wants to use computing and networking is able to, and can do so from many places.  In institutions[1] striving toward ubiquitous computing, it's not unusual for people to be concerned about a lack of evaluative data:

  • "How can we tell whether all this is worthwhile?"

  • "I have the feeling that a lot of these computers are idle or are not being put to good use. What went wrong?"

 

It's obvious that evaluation should proceed hand-in-hand with planning and implementation.  What might not be so obvious is that the necessary evaluation should not focus on the technology.

 

A Basic Tool For Thinking About Educational Uses of Technology: The "Triad"

There are at least three styles of thinking about educational uses of technology, each with different implications for how to buy and evaluate.

 

The first is blind faith in technology (or its opposite, blind distrust).  I'll call this monadic because the person has only one thing in mind: the technology itself.  Folks who think this way almost never do evaluations because they already know that technology is great, or awful.  This kind of thinking is still rather common; if someone argues for buying the latest e-mail client without talking about what it's for, they're probably thinking this way.

 

More common are dyadic thinkers: they think in terms of two things, the technology and a goal.  A dyadic thinker buying an e-mail system might argue that the institution (which, let's imagine, has many commuting students) needs it because students have been too isolated.  Because they leave right after class, the dropout rate is unacceptably high.  "If, on the other hand, we buy this e-mail system," the advocate argues, "these problems will be solved."

 

Dyadic thinkers often do want to see evaluations.  Imagine that this institution – Old Siwash – uses the Easy E-Mail System and that North-South U down the road uses the Speedy E-Mail System.  If the evaluation were to show that the outcomes were better down the road, the advocate might well argue that Old Siwash ought to switch to Speedy.  Achievement of the outcome is the (sole) test of the technology: that's the premise of the dyadic thinker.

 

But might there be other reasons why one institution would make better use of e-mail to improve collaborative learning and cut attrition? 

 

Suppose, for example, that North-South has a long history of helping students work in teams. The faculty encourages it. Students are recruited in part on this basis. Homework assignments routinely require collaboration; students working alone would find the assignments difficult or impossible.  Wouldn't it be likely that these students would find almost any e-mail system of great value, especially if they commute? 

 

Or suppose that Old Siwash has a shortage of tech support staff and poor-quality training; wouldn't that discourage use of e-mail?  Similarly, if Old Siwash has a low percentage of skilled computer users, those who do use e-mail would have fewer people with whom to communicate.  Many other contextual factors might affect the "productivity" of the e-mail investment, too.

In fact, it seems likely that most of the factors affecting the return on investment in e-mail are not themselves related to the choice of the system, and that the most important factors aren't related to technology at all.  That, in essence, is triadic thinking: it considers not just the technology and the goal, but also the activity that connects them and the conditions that shape that activity.

 

That's because technology use is more an issue of "pull" than "push."  Some technology advocates see technology as a driver of change: buy it, they argue, and it will tend to force improvement.  There's a bit of truth to that, but the opposite is truer, in my experience.  Technology is a tool, and the factors affecting the need for the tool are usually the major determinants of its real value.  If the demand for collaboration is great and the barriers to collaboration few, then e-mail investments are likely to be far more valuable.

 

If this is true, and if an institution is seriously interested in substantial investments in computing and related infrastructure, where should the processes of planning and evaluation begin?

 

Step 1: What Are Your Academic Goals? Key Activities? Appropriate Uses Of Technology?

If there is already serious interest in increased use of computing, chances are that at least some people are thinking in terms of dyads or even triads.  In other words, if asked, they can tell you what they think the computers should be used for. 

 

So pull together the most important uses of computing.  Make a list. You might use a chart such as the one appended at the end of this paper.

 

Ideally, the left-hand column should include both a goal and the activity that achieves it; it need not mention technology explicitly.  Examples:

  • Internationalize the curriculum so that we're more distinctive, draw more foreign students, and place more of our graduates in jobs abroad

  • Serve more students in ways that can stretch available resources

  • Serve students who can't get to campus by increasing the fraction of academic activities that can be done off-campus

  • Implement four new majors in fields whose content is highly computer-dependent

  • Improve learning outcomes by a more pervasive use of the "seven principles of good practice in undergraduate education" (Chickering and Ehrmann, 1996)

 

In assigning scores to columns 2 and 3, think of the goal as a result: in other words, imagine that the computing environment is in place and that the other needed conditions have been met, too.  Under those conditions, how important would the goal be to the program, and how much would achievement of the goal have advanced relative to the days before ubiquitous computing?

 

Then pick one to three of these goals and activities that rank high in both column 2 and column 3: they'd be important to the larger program, and they could be done much better if computing were ubiquitous.
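
For readers who want to formalize that selection, here is a minimal sketch in Python (purely illustrative; the goals, the 1-10 scale, and the scores are invented for this example rather than drawn from any real program) of how the chart's rows might be ranked so that goals scoring high in both columns rise to the top.

    # Illustrative only: rank candidate goals/activities by the two chart columns.
    # Each entry: (goal/activity, importance to the program, value added by
    # ubiquitous computing), both scored here on a hypothetical 1-10 scale.
    goals = [
        ("Internationalize the curriculum", 7, 8),
        ("Serve students who can't get to campus", 9, 9),
        ("Implement four computer-dependent majors", 6, 9),
        ("Pervasive use of the seven principles", 8, 6),
    ]

    # A goal should rank high in BOTH columns, so sort by the weaker of its two
    # scores (ties broken by the sum), then keep the top one to three candidates.
    ranked = sorted(goals, key=lambda g: (min(g[1], g[2]), g[1] + g[2]), reverse=True)

    for goal, importance, value_added in ranked[:3]:
        print(f"{goal}: importance={importance}, value added={value_added}")

The same tallying can, of course, be done by hand or in a spreadsheet; the point is simply that a goal qualifies only when both columns are high, not when one very high score hides a low one.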

 

Step 2: Baseline Data and Barriers To Entry

Next devise evaluative studies of these key activities.  Such studies should be designed to tell you at least three things:

  1. How extensive is the activity and how well is it achieving the goal?

  2. What's hindering even better performance? What are the most important barriers?

  3. Where computing already is common, is it being used to advantage to help achieve this goal?

 

Tools developed by the Flashlight Program (Ehrmann and Zúñiga, 1997) can be useful in creating such surveys and interviews.

 

You should design these studies so that they can be replicated in future years.  As we'll see below, you'll want to do them again, perhaps annually, to see whether the barriers are being lowered, whether the activity is doing a better job achieving its goal, and whether the technology is playing a distinctively valuable role.

 

For example, if "internationalizing the curriculum" is the goal, the study might find that the curriculum so far is only moderately international.  That's not surprising: institutions usually pick as goals issues where they are currently OK – not awful, but not as good as they'd like to be, either.  The study might also find that, while lack of computers is one problem, many other barriers prevent rapid progress.  For example, many faculty members may lack formal or informal education about the countries of interest.  There may be little in the way of faculty development to help them.  The library may be poorly stocked.  Only a small number of current students may rank "international insight" as an important educational goal.  Students and faculty alike may complain that computer support is poor, hindering their efforts to use technology (in international applications and probably in others).  And so on.

 

Step 3: Lower the Barriers and Take Another Reading

By now, you can see the problem.  Computers and computer infrastructure are not only expensive; their value also diminishes rapidly.  No one wants to spend heavily on computers if their use is not going to produce valuable outcomes quickly.

 

It makes more sense to lower the barriers even before ubiquitous computing is ready for prime time.  So, while planning and pilot experiments for ubiquitous computing proceed, the main efforts in fund-raising and action may be aimed at lowering the other barriers to achieving the goals.  The planning for ubiquitous computing should go ahead, and the promise of its added value should help motivate people to lower those other barriers: after all, the goal was selected precisely because ubiquitous computing could do so much to advance something important to the institution's future.

 

As step 3 nears its end, it is a reasonable time to replicate the initial study.  Do the results show that the barriers are indeed coming down?

 

Step 4: Finish Initial Implementation and Take Another Reading

As these barriers come down, the institution can shift its computing efforts into high gear.  This is a good time to replicate the initial studies again.  If step 3 has been done well and luck is with you, the study should show that the new computing capabilities have been put to quick and effective use in rapidly improving the achievement of your goals.

 

Step 5: Diagnostic Evaluation and Next Steps

Now that ubiquitous computing is being used to good effect in achieving the goal (and even if reality has crossed you up and it isn't), it's time to add a new focus to your evaluative studies: diagnostic evaluation.  Diagnostic evaluation is useful in a variety of settings, not just ones using ubiquitous computing, and it will be the topic of the next essay in this series.

 

The Gist

Computing investments age fast.  Because computers are tools, their value is mainly "pulled" by the activities for which they are used and by the success of those activities.  That pull comes mainly from non-technological factors: the demand for the activity and the other factors affecting the program's ability to carry it out.  Therefore, one important function of evaluation is to diagnose, in advance, the non-technological factors that will affect the use of imminent investments in technology.  With that insight, the program can move quickly to lower the barriers to the activity so that, as soon as the computing power becomes available, it is put to productive, efficient use.

 

References

Chickering, Arthur and Stephen C. Ehrmann (1996), "Implementing the Seven Principles: Technology as Lever," AAHE Bulletin, October, pp. 3-6.  Also available on the Web at <http://www.tltgroup.org/programs/seven.html>.

 

Ehrmann, Stephen C. and Robin Etter Zúñiga (1997), Flashlight Evaluation Handbook and Current Student Inventory (version 1.0), Washington, DC: The TLT Group.  Information available on the Web at <http://www.tltgroup.org/programs/flashlight.html>.


Table

The chart referred to in Step 1 has three columns; its rows are left blank, to be filled in with your program's own goals and activities.

Column 1: Goal and/or activity that would benefit greatly from ubiquitous computing (e.g., collaborative learning that reduces attrition)

Column 2: Score (or rank) in terms of the importance of the goal to the program

Column 3: Score (or rank) of the value added by the new investment in technology (would achievement of the goal increase 5%? 500%?)


[1] In this essay the terms "institution" and "program" are used as synonyms.  Some readers will be thinking of whole institutions as the base of reform, while others may be thinking of schools or departments within such institutions.  The basic argument of this essay applies equally well to both settings.

 

This essay was originally written by Stephen C. Ehrmann, Ph.D., in 2000 and was slightly updated in Dec. 2004.  Development of the ideas in this essay was supported by subscription fees from institutional subscribers to The TLT Group.

