The success stories here are mainly in column 1 (infrastructure) and column 4 (use of "worldware" to expand the range of topics we can teach and the range of people who can learn, offering more flexible forms of learning for people who would otherwise be barred by schedules, distance, or disabilities). [For a longer list of some of these dreams, many of which have succeeded at the level of individual courses and some of which have had success on a larger scale as well, see Steve Gilbert’s recent paper, “Why Bother?”]
More frustrating is the history summarized in column 2 (we've heard the same promises for the coming age of computer tutorials and other forms of efficient, effective courseware since the 1960s -- promises that almost always come true on a small scale but almost never have an effect on a large scale) and column 3 (a dizzying series of hopes for changes in how teachers teach and how students learn).
Each new technology rode in on claims about the inadequacy of earlier technologies. “Buy the new technology (to augment or replace the old junk) and outcomes will improve.” But it now seems to me that the dreams have each been frustrated for reasons that have little to do with the strengths of the new ‘new technology’ or the weaknesses of the old ‘new technology.’
At least three basic problems have dogged most attempts to translate technological improvements into improvements of programmatic, institutional, or national educational outcomes:
Let’s look at these three problems separately (even though they overlap).
I sometimes imagine that yeast was first discovered as a foodstuff when someone accidentally found it could be used to make a primitive form of bread – a kind of puffy cracker, perhaps a little soggy in spots, but tasty and very promising.
The yeast-masters took over the baking of these new puffy crackers because they knew about yeast, without which puffy crackers couldn’t be made. Also yeast was mysterious to most folks, so the yeast-masters became the “Priests of the Puffy Crackers”. After a time, people became disappointed with this primitive bread; puffy crackers didn’t taste as good nor were they as plentiful as the yeast-masters had predicted a few years earlier. Fortunately the yeast-masters had an answer: a new and more expensive form of yeast had just become available. And this new yeast really was better. But, when it was used, somehow the puffy crackers didn’t get better. No one noticed that at first, because the new yeast was so exciting. By the time hungry people did notice that puffy crackers were still disappointing, the yeast-masters had fortunately discovered the answer. Better yeast!
Let's return to education. Imagine, for example, that a department wants to internationalize its curriculum. Obviously, the World Wide Web can play a crucial role if enough money is spent on computers and connectivity for faculty members, the library, students and other key staff. But it should be obvious that spending money on hardware and connectivity is not sufficient to internationalize the curriculum. Nor is it sufficient to add a small budget to make sure that everyone knows how to turn on the hardware and use the basic commands of the software. Obviously, it's equally crucial to fund appropriate faculty development, both about those other cultures and about ways to teach unfamiliar cultures. The program will need to buy and produce new curricular materials, not all of them technological. Many books will be needed for the library: paper is a good reading surface and, more important, many important materials are not available electronically. These days, to take advantage of the potential of the technology for internationalizing the curriculum, the institution will also need to create or join a network of people and organizations in the countries under study.
That set of diverse ingredients is what I mean by the metaphor of a “recipe.” In order to have a hope of using technology to improve educational activities and their outcomes, institutions need more than just that technology.
Unfortunately, rapture of the technology often siphons money and attention away from those other ingredients as the institution strives to buy the best computers and the fastest connectivity. Later, we find it easy to believe that dreams aren’t coming true because the technology (that looked so good only a few years earlier) isn’t good enough.
Even worse, rapture of the technology leads people to believe that, each time technology changes, the experiences of the past become irrelevant. Who needs to learn about 35 years of past experience with educational uses of technology now that we have the World Wide Web? Isn’t that history of success and failure now irrelevant? For this reason, and because so many new people are drawn in by each new technology, rapture of technology creates a kind of amnesia.
For all these reasons, rapture of the technology is self-defeating: a tapeworm so greedy that it kills its host. When it comes to using technology to improve outcomes, computers may or may not be their own worst enemy but computer zealots often are.
Whether you are in 1968, 1988, or 2008, only twelve to eighteen months “ago” computer chips were only half as powerful. Twelve to eighteen months from “now,” they will be twice as powerful. That's Moore's Law.
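This compounding can be sketched in a few lines. The function below is purely illustrative and assumes a fixed 18-month doubling period (the essay itself gives twelve to eighteen months as an approximate range):

```python
# Illustrative sketch of Moore's Law compounding.
# Assumes a fixed 18-month doubling period (a simplification;
# the essay's own figure is a 12-to-18-month range).
def relative_chip_power(years, doubling_months=18.0):
    """Chip power relative to today, `years` years from now."""
    return 2.0 ** (years * 12.0 / doubling_months)

if __name__ == "__main__":
    for years in (1.5, 3, 6, 9):
        print(f"{years:>4} years out: {relative_chip_power(years):.0f}x today's power")
```

The point of the sketch is how quickly the exponent dominates: by the end of a nine-year courseware life cycle, the underlying hardware is dozens of times more powerful than when development began.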
Each predictable doubling of chip power enables the development of surprising new tools for thinking, analyzing, studying, creating, and communicating in the world. Products and professions erupt, altering the content of some disciplines, creating new fields, and compelling new forms of interdisciplinary collaboration in the wider world. The level of education required for many jobs is increasing as well. So technological change in the wider world both increases the number of people who need an education and changes what they need to learn.
Moore's Law also has created waves of improvement in the processes on which education most relies: how people can get and use information and how they can communicate with one another. That’s one reason why each new generation of technology seems capable of producing unprecedented changes in how students learn and what they learn.
Moore's Law, unfortunately, is a double-edged sword. With advances in chips and other computer equipment (memory, disk drives, connectivity, displays) come changes in operating systems. When hardware and operating systems change, older software may no longer work unless it’s upgraded, and the costs to upgrade it are often enormous. Even if the course material does still work with the new operating systems, it may look so old-fashioned that institutions and students are reluctant to use it.
New technology often doesn't stick around for long: sometimes only a year or two, sometimes five years, occasionally seven years or more. Let's be charitable and say "seven years." Even seven years is often just barely enough to implement that complex recipe (e.g., faculty development, course development, faculty skills, library resources, organizational partnerships, the program’s image and recruiting processes, and so on). It can easily take five or six years before we can glimpse the first valuable improvements in what students are doing after graduation. If the technology is going to become obsolete in seven years, that doesn’t give us much time to accomplish the improvement before the technology is ripped away and we have to start over. Does that sound bad? The reality is even worse – we usually don’t have seven years because other factors prevent us from using the full life cycle of the technology to make progress.
a) Temporary Pedagogical Regression
A curious kind of dance step characterizes the educational capabilities of each new generation of technology in its early years of use: a small pedagogical step backward precedes the larger step forward. Each of these temporary regressions has several typical causes.
1. At first, fewer people have the new technology than the old one, even though the new technology will eventually become more widely used than its predecessor. During this period when few people have the new technology, it’s hard to spread new ideas about changing education with it. People often can’t visualize the improvement until they have the equipment and have become comfortable using it for more traditional purposes.
2. Initially, the new technology is not as good or flexible for instruction as the old technology.
As a result of these backward steps, each new generation of technology begins its teaching-learning life in a promising but relatively primitive state. In two to four years all that will have changed. But, you can hear the ticking of Moore's clock as we wait for the zag to reverse the initial zig. By the time the non-Roman alphabets, screen readers, and authoring tools arrive, the "new" generation of technology often doesn’t have many more years to live and when it goes, the courseware developed to run on it often disappears, too.
b) Moore's Babel
Moore's Law also makes it harder for technology support staff and technology users to speak the same language. If technology remained the same for decades, many faculty members would learn to help themselves (as they have with VCRs), and at least some technology support staff would develop a more sophisticated understanding of education. But thanks to Moore's Law, users periodically find themselves beginners all over again. Just as bad, constant technological change helps assure that many technology support staff are themselves new, hired because they are among the few who understand the new systems. These new staff members haven’t had much time to learn to look at the world through the eyes of an instructor.
Since the days of the mainframe computer, many hopes for technology have been based on the promise of interactive, branching curricular courseware for making learning faster and less expensive. Based on principles of self-paced instruction, this interactive software features frequent assessments of student responses that guide the next instruction the student sees. If an assessment shows low performance, the next step might be remedial. If an assessment is unexpectedly strong, the student may be leapfrogged to more ambitious material.
Research showed decades ago that self-paced instruction has the power to improve and accelerate learning by about one third, compared with lectures and conventional assignments on the same topic. (For references on this research, see “Resources and Additional Reading” at the end of this paper.) Further, because interactive courseware can substitute for at least some teacher time and is inexpensive to copy, the more students who use a particular courseware package, the lower the cost per student.
Interactive self-paced courseware is no panacea, of course. Such courseware only works well in areas where learning can be organized into chunks, and sequences of chunks, and where answers can be predicted and marked right/wrong. Even so there are many areas where interactive courseware is appropriate. That’s why self-paced instruction’s lure of improved results and decreased unit costs has been powerful for so long. But so far, as a means of improving the outcomes of a higher education on a national scale, interactive courseware has proved to be a mirage, always imminent, never quite here.
The problems begin with Moore’s Law: such courseware becomes obsolete when the underlying hardware, operating systems, languages and authoring systems change. When computers shift from MS-DOS to Macintosh, or when authors shift from PLATO to BASIC to Java applets, they need to start over, developing courseware from scratch. If version 1.0 of a teaching concept was expensive to develop on last year’s technology, version 2.0 may cost as much or more to “upgrade” so that it works on this year’s system.
There are other barriers, too.
First, interactive courseware remains expensive to develop and update. With each new generation of technology and authoring aids, we hear promises that, at last, interactive courseware will become far cheaper to create. But that hope is based on the false premise that slow computers were to blame for the high development costs of the last generation of courseware. The real costs, however, come from the time it takes humans to conceive, design and debug all those branching educational pathways: the more branches, the greater the pedagogical complexity of the task.
Second, each new generation of technology brings a more sophisticated “look and feel” than its precursor. So there are new development expenses (including new design skills and tools) to buy each time the underlying technology advances.
For these and other reasons, it still usually takes hundreds or thousands of hours of producer time, and a lot of money, to create reliable, distributable courseware that students go through in just an hour or two.
Meanwhile time is passing and with each passing month the window of opportunity for this generation of technology closes a little farther. It takes time to:
To sum up, it can easily take several years to develop courseware with the potential to transform the way a difficult topic is taught, and still more years for that courseware to find national acceptance. In fact, it takes so many years that version 1.0 has begun to look rather old-fashioned before it ever does find wide use.
That might be fixed by creating version 2.0 of the courseware. But version 2.0 appears much later, if at all. No one has the motivation. Developers on a tenure track are rarely promoted for doing a second edition – it isn't enough of an advance in the state of the art – and foundation program officers don't get much credit for funding second editions either. Money is another missing ingredient: publishers are unlikely to put it up because they almost never realized a sufficient return on their investment in version 1.0, so they’re feeling burned. And it takes a lot of money to create version 2.0. Good second editions of courseware often cost as much as the first edition, in part because computers themselves have changed so much in the intervening years (Moore's Law again).
So instead of triggering an educational revolution in its discipline, the award-winning version 1.0 of the courseware fades away. Of course new interactive courseware will spring up, designed from the start for the new generation of technology. But it's often so different from the old courseware that everyone must start over or stop using interactive courseware. Many choose this second option.
Perhaps this explains why this exciting, effective type of software has thus far produced so little improvement in the outcomes of higher education, after so many decades of predictions that courseware-enabled improvements were about to change everything.
Six Strategies for Using Technology to Improve Outcomes on a Large Scale
Progress, far from consisting of change, depends on retentiveness. Those who do not learn from the past are condemned to repeat it. -- George Santayana
Moore's Law is real, and there is no way to completely escape the battering. The world's use of technology will continue to change, and if we tried to ignore that fact, we'd be making the biggest mistake of all: allowing education to fall behind the world in which its students must live.
The key is for education itself to learn to live with the rapid pace of change and make choices that enable us to improve effectiveness and outcomes. Here are six inter-related considerations for selecting and implementing a long-term program of improvement.
The single most important message of this essay is that educational outcomes take far longer to improve than the likely lifespan of a single generation of technology. Therefore, if an institution or a nation is to make educational and technological progress, its technological choices must be subordinated, at least to some degree, to long-term educational priorities.
That’s easy to say and almost impossible to accomplish in an environment shaken by the winds of Moore’s Law as well as political change in institutions and governments. However, looking at forty years of waste and frustration may stiffen our resolve. Other strategies outlined here are designed to help make such long-term focus more feasible.
What types of outcomes can technology help your department, institution, system or nation improve? Here are some examples of areas where technology has already shown promise; some of them overlap, and they are not in any particular order:
Obviously institutions and nations made progress in each of these directions long before computers became available. But today there are decades of experience suggesting that technology has already helped improve outcomes, at least on a small scale, in each of these areas. For example, the educational use of technology for collaboration and community building dates back at least to the PLATO mainframe instructional system of the 1970s, the EIES computer conferencing system of the early 1980s, and early uses of Internet mail and the Web in the 1990s.
2. Choose technology that can contribute to long-term, cumulative improvement in the chosen activities.
At the same time that goals are being chosen, the institution needs to be thinking about technology. And the question of technology needs to be readdressed each time the buzz rises about the newest new thing.
At least four questions ought to be asked about any major purchase of technology after the institution has begun to get a sense of educational direction:
Where should we look for such technology for the long term? One direction is “worldware.” Let’s define worldware to be hardware or software that is used for education but that was not developed or marketed primarily for education. Examples include computers, the Web, productivity tools such as word processors and spreadsheets, and research tools such as computer-aided design software and online census data.
Worldware usually has several features that help it support incremental, long-term educational improvement. Because worldware often has a larger market than software designed only for instruction, it tends to advance incrementally and to face competition. An important side-benefit: a new vendor's software can often read files created by its larger and older competitors. That's why any faculty member who began using spreadsheets as a mathematics construction kit in physics courses in 1979 could improve her teaching for the next twenty years, taking advantage of new spreadsheet features and new ideas in physics. She would never have been forced to abandon assignments or handouts simply because a spreadsheet vendor had gone out of business. Without missing a beat, such instructors could move their spreadsheets from MS-DOS to Macintosh, from Windows 3.1 to Windows 98. In contrast, colleagues who reformed their courses by relying on new interactive courseware in 1979 (or ’89, or ’94) might well have been marooned when computer operating systems changed, rendering their packages obsolete.
Worldware has other educational advantages as well, such as its familiarity to faculty (they use it in their research), students' motivation to master it (they know they'll need it for future jobs), and a market base large enough to help provide good support materials and outside training. In these ways, worldware can reduce stress on the exhausted, understaffed technology support units at your institution.
Worldware may lack some of the short-term value of interactive courseware, but it more than makes up for that in long-term viability and ease of support.
3. Emphasize forms of instructional material that most faculty members find quick and easy to create, adapt and share
Interactive courseware can be extraordinarily powerful but, as pointed out above, such courseware rarely has enough users to keep it alive for long. Highly interactive courseware has another problem, too. The bigger and more complex the courseware, the more rigid it usually is: a challenge for instructors who want to adapt it to a particular set of students, a particular day’s events, or their own slant on how a skill or topic might best be learned. Such courseware formats work best for widely taught, large-enrollment courses where there is a widely held, lasting agreement about how to teach the course. For such courses, costs per student can be kept down, too, especially if the courseware lasts long enough to become enough of a success for its investors to justify upgrading it as operating environments advance.
Obviously that course format doesn’t work for most faculty and most courses. For them, a “small is beautiful” strategy may be more to the point.
These instructors (whether working alone or in relatively small communities that may span institution boundaries) need course materials that they can easily, quickly, and inexpensively modify. For example, a word-processed syllabus is easier, quicker and cheaper to modify than a typed one; a Web syllabus can be even better for those purposes because all students can see the changes at once. The challenge is to come up with materials and assignments that are educationally powerful, but still inexpensive for the typical academic staff to develop, modify and share.
A second requirement is that resources be invested, in a continuing way, to help faculty organize, edit, and share their incremental improvements. Progress can be made without such processes, but it will almost certainly be slower.
If we want to describe how technology can improve the outcomes of a higher education program, we need at least three elements:
Let’s call this three-part vision a “triad.”
The challenge facing the instructional program (or institution, or nation) is to maintain focus on the triad for enough years to achieve meaningful improvements in the outcomes. Longitudinal (periodic) studies of the triad-in-use can help maintain focus and advance the ‘recipe’ in at least three different ways.
First, in the early phases of an improvement program, evaluation can help maintain focus by reporting on whether the ingredients of the strategy are coming into place and working well, and whether the activities have begun to change. It’s too early to expect changes in outcomes – too few students have been through the altered courses, and few, if any, have graduated. Providing feedback indicating that the early steps are going well can help maintain energy.
Of course, the early steps may not be going as planned, which leads us to the second use of longitudinal studies. Such studies can provide data useful for guiding and fine-tuning the strategy. For example, the initial strategy might have neglected the importance of enlarging the library’s collection or the need to form new external partnerships. Early evaluative study may show that teaching activities are not changing as expected and that these gaps are one of the reasons. By drawing attention to a problem, the study can help solve it.
Third, documented achievements can be used to solicit support and raise money for institutions. That may be crucial. Institutions are rarely rewarded for improving teaching effectiveness. In fact, if one or more institutions in your state or nation became 10% more effective, or less effective, it’s quite possible no one would notice. That’s one reason why institutions so rarely provide adequate rewards for faculty who take risks to improve their own courses – where are those rewards to come from? If the institution is no better off as a result of the collective efforts of such risk-taking faculty, then rewards must be cannibalized from other budgets. That’s pretty risky, politically. So the third reason to evaluate progress is to make otherwise invisible progress visible to the outside world. If data can help draw new resources to the institution, then risk-takers can be rewarded without penalizing others.
5. Diagnose problems “on the ground” as they occur in order to increase your chances of success while reducing stress and other costs.
Continued turbulent change almost guarantees that individuals and units will miss important opportunities and, worse, will be ambushed by problems they failed to see in time.
Imagine, for example, that your program has selected “collaborative skill and academic community” as a guiding theme for a decade of progress. You've redesigned the course in ways that depend upon students using the Web to collaborate on homework projects. You've never asked students to do much work together on homework before. It's now week two of the term. It's hard to know for sure but you fear that students are not collaborating on line as much or as well as you'd hoped. The course's schedule and success might be in jeopardy. Or maybe everything is OK. Is there really a problem? If so, why?
These and literally dozens of other barriers could hinder collaboration online. But which of these barriers are actually hindering your students? Unless you can find out quickly, you may soon be in trouble from which you can't easily escape.
Our intuition often doesn't do us much good in such situations, because our insights were shaped by stable times. "Muddling through" isn't good enough anymore. The clock is likely to strike midnight before we have time to learn what is hitting us, within a course or as an institution.
The answer seems to lie with increased use of diagnostic tools. Some of these strategies are quite simple and generic, such as “minute papers.” Others could be developed for specific purposes, e.g., a set of diagnostic procedures designed to help educators and their institutions improve online collaboration among students.
One area most in need of diagnostic help involves the costs and other stresses created by our emerging uses of technology. I once sat briefly with a group of faculty and staff who were analyzing the total costs involved when support staff helped faculty use technology in their courses. It’s fair to say that everyone in this little group was appalled, despite the fact that all of them had been intimately involved in the process for years. They were so surprised even though each had known that his or her piece of the work was stressful, expensive and/or time-consuming. However, each had also assumed that everyone else was doing work that was simpler, easier, and cheaper. Diagnosing the stresses created by the current activities is the first step toward redesigning them in time -- before large-scale burnout and busted budgets derail the improvement program.
6. Create coalitions to make sure that your program has all the ingredients needed in your recipe for improving outcomes.
One of the most unnatural acts in making a technology-enabled improvement of outcomes is for the technology lovers to make common cause with others who also care about the goal and activity but who are neutral or “anti” when it comes to hardware. If these groups succeed in creating a coalition, however, they can campaign to build support for the whole recipe together.
Think of the math this way. Imagine an institution with 100 staff members. Imagine that 5 of them care passionately about computing, 5 care equally about collaboration, 5 about internationalization, and 5 about academic community. At budget time, or policy-making time, each group of 5 would normally battle for what it wants against the opposition or apathy of 95 others. Predict the outcome. Now assume that these 20 people make common cause in order to use technology and other ingredients to create an international community of learners, and of learning. For each budget issue and policy choice (whether about a hardware buy or a change in course requirements), 20 of them make themselves heard. That’s a different situation entirely.
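The arithmetic above can be written out as a toy calculation. The group names and sizes are the essay's own hypothetical figures, not data:

```python
# Toy model of the coalition arithmetic: four hypothetical groups of
# five advocates each, inside a 100-person institution (figures are
# the essay's illustrative example, not real data).
groups = {
    "computing": 5,
    "collaboration": 5,
    "internationalization": 5,
    "academic community": 5,
}
total_staff = 100

# Each group fighting alone: 5 voices against 95 others.
for name, size in groups.items():
    print(f"{name} alone: {size} vs. {total_staff - size}")

# The same people in coalition: 20 voices on every relevant decision.
coalition = sum(groups.values())
print(f"in coalition: {coalition} vs. {total_staff - coalition}")
```

The numbers themselves are trivial; the design point is that the same 20 people, reorganized, change every budget and policy vote from 5-versus-95 to 20-versus-80.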
The greater the value of the improved outcomes, by the way, the easier it may be to assemble a coalition of groups who wouldn’t collaborate if the stakes were lower. Most improvement efforts don’t attract big coalitions because there is little at stake.
Such collaboration was once rather rare in educational institutions. People didn’t have to collaborate in order to function. For that and other reasons, higher education attracted people who valued their autonomy and who “did not work and play well with others.” That’s one reason why traditional universities are relatively poor at information sharing, especially "diagonally" across administrative units and vertically across levels of authority.
With technology and the world changing as fast as they do today, failure to share information, cooperate, and collaborate can have unhappy consequences for an institution. “We must indeed all hang together or, most assuredly, we shall all hang separately,” said Benjamin Franklin on July 4, 1776. His remark applies to institutions of higher education today. Collaboration, within and among institutions, is becoming a survival skill. This is not without precedent. Only a few hundred years ago, most scholars did their work alone. Then the emerging universities enabled scholars to enter a new world, if they were willing (or eager) to specialize and surrender some of their autonomy. Meanwhile, the scholars who remained completely independent faced increasingly limited options for research and teaching.
Today, collaboration is slowly and sometimes painfully increasing among institutions who realize that working closely together on creating virtual learning environments, managing online information, marketing their services to students and other tasks gives them advantages over institutions that continue to go it alone.
The survival value of collaboration within institutions is also one reason why the notion of "Teaching, Learning, and Technology Roundtables" has spread so rapidly since Steven W. Gilbert introduced it in the mid-1990s. Over 400 institutions have created TLT Roundtables that meet regularly to share information, to coordinate activities and, often, to create small action teams to work on problems that are not within the province of any single unit or individual in the institution. Most successful TLT Roundtables are actively supported by the chief academic officer and other leaders, to whom the TLTR makes recommendations on key budget and policy decisions.
It's no coincidence that TLT Roundtables began to appear the same year that studies by Kenneth Green revealed the beginning of the current mainstreaming of computers for teaching. And recently a dissertation by Daryl Nardick revealed that one gain associated with successful TLT Roundtables is an increased ability to move information quickly from outside the institution to all those inside who most need it.
My main point here is that Roundtables and some of these consortial structures provide suitable settings for discussing long term programs of educational improvement and the strategies needed to accomplish them over five or ten years.
Today's world relies upon rapidly changing computer technology in almost every phase of life. That creates a breakneck pace of change for the academy. In this new world, the old "muddling through" approach to educational improvement doesn't work well anymore. The window of opportunity associated with each new generation of educational technology closes too quickly.
Ironically, the solution is not to move faster. Nor does it work to follow our instincts about what will work and what will fail. Instead, this time, we need to study forty years of past failures and successes. This time, we need to get it right.
Resources and Additional Reading
For more background on “worldware” and the courseware mirage, see Paul M. Morris, Stephen C. Ehrmann, Randi B. Goldsmith, Kevin J. Howat, and M.S. Vijay Kumar, (eds.) 1994. Valuable, viable software in education: Case studies and analysis. New York: McGraw-Hill.
One example of a meta-analysis of the research on interactive, self-paced courseware is Clark, R. E. (1983, Winter). “Reconsidering research on learning from media,” Review of Educational Research, LIII, 445-459. Clark pointed out that self-paced instruction, whether computer-based or not, tends to enable learners to learn much faster and better (in those content areas where assessment can be automated and where content can be organized into sequences and branches of small units). For a paper focused more directly on courseware, see Chen-Lin C. Kulik and James A. Kulik (1991), “Effectiveness of Computer-Based Instruction: An Updated Analysis,” Computers in Human Behavior, VII: 1-2, pp. 75-94. The Kuliks have done such analyses for some time, and earlier papers are referenced in this 1991 summary.
The Flashlight Program, directed by Dr. Ehrmann, provides tools, resources, coaching and consulting for doing the longitudinal and diagnostic studies outlined in this paper.
For an earlier, more detailed version of the evaluation suggestions in this essay, see Stephen C. Ehrmann (2000), "Computer-Intensive Academic Programs: How to Evaluate, Plan, Support and Implement (in that order) Your Campus Technology Investments," AAHE Bulletin, 53:3 (November), pp. 7-11. Draft is on the Web at http://www.tltgroup.org/resources/F_Eval_Computer-Intensive.htm.
For more on TLT Roundtables, see http://www.tltgroup.org/programs/round.html.
About the Author: While doing his doctoral dissertation research in the early 1970s, Stephen C. Ehrmann began his study of educational uses of computing with an analysis of 1960s applications in the MIT Department of Civil Engineering. In 1978 he became a program officer with the Fund for the Improvement of Postsecondary Education (FIPSE) of the US Government, where he spent much of his time reviewing technology-related proposals and monitoring funded projects. In 1985, he left FIPSE to become Program Officer for Interactive Technologies with the Annenberg/CPB Project, where he helped fund and monitor projects involving innovation and research on educational uses of technology. In 1992, while with Annenberg/CPB he began a process leading to the creation of the Flashlight Program for the Study and Improvement of Educational Uses of Technology, a program he still directs in his role as Vice President of the non-profit Teaching, Learning, and Technology Group in Washington DC. For more information on the TLT Group, Teaching Learning and Technology Roundtables and the Flashlight Program, see http://www.tltgroup.org
To talk about our work