Useful Things to Learn about Technology Use in Education (1999)

 


As we revamp our web site in 2007, I took a fresh look at this otherwise unpublished essay and realized, to my surprise, that it's still useful reading. -- Steve Ehrmann

1. Simple summative questions | 2. Is technology magic? | 3. Is your investment helping the people who most need help? |
4. How does your use of technology affect costs? | 5. Is technology helping a department or school create widespread improvements in practice? | Conclusions

In my visits to colleges and universities, I sometimes ask folks what studies or evaluations of teaching, learning, and technology have been useful for them. "The studies need not have been published or rigorous," I assure them. "I'm just interested in studies whose findings were useful to you. The research may have reported a success or failure, a trend, or a general tendency -- some important finding with relevance to teaching, learning, and technology." After taking some time to think about the question, they usually report that they know of few such studies, if any.

Everyone wishes they had good data about teaching, learning, and technology, but few institutions are doing the work to get it. That's dangerous. Technology changes quickly and unpredictably, technology budgets are large and getting larger, money is tight, and the higher education world is turbulent. Faculty and administrators are making large investments of time and money with their eyes closed. In such a world, it is important to get some information so institutions can see what they're doing, fix problems, and document achievements.

The good news is that good studies are being done. Local evaluative studies have produced important insights that have reduced risks, guided policy, and shaped practice. Provosts and chief information officers just don't hear about these studies at other institutions because their normal information channels don't bring them this kind of information.

It's worth spreading the word, however. These successful studies demonstrate there are certain evaluative questions about teaching, learning, and technology that more institutions need to ask for themselves.

1. Simple summative questions

Some study questions focus on results. When Rensselaer experimented with offering computer-intensive studio courses for first-year students instead of the classic lecture-discussion-laboratory combination, for example, an early sign of success was a sharp increase in attendance. This was especially dramatic during the middle of the term, when lecture attendance had traditionally been low. RPI also asked students in these courses questions such as:

  • "Was this course so good that it would justify recommending to a high school senior that he or she attend this university?" and
     
  • "At Thanksgiving, what would you tell your friends at home about the courses at Rensselaer?"
Because RPI has been asking such questions for years about various courses, the substantially higher scores for the studio courses also provided some early signals that the innovative courses were on the right track.

Ramapo College's Teaching, Learning, and Technology Roundtable recently surveyed faculty about their uses of technology in teaching; about half the full-time faculty responded. They asked some pointed questions, such as "Do you agree that student competence with technology is not essential for the courses you teach?", and got some equally dramatic answers: 63% disagreed, meaning that student competence with technology had become essential for their courses. Similarly, 75% of the respondents agreed that their uses of technology were helping students understand course content. Findings such as these provide hints about whether technology use has become an integral element of the academic program.

 


2. Is technology magic?

Technologies such as computers (or paper) don't have predetermined impacts; it's the uses of such technologies that influence outcomes. The statement seems obvious, but many institutions act as though the mere presence of technology would improve learning. They use computers to teach the same things in the same ways as before, yet they expect learning outcomes to improve. It's an easy mistake to make, especially when attention is focused on the technology itself rather than on how it is used.

Good evaluative questions, on the other hand, can help focus attention where it matters. A couple of years ago, for example, Prof. Susanmarie Harrington and her colleagues in the composition program at Indiana University Purdue University Indianapolis (IUPUI) took a fresh look at their program's use of technology in the classroom. Working with our Flashlight Program, she and her colleagues studied the role that technology played in the quality and cost of lower division composition courses. They compared sections taught in computer classrooms with other sections taught in traditional classrooms.

Not surprisingly, it was more expensive to teach composition in the computer classrooms. Not only was the equipment expensive; the classes were also smaller (limited by the capacity of the rooms), so the cost of staff per student was higher. Other factors tended to make the sections in computer classrooms cheaper (e.g., staff teaching in these rooms reported spending a bit less time in preparation and teaching than did their colleagues teaching larger classes in traditional classrooms). On balance, however, IUPUI found that it was more expensive to teach composition in these classrooms.

Were those classes better than those taught in traditional rooms? Apparently not. Using surveys designed with the Flashlight Current Student Inventory, Harrington and her colleagues discovered that the teaching and learning practices in the computer classrooms were essentially the same as in the traditional classrooms. This had been done on purpose: the computer classrooms and traditional classrooms housed different sections of the same course, and faculty wanted to be fair to all students, so they taught all students in the same way. Why use expensive facilities while carefully not taking advantage of them? The issue of cost had not been of direct concern to faculty members, perhaps because the composition program was not charged for the facilities it used. The Flashlight findings reminded faculty that there was a conflict between their efforts to use technology wisely and efficiently and their desire to treat students fairly.

Prof. Harrington concluded:

"As a result of the Flashlight inquiry, we are reconsidering whether the [effort] to standardize across sections affects the possibilities for the computer-assisted sections, and whether we should explore ways in which the computer-assisted sections could achieve program goals by different means. At the time the data were collected, however, the common syllabi were a powerful influence on student experiences; given that there were no significant differences between curricula, it is not surprising that student learning practices were so similar across sections."

So your department may also wish to consider the IUPUI question: if large courses are being taught in multiple sections, some of which use more technologies than others, are faculty and students able to take advantage of the technology? Have they, for example, been able to change the content or to make pervasive changes in teaching and learning practices? Or are concerns for equity or other factors ensuring that costs increase but outcomes do not improve?

 


3. Is your investment helping the people who most need help?

Another thing that Harrington found was that some IUPUI composition students faced multiple disadvantages. Students with the poorest computer skills also had lower GPAs. Perhaps because of their poor skills and poor access to technology, these students also spent fewer hours per week using e-mail and word processing for their writing courses. (Poor computer skills were not a gender issue, by the way, even though the students with the best computer skills were mostly male.) Harrington concluded that because "students with low technical skills are so few in number, they can easily become an invisible group, scattered among sections. Teachers must be more vigilant about identifying these students early, and providing assistance early and often."

Are students with poor technical skills actually getting the technical support they need? Mount Royal College, a two-year institution in Calgary, Canada, asked that question; Patti Harvey, a consultant, took the lead in designing the study. Mount Royal faculty had assumed that students with the least computer experience would be the most frequent users of technology support services such as Student Tutor And Resource Technicians (START). For a set of five composition courses early in 1998, Mount Royal's Flashlight-based survey found the opposite: students new to technology were least likely to know about START and other helping resources, and least likely to use them. Students with more than ten years of computer experience were also under-using the technology support services: most of them were over-confident. It was students with a moderate amount of experience who seemed most knowledgeable about helping resources, and most efficient in selecting the help they needed.

Armed with these findings, Mount Royal instructors explained to different types of students the ways in which the helping resources could be useful to them. Some months later, the evaluation was repeated. This time, the data showed that students were using the training and support materials, whether they had just a little computer experience or a lot. As a result, students now reported few problems or issues with instructional technology applications. Mount Royal's studies are instructive in two ways:
  1. revealing what is probably a common problem (computer novices and experts making too little use of important helping resources) and
     
  2. demonstrating how evaluation can both lead to a solution and then document that the problem has indeed been solved.
Are your institution's uses of technology unfair to some students? If so, which technologies discriminate against certain students: the old technologies or the new ones? In a study done in the late 1980s, Roxanne Hiltz of the New Jersey Institute of Technology compared courses taught in virtual classrooms with courses taught in traditional classrooms. (It was Hiltz and her husband Murray Turoff who coined the term "virtual classroom.") In the traditional classrooms, she discovered, students whose native language was not English got lower grades than did native speakers. However, in courses taught in virtual space in that same year and at the same institution, the grades for the two groups of students were the same. In the virtual classroom, students had more time to think about what had been said and to decide how to reply. The virtual classroom did not discriminate against non-native speakers. The traditional classroom facility did.

Are some types of students facing barriers at your institution? In technology-intensive courses? In courses taught in more traditional ways in more traditional facilities? If you can identify the barriers, perhaps you can lower them.


4. How does your use of technology affect costs?

Some of the toughest questions deal with costs. The high stakes make it harder to collect data. And framing the study is difficult. What standard of judgment should be used to decide whether an activity has cost "too much" or whether it has "saved money" (compared with what?). Which expenditures should count in the model? Today's operating expenses are always included, but what about yesterday's capital spending? Spending by students? The value of faculty time? Student time? Staff time? Despite the difficulties, one can make some sensible assumptions, build some models of how time and money are being "spent," and draw some important conclusions.
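To make that framing concrete, here is a minimal sketch of the kind of model one might build. The categories and all of the figures are hypothetical illustrations, not the Flashlight Program's actual model or data; the point is that every line item embodies an explicit assumption about what counts.

    # A minimal, hypothetical cost model -- illustrative only, not the
    # Flashlight Program's actual method. Every entry reflects an explicit
    # assumption about which expenditures "count."

    # Assumed one-time costs, amortized over the project's expected life.
    capital_cost = 30000          # equipment and software (yesterday's capital spending)
    useful_life_years = 3
    capital_per_year = capital_cost / useful_life_years

    # Assumed annual operating costs.
    support_staff_cost = 8000     # technical support and maintenance

    # Assumed value of people's time (a key framing decision: does it count?).
    faculty_hours = 120           # development and extra preparation per year
    faculty_hourly_rate = 40      # a stand-in figure; institutions will differ
    faculty_time_cost = faculty_hours * faculty_hourly_rate

    total_annual_cost = capital_per_year + support_staff_cost + faculty_time_cost

    # Normalize by actual use, as in the Washington State study described next.
    students_per_year = 150
    weeks_of_use_per_student = 10
    student_weeks = students_per_year * weeks_of_use_per_student

    print(f"Total annual cost: ${total_annual_cost:,.0f}")                      # $22,800
    print(f"Cost per student week: ${total_annual_cost / student_weeks:,.2f}")  # $15.20

Change any assumption (decide, for example, that faculty time is "free") and both numbers shift; that is precisely why the assumptions must be stated.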

At Washington State University, for example, Tom Henderson analyzed the costs of three different approaches to creating technology-based course materials and experiences. Using the Flashlight Program's approach to cost modeling, Henderson studied:
  • Development of InterFon, an instructional module to help students learn the sounds associated with phonemes in foreign languages. The student can click on a phoneme to hear the sound. Quizzes and writing assignments about phonemes can be integrated into InterFon as part of the program.
     
  • Translation by a faculty member of his course material into an online format (combining content with strategic email response boxes); and
     
  • Development of a "template" web forum for organizing threaded discussions that could be used for many courses.
Henderson and his colleagues summarized their findings in two ways: the total cost of each approach, and the cost per student week of use:
  • Total costs: Creation of the template for a Web forum was the most expensive activity. InterFon was moderately expensive, and the translation of course material to an online format was the least expensive activity.
     
  • Costs per student week: However, when costs were divided by weeks of student use, the modular approach (development of InterFon) cost $14,000 per student week -- by far the most expensive. The other two approaches turned out to be relatively inexpensive by this measure because of the large volume of subsequent student use (each cost about $8 per student week).
Henderson and his colleagues concluded that tailored modules need very specific and justifiable goals (e.g., opening an instructional bottleneck with substantial benefits for learning in later courses) in order to justify their steep costs per student. Putting a course online was far more affordable but required advanced faculty skills. The Web template placed some limitations on teaching and learning approaches but combined acceptable costs with a lower threshold of required faculty skill in using technology. The question for your institution: how do different methods of improving courses compare on the cost per week of student use of that improvement?

If your institution has problems with attrition, another illuminating indicator is the cost per student passing a course, or the cost per graduate. For example, Baruch College started an experimental course called "College Literacy," designed for students with poor skills in reading and writing. A section that did not use computers suffered a failure rate of almost 50%. Students did better in an experimental section that made considerable use of a real-time computer conferencing system called Daedalus. The instructor's assessment of the exit exams (each of which was graded by three independent readers) was that "the students in the computer-enhanced section consistently wrote longer ... essays rich in ideas and details, organizationally complex, remarkable in their fluency." Pass rates for the experimental course were also much better: about 75%.

Although most costs (space, faculty time) were the same for the two courses, the conferencing software added about 7% to the cost per student. However, so many more students passed this course that Baruch estimated the cost per passing student was 29% less than in the section that did not use computers.
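The arithmetic behind that estimate is worth making explicit. A back-of-envelope check using only the figures reported above (and treating the per-student cost of the traditional section as one unit) reproduces it:

    # Back-of-envelope check of the Baruch estimate, using the figures above.
    # Costs are in arbitrary units: the traditional section costs 1.0 per student.
    traditional_cost = 1.0
    computer_cost = traditional_cost * 1.07   # conferencing added about 7%

    traditional_pass_rate = 0.50              # failure rate of almost 50%
    computer_pass_rate = 0.75                 # pass rate of about 75%

    cost_per_pass_traditional = traditional_cost / traditional_pass_rate  # 2.00
    cost_per_pass_computer = computer_cost / computer_pass_rate           # ~1.43

    savings = 1 - cost_per_pass_computer / cost_per_pass_traditional
    print(f"Cost per passing student is {savings:.0%} lower")             # ~29%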

Time is money in academe: most of our budgets go to the time of people on long-term contracts. Thus the problem of cost control is closely linked to the challenge of helping staff make rewarding and efficient use of their time. The Rochester Institute of Technology applied the fledgling Flashlight cost analysis approach to several different methods for teaching distance learning courses. They found something quite surprising. When experienced instructors were first interviewed, they all immediately remarked that teaching at a distance was more time-consuming than teaching on campus. But when they were then asked to break down their activities into different functional categories, for both on-campus and off-campus courses, a dramatically different picture emerged. About one third of the faculty had indeed spent more time teaching their course for off-campus students. But another third had the opposite experience: it was their on-campus courses that were more time-consuming. And the remaining third found that they were spending equal time on on-campus and off-campus courses. Perhaps first impressions about one's own time use can be deceiving. If so, studies like this may help faculty and staff members make more rewarding use of their time while helping the institution improve education with its current staff and budget.


5. Is technology helping a department or school create widespread improvements in practice?

Very few institutions are asking whether their uses of technology are fostering institution-wide changes in teaching and learning practice. It's easy to understand why: teaching and learning practices are the province of individual faculty teaching isolated courses, and it's hard to see what's going on in more than one class at a time. But we need to be able to see across the curriculum. Educational outcomes for graduates are largely determined by patterns of teaching and learning that transcend individual courses. For example, if students graduate knowing how to write well, it's the fruit of their whole education, not a single course. The same is true for other key competencies such as critical thinking or command of the major subject. So it's important to understand whether technology is enabling changes in those patterns of teaching and learning. For example, if an institution values collaborative learning and the development of teamwork skills, it's important to discover whether e-mail is being used to foster that kind of learning across the curriculum. It's also crucial to monitor whether the institution is learning to take advantage of its technologies to make such improvements in teaching and learning year by year.

One of the few institutions to study the role of technology in fostering large-scale patterns of change in teaching is Valley City State College in North Dakota, which requires student ownership of notebook computers. Kathryn Holleque and her colleagues asked students many questions about their uses of computing. For example, students were asked whether using the notebook computer helped them organize information in personally meaningful ways; most agreed that it did. The Valley City State study also looked directly at some of the fears about computing. Responding to a 1997 survey, only a few students reported that they had become even slightly "addicted" to the use of computers. Slightly more students resonated, at least a little, with questions about information overload, sleep deprivation, and social isolation. The only negative hypothesis about universal laptop ownership that stood out was distraction: 30% of respondents agreed "very much" that the laptops distracted them from what was being covered in class. I wonder whether that figure has dropped as students and faculty members have become more accustomed to the laptops. Studies like these become even more powerful when done on a regular basis, so that institutions can discover whether good practices are on the increase and whether problems seem on their way to being solved.

These days, most institutions offer a stunning variety of information utilities for students, faculty members, and staff, including publicly available computers, Internet connections to dorms and offices, e-mail accounts, and the like. Each element of this technology infrastructure can be used for many educationally important activities. Each activity can potentially be supported by many elements of the infrastructure. Is it possible to say anything coherent and concrete about whether and how this technology is being used to improve the academic program?

The first step in creating such a study is to select an educational focus. Identify a few absolutely crucial educational activities, practices, or processes that technologies should already be making easier and more productive. For example, a commuter campus might choose to look closely at how students are collaborating on their homework while off-campus. When selecting such educational foci for a self-study, consider activities that fall within Chickering and Gamson's Seven Principles of Good Practice in Undergraduate Education (e.g., student-student interaction, student-faculty contact, active learning, and so on). These are the kinds of activities that (researchers tell us) often foster better learning outcomes. So if you find that technology is being used to foster such good practices, you're part way toward showing that it's being used in ways that improve quality and access.

Step two, equally focused, is to select just a few technology utilities that are crucial for carrying out those educational activities. For example, if student collaboration on homework and projects has been selected, the study might focus on e-mail and on the Web-as-library. Notice that we're now talking about a very small, very important slice of the institution -- not all of the educational program, not all of the technology -- just a slice that is small enough to study and important enough to matter.

Step three: study how, how much, and how well those particular technologies are being used in those particular educational activities. If technologies aren't being used to expand and enrich those activities, why not? Are the activities on the upswing each year, thanks in part to better, wiser use of the technologies?

Such studies will probably produce a mix of good news and bad news. The bad news is key to making improvements. For example, if students aren't collaborating on their homework, perhaps instructors are discouraging the practice or are failing to help students learn how to do it. Some barriers may be technology-related: for example, do students have access to the technology they need? Are the support services giving them help when they need it?

Armed with such findings, faculty and staff will be in a much better position to make improvements. And the more faculty and staff have been involved in selecting the focus of the study, the more likely they are to act on its findings.

Conclusions

I've been involved with evaluation since the mid-1970s. From my first days doing educational research at MIT and The Evergreen State College, I've known that educational innovators, like nocturnal creatures, do most of their work in the dark. Technology -- with its unpredictable changes and vast expenditures -- has made flying blind much more dangerous than it used to be. The good news is that some institutions are learning how to see.

 

Stephen C. Ehrmann, January 19, 1999
