
Studying and Improving the Use of Technology to Support Collaborative Learning:
An Illustration of Flashlight Methods and Tools
Stephen C. Ehrmann, Ph.D.
April 25, 2000
This essay describes how an institution might study and improve its use of technology to support collaborative learning, and thus improve educational outcomes. A study is being carried out by a team at "Somewhere College." This series of articles tells the story of that study, from first discussion through the process of focusing the study, developing research tools, gathering data, and analyzing findings. The goals of the articles: 1) to illustrate the general approach to studying and improving technology use being explored by the Flashlight Program, and 2) to describe a particular type of study -- one that tracks patterns of change in key instructional practices across the curriculum and over a period of several semesters.
In this first article, a faculty member who is helping to lead the study "reports" on the team's first three months of work. The article begins with the early, casual conversations about a need; it ends with a draft of the team's first student survey and notes on the coming stages of the inquiry. This article is a good way to see an example of the kind of survey that can be created with Flashlight Online. It also illustrates key Flashlight concepts such as "triad" and "scenario."
P.S. Real studies don't progress in quite this "step by step" fashion. But by oversimplifying a bit, we hope to make the method itself a little easier to understand.
Collaborative Learning and Its Discontents
Building a Base and Beginning to Focus
How Well is the Triad Working?
First Survey of Students (First Draft)
Scenarios of Failure and Success
I've been involved with the Writing Across the Curriculum (WAC) Program here at Somewhere College, and my courses have been using computers for student projects and e-mail for some years. So it wasn't surprising that when people at our institution started having more-than-casual conversations about whether computing was really helping education get better, I should be involved. We knew that education was becoming different. We've had a computer science major for a long time, and courses in many other departments teach content that is computer-related: graphics courses in the arts, some of our geography courses, statistics, and the rest. We have a fledgling distance learning program, too. But that's not what we were talking about. Was education any better because of all this money and effort spent on technology?
I said it was important that I'd been involved with the WAC program. I think that involvement is what made it so natural for me to speak up about computer-enabled changes in teaching and learning practices "across the curriculum." Writing across the curriculum got started partly because folks had begun to notice that student writing could actually deteriorate from sophomore to senior year. Writing ability is something like a muscle. Engage in a long-term program of exercising and stretching: you get stronger. Do one quick burst of exercising and then lay off for three years: it's almost as though you'd never exercised at all.
One difference between writing and physical exercise: a single writing-intensive course may not make much visible difference in the writing of the average student in that course. Progress is hard. But over many such courses, the progress can be substantial.
So it didn't surprise me that neither I nor my colleagues, trapped within our individual courses and unable to see out, should be hopeful yet unable to show with certainty that our graduates were more "educated" (in any sense of that term) than they had been a decade earlier, let alone whether computing was responsible for such an improvement. We did know enough about what we and our colleagues had done, and not done, to be aware that the answer to "are they better educated because of our use of computing" might well be "no."
Somewhere College has been gobbling too much technology too fast. "Indigestion" is the result: the troubling sense from students, from the administration, and from our own observations that we are not getting full value from all the hope, time, and cash we are investing in hardware, software, and networking. It's gotten worse as the use of electronic mail, "threaded conferencing," "chat rooms," and the like has spread. Everyone is using technology, but in all the welter of newness, has anything really changed?
So about three months ago a few of us began talking seriously about studying the situation in order to improve it. Some of us were members of the institutional Education and Technology Council (our version of a Teaching, Learning, and Technology Roundtable). Eventually a small group of us were put in touch with Gary Strong, director of the Teaching and Learning Center, who, it turns out, is quite interested in technology issues too. He's our institutional contact with the Flashlight Program, and he created authoring accounts for several of us with Flashlight Online. He warned us, however, not to begin using it yet. "The important thing is first to decide what you need to study and improve," he said. "If you aren't focused, Flashlight Online will just overwhelm you with possibilities."
Our real goal is to increase the value (and decrease the problems) of using technology to improve learning outcomes. Gary pointed out that computers are like paper: they're both enabling technologies. Adding paper doesn't itself improve educational outcomes. The key to understanding (and improving) outcomes is to study (and improve) what people do with paper. "Same thing with computers," he said. "Find an activity that is both enabled by the technology and important to the outcomes you care about, and study that."
We talked some about students working together, and generally relating to one another, outside class. We talked about student-faculty interaction, too, and students using information resources (the library, not just the Web!). Many of us have been concerned for a long time about the isolation of our students. Many of us have also been worried that our graduates aren't good enough at working in teams. Those concerns are part of the reason why "collaborative learning" was often one of the promises made by those of us who have been arguing for more and better computers and networking. We have been especially hopeful that our commuting students would benefit somehow.
But the sad truth is that we don't really know whether there has been any general growth in collaboration or community due to our use of computing and, if there has been, we don't know whether it has produced any perceptible, valuable change in what our graduates know, what they can do, or what they value.
After about an hour of discussion with Gary, that's where our group had arrived: it would be a good idea to use Flashlight to study collaborative learning, the role of computers in fostering it, and its impact on the quality of our graduates - across the curriculum. Many of us were willing to do some work on such a study. But how to get started? Who should be involved?
Then one of my colleagues pointed out that we had a community planning day coming up. Why not involve more people? She went on:
"This study is probably going to find some bad news (maybe some good news, too, but it won't all be good.) We may well find that not as much collaboration is going on as we'd hoped. If collaboration hasn't increased much yet, we probably wont' find any improvement in our graduates, either. If that offends some of our colleagues, their first response may well be to point out the flaws in our study. And there certainly will be lots of flaws. And they'll ignore the findings and they won't change what they're doing.
"We're willing to send some time doing a study because we want the findings to improve education here, not just to report on it. OK - that implies that what we find has to influence the choices that a hundred of our colleagues make about how to teach. It may well need to influence other kinds of decisions too: about services, about budgets. But our colleagues won't make big changes like that unless the study is about something that already worries them and unless they've had a hand in how the study is designed. (And even then some of them won't change!)
"So let's take a chance and involve lots of people right from the start in choosing what's to be studied. If they're worried about collaborative learning, too, they'll pick it. If not, maybe we can agree on something else. But we ought to focus our study on what worries most people the most."
So we spent some time with the Education and Technology Council, the Provost, and the steering committee for the upcoming Community Planning Day. They bought the idea and we began to make our preparations. We got some useful ideas from Frank Parker, who'd done something similar at his institution, Johnson C. Smith University in North Carolina.
We began the Planning Day by having people respond in small groups to the TLT Group's "Fundamental Questions." Then we asked them, "What educational opportunities and problems are most important for us to study and improve?" Everyone could suggest issues, and they did.
People wrote issues on over a hundred little sheets of paper. We tacked them onto a large wall chart. Then everyone worked together to group them. Then we linked the groups, took some votes, and generally got a sense from that about which clusters of issues were of most widespread concern. "Collaboration and community" showed up on top. We reaffirmed our decision to look at collaboration first (partly because we weren't as sure how to measure community). It was a great day, not just because of what we learned but because so many people in our community now had a personal involvement in deciding what issue was going to be the subject of a real push in the next year or two. (The evaluation we were planning was just one part of this initiative.)
Gary told us that, in Flashlight jargon, this activity is sometimes called scanning because it was our way of looking across lots of potential triads before deciding where to focus. Sometimes scanning is done with a survey or interview program: looking for hints of unexpected success or trouble.
Community Planning Day was about six weeks ago (as I write). Our next step was to quickly convene an enlarged study group, which included some new folks who'd been especially interested that day. When the study group met with Gary, we talked again about the issue of using data to influence collaboration. Whose choices were most important? What choices? Faculty members would need one kind of information. Technology support staff would want some different data. There were other relevant decision-makers too (Gary called them "clients"). The clients are the people whose choices are to be directly influenced by our findings. Since student-student collaboration is very much about student choices, he urged us to consider whether students should be our clients: were we trying to use data to influence the choices they made about collaborating and using technology to do it? Or was the data to influence the people buying new communications technology, for example? Or to influence potential donors to the college? Or the accreditors who would be visiting us in a few years (showing them that we could use data to improve our practices)?
We decided to make "faculty" and "technology service administrators" our two major groups of clients, while also keeping our eyes on the kinds of data that might influence student thinking about whether and how they should work and learn in groups.
Our group was too large to create the study directly, but we also knew that designing the study was more work than one person could handle. We were going to design, administer, and analyze at least two surveys (students and faculty) as well as do some interviews. We wanted a study team that was representative of our clients, too. We finally got three people to volunteer to write the study (they'd be checking with our larger group): a faculty member (me), the associate director of academic technology, and a senior in the education department. Gary has continued to work with the team, too, to help us with Flashlight methods and tools.
By having identified "student-student collaboration" as a key activity we were already most of the way to identifying a triad (the activity; one or two key technologies that support it; and a goal that the activity fosters). For this first study, we narrowed further to the following triad: the activity is student-student collaboration outside the classroom; the technologies are our electronic communication tools (e-mail and threaded discussion); and the goal is graduates who can work in teams and feel a sense of community.
The next step was to figure out how to measure those things.
We knew we needed to discover the answers to at least two questions:
a) whether available technology really is being used extensively to help collaboration outside the classroom and
b) whether collaboration (technology-supported and not) is currently helping to promote the outcomes.
In other words, how well is the triad working? If the answer is "fine!" then we'd be done. If the answer is "it's not working well (enough)," then we would need to gather more data to help figure out what was blocking improvement.
Our Flashlight materials reminded us to think in terms of two kinds of data: extant (data that's already available) and new (data we have to create by surveys, interviews, developing new measuring software, or whatever).
So we created the following table:
|   | Extant data | New data |
| --- | --- | --- |
| How much is the technology being used for collaboration outside the classroom? |   |   |
| How much collaboration is going on outside the classroom? |   |   |
| Evidence that graduates can work in teams and feel a sense of community |   |   |
If the data we gather in these areas make our clients happy about the answers to our two basic questions, we decided, we would have done enough; we'd probably just repeat the study every year or so to track whether the situation was improving, stable, or backsliding.
Here is how we filled in the table:
|   | Extant data | New data |
| --- | --- | --- |
| How much is the technology being used for collaboration outside the classroom? | We could use the course management system to get some crude counts of how many messages students were posting to the threaded discussions. | We could survey students and faculty about how much the technology was being used for this and how distinctively useful it was (were there some kinds of collaboration for which it was particularly appropriate or inappropriate?). |
| How much collaboration is going on outside the classroom? | We couldn't think of any extant data. | We could survey students and faculty about this, too. |
| Evidence that graduates can work in teams and feel a sense of community | Our school has bachelor's theses and senior projects, but students are required to do these alone. One of the suggestions from our community planning day was to change that requirement, so in a year we'll have some extant data about the quality of this work. Not yet, though. | We could ask faculty teaching seniors, seniors themselves, and recent alumni about behaviors that reflect team skills and community, as well as about whether they feel that team skills and community were important outcomes of their education here. We could interview employers of recent graduates about the team skills of our recent alumni. |
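As an aside, here is a minimal sketch of how the "extant data" idea in the first row might be automated. It assumes, purely for illustration, that the course management system can export discussion posts as a CSV file; the file name "discussion_posts.csv" and the "course" column are hypothetical, not features of any particular system.

```python
# Hypothetical sketch: counting threaded-discussion messages per course from a
# CSV export of the course management system. File and column names are assumed.
import csv
from collections import Counter

def messages_per_course(path: str) -> Counter:
    """Return a Counter mapping each course ID to its number of discussion posts."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["course"]] += 1
    return counts

if __name__ == "__main__":
    # Print courses from most to least active, as a crude indicator of use.
    for course, n in messages_per_course("discussion_posts.csv").most_common():
        print(f"{course}: {n} posts")
```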
About two weeks ago (almost three months after our discussions began) we started writing our first survey, designed to gather a student-eye view of collaboration. It is intended for use both in courses where faculty encourage students to collaborate and in ones where they don't, and both in courses where they encourage the use of electronic communication and in ones where they don't. We used Flashlight Online to create it. You can see it (and respond if you like) at http://ctlsilhouette.wsu.edu/surveys/ZS977
At the time I am writing this draft, we have not yet begun drafting the faculty survey, but it will be similar to the student survey: each group is being asked for its point of view on the same issues. We also have not yet started drafting a telephone interview guide for talking with graduates, and with their supervisors at the employers and graduate schools to which many of our graduates go, about their skills for working in organizations and teams.
Even though our student survey is not yet complete, we began a few days ago to work on the first steps toward the second round of our inquiry: data that could help faculty and staff improve collaboration outside the classroom.
Gary started us off by pointing out that in some classes our triad is probably working superlatively. In others it might be completely blocked. Why? The difference usually lies in the context. Some classes get students who love to collaborate, while others get students who distrust it. Some classes have lots of students who are good with the technology, while others get students who don't know how to use a mouse. And so on. If people knew such facts in time, they'd be in a better position to help students use technology to collaborate. "Everyone has gotten so used to flying blind that they're not always conscious of it. But they are. A study can help guide action in the same way that headlights can help drivers drive more safely at night: by helping them spot problems and opportunities in time."
At Gary's suggestion, we started with failure scenarios: stories about the kinds of context that can hinder or block use of technology for collaboration. It's amazing how many different scenarios we could imagine! Our initial list is appended, along with the beginning of a "score sheet" we're using to select which scenarios are most important to use in designing our second set of surveys and other tools of inquiry (e.g., questions to ask of students during small group interviews) that could help us detect problems and opportunities.
Perhaps the most important finding from the 90 minutes we have spent on scenarios was an "aha!" that occurred toward the end of our discussion. Most factors affecting the value of technology for collaborative learning do not directly relate to technology. They are the factors that block or encourage people to collaborate. If people don't want to collaborate, or can't, the technology is of no value. If, on the other hand, they are hungry to collaborate and are good at it, the very same technology can be of enormous educational value. If we had tried to evaluate the technology just by studying the technology, we'd have missed much of what was actually going on.
What's the bottom line so far? I've got some bad news and some good news. The bad news: it took us almost three months of hard work before we could even start writing our first survey. The good news: by spending that effort up front to decide how to focus the inquiry, it seems likely that the study itself will be easier and (more important) it has already begun to influence what faculty and staff are doing. And we haven't even gathered any data yet.
More to come. Please send any criticisms and suggestions about this first installment directly to Gary's Flashlight consultant, Steve Ehrmann, at Ehrmann@tltgroup.org. Thanks!!
| Scenario | a: Likelihood [SCE1] | b: Data value [SCE2] | c: Importance [SCE3] | Score (a x b x c) |
| --- | --- | --- | --- | --- |
| Faculty unwilling to give collaborative assignments because it is too time-consuming to help students work in teams; too much time required for coverage | 1 | 1 | 1 | 1 |
| Faculty unwilling to assign collaborative projects that are too open-ended, difficult to grade fairly and quickly in the time available | 1 | 1 | 1 | 1 |
| Students believe that collaboration with peers is cheating | 1 | 2 | 2 | 4 |
| Students believe that the teacher is the only source of legitimate knowledge | 1 | 2 | 2 | 4 |
| Students have trouble with details of the technology (e.g., file transfer) | 1 | 2 | 2 | 4 |
| One incoming student is a sociopath who not only destroys his or her own group but is also likely to disrupt the whole class | 0 | 2 | 2 | 0 |
| Students don't have access to a computer and/or connectivity at the times and places when they would work on homework | 1 | 2 | 2 | 4 |
| Students feel that interaction via computer is too impersonal | 1 | 2 | 1 | 2 |
| Students find it too difficult or irritating to work on homework via telecommunications | 1 | 2 | 1 | 2 |
| Students become irritated when peers "freeload" and can't cope | 1 | 2 | 2 | 4 |
| Students' writing skills are inadequate for collaborating this way |   |   |   |   |
| Students think the course itself is boring and so aren't interested in working on such projects; they just want a grade and to get out |   |   |   |   |
| Student doesn't get along with teammates and can't change teams easily enough |   |   |   |   |
| Student lacks the skills to work in teams and complete projects (e.g., time management for the team; giving and accepting critique; a tendency to bully) |   |   |   |   |
| Students too distractible, especially by the Web and chat rooms |   |   |   |   |
| Students lack the skill or patience to manage the threaded discussion, which thus becomes anarchic |   |   |   |   |
| Faculty member deluged with e-mail and burns out |   |   |   |   |
| Student deluged with e-mail and burns out |   |   |   |   |
| Faculty member doesn't notice that some students are going silent until too many have become alienated |   |   |   |   |
| Students are all waiting for someone else to talk or take the lead; silence seems safest and easiest |   |   |   |   |
| Grading policy seems to favor individual work, not teamwork |   |   |   |   |
| Student abilities too varied for teams to work well in the current organization of the course |   |   |   |   |
| Students could interact better if they could see each other, even just once or twice |   |   |   |   |
| Too many students don't pay attention to, or understand, the directions on how to use the technology |   |   |   |   |
| Student projects get too wrapped up in the technology and thus many are content-poor; faculty decide to drop it |   |   |   |   |
| Students have no (electronic) venue to talk about anything other than their projects |   |   |   |   |
| Students find it difficult to bond or form a shared culture when they can't see each other |   |   |   |   |
| Students aren't mature enough or familiar enough with the content to work on open-ended projects with no "right answer" |   |   |   |   |
| Students don't believe that learning the skill of working in teams is important |   |   |   |   |
| Students don't think it's important to talk about life, truth, etc. with other students |   |   |   |   |
| Web host and e-mail system are too unreliable or overloaded, preventing students from completing projects on time |   |   |   |   |
| Web system can't handle the special needs of homework projects in this course (math characters, foreign languages, images, audio, video) |   |   |   |   |
| Screens are too hard to read for the demands of this homework |   |   |   |   |
| Technology support not providing good or sufficient training for students in using the software |   |   |   |   |
| Some students have only hardware that is inadequate to handle the system fast enough or well enough |   |   |   |   |
[SCE1] Likelihood. If the scenario seems at all likely (in twenty cases, it might well happen at least once), then it gets a score of "1"; if it's less likely than that, it gets a score of zero, which makes the total score for that scenario zero.
[SCE2] Data required. A score of 2 means that data can reveal both the existence of a situation and useful nuances about it. A score of 1 means that data would be of some use. A score of 0 means data would be of no use.
[SCE3] Import = Importance. If this were happening, how important would it be to know in time? (2 = very important improvements could be made if data were available; 1 = useful improvements might be made; 0 = even if we knew, probably nothing could be done.)
[SCE4] A score of "1", not "0", was assigned because this information could be used to plan a faculty development program if many faculty members fit this description.
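For readers who want to see the arithmetic behind the score sheet in one place, here is a minimal sketch (ours, not part of the Flashlight materials) that combines the three ratings defined in notes SCE1-SCE3 into a single priority score; the example descriptions and ratings are copied from the table above.

```python
# Hypothetical sketch: computing the priority score for each failure scenario.
# a = likelihood (0 or 1, note SCE1), b = usefulness of data (0-2, note SCE2),
# c = importance of knowing in time (0-2, note SCE3). Score = a * b * c, so any
# zero factor zeroes out the whole scenario.

scenarios = [
    # (description, a, b, c) -- example ratings copied from the score sheet above
    ("Students believe that collaboration with peers is cheating", 1, 2, 2),
    ("Students feel that interaction via computer is too impersonal", 1, 2, 1),
    ("One incoming student is a sociopath who disrupts the whole class", 0, 2, 2),
]

def priority(a: int, b: int, c: int) -> int:
    """Return the scenario's priority score (a x b x c)."""
    return a * b * c

# Rank scenarios so the highest-scoring ones drive the second round of surveys.
for description, a, b, c in sorted(scenarios, key=lambda s: priority(*s[1:]), reverse=True):
    print(f"{priority(a, b, c)}  {description}")
```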
(c) The TLT Group, 2000