Stephen C. Ehrmann, Ph.D.
The TLT Group
Revision of an article originally published in Active Learning IX (December 1998), pp. 38-42.
Table of Contents
Abstract
Why a Tool Kit?
Types of improvement
Elements of a Tool Kit
From Blob to Triad
What Data Should Be Gathered
Further Development of the Flashlight Program
References
Abstract: Oliver and Conole (1998) describe a tool kit for
evaluating communication and information technologies. The Flashlight Program has begun
publishing components of such a tool kit. This note describes the basic methods and
assumptions underlying the Flashlight approach.
Why a Tool Kit? Why Not Rely on Research Done by Others?
An American
Congressman, Thomas P. ("Tip") O'Neill, once remarked that "all politics
is local." One can, with equal accuracy, assert that "all education is
local." Global findings in educational research are hard to uncover and validate.
Once found, such theories and national trends are by no means accurate predictors of what
happens locally: too many other factors intervene. Thus global theories such as
"computer use results in more student-student interaction" and
"student-student interaction tends to result in better learning outcomes" are
hazardous guides at best to local realities. Without also doing local studies, instructors
and administrators who use technology are groping through a vast darkness, hoping that
reality matches their hopes. It is likely to be a dangerous trip.
Such studies
are difficult and time-consuming to design. Fortunately it is possible to develop a tool
kit that most educators can use to help create focused, useful studies. This note
summarizes some key assumptions underlying the design of the Flashlight tool kit and
illustrates the basic method in use.
Types of improvement
The design of a
tool kit presumes that very different educators need to ask similar questions. Fortunately
this seems to be true. Across disciplines, types of institutions, age levels, and national
boundaries, educators are making many similar choices about which technologies to use and
why. They also have similar anxieties about what might go wrong. To check on their hopes
and fears, they therefore could begin with some common evaluative questions: using the
same or similar wording for certain survey items, the same or similar study designs. These
similar questions and designs are at the heart of the Flashlight tool kit.
To repeat, such
a tool kit is needed because, although the questions are similar from one place to another
and from one year to the next, the answers are likely to be different. Many people may be
using e-mail and the Web to help collaborative learning and to control costs (hoping that
virtual space is cheaper than bricks and mortar). But one institution may find that
collaborative learning has improved or that costs are less for learning in virtual space,
while another institution may find the opposite.
What kinds of
questions are most useful to ask about the educational uses of computers, video, and
telecommunications? Paper and other traditional technologies give us the clue: the results
of technology investment stem from the details of how the technology is used.
With paper, for
example, it would make little sense to do a study of the average learning gains
attributable to 10% more (or less) paper per student. Instead the local researcher should
begin with a fairly detailed vision of how and why the paper is being used in a particular
school or college: the activities in and for which it is used and the purposes of those
activities. Because paper and other empowering technologies such as computers are used for
a bewildering number of activities, an impossibly large number to study, we should focus
on a few of the most important ones.
What are the
most important technology-based activities and purposes for which a tool kit ought to be
useful? Here is an incomplete set of such general goals:
- Enabling
important new content to be taught: Some of today's methods of thinking are difficult
or impossible to do (or to learn) unless the thinker is using technology. It is difficult
to teach many of the skills of modern statistics, physics, graphic arts, music, and
sociological research, for example, if students and instructors do not have the use of computers.
It is difficult to master the emerging genres of electronic expression without using them.
One level of question for a tool kit is trivial: is the technology-dependent skill or
content being taught or not? A second evaluative question is not quite so easy to answer:
how important is it to teach the new subject matter? One response would be to empanel some
external experts and ask them to compare two alternative curricula (along with a
representative sample of student achievement in each, if possible) and let them judge the
relative value of the two teaching-learning possibilities. For example, do they judge one
excellent and the other unacceptable in today's 'marketplace' of graduate schools and
employers? Are both acceptable?
- Changing who
can learn: Another important motive for technology use is to enlarge the number or
types of people who can learn. Here, too, one component of a tool kit can be fairly
straightforward: one can ask students whether they would have enrolled if technologies had
not been available. One can complement this data with a study of different types of
potential students, to check on the proportions of various types who do enroll and the
ways in which they are using technology. It is also pertinent to check into their rate of
progress to a degree or certificate, rate of attrition, and so on.
- Improving
teaching and learning activities: A third motive is to alter or improve basic teaching
and learning practices. The chosen practices may be valued because of personal theories
and experiences of the teacher or learner in question. Sometimes the practices are also
buttressed by research showing that, more often than not, when the practice is in use,
learning outcomes improve. The "seven principles of good practice in higher
education" (Chickering and Gamson, 1987; Chickering and Ehrmann, 1996) are a good
example of the latter. The next section of this paper will focus on the use of the first
component of the Flashlight tool kit, the Current Student Inventory, for studies to
describe and evaluate such uses of technology.
- Lowering or
controlling the costs of teaching and learning activities: A fourth common motive for
using technology is somehow to control the growth of educational costs, or to reduce the
costs of specific activities. This should not typically be seen as a motive in and of
itself, any more than one would strive to control a family's spending as a motive in and
of itself. This motive is intimately connected with one or more of the first three: in
other words, one ought to be trying to create the best possible educational program for
the money (and human striving). The educational value of cost control comes from the
activities that are rendered more feasible (whether for different content, more equitable
access or good teaching and learning activities). The Flashlight Program is developing a
Cost Analysis Handbook which will be published early in 1999.
Elements of a Tool Kit for Studying the
Use of Technology to Improve Teaching and Learning Activities
Let's return
to the metaphor of paper as an educational technology. At minimum, we need three elements
for a meaningful study. We need to specify:
- A technology
(paper)
- An activity for
which it is used (students draw pictures on the paper), and
- The educational
outcome of the activity (increased skill as an artist)
The Flashlight
approach to studying teaching and learning with technology relies on this simple
construct. We call it a "triad" because it has only three elements.
"Simple"
does not mean intuitive, however. Many academics and government officials, when faced with
computers, continue to believe that it is sensible to ask what they think of as a
"bottom line" dyadic question such as "Do computers promote better
learning?" or "is it cheaper to teach students in virtual space than in a
classroom?" One can ask such questions: studies can be done; data can be produced.
But the resulting data do not mean anything. Would one go to the effort of discovering
that in three schools a decrease in the paper supply correlated with an improvement in
learning? Should one then conclude that the absence of paper helps learning? Most of us
would conclude instead that one key to understanding these schools would lie in
understanding the activities for which paper had been an important medium. It is
futile to study technology and learning outcomes without also studying how the technology
has actually been used. (Ehrmann, 1998)
Nor does
"simple" imply "easy." Any evaluation study must begin with an
inchoate, almost incoherent sense of why a study is needed and what should be studied.
This inevitable initial sense of the purpose of the study is the evaluation
"blob." Many novices are startled to discover how difficult it is to work their
way from a "blob" to a construct that is focused to guide a compact, meaningful
study. Flashlight training workshops and consulting work spend a significant amount of
time helping local evaluators create a triad that is focused enough to enable the design
of a study. Many triads can be framed within the same evaluation blob, but the best are:
- Enabled or
supported in crucial ways by the technology in question
- Repeated over
time (in varying forms)
- Repeated in
different courses (in varying forms)
- Exceptionally
educationally important
- Exceptionally
focused
From Blob to Triad, to the Five Sets of
Questions
Consider a
hypothetical study: an institution has significant problems with attrition and a group of
instructors are trying to discover what value e-mail has for their courses. That is their
evaluation blob.
We'll assume
that, after some discussion, they decide to focus on the following triad:
- Technology:
electronic mail.
- Teaching-learning
practice: students working on homework and projects with one another over barriers of
distance and/or timing
- Outcome:
improved retention.
Our
hypothetical research team has chosen this triad not because it tries to include
everything important about the use of e-mail and attrition, but because it does not.
They've chosen this triad because, at this stage, they think this activity of
collaboration on homework may be the single most likely mechanism by which the use of
e-mail could help with the attrition problem. They would like to know whether this little
educational engine is functioning well and, if not, why not.
Their triad's
structure suggests five sets of related questions:
I. Questions about the technology per se (e-mail, in this example). For example,
questions about its availability and reliability; if the technology can't be used, this triad
can't work.
II. Questions about the use of the technology (e-mail) for the activity (student
collaboration on homework or projects). For example, is the technology a supple, effective
tool for this activity, or is it problematic and limiting?
III. Questions about the activity per se (collaboration on homework and projects, in this
example). For example, how much do teachers value teaching in this way? If they don't
believe this activity is valuable, they are not likely to use technology to carry out the
activity for the first time, or to do the activity better than before. Questions about the
activity are also useful in comparing courses that use different (old and new)
technologies for the same activity. For example, in one course that relies strictly on
students meeting one another face to face outside the classroom, how much collaboration on
homework is there compared with another where students also use e-mail? This question
would be about how much students collaborate on their homework (without mentioning the
medium).
IV. Questions about whether and how the activity is contributing to the outcome (in this
example, improved retention). For example, do people who achieve the outcome report that
they actually participated in the activity? Do they claim that the activity was valuable in
achieving the outcome?
V. Questions about the outcome per se (retention, in this example). For example, is
retention looking good? Is there evidence that the retention is valuable when attained?
If the answers to these five sets of questions are affirmative, then the result can sometimes
be a reasonably continuous chain of evidence that links technology to activity, and activity
to outcome. The study won't be absolutely convincing (almost no study is), but it can be done with a
reasonable effort and the findings can support some practical reasoning about the value of
the technology and how to use it better in this context. This kind of evidence has already
proven valuable for making choices in educational institutions. (e.g., Brown, 1998;
Harrington, 1998; Harvey, 1998)
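The way the five sets of questions follow mechanically from any triad can be sketched in a few lines of Python. The wording below is illustrative only and is not drawn from the Current Student Inventory:

    def question_sets(technology, activity, outcome):
        """List the five sets of questions implied by a triad (illustrative wording)."""
        return [
            f"I. The technology per se: is {technology} available and reliable?",
            f"II. Using {technology} for {activity}: a supple tool, or problematic and limiting?",
            f"III. The activity per se: how much {activity} occurs, and how much is it valued?",
            f"IV. Activity to outcome: did those who achieved {outcome} take part in {activity}?",
            f"V. The outcome per se: how does {outcome} look, and is it worth attaining?",
        ]

    for q in question_sets("e-mail", "collaboration on homework", "improved retention"):
        print(q)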
What Data Should Be Gathered: Using Questions
from the Flashlight Current Student Inventory
The Flashlight
Program's services and tools are designed to help individuals and teams develop such
triads and the study designs that can rest on them. The first published component of the
tool kit is the Current Student Inventory, which includes almost 500 items that can be
used when surveying or interviewing currently enrolled students. The Current Student
Inventory (CSI) has already been site licensed by 130 colleges, universities, schools,
hospitals, and training programs around the world. Most of its questions deal with the
technology (I, above), the activity (III), and the use of the technology for the activity
(II). The activities that are the focus of the CSI are mainly drawn from the Seven
Principles of Good Practice mentioned above.
After
tentatively identifying a triad as the center of their inquiry, Flashlight users then
typically go through a five-step process at least once:
1. Brainstorming about what kinds of data are already available that bear on these five
types of question;
2. Brainstorming about what types of data they might create (e.g., through questions in
interviews or surveys);
3. Looking through the Current Student Inventory to see which survey and interview
questions might be relevant (a shortlisting step sketched in the code below);
4. Reconsidering the triad: does it need to be rephrased or changed entirely?
5. Reducing the number of questions to a manageable number through one of several possible
strategies.
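Steps 3 and 5 amount to shortlisting items from a large bank. The sketch below uses a tiny, hypothetical item bank with invented identifiers, tags, and paraphrased wording; the real Current Student Inventory is a licensed instrument and is not reproduced here:

    # Hypothetical, heavily simplified item bank (IDs, tags, and paraphrases invented).
    item_bank = {
        101: {"text": "Rate your knowledge of how to use electronic mail (1-5).",
              "tags": {"technology"}},
        102: {"text": "How likely are you to work on assignments with other students?",
              "tags": {"technology", "activity"}},
        103: {"text": "How frequently have you worked on an assignment with a group?",
              "tags": {"activity"}},
        104: {"text": "How likely are you to return to school next term?",
              "tags": {"outcome"}},
    }

    def shortlist(bank, wanted_tags, limit):
        """Step 3: pull items that touch the triad; step 5: cap the survey length."""
        hits = [(item_id, item["text"])
                for item_id, item in bank.items()
                if item["tags"] & wanted_tags]
        return hits[:limit]

    for item_id, text in shortlist(item_bank, {"technology", "activity", "outcome"}, limit=3):
        print(item_id, text)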
Skipping ahead
to step 3, here are a few of the questions from the Current Student Inventory that seem
relevant to each of the five types of queries raised by this particular triad of e-mail,
collaboration on homework, and gains in retention.
I. Sample
questions about the technology (electronic mail) from the Current Student Inventory, e.g.,
- "At this
point in this course, rate your knowledge of how to use electronic mail where
"1" is no knowledge and "5" is an expert user." (scale of 1-5;
this question might be included in a study because, if students don't feel confident
about their use of e-mail, the triad is not likely to work well).
- "My
experience in using electronic mail has helped me learn to come to a deeper understanding
of the personalities of people I've never seen." (scale of 1-5; this question is
about life in general and provides clues both about attitudes toward the technology and
how the user may choose to use it.)
- In a typical
week during this [term, semester, or quarter] approximately how much time did you
spend for personal reasons interacting with someone (such as a stranger, a
former instructor/teacher, a content expert) by way of E-mail or other
"time-delayed" electronic communication (such as bulletin boards or discussion
lists)? (The CSI has parallel questions about use of the technology in other courses and
in work; it also has parallel questions about chat rooms and other synchronous
communication. Such questions might help explain why students are either comfortable or
uncomfortable with the technology in this course, depending on how much they use it in
other aspects of their lives.)
II. Sample
questions about the use of e-mail to support collaboration on homework and projects by
learners across barriers of space and timing
- "Think about a similar course you have taken that relied primarily on face-to-face
discussions. Compared to that course, because of the way this course uses Electronic
Communication (computers linked for information exchanges, such as computer
conferences, "chat groups," and electronic mail), how likely are you to ...work
on assignments with other students?" (multiple choice from "very likely" to "very
unlikely")
- "...feel isolated from other students?" (may relate to findings on retention; if students
feel isolated, the e-mail may not be helping them enough to bond in genuine relationships
and form ties that keep them from dropping out.)
- "...obtain help
understanding course material from students/peers who do not attend this
institution?" (This is an exploratory question; some faculty members have found
surprisingly high percentages of students saying that they study with students from other
universities.)
III. Sample
questions about the activity of collaborating on homework and projects
- "Since this
course began, how frequently have you worked on an assignment for this
course with a group of other students?" (multiple choice; helps to know this if
you're comparing two groups of courses: one with homework being done online and the
other using more conventional methods)
- "In your
opinion, to what extent were the following given priority in this course...learning
how to overcome the difficulty of working in teams/groups?" (scale of 1 to 5 where 1
is the lowest priority and 5 is the highest priority; included as one hint about the
importance given this activity; useful in comparing different courses)
- "Indicate
how strongly you agree or disagree with the following statements...Assignments for this
course were stimulating" (multiple choice from "strongly agree" to "strongly
disagree") (helps compare students in different courses; may also help explain why
some students are doing the homework or projects).
IV. Sample
questions about whether, in this case, collaboration of this type has been fostering
improved retention, e.g.,
- The Current
Student Inventory (1.0) does not suggest any questions or study designs for investigating
the relationship between student collaboration outside the classroom and the outcome. However,
if one has a way of assessing the skills of graduates in groups, one could also use data
gathered about the activity (collaborating outside the classroom) to see if students who
collaborated extensively graduated with good group skills.
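A minimal sketch of the analysis suggested in the bullet above, assuming (hypothetically) that survey answers about collaboration have been merged with a separately assessed group-skills score for each graduate; all field names and values below are invented for illustration:

    from statistics import mean

    # Hypothetical merged records: self-reported collaboration (CSI-style survey item)
    # alongside an independently assessed group-skills score for each graduate.
    graduates = [
        {"collaborated_often": True,  "group_skill": 4.2},
        {"collaborated_often": True,  "group_skill": 3.8},
        {"collaborated_often": False, "group_skill": 3.1},
        {"collaborated_often": False, "group_skill": 3.5},
    ]

    for flag, label in [(True, "collaborated extensively"), (False, "collaborated little")]:
        scores = [g["group_skill"] for g in graduates if g["collaborated_often"] == flag]
        print(f"{label}: mean group-skill score {mean(scores):.2f} (n={len(scores)})")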
V. Sample
questions about the outcome (retention)
Local
researchers would use their own methods to gather information about the type of retention
they care about, e.g., whether students leave the institution, whether students continue
in the major. They might complement this information with survey questions from the CSI,
such as:
- Indicate how
likely you are to return to school next term (multiple choice from "very likely"
to "very unlikely"; if given early in the term may suggest another explanation
for attrition ñ perhaps the student would have dropped out anyway.)
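As one illustration of how local retention records might be combined with the survey item above, the sketch below compares actual return rates for students who did and did not say they were very likely to return. The records and field names are hypothetical:

    # Hypothetical join of early-term survey answers with registrar records of
    # whether each student actually enrolled the following term.
    students = [
        {"id": "s1", "said_very_likely": True,  "returned": True},
        {"id": "s2", "said_very_likely": True,  "returned": True},
        {"id": "s3", "said_very_likely": False, "returned": False},
        {"id": "s4", "said_very_likely": False, "returned": True},
    ]

    def retention_rate(records):
        return sum(r["returned"] for r in records) / len(records)

    likely = [s for s in students if s["said_very_likely"]]
    others = [s for s in students if not s["said_very_likely"]]
    print(f"Returned, 'very likely' responders: {retention_rate(likely):.0%}")
    print(f"Returned, other responders:         {retention_rate(others):.0%}")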
The foregoing list is not meant to be a summary of a final survey; these questions are meant
to indicate the range of questions included in the Current Student Inventory tool kit. The
Current Student Inventory includes almost 500 questions, many of which are specific to the
most common hopes and concerns about some of the most commonly used technologies.
Further Development of the Flashlight
Program
The Flashlight
Program, begun in 1993, is developing along three tracks:
- The tool kit
itself. Its first component, the Flashlight Current Student Inventory, has now been site
licensed by 130 colleges, universities, schools and companies around the world. The next
component scheduled for release is the Cost Analysis Handbook material being developed by
our colleagues at Indiana University-Purdue University Indianapolis (IUPUI), with
assistance from teammates at the Rochester Institute of Technology and Washington State
University;
- Web-based
authoring and data analysis capability being developed by our colleagues at Washington
State University, and
- Training and
consulting materials.
More
information and background material about the Flashlight
Program can be found on the World Wide Web.
References
Brown, Gary
(1998) "Flashlight
at Washington State University: Multimedia Presentation, Distance Learning, and At-Risk
Students at Washington State University," in Stephen C. Ehrmann and Robin Etter
Zuniga, The Flashlight Evaluation Handbook (1.0), Washington, DC: The TLT Group.
Chickering,
Arthur and Stephen C. Ehrmann (1996), "Implementing
the Seven Principles: Technology as Lever," AAHE Bulletin, October, pp.
3-6.
Chickering,
Arthur and Zelda Gamson (1987) "Seven Principles of Good Practice in Undergraduate
Education," AAHE Bulletin (March).
Ehrmann, Stephen C. (1995) "Asking the Right Questions: What Does Research Tell Us About
Technology and Higher Learning?" in Change: The Magazine of Higher Learning, XXVII: 2
(March/April), pp. 20-27.
Ehrmann,
Stephen C. (1998), "What Outcomes Assessment
Misses," In Architecture for Change: Information as Foundation.
Washington, DC: American Association for Higher Education.
Harrington,
Susanmarie (1998) "The Flashlight
Project and an Introductory Writing Course Sequence: Investigation as a Basis for
Change," in Stephen C. Ehrmann and Robin Etter Zuniga, The Flashlight Evaluation
Handbook (1.0), Washington, DC: The TLT Group.
Harvey, Patti
(1998) "Technology Integration
in Teaching and Learning Environment," downloaded from the World Wide Web on Nov.
4.
Oliver, Martin
and Grainne Conole (1998) "Evaluating Communication and
Information Technologies: A Toolkit for Practitioners," in Active Learning 8
(July). Downloaded from the World Wide Web on Nov. 4, 1998.