Self-Studies: Activities and Outcomes

 


To get insight into how to improve outcomes resulting from uses of technology, you'll probably need to study outcomes somehow, and you'll certainly need to study activities (patterns in how students and faculty use the technology and other resources in ways that produce those outcomes). Those outcomes may be good, bad, or, most likely, some of each (depending in part on who is looking at them). We call this combination of a technology used for an activity in ways that produce an outcome a triad. The biggest initial step in the design of your self-study is deciding which triads, or pieces of triads, will be the focus of your inquiry.
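To make the triad idea concrete, here is a minimal sketch in Python; the Triad record and its field names are our own illustration, not part of any Flashlight tool:

```python
from dataclasses import dataclass

@dataclass
class Triad:
    """One technology-activity-outcome combination a self-study might examine.
    Field names are illustrative only, not part of any Flashlight tool.
    """
    technology: str  # the tool in use
    activity: str    # how students and faculty use it
    outcome: str     # the result that use is believed to produce

# A self-study begins by listing candidate triads, then choosing a focus:
candidates = [
    Triad("e-mail", "out-of-class faculty-student contact", "stronger engagement"),
    Triad("CAD software", "virtual design-team projects", "engineering design skill"),
]
focus = candidates[1]  # e.g., an engineering department's choice
print(f"Focus: {focus.technology} -> {focus.activity} -> {focus.outcome}")
```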

Now we'll unpack some of these ideas: different ways of looking at outcomes and activities.

Study Outcomes/Value-Added? Process? Or Both?

There are several related ways of analyzing outcomes:

  1. Outcomes (what students can do when they've finished the program) that change quantitatively (for example, are students learning to be better citizens, or better managers, than was the case five years ago?). A term sometimes used for this type of study is 'effectiveness.' We'll return to this perspective below.
  2. Outcomes that change qualitatively. For example, our geography students 20 years ago knew nothing about Geographic Information Systems, while today's graduates are quite adept in using GIS for analysis of a variety of problems. Their skills are different from, and (from today's perspective) more valuable than, the skills they would have if today's graduates were still learning only what our students were learning twenty years ago. Here the evaluative question is whether the outcomes are valuable (and from which perspectives).
  3. Value-added (how those capabilities contrast with the capabilities students had when entering the program). Outcomes and value-added might improve either because the content is changing toward more valuable topics or skills, because effectiveness has improved, or both. Technology can figure in both kinds of improvement.

Studies that look at outcomes from two or all three of these perspectives are likely to provide better guidance for improvement than studies that look at just one or the other.
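To make the contrast among these perspectives concrete, here is a minimal sketch, assuming made-up entry and exit rubric scores (on a 1-5 scale) for two hypothetical cohorts:

```python
# Hypothetical entry/exit rubric scores (1-5 scale) for two cohorts;
# the cohort names and numbers are invented for illustration only.
cohorts = {
    "five_years_ago": {"entry": [2.1, 2.4, 2.0], "exit": [3.0, 3.2, 2.9]},
    "this_year":      {"entry": [2.2, 2.3, 2.1], "exit": [3.6, 3.8, 3.5]},
}

def mean(xs):
    return sum(xs) / len(xs)

for name, scores in cohorts.items():
    outcome = mean(scores["exit"])                 # perspective 1: outcomes
    value_added = outcome - mean(scores["entry"])  # perspective 3: value-added
    print(f"{name}: outcome {outcome:.2f}, value added {value_added:.2f}")
# If value added rises across cohorts while entry scores hold steady,
# effectiveness has probably improved.
```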

Self-studies can sometimes skip measuring outcomes IF there is broad agreement (ideally based on good research) about the kinds of activities that produce superior outcomes. [For more on this and related topics, see "What Outcomes Assessment Misses."]

Choosing Outcomes Most Likely to Be Benefited (or Harmed) by Your Uses of Technology

If you want to study whether technology investments are paying off in better outcomes, it makes sense to study those outcomes most likely to be improved by your program's uses of technology. For example, many programs use technology to help improve students' skills in communication, in collaboration, in designing and composing, in inquiry, in being able to apply what they've learned once they leave college, ...

That may sound obvious, but we've seen programs do the opposite: studying whether a particular outcome has been improved by program investments in technology when the program has made no conscious effort to use the technology to improve that particular kind of learning. That makes no sense. Technology is a tool, but outcomes won't be achieved unless the tool is used for that purpose.

What specific kinds of learning can technology be used to improve? For the purposes of institutional self-study, the institution might look for patterns of improvement that characterize many different fields. For example, are skills of inquiry improving because many or all departments are making more of an effort (helped by using technology as a research tool, a simulation tool, a tool to aid communication with other researchers, etc.)? This TLT Group web site, "Beyond Computer Literacy: Technology and the Nature of a College Education," describes some dimensions of learning where the use of technology is particularly valuable.

Departments may focus their studies on whether and how learning is improving with the aid of technology. For example, an engineering department might focus on whether and how students are becoming more skilled in engineering design, and on the role of computer-aided design software, virtual design teams, and other technology-enabled instructional innovations in helping foster those skills.

Evidence About Outcomes
  • Data gathered about outcomes themselves: what students have learned and what they can do with what they have learned. Such data can be gathered in several ways:
    • Capstone courses and portfolios can be used to describe outcomes as the student moves through and out of the program. (Rubrics can be used to analyze whether student projects and interviews contain evidence of the outcomes being investigated; a small sketch of such a rubric tally follows this list.)
    • Surveys can ask students about outcomes from particular courses, from the courses and experiences they have had in the current year, or from their experiences to date at this institution. (Self-assessment of learning can provide reasonably reliable, valid data if the questions are understandable and valid and if the students do not believe they will be rewarded or penalized for their answers.) For TLT/Flashlight subscribers, here is a first draft of a student survey about learning outcomes (as well as relevant activities) that illustrates some of the types of questions that might be used; if you don't recall the institutional username and password, click here to find your local contact.
    • Alumni can be surveyed, asked to comment anonymously on their mastery of the skill in question and on the strengths and weaknesses of their education in this area.
    • Studies should also attend to access and equity outcomes, such as who can enter, learn in, and complete the program (not just how many learners, but what kinds of learners), and to costs.
    • Occasionally testing of learning can be useful, but college outcomes are usually too varied (from one student to the next, and from one year to the next) and too sophisticated to be captured by traditional tests. There are exceptions -- for example, in physics some tests of conceptual understanding have been of value.
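To illustrate the rubric approach mentioned in the first bullet above, here is a minimal sketch that tallies evidence of one target outcome across portfolios; the rubric dimensions, scale, and ratings are all invented:

```python
# Hypothetical reviewer ratings of capstone portfolios on invented rubric
# dimensions (0 = no evidence, 1 = partial, 2 = strong).
portfolio_ratings = {
    "student_01": {"inquiry": 2, "collaboration": 1, "communication": 2},
    "student_02": {"inquiry": 0, "collaboration": 2, "communication": 1},
    "student_03": {"inquiry": 2, "collaboration": 2, "communication": 2},
}

target = "inquiry"  # the outcome this self-study has chosen to examine
with_evidence = sum(1 for r in portfolio_ratings.values() if r[target] >= 1)
share = with_evidence / len(portfolio_ratings)
print(f"{share:.0%} of portfolios show at least partial evidence of {target}.")
```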

Choosing Activities (The Seven Principles)

The previous section described studies that focused on whether particular skills or bodies of knowledge were better mastered when technology was used in particular ways. In contrast, this section describes self-studies that ask whether uses of technology are improving the effectiveness of the learning process.

The birth of the Flashlight Program can be dated to a request for advice from the Maricopa Community College District in 1992.  "How can we measure," they asked, "whether and how our investments in technology are influencing change in program quality?"  Maricopa is a gigantic multi-campus system with hundreds of majors and thousands of courses; measuring changes in outcomes or value-added was not feasible. However, we do know that certain teaching/learning activities can improve outcomes. So Flashlight planning began by looking for ways to describe the contributions of technology use to quality (e.g., attracting students to spend more time studying; enhancing active learning; improving interaction between faculty and students) that would be valid and important no matter what courses the student took.

Let's imagine a program that teaches students purely through excellent lectures (in which students sit silently and take notes), excellent textbooks, and a single final exam. That's all there is to the educational program.  How would you improve learning outcomes for that program? One answer, according to both educational research and faculty common sense, is by applying the seven principles of good practice in undergraduate education (e.g., better faculty-student contact, more student-student collaboration, more active learning, more frequent feedback and assessment, more time and energy spent studying, etc.). As it turns out, these are also activities for which faculty and students often use technology.

So one way to measure effectiveness is to measure the prevalence of activities that fit the seven principles. If those activities are increasing over time, both research and common sense agree that outcomes are probably improving. And if one reason for the increase is the use of technology (for example, faculty and students are bonding more because they communicate more via e-mail, which also changes the nature of their face-to-face conversation), then there is evidence that technology use is contributing to improved outcomes, even if the outcomes cannot be directly measured. (If there is a way of measuring outcome changes, e.g., improvements in retention, these data on activities and technology use can help explain the change.)
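Here is a minimal sketch of that "prevalence over time" idea, assuming invented counts from a hypothetical annual survey:

```python
# Hypothetical annual survey results: how many students report doing an
# activity "often," grouped by seven-principles category. All numbers invented.
surveys = {
    2021: {"n": 400, "faculty-student contact": 120, "active learning": 95},
    2023: {"n": 420, "faculty-student contact": 180, "active learning": 150},
}

for year in sorted(surveys):
    data = surveys[year]
    for activity in ("faculty-student contact", "active learning"):
        share = data[activity] / data["n"]
        print(f"{year} {activity}: {share:.0%} report 'often'")
# Rising shares are indirect evidence that outcomes are improving, even
# when the outcomes themselves are not measured directly.
```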

Evidence About Activities

Data can also be gathered about the activities of learning. Are students frequently practicing the skill, and over many courses? For example, how often are students being asked to carry out inquiries? Are they getting feedback on their skills of inquiry (or just on whether they got correct answers)? What problems are hindering some students from carrying out these inquiries? For TLT/Flashlight subscribers, here is a first draft of a student survey that focuses on several such skills (and their outcomes); if you don't recall the institutional username and password, click here to find your local contact.

Several subscriber-only Flashlight tools have been designed for studying these kinds of activities, including the Current Student Inventory item bank (in Flashlight Online and also in the Flashlight Evaluation Handbook), the Flashlight Faculty Inventory, and the EEUWIN survey (available as Template zs6232 in Flashlight Online). For an example of how these kinds of tools can be used to create several different, complementary strategies for evaluating and improving programs, click here to see sample surveys designed to help improve distance and blended (hybrid) learning.

Another invaluable resource for studying the kinds of activities that influence a wide variety of outcomes is the National Survey of Student Engagement (NSSE). Like Flashlight Online, NSSE draws in part on the seven principles of good practice, but NSSE and its companion surveys are benchmarking tools: large numbers of institutions participate, using the same items, so pooling and comparison of data is possible. Yet a limited amount of tailoring of the instruments is also possible: coalitions of institutions can add additional items to the basic NSSE instrument. (Unfortunately, each institution can belong to, at most, one such consortium.)
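To illustrate what benchmarking against pooled data can look like, here is a minimal sketch; the items, scale, and numbers are invented and do not reflect NSSE's actual instrument or data format:

```python
# Hypothetical mean scores (1-4 scale) on shared engagement items; this is
# an invented layout, not NSSE's actual data format.
pooled_means = {"asked questions in class": 2.9, "worked with peers": 3.1}
our_means    = {"asked questions in class": 2.5, "worked with peers": 3.3}

for item, pooled in pooled_means.items():
    gap = our_means[item] - pooled
    direction = "above" if gap >= 0 else "below"
    print(f"{item}: {our_means[item]:.1f} vs. pooled {pooled:.1f} "
          f"({direction} by {abs(gap):.1f})")
```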

Electronic portfolios also can be used to store and share evidence about activities: the assignments to which students respond and the feedback they received on those assignments, for example.
