Planning Grant Competitions to Improve Academic Programs


These materials are for use only by institutions that subscribe to The TLT Group, by participants in TLT Group workshops that feature this particular material, and by invited guests. The TLT Group is a non-profit whose existence is made possible by subscription and registration fees. If you or your institution are not yet among our subscribers, we invite you to join us, use these materials, help us continue to improve them, and, through your subscription, help us develop new materials! If you have questions about your rights to use, adapt, or share these materials, please ask us.

Do you have, or are you considering developing, a mini-grant competition for faculty, in order to select courses, or sets of courses, for improvement?  For ideas about how to design and run such competitions, read on!

1. Goals for your initiative: what outcomes, and should they be made clear to applicants?

2. A few basic issues in writing funding guidelines

3. Will you need to evaluate the impact of your initiative?

4. Is it important to 'Close the Resource Loop'?

5. Consider funding the redesign of clusters of courses

6. The pros and cons of competing for grants

7. Do applicants already know enough to write great proposals, or should the competition help them?

8. Building a culture of evidence: require recipients to assess and report

9. How can you make a grants competition more competitive?

10. Summary

 

Many institutions and systems make funds available to faculty members for course improvement.  Some are making such grants available on a competitive basis for course redesign: sometimes with very specific goals stated up front, sometimes not.[1] 

 

Here are a few suggestions for organizing such a grants program. This is not a 'how to'; instead, it's intended as a set of questions about issues that might otherwise be ignored during the design of the initiative.

 

1. What are the initiative’s outcomes supposed to be? Should they be made clear to applicants?

 

What do you want this grants program to achieve?  To illustrate what we mean by the question, here are a few possibilities, one or more of which might be stated explicitly in the initiative’s guidelines:

  1. Effectiveness/content: Improve the capabilities of students graduating from the institution. You could challenge applicants to define their own goals, or set a goal that would remain the same over several years of grants, or change goals each year if you think one year of funding is enough to do the job. For one list of goals about which there is agreement among many academic leaders and employers, see the ‘essential outcomes’ described in AAC&U’s LEAP program at http://www.aacu.org .

  2. Increase enrollment and graduation rates (total and/or for certain types of students): Attract new students to the department and/or retain students who might otherwise be lost. For example, in the late 1980s and early 1990s, the Rochester Institute of Technology devoted almost all of its small faculty grants to enriching and expanding distance education.

  3. Foster a particular type of teaching/learning activity (e.g., online courses, learning communities, blended courses, greater use of active learning, greater use of some particular piece of software)

  4. Far-reaching impact beyond the recipients of the funds; pilot tests: Seek projects that can also create a wave of improvements in other courses (not themselves grant-supported) as other faculty hear about this idea and try it too.  If this is the goal, the grants initiative should be seeking ideas which, once tested and made visible, would be low threshold for others to adopt. And the initiative should prepare to invest in active dissemination of those ideas.

  5. Save money by redesigning courses that are currently costing far too much money or time. Carol Twigg’s Center for Academic Transformation (CAT) program supported this not only via guidelines but also by providing tools and instruction in cost modeling.

  6. Explore divergent improvements: Help your most innovative faculty do whatever they see as most valuable.

 

I would classify goals 1-4 above as campaigns: because the guidelines are the same for everyone, the funded projects would be selected not only on their own merits but also for how they reinforce one another in the advancement of the common goal, increasing the impact of the funding initiative. And in many cases, the institution or system might want to fund other activities that complement the faculty projects (e.g., developing evaluation tools that the projects could share; a dissemination program). 

 

In contrast, goal #6 is an exploration: the applicants set the agenda, and the expectation is that each redesigned course would push in a different direction. Explorations can be politically safer, but projects funded this way are sometimes difficult to institutionalize. The ideas may die out when the original faculty begin teaching other courses.

 

The TLT Group has seen many interesting mini-grant programs around the country. One powerful series of mini-grant activities has been run by Johnson C. Smith University in Charlotte, NC. For example, one grant JCSU received has been deployed as mini-grants to faculty in order to support the continuing development of learning communities at the institution. 

 

Caveat: One common goal of mini-grant programs is dissemination - funding faculty member A in hopes that faculty members B, C, and D will somehow benefit from faculty member A's example.  Such programs rarely are designed to produce this result, however: it's a hope rather than a plan. And the hope is only rarely fulfilled.  If this kind of dissemination is a goal for your program, that ought to influence the kinds of projects you fund, the obligations your grantees accept, and the kinds of services you offer to help them meet those obligations. For example, you'd want to fund ideas that can be easily adopted/adapted by colleagues who don't themselves have a grant.  You may want to sponsor symposia to give your grantees an occasion to tell their peers about their work. If you'd like to brainstorm about how to foster wider dissemination of the fruits of faculty mini-grants, give us a call.

 

2. A few basic issues in writing funding guidelines

 

The task of deciding what to fund requires the exercise of judgment. Perhaps the most basic question in creating a process is the extent to which the criteria for funding will be determined in advance and publicized to the applicants.

  • In favor of announcing explicit criteria in advance: the guidelines themselves can influence (a little) what potential applicants think. Rubrics offer a useful format for explaining criteria. If the competition attracts two applicants for every available grant, explicit guidelines may influence the plans of all applicants (including but not limited to those who are funded), and the unsuccessful applicants may go on to do some of what they proposed.  Explicit criteria can also make it easier to explain rejection to unsuccessful applicants.

  • In favor of short guidelines that say relatively little about the criteria for funding: they give the funder more freedom to make decisions later in the game ("I know it when I see it"), which also allows covert criteria to be used more easily. Of course, if the competition is run again and unsuccessful applicants suspect that covert criteria were used to make round 1 decisions, they may be less likely to apply again.

I favor explicit criteria. To me they seem fairer, and more likely to influence both successful and (initially) unsuccessful applicants. Writing funding criteria is a balancing act. For example, it's usually important to be more explicit about the problem to be solved than about the methods used to solve it (if you want applicants to be creative and to draw on their personal energy). Giving faculty options for how to solve the problem or meet the need also helps avoid accusations of restricting academic freedom.

 

3. Will you need to evaluate the impact of your initiative?

 

Is it important for you to be able to document the outcomes of this funding initiative over the next few years (e.g., in order to provide evidence that can attract fresh resources)?   

 

In our experience, this is easier to accomplish when the mini-grants support projects that are all working toward the same goal (e.g., mini-grants that are all helping faculty use assessment to improve their teaching; mini-grants that all strengthen writing-intensive courses; mini-grants that all seek to reduce student attrition).  Such evaluations are even more feasible if other elements of your initiative are designed to help document that goal (e.g., finding or developing tools to evaluate that outcome; the original course redesign program created tools for assessing costs, and provided training to applicants in how to use those tools).

 

In any case, start thinking about how to design (and fund) a continuing evaluation of your initiative:

  1. Start immediately. If possible, start before funding projects. For example, if problem-based learning is to be a goal for the new funding program, do a study immediately of faculty members who have already attempted to use PBL techniques: what’s made that easy? What’s made it difficult? Findings can help you design your funding program.

  2. Do studies that can provide feedback as the initiative unfolds. Who applied and who didn't? Why? If your intent is that projects work together, how is that working out? Are there new needs or opportunities emerging?

  3. Do studies that provide documentation of program successes (and failures) -- findings that you can use to create and justify future funding requests, for example.

For a brief overview of our suggestions for how to design an evaluation that can help improve a program, see "The Flashlight Approach." For more ideas from The TLT Group on how to design productive evaluations, see The Flashlight Evaluation Handbook.

 

 

4. Is it important to ‘Close the Resource Loop’?

 

In evaluation jargon, “closing the loop” means using evaluative findings to improve a program.  We’ve coined the phrase ‘closing the resource loop’; it means improving an academic program in a way that attracts more resources to that program.  For example, in 1980 the math department at the University of California Santa Barbara used a FIPSE grant to create a new minor in applied mathematics consisting of three new applied math courses (all using Apple II courseware they designed) plus an internship program that placed these applied math students in community agencies.  This new minor was designed to attract a stream of new math majors (and budget) as well as to provide continuing community service, which could also attract support for the University and the math department.   The stakeholders in this case were the students (who could decide whether or not to take math courses) as well as the community agencies.

 

You could consider soliciting projects that, if successful, would either attract fresh resources (e.g., enrollment, retention, grants, community support) or create an important reallocation of resources (e.g., reallocating faculty time from less fulfilling to more fulfilling aspects of teaching; better uses for scarce space).

 

5. Consider funding the redesign of clusters of courses, in order to achieve a cumulative outcome

 

What should each of the funded projects do?

  1. improve some element of a course (e.g., new software, new teaching technique, better assessment),

  2. redesign an individual course (e.g., changing balance between homework and classwork; altering the sequence of topics; changing the goals of the course), or

  3. redesign a cluster of courses in order to achieve some cumulative goal.

The third of these options has special promise, but we hardly ever see it tried. Redesigning a cluster of courses is an especially good strategy if you're trying to achieve visible, valuable outcomes. We have Writing Across the Curriculum programs because having just one writing-intensive course wouldn't produce a discernible outcome for the institution's graduates (even for the graduates who happened to take that one course). Having one distance learning course won't usually produce much impact on departmental enrollment. Funding a cluster of courses is also more likely to help you close the resource loop. Back in 1980, if UCSB had created just one applied math course, it probably wouldn't have changed enrollments in the math department, and wouldn't have provided a sufficient base for the internship program.

 

The courses in such a cluster could be:

  • Parallel, for example:

      • mini-grants for faculty-librarian partnerships to improve information literacy education across many courses;

      • courses that all use the same innovative physical facility, such as the emporium model or studio courses;

      • lower division courses all developing student skills in digital writing; or

  • Sequential, so that the first course in a sequence begins fostering a skill which is deepened and exploited in subsequent redesigned courses, internships, etc. For example:

      • The UCSB applied math grant developed math skills in a three-course sequence and then exploited and assessed those skills with the new internship program.

      • A department might propose to clarify outcomes, develop model assignments to assess progress, and document those assessments in student e-portfolios.

6. The pros and cons of competing for grants

 

How competitive should a grants program be? One view is that competition should be just sufficient so that the initiative is not forced to fund bad projects or to leave money unspent. Any more competition than that wastes the time of applicants who wrote good proposals and then weren't funded. Creating a competition that rejects too many applicants may also be politically risky.

 

That school of thought is common, but consider another way of thinking: knowing that the prize is worth winning, but difficult to win, can improve the thinking of applicants. In other words, really competitive programs can elicit better proposals than would have been submitted had the program been less competitive. In a piece called "The Paradox of Risk," Steve Ehrmann has written about the potential of highly competitive grants programs to find innovations with an unusual chance of success. An external evaluation of the highly competitive Fund for the Improvement of Postsecondary Education (FIPSE), which solicits about 40 ideas for every one eventually funded, found that something like half the unsuccessful proposals were also implemented.

 

7. Are faculty ideas for redesign already adequate? Or should the initiative provide incentives and assistance to improve those ideas?

 

One benefit of competition can be to provide incentives and support for applicants to learn about the state of the art: when faculty around the world, in their own discipline and in others, have faced the challenge we face, what are the world-class solutions?

 

In The TLT Group's recent study of the iCampus program at MIT, we found that faculty members, even faculty members who were world-class in research, were often quite unaware of pioneering instructional practices and materials at other institutions. We also realized that internal funders rarely encouraged, supported, or forced applicants to discover and exploit that state of the art. Universities rarely use external reviewers for internal grant programs, for example. Nor do university funding criteria usually stress the importance of adapting state-of-the-art ideas or materials.

 

When funders do force and help applicants to discover the state of the art, the results can be amazing. For example, when John Belcher, a professor of physics at MIT, applied to the National Science Foundation for funding of his ideas for improving his electricity and magnetism course (a required freshman course), NSF turned down his original proposal and said it would only fund him if he first became more familiar with physics education research. They provided a small grant for this purpose because they liked the direction of his thinking. As a result, Belcher began working with Bob Beichner, developer of NC State's SCALE-UP program. That collaboration led Belcher to adapt many elements of SCALE-UP in his iCampus proposal. The result was TEAL: a complete transformation of MIT's undergraduate physics sequence, producing lasting gains in students' conceptual understanding of physics. That whole chain of events began with NSF's reviewers and staff insisting that Belcher understand the state of the art before he, or NSF, went forward.

 

We suggest that you give potential applicants enough time and assistance so that they can learn from others, rather than assuming that they already have frontier ideas, or that it doesn’t matter the first time around.

 

 

8. Building a culture of evidence: Require recipients to use evaluation to guide their efforts, and to 'publish' their results

Johnson C. Smith University in Charlotte, NC, has done as good a job as any institution in the country at using faculty mini-grants to leverage the development of a culture of evidence, while using that culture to help assure that the mini-grants succeed. As discussed in this case study, grant funds have been obtained periodically over the years for a set of projects, each advanced by competitive award of mini-grants to faculty. Faculty applicants know that they must (a) receive training in how to do evaluation (training often conducted by other faculty who have received such grants in the past), (b) use feedback to guide their efforts, and (c) report on that inquiry in order to receive the last payment of the grant.

 


9. How can you make a grants competition more competitive?

 

If you decide that competition is needed to create more successful projects, you’ll want to think about how to increase the competitive incentive.


The first question, of course, is whether the proposed monetary awards are big enough to do the job, and to attract attention.

 

Second, be sure there's enough time to research the problems and create competitive proposals. You might consider a two-stage proposal process, with the first stage focusing on needs assessment and a description of relevant resources for dealing with the problem. Successful applicants at stage 1 might get some funds (e.g., for travel) and/or support to help them learn what others are doing. Stage 2 proposals would be work plans.

 

I also suggest raising the stakes beyond the merely monetary. Is there a title or award that could also go to members of a winning team? If you anticipate running this competition annually, the incentives become more powerful as winners are lionized by their institutions and as more of the institution's own resources (monetary and otherwise) go to winning projects.

 

More competition means more losers, of course. Consider ways to help the unsuccessful applicants learn from and with the successful applicants. (This is a side benefit of having clear goals for all projects: the successful and unsuccessful applicants will have more in common.)

 

10. Summary

 

We've suggested several ideas that might result in a grants program for course redesign with greater and longer-lasting benefits for students and the institution(s) involved:

  • Consider designing your grants program as a 'campaign,' supporting projects that help one another achieve a common goal,

  • Conduct formative evaluations, starting right away, focusing on the activities that these redesigned courses are intended to change. This can help assess needs, guide funding, and make the case for attracting or reallocating resources based on your initial experiences;

  • Solicit projects that will each redesign a cluster or sequence of courses,

  • Require and assist applicants to be aware of the state of the art for tackling their problem, and

  • Maximize the competition for grants in order to stimulate the development of better projects, and find some way to network and support some or all of the unsuccessful applicants.

 

- Stephen C. Ehrmann, The TLT Group


[1] The Center for Academic Transformation has been popularizing this strategy and provides a variety of resources to support such work.  See http://www.center.rpi.edu/RA.htm.  Some of these notes are consistent with the CAT program’s goals while others push in a complementary direction.

 

 

 

Some Rights Reserved:  "Share it Forward" Creative Commons License by the TLT Group, a Non-Profit Corp.

PO Box 5643, Takoma Park, Maryland 20913
Phone: 301.270.8312 / Fax: 301.270.8110

To talk about our work or our organization, contact: Sally Gilbert
