Program Evaluation: Ten Aphorisms
Some months ago, I was asked to give a brief overview of the most important principles of program evaluation. The audience was directors of projects that had received federal funding to improve teacher quality through professional development programs. The talk was extemporaneous, so no verbatim text exists, but in response to requests for copies of the talk, an outline is posted here.
The basic principles of program evaluation are simple. They are mostly a matter of common sense. That’s why they can be summarized in aphorisms—pithy statements that contain a lot of truth. Some might see the following as a list of platitudes or as an insult to their intelligence. Still, these simple principles are overlooked remarkably often. In my experience, when a project evaluation goes wrong, the problem can be traced to ignoring one or more of the principles sketched in these ten aphorisms.
1. If a program does not have a theory of change, it cannot be fully evaluated.
- Your project proposals say that certain planned actions will lead to specific outcomes. Your theory of change explains how the actions will lead to the outcomes. Without that explanation of how actions lead to outcomes, your proposals will have limited value as a guide to practice now and in the future.
2. If you don’t know where you started, you can’t tell how far you’ve come.
- Without baseline data you can’t gauge your progress. It is easy to overlook the obvious importance of baseline data. Think of how often people forget to check the odometer before beginning a trip.
3. If you want to measure change, don’t change the measure.
- Multiple measurements are good, advantageous even, but to compare before with after, use the same technique. Don’t use an attitude survey before and a test of knowledge after, or vice versa.
4. To collect useful evidence, it helps to know what you are going to collect.
- It’s always good to be ready for surprises, but a data collection plan is essential. If you want to be surprised, you have to have expectations.
5. If you don’t know how to analyze the data you plan to collect, don’t collect it. Or change your plan.
- Your time is too valuable to collect data hoping that it might someday be useful to somebody for something.
6. To use the data you’ve collected, you’ve got to be able to find it.
- A data management plan is as important as a data collection plan and a data analysis plan.
7. Everyone loves a good story, but nobody trusts stories very much as evidence.
- Narratives are essential in any good project evaluation, in part because they help describe causal processes (the how of number 1 above), but narratives are not very effective for establishing that outcomes occurred or for estimating their size.
8. Statistically significant findings may not be important—and vice versa.
- Statistical significance and p-values can sometimes provide evidence of project success, but they are not usually the best evidence—even for experiments. Effect sizes and confidence intervals are better for quantitative outcomes.
9. Nobody cheats on an eye exam.
- Why not? Because all parties have a vested interest in getting good data and drawing accurate conclusions. Project evaluation should be like that. It should not be an adversarial relationship between the project managers and evaluators.
10. This year’s evaluation is the heart of next year’s application.
- A rigorous evaluation of your current project is the best preparation for success in the next round of grant competitions. Thinking about that fact might give you an incentive to persist when evaluation work is getting tiresome. Remember that you own the evaluation. It is not only for external compliance and to describe what you’ve done in the past. It can help you plan for the future.