Despite bipartisan support in Washington, DC, dating back to the mid-1990s, the “what works” approach has yet to gain broad support among policymakers and practitioners. One way to build such support is to increase the usefulness of program impact evaluations for these groups. We describe three ways to make impact evaluations more useful to policy and practice: emphasize learning from all studies over sorting out winners and losers; collect better information on the conditions that shape an intervention's success or failure; and learn about the features of programs and policies that influence their effectiveness. We argue that measuring the treatment contrast between the intervention and comparison condition(s) is important for each of these changes. Measurement and analysis of the treatment contrast will increase costs, however, and policymakers and practitioners already see evaluations as expensive. Therefore, we offer suggestions for reducing costs in other areas of data collection.
Available at: http://works.bepress.com/rebecca_maynard/16/