Lessons in Developmental Evaluation: Evaluate Early

Last month, I wrote a post on the American Evaluation Association blog about the increasing importance of developmental evaluation in the changing energy landscape. In that post, I mentioned three tips for incorporating elements of developmental evaluation into evaluation design:

  1. Incorporate evaluation early. 
  2. Be flexible in your approaches. 
  3. Iterate and adapt.

Over the next month, I’ll be discussing each one of those elements in detail. Today, we tackle the first step: building evaluation into program design early.

CHALLENGES WITH SUMMATIVE EVALUATIONS

In summative evaluations (often considered the traditional approach), evaluators begin their work after the program has been running for at least a year, so they can measure whether the program has met its goals. While this approach can provide valuable post-program assessment, it runs into a few challenges:

  1. Summative evaluations do not support the planning process while the program is first being developed, and thus miss the chance to ensure the program is on track from the beginning.
  2. Summative evaluations often arrive too late to provide maximum value. Because the research comes after the program is implemented, results may not reach program staff quickly enough for them to make impactful changes.
  3. Summative evaluations have difficulty responding to needs that emerge as the program progresses. They are typically less nimble than developmental evaluations and can struggle to fold emergent issues and program design changes into the evaluation design. Stay tuned for the second part of this series for more on this topic!

BENEFITS OF EARLY EVALUATION

Developmental evaluations are most valuable when evaluators are engaged early in the program design process, supporting design efforts alongside implementation staff before the program launches. To help demonstrate the value of early evaluation, let’s dive into three examples of early-stage evaluation activities: logic models, program design assistance, and peer program research.

LOGIC MODELS

Logic models are critical building blocks for a successful program—and for successful evaluations. Logic models are diagrams that concisely show the relationships between program activities, activity outputs, and the short-, medium-, and long-term outcomes the program intends to create. Logic models answer the question, “Why are we providing this program?” and ensure that the activities support the program goals. These models are visual representations of the program theory, and they provide several benefits:

  1. Logic models are used during program planning to identify the ways the program will achieve its desired impacts on the market, revealing any gaps in program design.
  2. Logic models are used to identify key metrics of success that the program will want to track over time.
  3. Logic models help provide clarity for the team working on the program, allowing them to collaborate effectively with a shared vision of how the program’s desired outcomes will be accomplished.
  4. Logic models also facilitate more effective evaluations: they clarify the program’s objectives for evaluators and give both the evaluation and implementation teams key outputs to track over time, helping ensure the outcomes are achieved.
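
To make that structure concrete, here is a minimal sketch (in Python) of a logic model represented as a simple data structure. The program, activities, outputs, and outcomes below are hypothetical illustrations, not drawn from any real program:

    from dataclasses import dataclass

    @dataclass
    class LogicModel:
        activities: list       # what the program does
        outputs: list          # direct, countable products of the activities
        outcomes_short: list   # changes expected soon after launch
        outcomes_medium: list
        outcomes_long: list    # the market impacts the program ultimately targets

    # Hypothetical residential rebate program -- all entries are illustrative.
    rebate_program = LogicModel(
        activities=["Offer equipment rebates", "Train participating contractors"],
        outputs=["Rebates issued", "Contractors trained"],
        outcomes_short=["Customers aware of efficient equipment options"],
        outcomes_medium=["Greater market share for efficient equipment"],
        outcomes_long=["Sustained energy savings in the service territory"],
    )

    # Each output doubles as a metric the team can track from day one.
    for metric in rebate_program.outputs:
        print("Track over time:", metric)

Even this bare-bones representation makes gaps easy to spot: an outcome with no activity feeding it, or an activity with no trackable output.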

PROGRAM DESIGN ASSISTANCE

In addition to documenting the program theory through logic models, evaluators can support the design of the program itself. Using the logic models, evaluators can make sure the program is structured so that the linkages between activities and desired outcomes can be tested. This could include:

  • Designing a randomized controlled trial for the program (a simple randomization sketch follows this list)
  • Advising program staff on what information to collect during site visits or as part of applications
  • Conducting early reviews of key metrics to validate assumptions

This early design assistance builds consensus between program staff and evaluators on the methods and timing of future evaluations. And it ensures that, when any evaluation does occur, the necessary data to assess program performance is available.

PEER PROGRAM RESEARCH

When designing a new program, learning about the challenges and successes of similarly designed peer programs can be incredibly useful. Peer program research is often completed once a program has matured, to test its performance relative to similar programs; however, that is not the only use of this type of benchmarking. Nothing is more valuable than experience, and peer utilities can share key insights into what has worked well for engaging customers, generating savings, or changing the market. Similarly, peer utilities can share any challenges they have experienced in rolling out their programs, as well as any barriers they’ve seen for customers, trade partners, or other stakeholders.

By learning about likely challenges before the program launches, implementation staff can proactively design the program to avoid these pitfalls. Additionally, evaluation staff can use this research as a guidebook of key successes and challenges to watch for as the program develops.

SUMMARY

Incorporating evaluation early in a program’s design can be incredibly beneficial for program staff. Early evaluation can help to build consensus, ensure evaluability, and ultimately lead to a more successful program. Stay tuned for my next post, when I’ll discuss how this developmental approach changes as the program is implemented and new questions emerge.