Lessons in Developmental Evaluation: Iterate & Adapt


Welcome back to lessons in developmental evaluation! In each post, I’m tackling one of my tips from my American Evaluation Association blog post about the increasing importance of developmental evaluation in the changing energy landscape:

  1. Incorporate evaluation early. 
  2. Be flexible in your approaches. 
  3. Iterate and adapt. 

In my last post, I discussed the importance of retaining flexibility in your evaluation approaches. In this final installment, I will discuss the iterative process of developmental evaluation and how it equips programs to respond to emergent needs and changes in program design.

The Evaluation Cycle

In developmental evaluation, the goal of any evaluation work is to inform continuous improvement of the program and drive adaptive change. As such, programs that use developmental evaluation are constantly evolving, and each evolution creates new research needs. For programs that operate in changing markets, developmental evaluation can also respond to emergent market conditions, testing whether previously successful program components remain effective in a changed environment.

To promote the continuous improvement of a program, developmental evaluations use a cyclical evaluation process. First, program staff and evaluators identify a new program design element or research question and design the evaluation research around that question (as discussed in the previous post). Next, evaluators conduct the research and analyze the results. Most importantly, developmental evaluation does not stop with the presentation of results; evaluators should help program staff understand the implications of the research, identify emergent questions, and promote targeted improvements to the program. As those changes are made (or as new questions are identified), evaluators return to the first phase of the cycle, connecting and building on their research throughout the program’s life.

Figure: The developmental evaluation cycle

Case Study of a Small Business Assessment Program

To demonstrate this process, let’s return to the case study we explored in the last post. EMI Consulting worked with a small business assessment program from pilot through maturity to promote continuous improvement as the program evolved. Over the course of the program’s life, we completed a variety of evaluation activities based on emergent findings and changes to the program.

Throughout this process, we used the developmental evaluation cycle to identify new research needs and build on past research. For example, after the program’s launch, the utility added a free energy-saving kit to the program design as a way to add value for customers and help auditors sell the program. The kit included a few free energy-saving measures: some LEDs, an advanced power strip, an energy monitor, a filter alarm, and a thermal heat detector. As the kit was rolled out, program staff wanted to know whether customers found the equipment useful, since providing it for free was not cheap. To understand which equipment was most useful, we surveyed customers and interviewed auditors about their experience with the kits. Interestingly, customers cited the more standard equipment (i.e., the LEDs and advanced power strips) as the most useful, while auditors thought customers were most excited about the more unique equipment (i.e., the thermal heat detector and energy monitor).

Based on these conflicting answers, the evaluation team designed customer interviews to examine in more detail which equipment customers found most useful. These interviews were illuminating: customers were initially most excited about the novel equipment, and including it in the kit helped them decide to participate in the program. Over time, however, they got the most value from the standard equipment they could easily incorporate into their business. From this research, the evaluation team found a use for four of the five pieces of equipment; the filter alarm provided neither the initial excitement nor the long-term value for most customers. Program staff eliminated the filter alarm from the kit and significantly cut the cost of delivering it to thousands of customers.

In this case, a summative evaluation approach would have had several disadvantages compared to the developmental approach we used: it would have been limited in both flexibility and timing. In a summative evaluation, the research plan is typically designed and set at the beginning of the evaluation period (often one year). Because the research design would be locked in for that year, the evaluation team could not have added clarifying research with customers and would have ended up with conflicting information that could not be resolved. The evaluation would also have been limited in timing, as summative evaluations typically report results at the end of the evaluation period. If the evaluation team had waited to report results until a year after the research began, more than a thousand additional filter alarms would have been delivered to customers, increasing the cost of the program.

By designing the research with a cyclical evaluation process in mind, the evaluation team was able to clarify emergent questions from preliminary research and provide clear recommendations for program improvement. This approach has led to several improvements, including:

  • Integrating “next step” information into program materials to push more customers to implement recommendations.
  • Increasing report customization, based on EMI Consulting recommendations, to better tailor assessments and recommendations to each customer.
  • Separating cost savings from energy savings to highlight dollar savings.
  • Adding a call center program to increase engagement with customers after the report and motivate them to complete recommendations.

Summary

Developmental evaluations are all about promoting adaptive change. As such, evaluators and clients should plan for iterative evaluation cycles, with the flexibility to adjust evaluation approaches based on emergent findings and changes in market conditions. Evaluations are most useful when they are a valuable part of the program development process, and when evaluators work with program staff to continuously improve the program.

As the energy efficiency landscape continues to experience rapid change, utilities and evaluators alike need flexible, iterative approaches to ensure their programs are successful. Developmental evaluation provides a roadmap for navigating this change. And in an unknown land, who doesn’t like a map?

Find us at IEPEC 2019 to hear more about our developmental evaluation work, including how you can incorporate deep data insights into this framework.