Lessons in Developmental Evaluation: Design with Intention

Welcome back to Lessons in Developmental Evaluation! In each post, I’m tackling one of the tips from my American Evaluation Association blog post about the increasing importance of developmental evaluation in the changing energy landscape:

  1. Incorporate evaluation early. 
  2. Be flexible in your approaches. 
  3. Iterate and adapt.

In my last post, I discussed the importance of incorporating evaluation early in a program’s design process. This time, I will discuss the importance of retaining flexibility in your evaluation approaches, the process of designing a custom evaluation, and the increased value a custom design provides compared to “off-the-shelf” approaches.

DESIGNING DEVELOPMENTAL EVALUATIONS

As I mentioned in my last post, developmental evaluations work best when they are incorporated into early program design planning and piloting. During this period, developmental evaluations can be part of the program innovation process, taking emergent ideas and molding them into a cohesive, scalable offering. This process requires evaluators to design their overall approach and their specific activities with intention, taking a holistic view of the program’s design, anticipated challenges, and outcomes. Many summative evaluations take a more prescriptive approach, applying standard evaluation practices across all the programs they evaluate. With developmental evaluations, it is important to retain flexibility in your approach and to design a truly custom evaluation suited to each program’s needs.

CASE STUDY OF A SMALL BUSINESS ASSESSMENT PROGRAM

To show what this process looks like in action, I’ve developed a case study of a recent evaluation EMI Consulting completed for one of our utility clients. Our team had a long-standing partnership with this client, which gave us unique insight into their program development processes. As the client developed a small business assessment program, they sought out research and strategy support from EMI Consulting to ensure that the program would be effective from launch, rather than waiting a year or more to find out whether their strategies had the intended effects.

PROGRAM DESIGN & BACKGROUND

The new program had a fairly simple design: the utility would send auditors door-to-door to conduct free walk-through energy assessments for small business customers. Once the walk-through was complete, the auditors would provide energy-saving recommendations and a follow-up report of their findings. The utility hoped this program would help engage small business customers, increase their awareness of energy efficiency and the available rebate programs, identify opportunities for direct install measures, and ultimately lead to an increase in energy efficiency projects.

The program was designed with the following goals in mind:

  1. The assessments should be quick and easy for customers to complete.
  2. The assessments should be easily scalable, with the goal of reaching up to 5,000 customers in the first year of the program’s launch.
  3. The assessments should drive participation in other programs through the recommendations for energy-saving upgrades.


DEVELOPMENTAL RESEARCH DESIGN

Because the program had several ambitious goals, the evaluation team designed a custom evaluation that would be able to adapt to emergent program needs. The table below demonstrates how the evaluation team designed the evaluation with intention, based on the unique needs of the program.

Using this framework, EMI Consulting developed a loose guide for evaluation work in the first year of the program’s launch. This work included several flexible tasks that were staged to build upon one another, allowing room for the evaluators to adapt the research as the program developed (come back for the next post to read more about how to course-correct mid-research).

In addition to designing the overall evaluation approach based on the program’s needs, our team also designed each evaluation activity based on a specific emergent need or program objective. For example, when the program launched, it was critical to gather early feedback on the program to ensure that it would meet customer needs before reaching thousands of customers.

In a traditional summative evaluation, evaluators might have planned an end-of-year survey with participants to assess their experiences with the program. Instead, our developmental approach included a continuous rapid-feedback survey that launched one week after the first assessments were completed. Every two weeks, the utility sent EMI Consulting a list of all newly completed assessments, and EMI Consulting sent all new program participants invitations to the online survey. Everyone on this list had completed their participation in the program, including receiving the audit, any direct install measures, and the follow-up energy report with any recommendations to save energy. To provide the utility with rapid feedback on the program, EMI Consulting set up an online dashboard where utility staff could see live analysis of the quantitative results; an example of some of the results is shown below. At the end of each week, the evaluation team also sent the client a summary of new qualitative results and a synthesis of key findings.

Figure 1: Example results from the online survey dashboard
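The biweekly invitation cycle described above boils down to a simple de-duplication step: compare the utility’s latest list of completed assessments against the participants who have already been invited, and invite only the new ones. The short Python sketch below illustrates one way that step could be handled; the file names and column names (completed_assessments.csv, email, assessment_date) are illustrative assumptions rather than details of the actual evaluation.

```python
import csv
from pathlib import Path

# Hypothetical file names: the real data exchange between the utility and
# EMI Consulting is not described at this level of detail in the post.
ASSESSMENTS_FILE = Path("completed_assessments.csv")  # biweekly list from the utility
INVITED_FILE = Path("invited_participants.csv")       # running log of invitations sent


def load_invited_emails() -> set[str]:
    """Return the set of participant emails that have already been invited."""
    if not INVITED_FILE.exists():
        return set()
    with INVITED_FILE.open(newline="") as f:
        return {row["email"].strip().lower() for row in csv.DictReader(f)}


def find_new_participants() -> list[dict]:
    """Keep only assessments whose participants have not yet been invited."""
    already_invited = load_invited_emails()
    with ASSESSMENTS_FILE.open(newline="") as f:
        return [
            row
            for row in csv.DictReader(f)
            if row["email"].strip().lower() not in already_invited
        ]


def record_invitations(new_participants: list[dict]) -> None:
    """Append newly invited participants to the running log."""
    write_header = not INVITED_FILE.exists()
    with INVITED_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["email", "assessment_date"])
        if write_header:
            writer.writeheader()
        for row in new_participants:
            writer.writerow(
                {"email": row["email"], "assessment_date": row["assessment_date"]}
            )


if __name__ == "__main__":
    new = find_new_participants()
    # In practice the invitations themselves would go out through the survey
    # platform; here we simply log who should receive one this cycle.
    record_invitations(new)
    print(f"{len(new)} new participants queued for survey invitations this cycle.")
```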

By providing snapshots of program satisfaction and key program processes, such as whether customers were receiving and reading the energy report, the rapid-feedback survey allowed the utility to troubleshoot issues and make adjustments before the program was fully launched. For example, the utility could monitor customer satisfaction through the dashboard and then learn why any customers were dissatisfied through the weekly emails summarizing the qualitative, open-ended findings. Through this research, EMI Consulting discovered that some customers were disappointed that they did not receive any recommendations that would result in significant energy savings for their business. As a result, EMI Consulting planned auditor ride-alongs to determine whether the auditors could improve their language around customers’ next steps and to identify why customers might not receive relevant recommendations.
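To make that troubleshooting loop concrete, here is a minimal sketch of how a weekly summary might pair a quantitative satisfaction snapshot with the open-ended comments from dissatisfied respondents. The example records, field names (satisfaction_1to5, comment), and threshold are assumptions for illustration only; the actual survey instrument and dashboard logic were specific to the program.

```python
from statistics import mean

# Illustrative records only: the field names and threshold below are
# assumptions for the sketch, not the actual survey instrument.
responses = [
    {"satisfaction_1to5": 5, "comment": "Quick visit, useful report."},
    {"satisfaction_1to5": 2, "comment": "No recommendations that would save us much energy."},
    {"satisfaction_1to5": 4, "comment": ""},
]

DISSATISFIED_THRESHOLD = 3  # scores below this get pulled into the qualitative summary


def weekly_summary(records):
    """Pair the week's average satisfaction with comments from dissatisfied customers."""
    scores = [r["satisfaction_1to5"] for r in records]
    flagged_comments = [
        r["comment"]
        for r in records
        if r["satisfaction_1to5"] < DISSATISFIED_THRESHOLD and r["comment"]
    ]
    return {
        "n_responses": len(records),
        "avg_satisfaction": round(mean(scores), 2) if scores else None,
        "comments_to_review": flagged_comments,
    }


if __name__ == "__main__":
    print(weekly_summary(responses))
```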

SUMMARY

Each program has a unique set of goals, challenges, and design elements that can affect the efficacy of individual evaluation approaches. As such, evaluators need to take care at the beginning of the evaluation design process to document key evaluation needs (built from elements of the program’s design) and to tailor each evaluation approach to those needs. In doing so, evaluators increase their value to the program team by providing customized findings and more relevant recommendations. Stay tuned for my next post, where I will discuss the iterative process of developmental evaluations and how it equips them to respond to emergent needs and changes to program design.