Posted Monday, May 5, 2014 at 6:24 pm

Across the globe, hundreds of people and organizations are using collective impact (CI) to address a range of complex problems, such as climate change, poverty, and juvenile detention. Some of the practitioners and funders involved in these initiatives have recently begun their collective impact journeys; others are well on their way. But all of them share a common challenge: once their journey has begun, how can they know how far they’ve travelled, how close they are to their destination, and which way they should go next?

The initiatives’ shared measurement systems can usually answer the first question and offer good insight into the second. Yet even the best shared measurement systems, on their own, cannot offer the depth and range of information that partners need to fully understand their progress and make well-informed decisions about the future. Similarly, traditional approaches to program evaluation are limited in their ability to meet all of a CI initiative’s information needs.

The complexity of collective impact initiatives requires a new approach to performance measurement and evaluation that is as multi-faceted, flexible, and adaptive as the initiatives themselves.

The Guide to Evaluating Collective Impact, recently released by FSG and the Collective Impact Forum and sponsored in part by the Hewlett Foundation, offers practical advice on how to use different performance measurement and evaluation activities over a CI initiative’s lifetime to better understand its progress, effectiveness, and impact. These evaluative activities are most useful and effective when they occur in the context of an initiative-wide commitment to active learning and continuous improvement. (In other words, the goal of CI evaluation should not be learning for the sake of knowing, but rather learning for the sake of improving.)

The Guide to Evaluating Collective Impact contains three parts:

  • Part One offers an introduction to the role of learning and evaluation in the context of collective impact.
  • Part Two provides practical guidance on how to plan for and implement a variety of performance measurement and evaluation activities. It includes sample evaluation questions, outcomes, and indicators for CI initiatives in different stages of development, as well as advice on how to gather, make sense of, and use data to inform strategic decision making.
  • The Guide’s Supplement includes a larger set of sample evaluation questions, outcomes, and indicators.

Part Two of the Guide also features short case studies describing how four different CI initiatives are approaching performance measurement and evaluation, three of which are summarized below:

An emergent CI initiative focused on infant mortality in Missouri is using developmental evaluation (DE) to better understand how key contextual factors and cultural dynamics are influencing its problem definition and strategy development. Leaders of the initiative hope that the DE process will uncover some of the tensions underlying their work (e.g., expectations of mothers and medical care providers, racial disparities in access to and quality of prenatal care) and provide guidance on how the initiative’s design and implementation might respond to (or help manage) these tensions.

The Road Map Project (RMP), a four-year-old education-focused initiative in South Seattle and South King County, Washington, is using formative evaluation to better understand its effectiveness and impact to date, and to inform decisions about its strategy going forward. The evaluation is exploring questions such as:

  • How does the RMP use its core strategies to catalyze organization and systems change in the region?
  • How is the RMP being implemented? (e.g., what roles do community-based organizations, educational institutions, public sector agencies, business partners, and community members play in the RMP? Are sufficient supports in place for the work to occur and for networks and collaboration to develop?)
  • What changes are occurring across the South Seattle and South King County region and within individual organizations as a result of the RMP?

Vibrant Communities, a pan-Canadian initiative focused on capacity building for poverty reduction, used summative evaluation to understand its ultimate outcomes and discover lessons learned through its work.

All of these CI initiatives also use their shared measurement systems to collect and manage data on a set of common indicators. CI leaders can use this data to understand what progress they’ve made on their CI journey and how far they have yet to go. Often, though, what leaders of complex CI initiatives really need to know is not just what progress they’ve made, but how they got where they are, why it took so long or seemed so quick, what difference the journey has made, and most importantly, what they should do next. For this, they need the more nuanced data and insights offered by evaluation.

Read the Guide to Evaluating Collective Impact to learn more.