Last November, Matt Forti and Kim Siegal penned an article titled "Actionable Measurement: Getting from 'Prove' to 'Improve'" in the Stanford Social Innovation Review. The article calls upon the social sector to unite around "common questions" that "nonprofits ought to answer about their impact so that they can maximize learning and action around their program models."
Forti and Siegal depart from ongoing debates in the social sector's measurement community over the appropriateness of experimental evaluations (i.e., randomized trials)—the industry's gold standard—to prove a program's impact. Such large-scale evaluations may be suitable in some instances, but Forti and Siegal thoughtfully argue that most practitioners would be better served by a more immediate focus on improvement.
We agree. Experimental evaluations are valuable tools to test whether a program works—when programs are applied consistently across similar settings.
But community-level interventions pose significant challenges for experimental evaluation. Ethics aside, providers are quick to point out their community's uniqueness from all others, confounding an apples-to-apples comparison across sites. Moreover, an average study timeline of three to five years, coupled with a price tag in the hundreds of thousands of dollars or more, poses a serious hurdle to those who must not only maximize the value to their clients and funders, but also demonstrate that value in short order.
Instead, Forti and Siegal pose a guiding question that closely mirrors our Institute’s approach to community-level evaluation: “what common insights would allow nonprofit leaders to make decisions that generate more social good for clients they serve?”
There is an old Army saying: "What gets checked gets done." In that same spirit, Forti and Siegal's idea of actionable measurement is to use insights now, in the midst of doing the work itself, to learn, adapt, improve program service delivery, increase social good, and maximize impact over time.
Actionable measurement, or “shared measurement” in collective impact parlance, is a major driver within our AmericaServes initiative, an effort to build local coordinated networks of service organizations that improve how our nation’s military-connected members and their families access a wide range of services and resources in their communities.
Put simply, AmericaServes helps communities create networks of service providers and improve how they operate as a system. Analogous to health care coordination models (e.g., accountable care organizations, patient-centered medical homes), AmericaServes strengthens local nonprofit coordination by providing initial funding for a backbone coordination center and the technology to manage—and measure—a referral-based system of care. Accordingly, for both health care and human service delivery, system-level measurement focused on continuous quality improvement is critical to test and implement changes that address the complex or changing needs of the client.
Standard system outcome and satisfaction measures allow AmericaServes communities to monitor and improve their performance. These insights provide the basis for community planning sessions, on-the-ground relationship building, and quarterly in-progress reviews.
As new insights continually emerge, communicating our advances (and setbacks) takes on increasing importance. Additionally, there are new aspects of our work—some we believe followers may have missed—that we want to expand upon to promote a greater awareness and understanding of IVMF’s community-based efforts.
Forti and Siegal, following a comprehensive review of a decade's worth of their organization's field studies and research, established "four categories of questions that drove the greatest learning, action, and impact improvement." Applying their framework to the AmericaServes initiative, we find it a helpful basis for considering our current outcomes and the actions we will take in the coming years.
1. Impact drivers: Are there particular conditions or program components that disproportionately drive results?
While there are multiple performance indicators, two stand out above all others: case referral timeliness and appropriateness. As a coordinated network, AmericaServes' theory of change centers on connecting clients to the right point of service, resource, or care in the shortest time possible. This is consistent with what the health care field defines as quality of care.
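To make these two indicators concrete, the sketch below shows one way a network might compute them from its referral records. It is only an illustration: the field names (referred_at, resolved_at, accepted) and the exact definitions of timeliness and appropriateness are our assumptions for this example, not the actual AmericaServes data model or metrics.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Referral:
    # Hypothetical fields; the real AmericaServes data model may differ.
    referred_at: datetime   # when the coordination center made the referral
    resolved_at: datetime   # when the receiving provider acted on it
    accepted: bool          # True if the provider accepted the client as eligible and in scope

def timeliness_days(referrals):
    """Median days from referral to provider action (lower is better)."""
    return median((r.resolved_at - r.referred_at).days for r in referrals)

def appropriateness_rate(referrals):
    """Share of referrals the receiving provider accepted (higher is better)."""
    return sum(r.accepted for r in referrals) / len(referrals)

# Toy example: one referral resolved in 3 days and accepted, one rejected after 14 days.
sample = [
    Referral(datetime(2016, 5, 2), datetime(2016, 5, 5), True),
    Referral(datetime(2016, 5, 2), datetime(2016, 5, 16), False),
]
print(timeliness_days(sample))       # 8.5  (median turnaround in days)
print(appropriateness_rate(sample))  # 0.5  (half the referrals were appropriate)
```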
Often, those seeking services present multiple, co-occurring (i.e., comorbid) needs. Consequently, service providers within AmericaServes communities—operating as a comprehensive support network rather than a fragmented collection of services—are best incentivized to address the specific need(s) presented to their organization. Here, their limited resources are put to their first and best use—a hallmark of superior performance and sustainability.
As human service providers, we all know the disproportionate amount of time and energy spent trying to address needs beyond our organization's boundaries. More often than not, these efforts to connect people with services beyond our capacity or expertise result not only in organizational failure, but in extreme client frustration and unmet expectations. Getting the right client to the right point of service in a timely fashion—streamlined access—is critical, yet at times a herculean feat.
It is often said that communities are not capacity-poor, but rather fragmentation-rich. Additionally, the veteran-serving nonprofit sector is rife with patchy eligibility criteria (each uniquely exclusive or inclusive in its approach) layered on top of membership rules that underpin the very programs put in place to help. To combat these factors, AmericaServes communities work carefully to digitally connect their clients to the most appropriate provider in a timely fashion, mitigating the deep fragmentation across the social sector. Done well, this work can open the all-too-often locked doors of a community's capacity to serve human needs and drive greater innovation within human services overall.
2. Impact distribution: Does the program generate better results for a particular sub-group?
Apparently so. The greatest early gains appear to be in networks with strong, active coordination centers—the backbone organizations that manage and monitor case referrals between network providers.
We see a pattern emerging in our AmericaServes networks. Those that report the greatest share of positive case outcomes (e.g., client received housing services) and the highest levels of provider engagement (i.e., making and receiving case referrals) also tend to have coordination centers that:
(1) focus on equitable referral distribution across many providers and
(2) have built strong relationships with the local VA.
For example, the PAServes-Greater Pittsburgh coordination center, based within the Pittsburgh Mercy Health System, has a longstanding relationship with the local VA. To date, the Pittsburgh network reports the highest share of providers making and receiving referrals, and the highest share of positive case outcomes, in its first year of operation. Having witnessed the success in Pittsburgh, other networks are actively building and expanding their relationships with local VA offices, and we will be monitoring the resulting provider engagement and outcomes over the coming months.
Strong coordination centers with knowledgeable intake specialists are able to navigate the complex eligibility criteria and make appropriate client referrals. In other words, providers receive "smart" referrals: pre-screened clients who are eligible for the services they provide. Accurate referrals eliminate wasted time and resources and, most importantly, the negative interactions that occur when providers are forced to turn away ineligible clients.
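As a rough illustration of what that pre-screening might look like, here is a minimal sketch in which a coordination center filters providers against a few eligibility rules before routing a referral. The criteria shown (service type, county, veteran status) and the provider names are hypothetical examples, not AmericaServes' actual rules or partners.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    services: set           # e.g., {"housing", "employment"}
    counties: set           # counties the provider covers
    veterans_only: bool = False

@dataclass
class Client:
    county: str
    need: str               # the service being requested
    is_veteran: bool = True

def eligible_providers(client, providers):
    """Return only the providers the client is pre-screened as eligible for."""
    return [
        p for p in providers
        if client.need in p.services
        and client.county in p.counties
        and (client.is_veteran or not p.veterans_only)
    ]

# Toy example: a veteran in Allegheny County seeking housing assistance.
providers = [
    Provider("Housing Org A", {"housing"}, {"Allegheny"}, veterans_only=True),
    Provider("Jobs Org B", {"employment"}, {"Allegheny", "Erie"}),
]
client = Client(county="Allegheny", need="housing")
print([p.name for p in eligible_providers(client, providers)])  # ['Housing Org A']
```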
3. Impact persistence: How does a given client’s impact change over time?
While AmericaServes ultimately aims to demonstrate a positive long-term impact on the well-being of each community’s local military-connected population, it is, foremost, a care coordination intervention on a system of human service providers. The initiative’s immediate outcomes—adapted from health care—are centered on the activities and experiences of those coordinating and receiving coordinated services.
Forti and Siegal's work revealed that clients who experience good outcomes tend to engage with the program more over time.
AmericaServes aims to ensure that clients who access coordinated services see similar benefits. If the network is working as intended, long-term impact at the client level should loosely follow a needs hierarchy. That is, over time, clients should use the network less frequently as needs are met. Moreover, the needs of longer-tenured or repeat clients should follow a pattern that transitions from basic physiological needs (food and water) to security (housing, employment, health care), social (education, relationships, love), and esteem (hobbies, volunteering) needs.
Early data suggests that a subset of program participants return to the network for additional services. While further analysis is underway, early thinking points to three possible explanations:
(1) the initial provider's service intervention failed to take root sufficiently, creating an opportunity to improve and make another attempt at solving the individual's problem;
(2) an additional, related need (an aspect of co-occurrence) was discovered after the initial provider's service intervention was introduced, creating further demand on the network; or
(3) the client returned to the network for additional services to satisfy higher-order social or esteem needs, following successful resolution of prior basic physiological or security needs.
Regardless of the root cause, one constant is clear: clients are viewing the network as a resource to help address their needs. And as Forti and Siegal found, client impact may be measured and improved upon through a greater emphasis on client retention.
4. Impact externalities: What are the positive and negative effects on the people and communities not directly accessing the program?
While we aim to do so in time, we have yet to explore the unintended consequences—both positive and negative—for the communities and individuals not directly accessing AmericaServes. Consider, for example: does AmericaServes, by addressing the social determinants of health and well-being, generate positive returns for the VA health care system (e.g., improved health markers, fewer hospitalizations, reduced prescription drug use, cost avoidance)? This is a fantastic research question, notwithstanding that AmericaServes is barely two years old, operating in just a handful of communities, and still evolving.
Learning from what gets measured—"checked" in Army-speak—and acting on that learning may be, as Forti and Siegal concluded, the more important boost in social good needed to serve our veterans and military better today. Certainly, understanding these externalities is crucial to proving the efficacy of our approach over the long term, and we continue to explore opportunities for an AmericaServes randomized trial or quasi-experiment.
We will get there eventually. For now, however, we remain strongly focused on improving the AmericaServes model to create more social good in these communities today.
What do you think? How have you worked with public, philanthropic, and nonprofit stakeholders to reconcile the tensions and timing of both proving and improving system-level collective impact initiatives? How are you using insights today to drive greater understanding and dialogue around the impact drivers, distribution, persistence, and unintended benefits and consequences of your work?