How Do Cross-Sector Collaborations for Education Present Data to the Public?


The collective impact model of cross-sector collaboration emphasizes the use of shared measurement systems for identifying problems and needs, tracking progress, and measuring results. But to what extent are cross-sector collaborations around the country promoting data as an integral part of their work? With support from The Wallace Foundation, our research team at Teachers College, Columbia University, set out to understand the characteristics of a national array of cross-sector collaborations for education, taking an aerial view of the information presented on their public websites. What we have learned is that, despite the emphasis on data, only 40% of the 182 initiatives identified by our nationwide scan devote a separate section of their websites to data, statistics, or outcomes.

What data are collaborations tracking?

The most common indicators on initiatives’ websites are student performance on standardized tests (43%) and high school graduation rates (35%). Many of the collaborations are “cradle to career” initiatives, designed to support students from pre-kindergarten through college and career entry, so it is not surprising to see that roughly one-quarter track indicators of early childhood care and learning. Post-secondary enrollment (20%) and completion rate (18%) data are also somewhat prevalent on public websites. When it comes to data about student experiences and well-being, far fewer initiatives track such measures. For example, only 5% of the initiatives report some kind of indicator for social and emotional development, which has been recognized as crucial for 21st-century learning and attainment.

It may be that initiatives choose certain indicators because they are important markers of academic success and college attainment, but it is also likely that some data are presented simply because they are fairly easy to obtain from state and/or local data platforms. Common indicators like high school graduation rates can also be aggregated to a city or regional level where separate public, private, and charter school sectors are involved, making it easier to draw comparisons. Less conventional indicators, such as social-emotional learning, may be less common because there is little agreement on how to measure them. It seems plausible that convenience, rather than intentionality about program goals or community needs, drives the choice of indicators. While a quarter of the collaborations show data patterns over time, only 17% provide indicators disaggregated by race/ethnicity or social class on their websites, even though such disaggregation can help collaborations monitor how well they are ensuring equity in services and outcomes. The disaggregation of data by racial/ethnic group and/or social class will likely grow as initiatives mature and attend more systematically to equity concerns.

Which collaborations promote data the most?

The StriveTogether network, which inspired and continues to rely on the collective impact model of collaboration, places considerable emphasis on the use of data for agenda setting and continuous improvement. The average number of indicators tracked by initiatives in the StriveTogether network is 4.5, more than twice the average number tracked in non-Strive initiatives.

The 2011 article by Kania and Kramer in the Stanford Social Innovation Review introduced collective impact to a broad audience. In our nationwide scan, we found that collaborations established before that article tend to track slightly more indicators than the newer initiatives. This might suggest that the current emphasis on data is either not feasible or not yet a priority for many collaborations. On the other hand, it may simply take time to build the trust among partners needed to share potentially sensitive data, to agree on appropriate indicators, and to locate reliable sources for them.


What does this mean for cross-sector collaborations?

Despite the heavy emphasis on data in the collective impact literature and the potential availability of new kinds of data for incorporation into multi-indicator systems, the indicators in use by cross-sector collaborations appear fairly conventional and limited in scope. Measuring third-grade reading proficiency might not tell us everything we need to know about how children are progressing in their learning. Moreover, outcome measures like third-grade reading often cannot convey an elaborated theory of action for the process steps needed to produce particular outcomes. In addition, most data reports on websites do not illustrate how multiple organizations and agents work together to produce results, so there is often little evidence about how the collaborations themselves are making a difference.

These patterns raise a number of questions that are worth thinking about. How were data indicators selected? Were indicators suggested by national network affiliations or were they decided locally? What are the theories of action by which cross-sector collaborations are expected to meet their goals, and can data be used to monitor interim steps? How do cross-sector collaborations address issues of causality in their data, so it’s clear how they influence and/or take credit for the outcomes that truly matter?

We will be exploring questions like these more deeply in our intensive case studies of three cross-sector collaborations across the country – Say Yes to Education in Buffalo, N.Y., Milwaukee Succeeds in Wisconsin, and All Hands Raised in Portland, Ore. We invite you to contact us with your ideas and perspectives. For those interested in accessing our report, Collective Impact and the New Generation of Cross-Sector Collaborations for Education, you can find it here.

Note: The ongoing study of cross-sector collaborations for education at Teachers College, Columbia University, was commissioned by The Wallace Foundation in 2014. The principal investigators are Jeffrey Henig, Professor of Political Science and Education, and Carolyn Riehl, Associate Professor of Sociology and Education Policy. Iris Daruwala is a graduate research assistant and doctoral candidate in the Sociology and Education Program. The research team also includes Professor Michael Rebell, Jessica Wolff, Melissa Arnold, Constance Clark, and David Houston.

What do you think? Share your comments and questions below.
