What Impact?

I learned plenty from this article by two professors at the Harvard Business School. The title of their paper: What Impact? A Framework for Measuring the Scale and Scope of Social Performance. California Management Review, May 1, 2014.

Don’t be put off by the title. They make some strong points about whether community organizations are in a position to measure outcomes rather than outputs. They make a distinction between organizational mission and operational mission, and double down on the importance of scope and scale.

Authors: A. Ebrahim and V. Kasturi Rangan, both professors at the Harvard Business School

ABSTRACT: Organizations with social missions, such as nonprofits and social enterprises, are under growing pressure to demonstrate their impacts on pressing societal problems such as global poverty. This article draws on several cases to build a performance assessment framework premised on an organization’s operational mission, scale, and scope. Not all organizations should measure their long-term impact, defined as lasting changes in the lives of people and their societies. Rather, some organizations would be better off measuring shorter-term outputs or individual outcomes. Funders such as foundations and impact investors are better positioned to measure systemic impacts.

Ebrahim and Rangan’s basic thesis

It is not feasible, or even desirable, for all organizations to develop metrics at all levels of a results chain, from immediate outputs to long-term societal impacts.

Demonstrate Results

We need to keep in mind that “impact” is a term that travels with an emphasis on transparency, more bang for the buck, return on investment, accountability, and so on. Many organizations (start-ups, nonprofits, etc.) end up chasing impacts dictated by external parties that don’t properly reflect the true impact of their work. In other words, it is social impact that is key.

Conventional wisdom says you need to measure impacts as far down the logic chain as possible. But Ebrahim asks, “does this make sense for all social sector organizations?”

This attention to impact, following on the heels of accountability, is mainly driven by funders who want to know whether their funds are making a difference or might be better spent elsewhere.

If social purpose organizations rely too heavily on measuring impact strictly to prove accountability to funders, in other words to show “bang for the buck,” then, Ebrahim notes, this places “too much emphasis on outcomes for which the causal links are unclear, thus reflecting more of an obsession with institutional expectations of accountability to funders than an interest in actually finding ways of improving services and results.”

The crux of Ebrahim’s argument

  1. Conventional wisdom in the social sector suggests that one should measure results as far down the logic chain as possible, to outcomes and societal impacts.
  2. This expectation is based on a normative view that organizations working on social problems, especially if they seek public support, should be able to demonstrate results in solving societal problems.
  3. Yet it is worth considering whether, and to what degree, such measurement makes sense for all social sector organizations.

An example: the Red Cross and Doctors Without Borders

They are engaged in emergency relief work. Measuring that work is straightforward: track the timeliness and delivery of emergency supplies such as tents, food, water, and medical supplies, and count the number of people reached.

Emergency relief is thus typically measured in terms of activities and outputs.

The links between inputs, activities, and outputs follow logically: the organization plans its requirements for supplies and staff (inputs) and the logistics for delivering those supplies (activities) in order to provide relief to the people most affected by the emergency (outputs). When the effort is well planned and executed, the program will be able to orchestrate activities that lead to measurable outputs.

What about the outcomes?

Outcome measurement, on the other hand, requires answers to a more complex causal question: Are the activities and outputs leading to sustained improvements in the lives of affected people?

Outcome measurement is less common and more difficult to do, given that organizations have the most control over their immediate activities and outputs, whereas outcomes are often moderated by events beyond their organizational boundaries.

For example, the emergency relief organization that has done excellent work during and after a natural disaster might still fall short on outcomes of rehabilitating and resettling those displaced from their homes and livelihoods, especially if those outcomes depend on extended coordination with local governments, businesses, and other NGOs.

Connecting outcomes to societal impacts, such as a sustained drop in poverty in the region, is even more complex due to the number of additional factors at play — involving the larger political, social, cultural, and economic systems—that are beyond the control of any one entity. In short, outputs don’t necessarily translate to outcomes, and outcomes don’t necessarily translate to impact.
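As a rough illustration (my own sketch, not the authors’), the results chain they describe can be written down as a simple data structure; the example values below are hypothetical and echo the emergency relief case:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResultsChain:
    """One program's logic chain, from what it controls to what it can only influence."""
    inputs: List[str] = field(default_factory=list)      # resources the organization plans
    activities: List[str] = field(default_factory=list)  # what it does with those resources
    outputs: List[str] = field(default_factory=list)     # immediate, countable results
    outcomes: List[str] = field(default_factory=list)    # sustained changes for the people served
    impacts: List[str] = field(default_factory=list)     # lasting societal change

# Hypothetical emergency-relief example: outputs are measurable,
# while outcomes and impacts depend on actors beyond the organization.
relief = ResultsChain(
    inputs=["tents", "food", "water", "medical supplies", "staff"],
    activities=["logistics and delivery to the affected region"],
    outputs=["people reached", "timeliness of delivery"],
    outcomes=["displaced families rehabilitated and resettled"],  # shared with governments, NGOs
    impacts=["sustained drop in regional poverty"],               # beyond any one entity's control
)
```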

Aravind Eye Hospital in India: tight linkage of outputs to outcomes

The Aravind Eye Hospital in India has performed over 340,000 surgeries, mostly cataract surgeries; its outputs amount to vision-correction surgery for over 3 million individuals since opening in 1979. The intended outcome is that patients’ vision problems are satisfactorily cured, and the indicator, the rate of complications, has declined year after year.

“Even given the tight linkage in the hospital’s operations between outputs and outcomes, the organization assumes but does not measure impact—that individuals with recovered eyesight from cataract treatment will be able to lead productive lives once again and thereby contribute to society. While this assumption seems reasonable, the organization has cautiously stayed away from making that leap and seeking to take credit for impacts such as reduction in poverty, or increased well-being, etc.

… More generally, measuring outcomes is possible under two conditions that are uncommon in the social sector: when the causal link between outputs and outcomes is well established, or when the range of the integrated interventions needed to achieve outcomes are within the control of the organization.”

Harlem Children’s Zone (HCZ)

Educational and community supports for children (Kindergarten to Grade 12)

HCZ has concentrated its activities in a narrow geographical region of nearly 100 city blocks of Harlem, under the assumption that it will be better able to control the child’s overall environment.


In 2011, the sixth graders in its two main charter schools had shown significant improvements: approximately 80% were at or above grade level in statewide math exams, and between 48% and 67% (depending on the school) were at or above grade level in English. Moreover, 95% of seniors in public schools who attended HCZ after-school programs were accepted into college.

The grade level metrics are primarily output measures, while college acceptance may be considered an outcome measure.

The time horizon for these interventions is long (5 to 19 years), and the organization is undertaking longitudinal studies to better assess its results.

Even then, drawing a causal link between HCZ’s interventions and longer-term outcomes, such as lifetime incomes of its graduates, and impacts, such as a decline in poverty in Harlem, remains complicated due to numerous social and economic factors that HCZ cannot control.

More generally, measuring outcomes is possible under two conditions that are uncommon in the social sector:

  1. when the causal link between outputs and outcomes is well established,
  2. when the range of the integrated interventions needed to achieve outcomes are within the control of the organization.

Impact and the Social/Solidarity Economy

“Social Impact Measurement for the Social and Solidarity Economy” (OECD, 2021) [PDF]

This is perhaps the most comprehensive document I’ve read on impact measurement and its relation to social purpose organizations.

Fact #1: There exists no single internationally accepted conceptual framework to value social impact, nor to understand the drivers and obstacles to creating that impact.

Fact #2: Social purpose organizations are increasingly requested to demonstrate their positive contribution to society through measuring their social impact.

Definition: Social impact measurement aims to assess the social value and impact produced by the activities or operations of any for-profit or non-profit organisation (OECD, 2015).


Roadmap to Social Impact

Ramia, I., Powell, A., Stratton, K., Stokes, C., Meltzer, A., & Muir, K. (2021). Roadmap to Outcomes Measurement: Your step-by-step guide to planning, measuring and communicating social impact. Centre for Social Impact.

Hey, the more I read about social impact, the more I realize that a Theory of Change and a Logic Model are important pieces, but you need to use them together, especially for projects or enterprises that don’t conform to the linear structure the Logic Model assumes. The Theory of Change should capture, more or less visually in a diagram, what you’re hoping to achieve; if the groundwork still requires lots of thinking and rethinking, lots of iterations and feedback loops, then work that into the Logic Model and make sure it’s visually diagrammed in the Theory of Change.

Theory of Change: A theory or model of how a program will achieve the intended or observed outcomes.

  • Articulates hypothesised causal relationships between a program’s activities and its intended outcomes
  • Identifies how and why changes are expected to occur
  • Comprises a change model (the changes the program intends to achieve) and an action model (the activities that will lead to those changes)
  • Articulates the assumptions and enablers that explain why activities will lead to the outcomes outlined
  • Often represented as a diagram or chart, though a narrative can also be used
  1. Start by defining the main activity for your program and its long-term outcomes. These represent the ‘start’ and ‘end’ of your theory of change (what you do and for what purpose).
  2. Clearly outline the change model (the changes that will result from your program). You can then articulate the main processes or activities (the action model) through which you engage with your target group, population, or community to achieve those outcomes.
  3. Your theory of change should be informed by knowledge of ‘what works’ to address the problem you are seeking to solve (e.g. similar programs or approaches in different circumstances), or evidence that an innovative approach (e.g. engaging with groups at different times, in different circumstances) is likely to work and why (see the sketch after this list).
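As a rough illustration only (the Roadmap guide prescribes the concepts of a change model, an action model, and assumptions, not any particular format), here is a minimal sketch of how a theory of change might be captured as data; the field names and the example program are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TheoryOfChange:
    """Hypothetical structure for the elements the Roadmap guide describes."""
    main_activity: str                 # the 'start': what you do
    long_term_outcomes: List[str]      # the 'end': for what purpose
    change_model: List[str]            # changes the program intends to achieve
    action_model: List[str]            # activities expected to lead to those changes
    assumptions: List[str]             # why the activities should produce the outcomes
    evidence: List[str] = field(default_factory=list)  # 'what works' sources informing the theory

# Entirely made-up example program, for illustration only.
mentoring_example = TheoryOfChange(
    main_activity="After-school mentoring for newly arrived youth",
    long_term_outcomes=["Improved school completion and social participation"],
    change_model=["Stronger sense of belonging", "Better study habits"],
    action_model=["Weekly one-to-one mentoring", "Family engagement sessions"],
    assumptions=["Consistent adult relationships improve engagement"],
    evidence=["Evaluations of comparable mentoring programs"],
)
```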

First Nations Information Governance

I’m taking an online course detailing the First Nations principles of OCAP® (Ownership, Control, Access and Possession). There are 7 modules; each one is about 40 minutes long.

We kick things off with a quote from the Report of the Royal Commission on Aboriginal Peoples (Vol. 3, p. 498):

In the past, Aboriginal people have not been consulted about what information should be collected, who should gather that information, who should maintain it, and who should have access to it …

The information gathered may or may not have been relevant to the questions, priorities and concerns of Aboriginal peoples.


“OCAP® reflects First Nations commitments to use and share information in a way that brings benefits to the community while minimizing harm.”

The Havasupai

This story begins in the 1990s, when the Havasupai tribe, living deep in the Grand Canyon, were suffering from cases of diabetes and wanted to know why. They sought out research help from Arizona State University, and the tribe agreed to provide blood samples to the university. However, the blood samples taken from members of the Havasupai were later used for research far beyond the initial agreement. Here is how National Public Radio (NPR) describes the situation.

“The tribe was suffering from high rates of diabetes and they wanted to know why, so they sought the help of researchers from Arizona State University. The tribe agreed to provide blood samples so the University could test their DNA but what they didn’t know was how extensively their DNA would be used. Researchers looked into mental illness, inbreeding, even migratory patterns that contradicted Havasupai traditional belief.”

Portion of an NPR interview with Carletta Tilousi, a member of the Havasupai Tribe:

Part of it is it was a part of my body that was taken from me, a part of my blood and a part of our bodies as Native-Americans are very sacred and special to us and we should respect it. And once they obtained that blood sample, my understanding was they didn’t use it for the purpose of diabetes, they used it for other studies.

And that angered me because I was not properly informed nor did I sign any consent form or fully explained to what my blood was being used for. And it was benefiting different people in the university levels as professors have been obtaining their doctor’s degrees and undergraduate students that were graduating with master’s degrees while our people down here, first of all, were not informed of all of those studies but was also lied to from the beginning and I don’t like being lied to. And it wasn’t just myself that did this, it was a lot of people involved that helped with this legal case.

More information in this NPR article

OCAP® is a registered trademark of The First Nations Information Governance Centre, used under license/or used with permission.

On Social Impact Measurement

I’m reading the article “Accountability for Social Impact: A Bricolage Perspective on Impact Measurement in Social Enterprises” (2019).

They start off citing a problematic ambiguity when it comes to social impact measurement:

An underlying reason for this ambiguity is that, in contrast to accounting conventions for financial performance assessment, there are no generally agreed-upon methodologies or units for social impact measurement.

Social enterprises also face influences that for-profit enterprises do not: they tend to enter less valued markets, they answer to multiple stakeholders with differing viewpoints, and the overall impression is that for-profit return-on-investment methodologies don’t transfer well to this sector.

As a result, the authors contend that many social entrepreneurs use ‘bricolage’ for social impact measurement. “The concept of bricolage refers to making do with at hand resources.” In this case, what it means is that social entrepreneurs make space for new ideas about what social impact means within their specific context.

In this article, the standard Logic Model of measuring impact is criticized for its emphasis on linear causality between inputs, outputs, outcomes, and impact.

In contrast, practitioners with experience in implementing such formal methodologies often stress the causal ambiguity of this chain; they contend that impacts are difficult to understand with precision, much less calculate … Social enterprises operate in an ecosystem, including other social enterprises, businesses, and aid organizations, each of which may contribute to or interact with each other’s impacts. Attributing impact to a specific actor can thus be very difficult.

The point they make here is that outputs can be controlled by the organization and can thus be measured, while outcomes and impacts are more difficult to isolate and account for. Friction between stakeholders in an enterprise may arise, the authors state, due to disagreements over how to “translate rich, experiential information into simple, parsimonious measures of social impact.”

There is a trade-off in social impact measurement between creating accounts that capture the experiential richness, variance, and flexibility of social entrepreneurs’ interpretations and accounts that are easily transferable and interpretable for funders.

Social entrepreneurs’ evaluation of impact has a vast experiential element to it – their daily experiences provide them with rich contextual information. Yet, much of this experience and context can be difficult to convey to other stakeholders.

Their findings, from 23 interviews of about an hour each (p. 16):

  • while social entrepreneurs were exposed to, and had sometimes attempted to some degree, many of the formal methodologies to measure social impact, social entrepreneurs almost never committed entirely to a specific methodology. Instead, social impact measurement was frequently more akin to a patchwork combination of elements from multiple methodologies.
  • Extant methodologies (e.g. SROI, logic models, or experimental methods) were essentially unused among social entrepreneurs in the small- to medium-sized enterprises that we interviewed. Only two of the twenty-two enterprises systematically used a formal methodology to understand their social impact.
  • social entrepreneurs avoided existing methodologies. Instead, they frequently resorted to creating improvised collections of simple, ad hoc, self-generated methods, bricolaged together from at-hand data, experiential anecdotes, insights collected from academic articles they encountered, and collective wisdom from industry players.
  • For those who searched heavily for impact measurement resources, the issue was never that they could not discover the existence of these methods; it was that they often did not have sufficient support and resources to implement them. Thus, even for those who actively sought out existing methodologies, these served as inspiration but were never formally implemented.

For the emerging social entrepreneur, having impact was essential, but demonstrating impact was viewed as a burden on their time and resources. Social entrepreneurs instead focused more on providing details of impact as economically as possible to avoid distracting themselves from their goal of creating impact in the first place. As one entrepreneur stated:

One of the challenges as a startup social enterprise is that it’s not like we have a huge budget to invest in doing research. That’s not our job. And so any research and data collection that we do has to be incorporated into our regular course of doing business.

One of the challenges of being a social enterprise or social business is that first and foremost, we have to make sure that the business is working and that we’re going to survive and that we’re going to be able to continue operating. Figuring out how to measure impact, and when, is a secondary priority.

Key Point

Overall, we found that social entrepreneurs were not deciding upon an existing method and then attempting to collect the relevant data necessary to implement that method. Instead, they started with the data they had at hand and bricolaged it together into thematic bundles to see what kind of ideas regarding impact might arise.

For most social enterprises in our sample, however, much of the funding came from funders with whom they could exert some interpretive flexibility regarding what types of impact measurements to use.

By giving richer narratives a place in social impact measurement, ideational bricolage helped the social entrepreneurs blend their interpretations and priorities into the trend toward social impact measurement being more objective and evidence-based (see third contribution below) (p. 32).

We found a pattern whereby social enterprises demonstrated their social impact through ad-hoc combinations of at-hand operational, design, and sales data; experiential anecdotes; retained highlights of academic research that they encountered; emotionally resonant images, videos, and other ‘bric-a-brac remains’ (Douglas, 1986: 67) of commonly accepted wisdom and assertions. Social entrepreneurs strategically chose which ideas about impact to include. For example, a social enterprise that focused on the deep impact of providing homes avoided defining impact in terms of scale.

Delegitimization: 4 insights into the problem of formal impact measurement

Delegitimization provides the interpretive flexibility needed to avoid simply accepting a formal methodology wholesale and instead demand that social entrepreneurs and funders search through the individual elements – the data and ideas – that the methodology has put together to see where the problem lies and bricolage a solution.

Delegitimization atomizes the data and ideas within an integrated formal impact measurement system, breaking them free from their roles within the delegitimized methodology for use as elements to be bricolaged together, along with all the other at hand data and ideas about impact that never made it into the delegitimized methodology to begin with. The refusal to be limited by formal methodologies leads the entrepreneurs to critically enact their resource environment and thus potentially create something from nothing (Baker & Nelson, 2005).

Bricolage

Bricolage represents such blending, as it shows how social entrepreneurs take whatever they can from a formal methodology and see how it might be combined with other data and ideas into a new construction of social impact. Social entrepreneurs using bricolage use the same data and ideas that formal methodologies use (to the extent that they are at hand), but instead of placing these in a causal chain, they simply create a collection of facts that are left for stakeholders to interpret. The data is positioned together, but the overarching causal relationships are stripped away or left implicit. This switch from borrowing to blending counterbalances discussions of how the imported ideas are similar and appropriate (i.e. ‘one-way borrowing based on analogical resonance’) with discussions of how they are dissimilar and do not fit (i.e. ‘two-way blending based on analogical dissonance’) (Oswick et al., 2011: 318). The social entrepreneurs’ delegitimization critiques serve to create this dissonance and thus become the mechanisms to assist their interpretive flexibility.

Impact Measurement after COVID-19: Expected, unlikely, and ideal

Kate Ruff April 27, 2020

An impact data standard is widely used by impact software. Agreeing on a data standard is easier, and smarter, than agreeing on indicators.

Think standardizing building materials into 2x4s and 2x8s, rather than residential floor plans. A data standard supports visibility into the degree of comparability across indicators, rather than getting everyone to use the same indicators.
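A rough sketch of the distinction (my own illustration, not Kate Ruff’s or any real standard’s schema): the standard fixes the shape of the record that describes each indicator, not the indicator itself, which is what makes comparability visible:

```python
from dataclasses import dataclass

@dataclass
class ImpactDatum:
    """Hypothetical minimal data standard: every indicator, whatever it measures,
    is described with the same fields so comparability can be assessed."""
    indicator: str       # what is being counted, in the organization's own terms
    unit: str            # e.g. 'people', 'households', 'tonnes CO2e'
    population: str      # who the measurement is about
    period: str          # reporting period
    value: float
    outcome_domain: str  # broad theme, e.g. 'housing stability'

a = ImpactDatum("tenancies sustained 12+ months", "households",
                "formerly homeless adults", "2023", 140, "housing stability")
b = ImpactDatum("evictions prevented", "households",
                "low-income renters", "2023", 95, "housing stability")

# Different indicators, but the shared structure makes it visible that they use the
# same unit and sit in the same outcome domain, so an analyst (or software) can
# decide how far to compare them.
comparable_fields = [f for f in ("unit", "outcome_domain") if getattr(a, f) == getattr(b, f)]
print(comparable_fields)  # ['unit', 'outcome_domain']
```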

Kate expects: In the post-COVID world, something like impact measurement will remain important, and may even increase in importance, but the scope will evolve such that investors, grantmakers, and managers of social purpose organizations are doing something more aptly called resilience measurement or social valuation. Specifically:

Our experience with COVID will attune us more toward resilience.

Five years from today, the leaders in impact measurement will do more than show impact on key performance metrics; they will also show how, working with others, those metrics lead to resilience of people and planet. It is a more systems-thinking approach to effecting change.

Inequality will be greater five years from today; it will also be more visible and more uncomfortable. The effect on impact measurement will be to focus attention more on social value. We will see more widespread use of measurement frameworks that consider the perspectives of those whose lives are most affected by corporate and charitable impacts.

The next five years will bring the beginning of a more systems-based approach to impact measurement.

To measure progress toward greater resilience and social value, things like contribution will require less organization-by-organization assessment and more of a collective understanding. Supported by better data systems, collective impact approaches could become decentralized, dynamic, ad hoc networks rather than being anchored by backbone organizations.

More investors and grantmakers will require impact measurement. Many will use impact data to support evidence-based decision making. Others will use impact data for symbolic and political reasons: as a signal of good management; and for assurance that their decisions are defensible to others. Even those who don’t use data will want to see the data.

Great looks like:

  • An impact data standard is widely used by impact software. Agreeing on a data standard is easier, and smarter, than agreeing on indicators. (Think standardizing building materials into 2x4s and 2x8s, rather than residential floor plans.) A data standard supports visibility into the degree of comparability across indicators, rather than getting everyone to use the same indicators.
  • So many companies score 80 or higher on the B Impact Assessment that B Lab needs to raise the bar.

Over-the-moon looks like:

  • A shift in thinking from uniform indicators to flexible standards. This means less emphasis on designing a buffet of well-specified indicators and more emphasis on generating principles and techniques for aggregating bespoke indicators into thematic clusters. (The buffets of indicators that we have now (GRI, SASB, IRIS+, and others) are an important step in building these flexible standards.)
  • Greater impact measurement by businesses and charities for the purposes of improving services, rather than for the purposes of winning over investors and grantmakers. This requires adaptable bespoke impact indicators tailored to each organization’s impact context rather than off-the-shelf indicators selected from a buffet of indicators designed to support investors’ decisions.

What if our economy valued what matters

This old system is perversely beholden to indicators like GDP, an indiscriminate measure of “progress” that ends up rewarding the destruction of people and the planet.

Mariana Mazzucato


A good example of an attempt to change ‘impact indicators’ is this article, which argues it’s time to change the indicators we use for economic growth. The standard indicator, GDP (gross domestic product), doesn’t measure what truly needs to be measured if the emphasis is on sustainable growth in the 21st century.

As we take stock of the pandemic’s wreckage, we must use this moment to overhaul how we measure value, and thus how we organize the global economy. The goal should be to create an economy that supports the health and well-being of every person on the planet, as well as the health of the planet itself. We currently have the inverse: a system that values health only as a means to the end of economic growth.

Mazzucato here gives the reasons why we need to ‘overhaul the way we measure value.’ So how can we do this? The first step is to throw out the old measure of ‘progress,’ GDP. She points to two current global projects offering indices that can move the global economy away from its destructive tendency to measure and encourage growth at whatever cost to people and the planet: the United Nations Sustainable Development Goals and Kate Raworth’s Doughnut Economics both present ways to change how we measure economic growth and social progress.

 In 2020, global GDP grew by $2.2 trillion as a result of governments increasing their military spending; meanwhile, the world still has not provided the mere $50 billion needed to vaccinate the global population.

Mariana Mazzucato’s argument underscores the importance of paying attention to the indicators we use to measure outcomes, and the need to adopt indicators that let us measure more sustainable outcomes.

Economics has hitherto measured the price of everything and the value of nothing. That must change. We need to measure the value of everything so that we can account for the things that truly matter. Health and well-being – and the care that sustains them – should become our principal measures of success.

Key points taken from a guide to social impact measurement

Muir, K. & Bennett, S. (2014). The Compass: Your Guide to Social Impact Measurement. Sydney, Australia: The Centre for Social Impact.

The primary purposes of outcomes measurement are to provide evidence of what works and what doesn’t, and why and how to improve effectiveness and efficiency.

The 3 Ps to achieve social impact: Purpose (what’s our purpose, why are we measuring impact, what are we trying to achieve?), Process (how are we going to get there?), and Performance (have we made an impact?).

Indicators are measures that show whether progress is being made on individual outcomes or goals. They may show no change, positive change, or negative change over time. Change might be intended or an unintended side effect.

Indicators can be qualitative or quantitative. Qualitative indicators seek to understand how the world is understood, interpreted and experienced by individuals, groups and organisations (usually through the eyes of the people being studied and in natural settings). They help to unpack the ‘why’ and are often richly descriptive, flexible, relative and subjective.

Quantitative indicators seek to explain something by using numerical data: how many, how much, how often. They are highly structured and based on theory/evidence and usually objective, but they can also capture subjective responses such as attitudes and feelings.

Common Indicators

If common indicators are used and the outcome data is de-identified and shared, outcomes will be comparable not just at a population level but also at an organisational, group, sector, and/or social issue area level.

… For example, if your organisation provides housing services, you might track and report tenant housing stability and wellbeing (outcomes) along with information on the client demographics, housing type and other information about your organisation (what you do, how you work, how many people are housed etc).

If similar outcome indicators are used, the housing stability and wellbeing of one group of residents could be compared to other residents in the organisation, in other organisations, in different geographic areas, across the housing provision sector, or to the broader population.
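A minimal sketch of that kind of comparison, assuming a hypothetical common indicator (“months in stable housing”) reported by de-identified providers; the records and group names are made up for illustration:

```python
# Hypothetical, de-identified records that all use one common outcome indicator.
records = [
    {"org": "provider_A", "region": "inner city", "months_stable_housing": 10},
    {"org": "provider_A", "region": "inner city", "months_stable_housing": 14},
    {"org": "provider_B", "region": "regional",   "months_stable_housing": 7},
    {"org": "provider_B", "region": "regional",   "months_stable_housing": 12},
]

def average_by(rows, key):
    """Average the common indicator within each group (organisation, region, etc.)."""
    totals = {}
    for r in rows:
        group = totals.setdefault(r[key], [0, 0])
        group[0] += r["months_stable_housing"]
        group[1] += 1
    return {g: total / n for g, (total, n) in totals.items()}

print(average_by(records, "org"))     # compare across organisations
print(average_by(records, "region"))  # compare across geographic areas
```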

A simple problem can generally be thought of as having a linear cause and effect relationship.

To improve the social participation of a child with a disability, one of the problems the initiative is trying to solve is access to the school playground, which the child cannot reach because of a mobility restriction and steps.

If the problem is that the school playground has steps and needs a ramp, this is a relatively simple problem. The relationship between cause and effect is clear: the steps are causing a lack of access so if you put in a ramp, the outcome is access to the playground.

But problems can be more complicated or complex. A simple problem usually requires a standard approach and the problem will usually be addressed quickly or over time. The solution can often be replicated by others in different situations. Measuring the change that has occurred with a simple problem is also fairly straightforward.

A Complicated Problem

A complicated problem might have a linear cause and effect relationship between the problem and solution. However, there are usually multiple, interconnected components and feedback loops.

The complicated problem might be access and inclusion in a mainstream school that is not set up for a child with a mobility impairment. Modifications may need to be made to the physical space, resource allocation and practices. The problem, however, can be solved over time and outcomes can be measured.

A Complex Problem

A complex problem is one that has many possible interrelated cause and effect pathways. The behaviour of each part will affect other parts and the overall system. Outcomes might be intended or unintended and positive or negative.

There is uncertainty about whether the problem will be resolved, measuring outcomes is more difficult, and attribution of the outcome to a particular group or initiative usually cannot be accurately determined.

Achieving improved social participation overall in this scenario not only relies on practical changes and resource investments, it is also affected by social acceptance, cultural beliefs (e.g. disability is ‘hidden’ within certain cultures), legislation to enforce equal access and the right to live free from discrimination, parent resources (to purchase goods and services needed beyond those accessed publicly) and access to integration supports – to outline just a few contingent factors.

Key Navigation Points

In summary, there are three steps for integrating measurement into your organisation:

  • Clarify your purpose
  • Determine and articulate the process of how social impact will be achieved
  • Measure your performance: the markers of change and the conditions under which change occurs

In undertaking these steps, consider the complexity of the problem and interrelated systems that will affect change.

Resources

W.K. Kellogg Foundation (2004), ‘Logic Model Development Guide: Using Logic Models to Bring Together Planning, Evaluation, and Action’,

https://www.wkkf.org/resource-directory/resources/2004/01/logic-model-development-guide

Baker and Bruner (2010), ‘Participatory evaluation essentials: An updated guide for non-profit organizations and their evaluation partners’, The Bruner Foundation.

http://www.evaluativethinking.org/docs/EvaluationEssentials2010.pdf

Equity in Collective Impact

“Centering Equity in Collective Impact,” Stanford Social Innovation Review, Winter 2022

This article is too long. Note the important highlights below.


… the single greatest reason why collective impact efforts fall short is a failure to center equity.

Collective impact has lasting effectiveness only if it is focused on changing underlying systems, not just adding new programs or services.

Disaggregate the data

Unless the data is disaggregated, we cannot truly understand problems, develop appropriate solutions, or document progress.

Describing society’s problems with aggregate data (the national unemployment rate, high school graduation rates, the number of people living below the poverty line, or the percentage of neonatal fatalities) masks variations by characteristics such as race and ethnicity, gender, age, sexual orientation, and income level.

Improve the precision of data collection and reporting practices to support more equitable analysis and more targeted solutions.
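A minimal sketch of what disaggregation does, with entirely made-up numbers: the aggregate rate reads as one story, while the same records broken out by a single characteristic tell another:

```python
# Made-up records: each row is one young person and whether they graduated on time.
records = [
    {"group": "A", "graduated": True},  {"group": "A", "graduated": True},
    {"group": "A", "graduated": True},  {"group": "A", "graduated": False},
    {"group": "B", "graduated": True},  {"group": "B", "graduated": False},
    {"group": "B", "graduated": False}, {"group": "B", "graduated": False},
]

def rate(rows):
    """Share of rows where the person graduated."""
    return sum(r["graduated"] for r in rows) / len(rows)

print(f"aggregate: {rate(records):.0%}")         # 50% -- looks like one story
for g in sorted({r["group"] for r in records}):
    subset = [r for r in records if r["group"] == g]
    print(f"group {g}: {rate(subset):.0%}")      # 75% vs 25% -- a very different story
```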

Disaggregated data are essential but not sufficient.

Centering equity in the work of collective impact requires a more holistic understanding of the life experience of marginalized populations that can come only from interviews, surveys, focus groups, personal stories, and authentic engagement.

Putting data into the appropriate context

Groups interpreting the data often do not include those with lived experience when making sense of the data. Data sets that are solely quantitative fail to capture important context that only the people most impacted, and those closest to them, know.

To address this problem, many collective impact efforts begin with “data walks,” in which all participants in the collective impact effort, including organizational leaders and residents with lived experience of the issues, review easy-to-understand visual data and together analyze, interpret, and create shared meaning about what the data say.

Power of stories

The very act of seeking out and listening to stories from the affected group can provide a foundation for building trust with community stakeholders. Active use of stories can also serve to locate and center the narrative for change in the community. This step can shift conversations about solutions from more conventional programmatic responses to more systemic solutions focused more concretely on achieving greater equity.