5 competing voices in impact evaluation

Evaluation is important, and so very hard!

If knowledge is power, then evaluation is powerful. Without good-quality evaluation, we are left to infer whether our program, service or product is delivering value. And looks can be deceiving. Having demand does not equal creating value. Not only might we be wasting resources, we may also be distracting other doers and funders from delivering services that actually matter. Without evaluation, we are flying blind.

But evaluation is tricky.

Culturally, evaluation shines a light on our deepest insecurities, pointing out all the ways in which our program, service or product is not having the impact we presumed it was having.

Tactically, measuring value is hard to do when ‘value’ means so many different things to so many people.

It’s no wonder that, as impact investors, philanthropists, conveners and doers, we often wrestle to align on definitions of impact and on approaches to measuring it.

But. There are tools you can use to short-circuit the endless debate about what ‘impact’ even is, or the ambiguous reasoning that doesn’t actually give us direction.

Common perspectives that drive evaluation

One of the most helpful insights I’ve gained is learning to identify which perspective is driving the evaluation.

I’ve identified five common perspectives:

  • The researcher

  • The evaluator

  • The strategist

  • The storyteller

  • The program leader

Each perspective takes a slightly different approach to designing and delivering the evaluation. This matters. A lot. The perspective will influence:

  • How impact is defined

  • What data is collected

  • The utility of that data

  • And, fundamentally, who gains a voice in determining where to next.

If we take on the wrong perspective, we may fail to meet the original purpose of our evaluation (i.e. what needs to be done and who needs to do it).

Before getting into the perspectives, imagine this… Your organization is about to launch a new Program that aims to increase the numeracy and literacy of children aged five to ten. The Program involves a) distributing a new curriculum, b) delivering additional training to teachers and c) offering free before-school care. Let’s use this example to meet the five perspectives.

The researcher perspective — “determine what’s true”

A research-driven approach is about uncovering truth. It aims to prove or disprove an empirical hypothesis or question.

In our example, the researcher would be most concerned with determining what change in numeracy and literacy scores has actually occurred. They may use A/B trials to compare participating students with a control group. They would prioritize gathering data using a rigorous methodology to ensure accuracy and precision in results.

A cheat sheet to the researcher perspective:

  • The evaluation must be designed up front. Otherwise you’ll miss the moments to capture data when it matters (like at a baseline or midline).

  • The evaluation should be driven by a core question or hypothesis that can be falsified.

  • The evaluation is probably less concerned with the nature or experience of change, and much more concerned with whether or not change occurred (a focus on the end, not the means). Be mindful of what color you may be missing.

  • This style typically holds a narrow view of what constitutes ‘good’ data and only relies on data points that are considered robust.

  • Optimizes for rigor in methodology.

The program evaluator perspective — “measure quality and merit”

What we typically know as program evaluation aims to measure the quality and merit of a given program. In theory, this informs a decision about whether or not an initiative is worth doing.

In our example, the program evaluator would be concerned with a broader, more holistic series of questions about how efficiently and effectively the Program is being delivered. Sure, they would explore the change in numeracy and literacy scores. But they would also capture measures of program efficiency (how much did it cost, how long did it take, what was the marginal cost per student, etc.).

A cheat sheet to the evaluator perspective:

  • Program evaluations are very helpful at ‘tipping point’ moments, like the moment before taking an intervention to scale or the moment before investing more resources after a pilot.

  • This style typically has many lines of inquiry, all of which feed into a holistic assessment of whether something should be considered valuable.

  • However, what is considered ‘valuable’ is highly subjective. So, such an evaluation should be made more rigorous by comparison to a control or alternative intervention. This will tell you the opportunity cost of your resource allocation.

  • This perspective typically holds a broader view of what constitutes ‘good’ data, ranging from quantitative to qualitative indicators with varying levels of precision.

  • Optimizes for a holistic account of what happened.

The strategist perspective — “inform decision making”

Strategists use evaluation to guide future decision-making. The central aim is to clarify why we need data — not just in a vague sense, but in a very real way by articulating what future decisions need to be made. The approach is likely to reflect aspects of the evaluator and researcher perspectives, but with a greater emphasis on how the data will be used once collected, and by whom.

In our example, the strategist would clarify the primary actions that the leadership team needs to take. Imagine they say that in 12 months’ time they will publish a blueprint for how to onboard new schools. The strategist would develop a process to assess what enables and inhibits successful onboarding. And that in 24 months, the leadership will refine its allocation of resources. The strategist would set up a process to compare the efficacy of the three interventions (is one more impactful than another?).

A cheat sheet to the strategist perspective:

  • A typical format would look like this: 1) articulate your strategic aim, 2) identify what decisions need to be made to deliver on that aim, 3) identify what data would help make those decisions, 4) collect and analyze the data, and 5) respond — pivot, adjust or stay the course!

  • This approach aims to limit unnecessary data collection by forcing clarity and intentionality in how we do our work.

  • Typically a sharper and more focused style of inquiry. And one that evolves over time, changing shape with the changing needs of the program.

  • This style may miss general interesting-to-know details that could be useful later.

  • Optimizes for application in actual decision-making.

The storyteller perspective — “craft a narrative around impact”

Storytellers use case studies and anecdotes to demonstrate just about anything. Stories can be used to suggest what change has occurred, influence fundraisers, build an audience and more.

In our example, the storyteller would attempt to capture detailed examples of students’ experiences — how they feel in class, why literacy matters to them, the most significant change for them, etc.

A cheat sheet to the storyteller perspective:

  • This style of inquiry is highly specific to the individual and, therefore, caution is required when making generalizations to an entire community or experience.

  • Storytelling is often used as a complementary tool to garner richer insights and a deeper understanding of the social problem and how it shows up for people. It can be dangerous on its own, as it can lead us to assume trends that aren’t actually there.

  • Optimizes for building a gateway to experience in the minds of others.

The program leader perspective — “show the good stuff”

Program leaders want to demonstrate that their program works. It’s a natural human tendency. We want to look good and we want our work to matter. It takes an intellectually humble and honest person to walk into an evaluation with no pre-held view of what the report should say.

In our example, the Program leader would surface the data points and case studies that tell a compelling story of impact. This may involve ignoring the trend showing that retention rates are declining, or failing to follow up because, hey, the literacy rates are higher now and that’s all that matters.

(Of course, not all program leaders are like this. But some are, and we should be very aware of it.)

How to use the perspectives in practice

Each perspective leans toward a slightly different angle in data collection and analysis. Meaning that they all get stuff done… they just get different stuff done.

I recommend two steps.

1. Before starting any conversation on measurement, evaluation or impact reporting, ask two questions:

  • What needs to be done with this information?

  • Who needs to do it (i.e. who is the critical audience)?

2. Given your response above, determine which perspective should drive your evaluation before you begin its design. Be aware that disagreements about what data to collect are usually driven by different perspectives on evaluation.
