Evidence-based, impact-driven… what does it all mean?
by Lea Buck
Impact and evidence: two buzzwords that by now often make people roll their eyes. The terms are rather abstract, and few philanthropists, social entrepreneurs, or people in general will tell you that they are not impact-driven. At the same time, being evidence-based and impact-driven is one of our main selection criteria, so let's take a closer look at what we mean by it.
Let’s start with another truism: it’s complex ☺
Before we dive into the nitty-gritty of evidence-based and impact-driven approaches, it's essential to recognize the complexity that underlies these concepts. Often, when people start exploring the realms of evidence and impact, they encounter the Effective Altruism (EA) movement, a relatively young field dedicated to determining the most effective ways to contribute to the common good. EAs invest millions in research to identify the most impactful causes, interventions, and organizations. I love skimming through some of the discussions, which range from very rational arguments about why getting cats on vegan diets is imperative to reflections on maximization by individuals like Holden Karnofsky.
GiveWell, a prominent player in the field, distributed over half a billion dollars in 2021 alone. Their focus is on the “easier” direct aid space, but even here there’s a lack of absolute clarity. Consider two examples:
Number 1: For many years, funding deworming initiatives was hailed as one of the most cost-effective actions, but in 2022, GiveWell removed deworming organizations from its list of top charities.
Number 2: Until 2020, GiveWell estimated the cost of saving a life at $2,300, but their current estimate is $4,500, almost a 100% increase.
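As a quick check on that figure, the relative increase works out to

\[
\frac{\$4{,}500 - \$2{,}300}{\$2{,}300} = \frac{\$2{,}200}{\$2{,}300} \approx 0.96,
\]

i.e., roughly a 96% increase, so "almost a 100% increase" holds.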
At present, GiveWell recommends only four organizations as top charities, all of which operate in the health sector and are backed by large-scale Randomized Controlled Trials (RCTs), often seen as the gold standard in impact measurement. Yet RCTs have their critics, too. Recently, someone from a large foundation told me that they had stopped funding health interventions based on RCT evidence because they don't want to take part in exploiting the global poor as testing grounds for health innovations. That is an ethical argument against RCTs. There are others. A very obvious one is the limited scope of interventions that can be tested in an RCT. Another is that RCTs require rigidity and can limit the ability to innovate and pivot solutions, especially when measuring longitudinally.
I know people who love Effective Altruism and I know people who hate it. I understand both. (For more on this, I recommend David Thorstad's Reflective Altruism blog.) The point I want to make relates back to the truism above: even if we spend millions on researching the best solutions, many unknowns remain, as well as (moral) assumptions one needs to make.
So, what does the Azurit Foundation mean by it?
A mindset
We’re not looking for leaders who are overly convinced of their ideas to the point of being inflexible. Instead, we seek those who are genuinely curious, open to questioning their beliefs, and willing to make adjustments based on evidence. Our partners should assess their impact not merely to meet funder requirements but out of a deep intrinsic motivation to understand the consequences of their work and consistently apply new learnings.
A conceptual framework
We encourage organizations to be clear about their impact assumptions, their sources, and the science in their field of action, and to consider what others have discovered. Anecdotal observations are a valid starting point, but they should lead to deeper exploration before designing comprehensive programs that are challenging to adjust if the initial evidence proves unreliable. Due diligence and consideration of potential negative impacts are crucial. While we appreciate data-driven approaches, we don't prescribe a specific method; organizations must find what works best for their context and intervention.
A skillset
Having the mindset and a conceptual framework is not enough. Assessing one's impact is complex because the world is complex, and change is complex. We expect concrete measures and clarity on how organizations plan to implement their impact assessment. It's much easier to write an M&E framework than to operationalize it, even if you have defined the perfect KPIs that align 100% with your theory of change. Collecting information is a challenge; analyzing it is another, and incorporating the learnings into your work is harder still.
Evidently it’s not always easy
In conclusion, being evidence-based and impact-driven is not a straightforward path, and there are no universal answers. As a consequence, we don't standardize our evidence expectations, to avoid forcing our partners into pre-defined impact KPIs. This, by the way, also means we cannot aggregate impact across our portfolio. After many years in the sector, having worked with many impact frameworks from the micro to the meta level, I fully agree with this recently published lesson learned from Ceniarth, a very impact-driven investor:
“We have all but abandoned the idea of portfolio-wide impact metrics as they all strike us as uselessly vague.”
I might add: in almost all cases, the immense burden we place on our partners by forcing them to follow standardized impact KPIs defined by us is disproportionate and, in the end, not impactful.
The evidence base and outcomes our partners seek are embedded individually in their approaches. For some, it's their cost-effectiveness multiple; for others, it's mindset shifts or increases in income. Then there are more qualitative elements such as long-term biographical analyses or policy changes. Our expectations set high hurdles for young and small organizations and make us very selective. This is one reason why our grant portfolio is relatively diverse across sectors. However, there are many smart people out there who have the motivation, knowledge, and skills yet not enough funding. And those are the ones we are looking for.