What do we mean by impact? Our response to Simon Hearn’s blog


We read Simon Hearn’s blog with interest. The original article, entitled ‘What do we mean by impact?’, was posted on the Research to Action website in February 2016. He outlined an interesting problem: much confusion and ambiguity exists around how we make sense of and use the term ‘impact’ in development programmes. Below we outline our response: what we think about the problem, our beliefs about how we can create a common language, and whether we should be assigning blame (at all) in impact evaluations.

Are we talking about the same thing?

Simon’s work at the Methods Lab involved developing and testing innovative approaches to impact evaluation. This work highlighted the many differing definitions of impact and how different stakeholders were interpreting and using the term in practice, revealing much confusion and ambiguity around what we mean by impact. He argued that the way impact is framed in these definitions has a significant influence on how development programmes are designed, managed and evaluated.

The discussion paper that resulted from Simon’s work identified a framework that highlights six dimensions of impact. You can read the discussion paper here. These dimensions were found to vary across the differing definitions and interventions. Guiding questions were produced to help clarify and make sense of what ‘impact’ means. The paper highlights that without a ‘common language’, negative consequences for programme design, measurement, cooperation and evaluation can result.

We agree that a shared mental model of what impact means is critical. Without a common definition or a mutually agreed reference of what we mean by impact it becomes difficult or even impossible to make comparisons about ‘what works’ within and between programmes.

A common language of impact – whose perspectives count?

According to Simon, much of the confusion and ambiguity around what we mean by impact exists because of the wide array of definitions that have been developed. He suggests that by thinking through the six dimensions of impact and their related questions, we can find a middle ground between the narrow econometric definition at one extreme and the broad OECD-DAC definition at the other, and so defend a contextually appropriate understanding of impact.

We see value in this approach to creating a common language. It’s vital to try to minimise the confusion and ambiguity that has arisen from the lack of a common definition. However, we ask: whose language are we talking about?

In the conclusion of The Methods Lab paper it states:

First and foremost, we call upon development practitioners and evaluators to be explicit in how they use the term. Formal definitions are abundant but it is clear that in practice people have different understandings; it cannot be assumed that others will know what is meant when the term impact is used (p.14).

The tendency within international development organisations has been to define impact without input from the very people the aid programmes are intended to help. Historically, definitions of impact have been developed using a top-down approach.

To create a common language, perhaps there is a better way to arrive at a more contextually relevant definition. Rather than relying on definitions mutually agreed by experts, why don’t we involve grass-roots communities in determining what impact means to them? Why don’t we ask those we seek to assist ‘what is impact supposed to look like?’ Why don’t we listen to their voices? We don’t mean listening to find out how to present what we want to deliver in ways that make the interventions acceptable to the communities we serve. We mean listening that values their voices, and having explicit conversations with communities about how impact is perceived, used and understood from their perspective.

Defining impact: accountability, blame and learning

We enjoyed reading Simon’s blog and the Methods Lab discussion paper, but we disagree with the statement regarding who should take blame (see the blog extract below).

During or after a programme, the way impact is defined will affect how its success or failure is perceived, and who takes credit or blame. And this then affects what we learn from the programme to help adapt it or contribute to other programmes (Hearn, p1, 2016).

Blame is not a word that should be used in defining or evaluating impact. When we assess for impact we should be doing so with the aim of improvement, but never to assign blame for shortfalls in performance. If accountability is being exercised to blame someone, then an investigation may be a more appropriate mechanism, but not an evaluation.

If impact evaluations are to be used to blame stakeholders, accountability will trigger fear and defensiveness. Being on guard will inhibit a development worker’s willingness to share information and voice their concerns. The development community may not be willing to share evaluation reports in the public domain where they fear this may affect future funding. Silence is not conducive to an environment that supports openness and learning. We’ve talked about this in a previous blog. If we are to see more innovation, creativity and risk-taking in development programmes and if we really value organisational learning (which we believe goes hand-in-hand with creating impact), we need to remove the rampant desire to blame. Instead we need to shift from blaming for mistakes, to blaming for not learning from mistakes. An impact evaluation has an important role to play in identifying what can be learned, so that we can avoid mistakes in future programmes.

The Methods Lab paper identified two additional frameworks that may be used to create a common language around what we mean by impact. You can read about them here. It will be interesting to see how these frameworks are used in future programmes and, in particular, what effect they have in reducing confusion and ambiguity around what we mean by ‘impact’.

Julie Rasmussen and Jeff Sheldon. Thanks to Jindra Cekan for her help editing this post.
