Not a Measured Approach? Does the Hewlett Foundation’s decision indicate the failure of impact-measuring approaches to philanthropy?

It is increasingly part of the received wisdom of the charity sector that “donors want to see evidence of impact”. Focusing on data about the impact charities have, it is argued, will not only drive improvements in the quality of outcomes delivered by organisations, but will also motivate donors to give more as they see the effect their money is having.


It was therefore surprising to see the news that the William and Flora Hewlett Foundation, a major US grantmaker, is ending its funding for a $12 million programme supporting initiatives that provide publicly accessible information about the financial performance and social impact of nonprofits.

The reason given for this decision relates to the second of the arguments for focusing on impact outlined above, rather than the first: namely that although plenty of robust and useful data has been generated through the various initiatives funded, it doesn’t seem to have had any effect on donors’ behaviour. Rather than changing their donation decisions based on rational analysis of the information provided, most people continue to let the heart rule the head when it comes to philanthropy and base their giving on emotional factors.

As various commentators have said, perhaps this isn’t all that surprising: charitable giving is, at its core, an emotional response to the perceived suffering of others or a perceived problem in our society. However, even if emotion is the prime motivating factor, there is nothing to preclude us from trying to change the way people act on that motivation by informing their decisions and guiding them towards the most effective course of action. Perhaps this is the dividing line between charitable giving and philanthropy: the former is a purely emotion-driven reaction to a problem, while the latter extends that basic impulse into a rational attempt to address the cause of the problem.

If that is the case, then perhaps we can accept that there is a large swathe of charitable giving on which efforts to provide measurements of impact are going to struggle to have any effect, whilst still believing that impact measurement is important? We should strive to improve the quality of information available on the effectiveness of organisations and interventions, but temper our expectation that it will drive increased giving, because this is unlikely to be the case.

There will also be a smaller domain of philanthropy for which evidence-based assessments of impact and efficacy remain crucial and may well influence giving behaviour. Funding organisations such as foundations, which are not subject to the same sort of emotional factors as individuals, should fall into this category, as should individual philanthropists who are ruthlessly results-focused and for whom the accumulation of data on impact remains a central concern.

Even if we maintain that measurement of impact is important, I think there is a debate to be had over how we do that measurement and the extent to which data should drive giving decisions. I have covered in other blogs my thoughts about the “effective altruism” movement and the way in which it takes the focus on data too far by relying on a brutal utilitarian arithmetic of “lives saved” as a way of measuring impact. Those criticisms are relevant here too, and it is interesting that GiveWell, one of the leading proponents of effective altruism, is one of the organisations losing funding as a result of the Hewlett Foundation’s decision.

There are, to my mind, three questions we need to ask about trying to use data on impact to influence donor behaviour:

  1. What sort of information should we be providing donors with?
  2. At what point in the process is it most effective to provide this information?
  3. Do donors value this information sufficiently to pay for it, and if so, what level of cost are they willing to bear?

The first of these requires that we understand proportionality and appropriateness when it comes to reporting impact. It is obvious (or at least should be) that the information a charity might provide to a foundation which is considering giving it a large grant is going to be very different to the information it provides to an individual donor who is considering giving £100. These funders are likely to have different goals and differing expectations of what the charity needs to tell them in order to be satisfied that their money will be used well.

If you were hoping at this point that I would have an answer to the question, I’m afraid I have to disappoint you. I don’t know how best to pitch impact reporting to different donors, largely because I haven’t collected any robust evidence of how different approaches influence behaviour. This is something I would be really interested to see investigated further, though, so that we can get a better insight into what information we could give donors that might actually influence their behaviour in a positive way.

This approach could also be applied to the other questions on my list. A lot of experiments in behavioural economics centre on the notion of “choice architecture”, i.e. the way in which options are presented. This seems particularly relevant to the question of when it is best to present donors with data (on the assumption that we have established what sort of data we are using). One can imagine a series of experiments in which different groups of donors are given information at different times (perhaps some prior to being asked to make a donation, some once the ask has been made, some once the donation has been made, etc.), and the resulting effect on behaviour measured. I think this would be fascinating.

Experimentation might also help to answer the question of whether donors themselves are willing to pay for data about impact. It is the hypothesis of some of the organisations that have had funding cut as a result of the Hewlett Foundation’s decision that they will be able to get their donors (mostly foundations and young philanthropists) to pay for the collection and analysis of data, now that they have seen the benefits the approach brings. I am not quite so sure. Hopefully their optimism is justified, but there is a big difference between someone saying that they value something and what they do when you actually ask them to put their hand in their pocket. It would be interesting to find out through experimentation how likely people are to pay for impact data at various levels of cost, perhaps expressed as a percentage of the value of the donation (i.e. are people happy to pay for impact metrics at 1%, but not at 5% or 10%?).

The really interesting thing about this whole story is that it challenges an idea that has become something of a truism in philanthropy circles: that all we need to do in order to drive more giving is provide donors with better evidence of impact. I still believe that improved evidence on the effectiveness of interventions is important and necessary for the development of the charitable sector, but challenges such as this one force us to examine what we actually mean by “evidence” and what we expect providing that evidence to achieve in terms of increased or better philanthropic giving.

Rhodri Davies
