I have been thinking a lot about the intersection of new technological developments and the future of philanthropy in recent weeks, and as such have been reading around a lot. I was struck by an anecdote in Kevin Kelly’s new book The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future (which I highly recommend, by the way) which led to a few ideas coalescing in my mind. The anecdote concerns a conversation Kelly had with Google co-founder Larry Page back in 2002, in which he questioned why Google was building a new search engine when there were already plenty on the market. Page’s answer was that they weren’t creating a search engine: that was just a means to an end, and the real goal was to create an Artificial Intelligence (AI).
The point of the story is that at the heart of Google’s search capability is a highly complex deep learning algorithm that governs how the search function operates: it determines how pages are ranked, what information is presented to each user, and so on. Algorithms of this kind need nourishment in order to grow and develop, and this comes in the form of data. In Google’s case that is what the search engine provides, and has been providing for nearly two decades: a vast store of data on people’s online search habits.
This got me thinking about whether there is an important lesson for philanthropy in data being a means to an end, rather than necessarily an end in itself. There is an awful lot of talk in philanthropy and charity circles about measuring social impact, but far less clarity about exactly why we should do it. There is obviously an intuitive argument that we should want to know, as far as possible, whether the interventions we are using actually work; the flipside is that implementing new measurement systems is time-consuming and expensive, so for cash-strapped organisations there may have to be some more tangible pay-off to convince them to invest scarce resources.
It is true that some institutional funders like government agencies or charitable foundations increasingly demand rigorous metrics on how money is spent, so in these cases there is a clear imperative for putting in place the required measurement systems. (Although it is still worth bearing in mind that these may be specific to the needs of the particular funder in question and thus not necessarily that applicable in other contexts.) However, the broader assumption that “the more information you give people on the impact of donations, the more they will give” is starting to be questioned.
Firstly, what do we mean by “more information on impact”? For a major institutional funder it might well mean spreadsheets full of data on outcomes, or a full SROI analysis, but for the average donor that is unlikely to be appropriate; for them it might simply be about more effective and compelling storytelling. Furthermore, even if donors do have an appetite for quantitative data, there is growing evidence that if you give it to them, not only does it not make them give more, but it may actively make them give less.
So is impact measurement totally pointless, or even harmful, then? No, of course not: as mentioned above, there are funders who really want this sort of info, and as long as providing it does not place an unreasonable admin burden on recipient organisations, that seems broadly healthy. (Particularly if the funder is willing to pay at least part of the cost of setting up the measurement system and developing the requisite skills.) My argument here, however, is that there may be a good long-term case for trying to measure social impact effectively even where there is no funder directly demanding it at the time.
I have previously written about the prospect of AI or deep learning algorithms being applied to charitable giving in the course of my explorations of the impact of blockchain technology on philanthropy. I argued there that such algorithms could be used to analyse data in order to determine both what the most pressing needs in society were at a given moment and also how best to address those needs by supporting relevant organisations.
I am reliably informed by experts in machine learning that actually constructing algorithms of this kind is neither theoretically nor technically that difficult; the real barrier is finding sufficiently rich data sets for them to analyse in order to ‘learn’. Hence my argument here, with one eye to the future, is that the real value of measuring social impact in a robust and appropriately comparable way is that it could enable us to develop the data set required to allow philanthropic deep learning algorithms to reach their potential.
Why is this such a massive prize, you might ask? To my mind the answer is that whilst people are not necessarily keen on wading through reams of data themselves in order to make decisions about giving (hence the apparent failure of impact measurement to increase individual donations), in the future they will be willing to let AIs do that trawling on their behalf and either suggest where they should donate or do it for them based on knowledge of their preferences. In fact, given that many futurists predict that AI will become ubiquitous in the really-not-very-distant future and that the ‘filtering’ of our experience by algorithms will become a part of the fabric of our lives, it is likely that we will demand that there are algorithms capable of directing our philanthropy. If AI guidance becomes the norm in all other walks of life, then it is going to seem very odd if charitable giving is not able to operate in the same way.
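To make the idea concrete, here is a toy sketch of the kind of “philanthropic recommender” this could enable. All charity names, cause areas, and scores are hypothetical; a real system would learn its weights from a large, comparable impact dataset rather than hard-coding them.

```python
# Hypothetical impact scores per cause area, normalised to 0..1.
# In practice these would come from the shared impact dataset discussed above.
charities = {
    "Charity A": {"health": 0.9, "education": 0.2, "environment": 0.1},
    "Charity B": {"health": 0.1, "education": 0.8, "environment": 0.3},
    "Charity C": {"health": 0.2, "education": 0.3, "environment": 0.9},
}

def recommend(preferences, charities, top_n=1):
    """Rank charities by the weighted sum of their impact scores
    and the donor's stated cause-area preferences."""
    def score(impact):
        return sum(preferences.get(cause, 0.0) * value
                   for cause, value in impact.items())
    ranked = sorted(charities.items(), key=lambda item: score(item[1]), reverse=True)
    return [name for name, _ in ranked[:top_n]]

# A donor who cares mostly about education.
donor = {"health": 0.1, "education": 0.8, "environment": 0.1}
print(recommend(donor, charities))  # → ['Charity B']
```

The scoring here is a simple weighted sum, standing in for the far richer preference modelling a real deep learning system would do; the point is only that such a system is useless without comparable impact scores to feed it.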
If that is to be the reality, then now is when we need to start building the data set. So the big challenge, it seems to me, is: how do you get organisations to invest the resources required to put in place the right measurement and reporting systems on the basis of such a theoretical future pay-off?
In some cases, there may be funders who are already demanding the right sort of information (and willing to pay for it, hopefully!) And we are seeing more and more of this with the growth of new, measurement-heavy approaches like Social Impact Bonds (in fact, I would argue the accumulation of impact data is one of the strongest arguments in favour of SIBs, but that’s a whole other blog….) But the challenge is to make sure that the information recorded to meet the needs of individual funders is also shared more widely (with appropriate anonymising etc if needed), as otherwise all that valuable data just sits in silos, probably doing little good, and does not add to the overall dataset in the way we need. This is where an open data approach is absolutely vital.
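The “anonymise before sharing” step mentioned above could, in its simplest form, look something like the sketch below. The field names are hypothetical, and dropping obvious identifiers plus pseudonymising the organisation ID is a bare minimum, not a substitute for a proper anonymisation review.

```python
import hashlib

# Hypothetical field names for direct identifiers that must never be shared.
DIRECT_IDENTIFIERS = {"beneficiary_name", "address", "email"}

def anonymise(record):
    """Drop direct identifiers and replace the org ID with a stable hash,
    so records can still be linked across datasets (pseudonymisation)
    without naming the organisation directly."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "org_id" in cleaned:
        digest = hashlib.sha256(str(cleaned["org_id"]).encode()).hexdigest()
        cleaned["org_id"] = digest[:12]
    return cleaned

record = {
    "org_id": "charity-123",
    "beneficiary_name": "Jane Doe",          # must not be shared
    "intervention": "literacy tutoring",
    "outcome_score": 0.72,
}
print(anonymise(record))
```

Because the hash is stable, the same organisation gets the same pseudonym in every shared dataset, which is exactly what allows the records to accumulate into one overall dataset rather than disconnected fragments.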
But even if we capture all of this information from current funders and share it, there are still going to be far more cases in which an organisation does not have funding that demands they put in place measurement systems or supports them to do so. How, then, do we incentivise those organisations to invest in such systems?
As mentioned above, I’m not sure the argument that it will result in increased donations really holds much water, so do we have to find a dedicated source of funding for this? And what could that source be? As ever with these things, if I knew the definitive answer to that question I would be far richer and more successful than I currently am, but there do seem to be three obvious avenues to explore:
- Government. Here we are talking about funding for general capacity building in social impact measurement, rather than funding linked to a specific instance in which the government is procuring a service and wishing to measure the effectiveness of the investment. The challenge of course is that the balance of government funding has shifted massively away from grant funding and towards contract funding over recent years, so this may not be a realistic expectation.
- Grant makers and philanthropists. Again, this is not just about funders incorporating the development of impact measurement into their own grant making, but rather about investing in the capacity of organisations that they might not otherwise be funding. So the development of the data set would have to be an overt aim rather than a side product.
- Companies. Might companies (particularly tech companies) be interested in supporting CSOs to develop measurement systems, either as part of a straightforward CSR programme or (more intriguingly) because they recognise that they could also learn things that would be commercially valuable? Whilst developing an AI for philanthropic funding might not be something that they would choose to do for purely commercial reasons, could a case be made for investing in it with blended motives? Given the prominence in recent years of philanthropists from a hi-tech Silicon Valley background (many of whom are building algorithms and AIs for commercial purposes), surely at least one could be tempted to apply the same thinking to support the future of philanthropy?
I must admit that I have found myself being mildly sceptical about social impact measurement in the past; not because I don’t think that it is a good idea in principle (again, all other things being equal, why would you not want to know whether your interventions actually worked…?) but because I wasn’t sure what the practical case for investing in it was for many organisations, since it is not necessarily cheap or easy and it is not clear that it will actually result in more income in the form of donations. But the argument outlined above about the need to develop the dataset on social impact in order to be prepared for what is coming round the corner does convince me.
The challenge, then, is to work out how to bridge the gap between the short term reality of organisations operating on very tight budgets and the long-term opportunity that philanthropic deep learning algorithms might offer. I have suggested a few starting points, but there is clearly plenty more to do. As ever, any thoughts or feedback on any of these ideas would be heartily welcomed!