
Evaluating the added value of Social Impact Bond investment

Tuesday 5 July 2016

In this fourth blog in a series summarising key points from a speech I delivered at the 2016 Social Investing and Corporate Social Responsibility Forum, held at Meiji University in Tokyo, I reflect on some of the lessons learned in evaluating Social Impact Bonds (SIBs). This blog explores what it means to conduct such evaluations, and what others should be prepared for.

Bridging worlds

Evaluators of SIBs must develop an understanding of the different worlds that are brought together by the SIB model. Service providers, outcome payers and investors, not to mention the myriad intermediaries, can have very different organisational cultures and structures. Their ways of ‘doing business’ can differ markedly, as can their priorities and expectations. Others have written about the potential ‘cultural clash’ when bringing together the public and private sectors. Throw the voluntary and community sector into the mix and you have multiple bridges to build.

For the purposes of evaluation (and of ongoing operational performance monitoring), it is important to pay attention to what all of this means for data. The various SIB players can approach data differently. They may interpret the same data in different ways, and may hold different standards for data quality and for what counts as ‘good enough’.

One observation is that service providers from the voluntary and community sector often under-estimate the data requirements underpinning SIBs. Prior to engaging with a SIB, they may believe that their existing infrastructure, capacity and capability for monitoring and evaluation are adequate. Post-launch, however, they may realise that these are not quite ‘up to the job’ for the purposes of a SIB. Some SIBs have therefore incorporated specific capacity-building components to help service providers develop and sustain more effective and robust data systems and processes.

Evaluating formatively

Evaluators of SIBs must be flexible. SIBs are still relatively new, and new models appear all the time. As I argued in a previous blog, the SIB as a concept is still being stretched and tested. The malleability of SIBs and their capacity for innovation encourage us to design evaluations that generate formative learning, supporting ongoing improvement and tracking change over time.

Expect the unexpected: evaluations should therefore be designed with a degree of flexibility and responsiveness. For example, during the first round of evaluation activities for the Essex County Council SIB, we kept hearing the various SIB players talk in a way that led us to suspect there might be ‘hidden costs’ over and above the financial value of the SIB. We suggested modifying the second round of evaluation activities so that we could shine a light on these ‘hidden costs’ and make recommendations for how they might be reduced or ‘designed out’ in future SIBs.

The value of independence and transparency

As with all evaluations, independence and transparency are vital. They can be even more important when evaluating SIBs, because strong commercial interests can be involved. For example, service providers may wish to ‘sell’ their services more widely if the ‘SIB experiment’ shows that the specific service has delivered the desired levels of outcomes. Social investors may also be perceived by others, perhaps unfairly, to be more concerned with the financial return on their investment. We are familiar with the long-standing criticism that SIBs may be a vehicle through which people ‘make money’ out of the needs of vulnerable groups. For all these reasons, and more, the evaluator must be independent, and methods and analysis must be transparent.

The health of the SIB market relies on honesty and transparency. It is not helpful that most of the literature remains at the level of aspirations and assertions, particularly when audiences are aware of the vested interests that may underpin some of these claims. If the market loses trust, the various players may start withdrawing.

Adding value

While this may sound simplistic, I think it is important to point out that evaluations of SIBs should focus primarily on the impact of the SIB, not the impact of the intervention. In my view, the crux of our interest in these evaluations is not finding out whether the intervention ‘worked’, but understanding how the SIB model of procuring, delivering and funding the intervention may or may not add value over conventional ways of ‘doing business’. For example, why should a commissioner embark upon a SIB rather than some other method of commissioning? Would it be ‘better’ for a service provider to market a service through a SIB model, or would a direct contracting arrangement with the commissioner be preferable?

Those of us who are curious about SIBs are probably interested in knowing not only whether SIBs add value, but also whether and how the different SIB models may or may not be appropriate in different contexts. It is only through proper evaluation and the timely sharing of findings and lessons learned that we can continue to develop our collective understanding of what’s different about the SIB way of doing things, and whether it’s worthwhile.