Monday, September 19, 2016

Evaluation of the Wandsworth End of Life Care Coordination Centre

The Wandsworth Care Coordination Centre was designed to address confusion among patients and their families about who to contact for help and support, due to the range of organisations involved in caring for someone approaching the end of their life.

It comprises a seven-day nurse-led coordination team and helpline for patients, families and professionals, based at Trinity; a dedicated St George’s End of Life Community Nurse; and a team of Marie Curie Health and Personal Care Assistants who can offer specialised hands-on care at home for people with any terminal illness.

This evaluation ran from October 2015 to June 2016 and explores the impact of the centre on patients, families and professionals; the cost savings and other forms of value the centre offers; and the lessons learned through implementation.

Tuesday, August 30, 2016

Evaluation of the Reducing Social Isolation and Loneliness Grant Fund

The Reducing Social Isolation and Loneliness Grant programme was commissioned and funded by North, Central and South Manchester Clinical Commissioning Groups (CCGs), and administered and managed by Manchester Community Central (Macc). OPM Group evaluated the programme over 2013-16 and produced this final report and report summary.

Monday, August 22, 2016

Reflect, adjust and share learning: how to measure soft skill development in children and young people

So far in this blog series, we’ve talked about how to manage expectations and understand findings from baseline and exit surveys with children and young people. We’ll now continue this conversation with a focus on the challenges of measuring soft skill development, and share some useful tools and ideas from our recent work.

The challenges of measuring soft skills and getting honest and accurate answers when working with children and young people are well documented. Common reasons include young people's limited awareness of their own skills at the start of a programme, a tendency to rate themselves generously at baseline, and a reluctance to answer openly when they feel judged or exposed, especially in peer groups.

This blog explores how we adjusted our approach to overcome these issues when evaluating the Reading Agency’s Reading Hack and the Greater London Authority’s (GLA) Stepping Stones programmes.

Lessons from Reading Hack evaluation

Reading Hack is an innovative national programme that brings together volunteering, reading advocacy and reading-inspired activities for children and young people. Our three-year evaluation was designed to measure impact and capture programme learning, while contributing to the evidence base of ‘what works’ in getting young people to read more.

The first year of research involved surveys, observation, focus groups, and in-depth interviews with young people and staff. Our fieldwork focused on four case study library authorities, while our baseline and exit surveys were co-produced with the Reading Agency and rolled out to participants across the country.

When the survey results came back, we found they told a very different story from the qualitative fieldwork. We knew from interviews, and from reflective questions included in the exit survey, that participants felt the programme was having a positive impact on them in lots of ways, yet the comparison of the baseline and exit survey results suggested that little change had taken place for young people over the course of the year. The first issue was a ceiling effect: although young people generally rated their skill development highly in the exit survey, many had also rated themselves so highly in the baseline survey that there was comparatively little measurable change. The second issue was that the baseline and exit surveys were not linked, due to concerns about collecting personal data from this age group, and Reading Hack is a rolling programme with young people joining and leaving at different times across the year. This meant there was no way to know for sure whether the young people who completed the survey at the start of the programme were the same ones who completed it at the end.

To address these challenges, we looked at what had worked well in understanding impact so far in this evaluation. Building on these strengths, and together with the Reading Agency, we adjusted our approach to the survey in Year Two. The traditional baseline and exit survey was replaced with a single reflective survey, completed at the end of each participant’s journey. This type of survey encourages self-reflection on the ‘distance travelled’ and acknowledges that participants’ awareness of the skills in question will be more developed at this stage of their journey, so young people are better able to reflect on where they think they were at the start and how that compares with where they feel they are now. It also means we can confidently capture a young person’s complete journey, solving the challenge of the unlinked baseline and exit surveys. We anticipate that this adapted approach will better capture the extent of the changes we heard about through our qualitative research.
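To give a concrete sense of why this matters, here is a minimal, hypothetical sketch (the scores below are invented, not Reading Hack data) of how an unlinked baseline and exit comparison can mask change that a reflective ‘distance travelled’ question captures:

```python
# Hypothetical illustration only: invented scores, not Reading Hack data.
# Scale: 1 (low) to 5 (high) self-rated confidence in a soft skill.

# Unlinked surveys: we only have group averages, from potentially
# different young people at baseline and exit.
baseline_scores = [4, 5, 4, 5, 4]   # ceiling effect: most rate themselves highly
exit_scores = [5, 4, 5, 5, 4]       # different respondents, similar average

baseline_mean = sum(baseline_scores) / len(baseline_scores)
exit_mean = sum(exit_scores) / len(exit_scores)
print(f"Unlinked comparison: {exit_mean - baseline_mean:+.1f} average change")

# Reflective (retrospective) survey: each young person rates where they
# feel they were at the start and where they are now, in one sitting,
# so both answers always belong to the same respondent.
reflective_responses = [
    {"then": 2, "now": 4},
    {"then": 3, "now": 5},
    {"then": 2, "now": 3},
]
distance_travelled = [r["now"] - r["then"] for r in reflective_responses]
average_distance = sum(distance_travelled) / len(distance_travelled)
print(f"Reflective survey: {average_distance:+.1f} average distance travelled")
```

In this invented example, the unlinked averages barely move because of the baseline ceiling effect, while the linked reflective responses show a clear distance travelled.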

Lessons from Stepping Stones evaluation

GLA’s Stepping Stones programme is another example of where we have recently addressed some of the challenges of evaluating soft skill development with children and young people. Here we are combining formative, summative and economic elements to understand whether the GLA’s pilot model of peer mentoring and life skills lessons does indeed support at-risk children in making smoother, more successful transitions from primary to secondary school.

In order to set an honest and accurate baseline, we are working with the GLA’s Peer Outreach Workers and using their ‘virtual young Londoner’ tool. This tool creates a fictional character and allows young people to reflect on what needs, successes and challenges their character faces. Through projection, participants are able to speak more openly about their aspirations, behaviour and attitudes without feeling overly vulnerable or exposed, especially when speaking in peer groups. This method also removes any judgment as to where participants set their baseline and exit responses. Instead of asking young people to rank themselves as ‘good’ or ‘bad’, it asks for examples of how their character might feel, react or change in different situations.

We are in the early stages of the Stepping Stones evaluation but we are taking all this learning on board as we design the next steps. As in the Reading Hack evaluation, we are also taking a multi-method approach so that we can compare findings to uncover and explore patterns and discrepancies as they arise.

The importance of sharing learning and reflection

Although the challenges discussed in this blog are not new, our aim is that through reflection and sharing our learning, tools and methods we can work together to keep uncovering what works, why, and how. This blog series aims to accelerate that conversation by sharing approaches for capturing more meaningful findings to support work with children and young people.

Be sure to check out the next blog in this series to learn more about flexible approaches to qualitative research with children and young people that can help in this endeavour.

Monday, August 15, 2016

Handle with care: a case study in using baseline and follow up surveys

The new orthodoxy

Baseline and follow-up methodologies have become the new orthodoxy when assessing the impact of interventions on programme participants, particularly when evaluating programmes with children and young people. In a context where experimental approaches are rarely seen as workable or even ethical, collecting baseline and follow-up data is increasingly the default option expected by funders and commissioners. These methodologies are also relatively cheap to deliver, which, in current times, is appealing.

The evaluator will typically convert a programme’s anticipated outcomes into a set of indicators, use these to form the basis of a survey, and then invite participants to complete the survey at the start and end of the programme. They then hope for the best, based on an assumption that the programme will see a positive shift against the indicators over the course of the intervention.
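As a minimal, hypothetical sketch of this typical set-up (the indicators and scores below are invented for illustration, not drawn from any real programme), the analysis often amounts to little more than this:

```python
# Hypothetical illustration: invented indicators and 1-5 self-rating scores,
# not data from any actual programme.

# Step 1: anticipated outcomes are converted into survey indicators.
indicators = {
    "I feel confident talking about my health": {"baseline": [3, 4, 2, 3], "follow_up": [4, 4, 3, 4]},
    "I feel happy most of the time":            {"baseline": [4, 5, 4, 4], "follow_up": [4, 4, 3, 4]},
}

# Step 2: compare average scores at the start and end of the programme.
for indicator, scores in indicators.items():
    baseline_mean = sum(scores["baseline"]) / len(scores["baseline"])
    follow_up_mean = sum(scores["follow_up"]) / len(scores["follow_up"])
    print(f"{indicator}: {follow_up_mean - baseline_mean:+.2f} average change")

# Step 3: hope the changes are positive.
```

In this invented example the first indicator improves while the second dips, which is exactly the kind of mixed picture the rest of this post is about.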

However, when the results come back there are not always neat improvements at follow-up. This is where disappointment and panic can set in. A surface-level analysis of the data might suggest that the programme has had no impact, or indeed a negative one, on participants. This is not good for busy practitioners who are juggling delivery with the pressure of proving the impact of their work, particularly when they rely on good outcomes for future funding!

Drawing on our recent experience of evaluating a range of complex social interventions, we have found that while baseline and follow-up approaches are not necessarily the wrong tool for the job, the results need to be carefully interpreted and contextualised and, where possible, cross-checked against other strands of evidence. Only then can you begin to draw out sensible conclusions about the impact that might have been achieved.

A case study: evaluating Body & Soul’s Beyond Boundaries programme

We recently conducted an impact evaluation of Beyond Boundaries, an innovative peer mentoring programme, for the HIV charity Body & Soul. The evaluation used a baseline and follow-up survey against key indicators. When the results came in, there was positive progression against the majority of the indicators: young people who had been on the programme were more likely to communicate openly and honestly about their health and their HIV status by the end of it. However, respondents answered some wellbeing indicators, around self-respect and general state of happiness, less positively.

It would have been easy to assume that the programme had had little impact on these areas of wellbeing. However, further scrutiny of the different strands of data allowed us to develop an alternative analysis.

1) Beyond Boundaries did not operate in isolation from the other factors influencing participants’ lives. Programme staff emphasised that the particular cohort they were working with led very chaotic lives and tended to experience a myriad of socioeconomic issues. It could therefore reasonably be argued that participation in the programme may have helped them to maintain a certain level of wellbeing, and possibly even prevented a more severe decline against these indicators.

2) As a consequence of receiving support and building trust with the service, there were examples of participants increasing their emotional literacy and becoming more willing and able to answer openly and honestly about how they feel. This could explain why they appraised their personal sense of wellbeing more critically at follow-up. The finding is consistent with wider evidence suggesting that young people in particular are likely to overstate their levels of wellbeing at baseline and then provide a more open and critical score at follow-up.

A further challenge was that, for a whole host of reasons, there was a high rate of attrition between participants completing the baseline and the follow-up. This meant that the survey data in isolation did not produce a robust understanding of the impact of the programme. However, when this data was taken in combination with other strands of the evaluation, it was possible to make stronger claims about impact. Triangulating the findings from the baseline and follow-up survey with case study and focus group data also allowed us to better understand participants’ journeys and to explore the impact of the programme on the most vulnerable and hard-to-reach participants, who are often the least willing to take part in evaluation activities.

Focus on why and how, not before and after

Collecting detailed qualitative feedback from programme staff and participants helped us to explore the all-important “why?” and “how?” questions, shedding light on the pathways to outcomes. This was crucial when exploring the impact of a complex service used by participants who have equally complex lives.

Tuesday, August 9, 2016

Our work with children and young people is informing best practice

We are delighted that our evaluation of the vInspired 24/24 programme has been included in the systematic review of the Early Intervention Foundation (EIF) Social and Emotional Learning Programme Assessment, contributing to the evidence base of best practice in this area.

The EIF has been funded by the Joseph Rowntree Foundation to undertake a programme of work to understand what works in developing the relational, social and emotional capabilities and assets of children and families either in, or at risk of, poverty. This work builds upon and extends an existing review carried out by the EIF in 2015: ‘Social and Emotional Learning: Skills for life and work’.

The 24/24 programme, managed by vInspired and funded by the Department for Education (DfE) and the Jack Petchey Foundation, was a structured volunteering and social action intervention, designed to help young people facing challenging circumstances to improve their life choices. The evaluation was designed to measure the progress of individuals during the programme and to evidence its impact on young people’s outcomes and against key performance indicators. It also identified the critical success factors and limitations of the delivery model and made recommendations for future delivery.

The evaluation report can be accessed here.

Thursday, June 23, 2016

Not waving, but drowning… in data? Managing data and Social Impact Bonds

This is the third blog in the series summarising key points from a speech I delivered at the 2016 Social Investing and Corporate Social Responsibility Forum, held at Meiji University in Tokyo. Here, I reflect on how the various players in a SIB can manage data requirements without drowning in them.

The importance of data

SIBs are outcomes-focussed. In the first blog of this series, I wrote about outcome metrics, and I think it is sufficiently appreciated that outcome measurement is critical to the success of SIBs. It is also widely reported that the ‘data burden’ of SIBs can be considerable. Our first interim evaluation report of the Essex County Council SIB, for example, pointed out that “information and reporting requirements of the SIB have felt onerous for all partners”.

While outcomes are vital, SIBs require more than outcome data to work. If stakeholders are not careful, they may find themselves in a position of expending disproportionate amounts of time and resources in collecting, analysing and reporting data.

We should not, however, simply accept this at face value as a ‘fact’ of SIBs. If we are to take the outcomes-focussed principle to its logical conclusion, then we must surely also be clear about the desired outcomes for data collection and its use. Starting with this helps streamline data collection and use, ensuring that we know why we are collecting certain types of data, and how we are going to use them.

As SIBs involve multiple players, each must develop clarity about data for their specific needs. In addition, the different players need to work together to minimise duplication and ensure that information is shared and that there are systems in place to support collaborative interpretation and scrutiny.

Shared approaches to data categorisation and collection

Different though SIB stakeholders may be, their approaches to data categorisation and collection are surprisingly similar.

When categorising data, there are particular ‘headings’ that the data relate to: regular performance management data, process data, impact data, and cost-benefit data. This reflects the fact that delivering a SIB effectively requires parties to monitor ongoing operational matters; constantly assess and review the implementation of the intervention(s); ascertain the degree to which implementation may be leading to the desired outcomes; and assure themselves that transactions represent good value for money.

Indeed, different SIB players often collect and/or require similar, if not identical, data. This immediately alerts us to the fact that the various players need to work collaboratively to ensure they do not duplicate efforts; share data where relevant; and streamline processes to reduce the overall burden of collecting, analysing and reporting data. Not doing so can lead to unintended additional costs for all parties, as our second Essex SIB interim evaluation report has shown.
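As a purely illustrative sketch (the data items, categories and player names below are invented rather than taken from any actual SIB reporting template), tagging each data item with one of the four headings and with the players who need it is a simple way of spotting where a single, shared collection process would do:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical categories reflecting the four headings described above;
# this is not an actual SIB reporting schema.
class DataCategory(Enum):
    PERFORMANCE_MANAGEMENT = "regular performance management data"
    PROCESS = "process data"
    IMPACT = "impact data"
    COST_BENEFIT = "cost-benefit data"

@dataclass
class DataItem:
    name: str
    category: DataCategory
    collected_by: str       # which SIB player collects it
    also_needed_by: list    # which other players need the same item

# Invented examples to show how shared items surface duplication risks.
items = [
    DataItem("referrals received per month", DataCategory.PERFORMANCE_MANAGEMENT,
             "service provider", ["outcome payer", "social investor"]),
    DataItem("fidelity to the intervention model", DataCategory.PROCESS,
             "service provider", ["evaluator"]),
    DataItem("care days avoided", DataCategory.IMPACT,
             "outcome payer", ["social investor", "evaluator"]),
    DataItem("cost per outcome achieved", DataCategory.COST_BENEFIT,
             "outcome payer", ["social investor"]),
]

# Items needed by more than one player are candidates for a single,
# shared collection and reporting process rather than parallel requests.
for item in items:
    print(f"{item.name} ({item.category.value}): collected by the "
          f"{item.collected_by}, shared with {', '.join(item.also_needed_by)}")
```

Even a simple register of this kind makes duplicated requests visible before they become a reporting burden.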

Different stakeholders use data differently

While the data required can be identical across the various players, the uses to which the same data are put can be quite different.

Outcome payers scrutinise data as part of due diligence, which can be heightened in the case of SIBs. They need to show that they have subjected the data to robust scrutiny to justify paying out to investors. They also look at data from the point of view of assessing performance against the original business case for the SIB, and to see how the SIB way of doing things compares with more conventional ways of commissioning services.

Service providers look at the data in terms of understanding the effectiveness of implementation and the efficacy of the intervention. This may be particularly true if they are not delivering a strongly evidence-based intervention, and/or if their intervention is flexible and adaptive. In addition, service providers will wish to be clear about the true cost of delivering a service under a SIB model and how it compares with other ways of ‘selling’ services. They may be interested in ‘going to market’ more widely through a SIB model, and such information is therefore crucial in helping to price appropriately and competitively. Needless to say, most if not all service providers have a strong focus on outcomes for their service users.

Social investors, from our experience, tend to look at data with an eye on what can be improved. They are always looking at how they might redirect resources, or adjust inputs and the approach, to give the SIB the best chance of success. After all, payment is linked to success. They also look at the data to assess return on investment and how it compares both with other forms of investment and with their investments in other SIBs.

Evaluators, of course, look at the bigger picture in terms of what the impact of the SIB has been and whether it adds value, over and above the operational concerns of individual SIB players. This is where I would encourage evaluators of SIBs to place emphasis on understanding the impact of the SIB, as opposed to the impact of the intervention per se. There is a real gap in our collective knowledge base in terms of how and whether SIBs add value, and whether particular models of SIBs may be more or less effective in different contexts, policy areas, or target groups.


Just as SIBs are focussed on outcomes, the exercise of collecting and analysing data for a SIB should equally be outcomes-focussed. Many commentators have noted that SIBs can be overly complex, and data requirements are often part of this complexity. Equally, commentators have pointed out that in order for SIBs to flourish and to achieve the desired degree of spread and scale, it is vital for us to work together to find ways of simplifying and streamlining core SIB components so as to reduce transaction costs.

There will always be a degree of bespoke tailoring required in specific contexts, but there are core generic components that may be simplified or made consistent. Information collection and reporting requirements seem to be one of these ‘design features’ of SIBs (to use Bridges Ventures’ terminology) that may be amenable to such simplification, thereby helping to reduce the transaction costs of SIBs.

Monday, June 20, 2016

Reading Hack evaluation interim report

Reading Hack is an innovative Reading Agency programme funded by the Paul Hamlyn Foundation. Launched in 2015, the programme works with young people aged 13-24 across England to create opportunities for reading-inspired activity, volunteering roles and peer-to-peer reading advocacy.

This is the interim report from a three-year evaluation of Reading Hack to understand its impact on the young people and organisations taking part, as well as exploring more widely what works when engaging young people with reading.

Monday, June 20, 2016

Reading Hack is a fun way for young people to recognise their self-worth

Reading Hack is an innovative Reading Agency programme funded by the Paul Hamlyn Foundation. Launched in 2015, the programme works with young people aged 13-24 across England to create opportunities for reading-inspired activity, volunteering roles and peer-to-peer reading advocacy.

OPM has been commissioned to undertake a three-year evaluation of Reading Hack to understand its impact on the young people and organisations taking part, as well as exploring more widely what works to engage young people with reading.

Our findings in the first year show that Reading Hack is popular and growing quickly, offering a unique opportunity for young people to access year-round volunteering, skills development and engagement in one place.

“I feel like I am taking part in the community – not someone on top of the community but within it helping looking after it.” Reading Hack volunteer

In its first year of delivery, 53 library authorities signed up as delivery partners and delivered approximately 1,800 Reading Hack events in 621 local libraries. Of these events, 26% happened in areas of social deprivation. Across these authorities, 5,686 young people took part as ‘Reading Hackers’ with 9,619 young people participating in events. That equates to an average of 107 volunteers per Local Authority area engaging an average of 181 other young people in activities.

Reading Hackers spoke about how fun the experience was, as well as how positive it felt to be productive and be contributing to ‘something useful.’ The programme’s focus on young people’s leadership, from developing an idea through to implementing and delivering it, has helped participants to recognise their self-worth and feel more confident in their ability to take initiative.

You can find out more information about Reading Hack here and download the interim report with our learning and recommendations here.

Thursday, June 16, 2016

Social impact bonds in Japan – it’s not always about saving money

In the previous blog in this series, I shared some key messages from a recent speech I delivered at the 2016 Social Investing and Corporate Social Responsibility Forum, held at Meiji University in Tokyo. In this blog, I summarise some of the thinking shared with Japanese colleagues on a second topic they had asked me to touch on:

Can you structure outcome metrics in a way that is not motivated directly by budgetary savings but by social wellbeing?

Level of outcomes

We can look at outcomes at the individual level, the group level and the system level. At the individual level, outcomes are obviously about achieving and improving individual wellbeing. At the group level, we may aim to improve social wellbeing. At the system level, there can be policy or political priorities that focus on social wellbeing, as well as economic benefits and budgetary ‘cashable’ savings being aimed for.

Meeting the priorities for outcomes at one level may not always lead to (or support) achievement of priorities at another level. Outcomes for individuals, for instance, may not always lead to savings for the system. An intervention aimed at reducing the social isolation of a socially excluded group may uncover previously unmet needs, resulting in these people being put in touch with other services. While this is a good thing, it does mean there is a direct service use cost, at least in the immediate short term, that was not previously there.

Wellbeing, rather than savings, as a driver

If we are to take an approach that prioritises the achievement of individual and social wellbeing over the achievement of budgetary savings, there are different ways of structuring outcome metrics in support of this goal.

(a) Taking a wider definition of outcomes and accounting for their value: This recognises that there are different perspectives on the value of the same outcome. For example, when we talk about the ‘cost of crime’, members of the public do not really think of the cost of policing, the cost of incarceration, and so on. Instead, they are likely to think about the economic, social and emotional harm inflicted on the victim and on communities experiencing crime. A wellbeing perspective will therefore need to take these into account, over and above the system costs.

(b) Acknowledging public ‘willingness to pay’: The public can value things even when the financial implications of those things may not be clear. This ‘willingness to pay’ is underpinned by what we as a society think of as ‘right’ and ‘of value’. It can be influenced by cultural, ethical or moral norms. For example, there can be a strong moral case for supporting war veterans.

Public ‘willingness to pay’ is subjective, contextual, and can change. For example, prior to the UK Government announcing its Comprehensive Spending Review (CSR) at the end of 2015, there was widespread expectation that the government would cut policing budgets significantly; indeed, the government had hinted this would happen. The CSR took everyone by surprise when it was announced that policing budgets would be protected. Why did this happen? In a nutshell, the global terrorism threat had become increasingly significant. Public expectations around policing and security had gone up the political agenda, and it would have been politically untenable to cut policing budgets in that context.

What might this look like in terms of structuring outcomes?

First, there can be situations where the primary outcome metric that triggers payment is savings-related, but a range of other wellbeing outcomes are also monitored and tracked. For example, in the Essex County Council SIB, the primary outcome metric that triggers payment is ‘care days avoided’. There is a range of secondary outcomes, such as strength of family functioning, emotional wellbeing and educational outcomes. Data on all of these are analysed and discussed by the Programme Board on a regular basis. The seriousness with which the various players consider outcomes holistically is demonstrated by the fact that the social investors have committed to re-contacting the families who received Multi-Systemic Therapy some time after the intervention has ended, to check whether a range of outcomes are sustained beyond the formal outcomes monitoring period required by the SIB.

Second, there can be outcome payers who are prepared to pay for outcomes even when they are not linked directly to savings. These outcomes may be paid for as stand-alone outcomes. One of the outcome metrics that triggers payment in the Lisbon Junior Code Academy SIB, for example, is ‘logical thinking’. Logical thinking obviously does not lead directly to budgetary savings, but it is a life skill that will serve an individual well throughout his or her life and can be applied in very different situations.

Third, rather than paying for stand-alone wellbeing outcomes, these may be blended with other, more savings-related outcomes. The Benevolent Society Social Benefit Bond in New South Wales, Australia, shows how this may be done. Payment is triggered by the Performance Percentage, which is a weighted average of three separate measures. This provides a model for how wellbeing outcomes can be woven in with others to create a composite outcome that triggers payment.
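As a purely illustrative sketch of the general idea (the measures, weights and threshold below are invented, not the actual terms of the Benevolent Society Social Benefit Bond), a composite payment metric of this kind might be calculated as follows:

```python
# Hypothetical composite outcome metric: the measures, weights and threshold
# are invented, not the actual terms of any real Social Benefit Bond.
measures = {
    # each measure: improvement relative to a comparison group, as a percentage
    "out-of-home care entries avoided": {"improvement": 12.0, "weight": 0.5},
    "child safety reports reduced":     {"improvement": 8.0,  "weight": 0.3},
    "family wellbeing score improved":  {"improvement": 15.0, "weight": 0.2},
}

# The composite is a weighted average of the separate measures
# (the weights sum to 1.0).
performance_percentage = sum(
    m["improvement"] * m["weight"] for m in measures.values()
)
print(f"Composite performance: {performance_percentage:.1f}%")

# Payment could then be linked to the composite, for example only above a threshold.
PAYMENT_THRESHOLD = 10.0  # invented threshold
if performance_percentage >= PAYMENT_THRESHOLD:
    print("Outcome payment triggered.")
else:
    print("No outcome payment this period.")
```

Because the weights sum to one, a wellbeing measure can carry real weight in the payment decision without needing to be tied to a cashable saving of its own.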


As many of the initial SIBs are underpinned by a savings logic, it is easy to think that SIBs can only be structured with this objective in mind. The outcomes focus of SIBs should challenge us to be clear about ‘outcomes for whom?’ Shifting the gaze away from system-level outcomes onto outcomes for others requires us to think of different ways to identify and structure outcome metrics to support the achievement of individual and social wellbeing.

Thursday, June 16, 2016

Evaluation of Moneytalk Bournemouth, Quaker Social Action


In August 2014 Quaker Social Action (QSA) commissioned OPM to evaluate the Moneytalk programme in Bournemouth. Moneytalk is a financial education programme, delivered over six workshop sessions, which aims to equip people with the information, knowledge and tools they need to cope with financial difficulties. The evaluation sought to assess the impact of the programme on participants and on the local organisations involved in delivery, and to explore what worked well and less well about the programme’s design and delivery, and why.

What we did

OPM conducted a qualitative evaluation, incorporating quantitative data collected and analysed by QSA into the report. We conducted in-depth telephone interviews with workshop participants, referral partners and facilitator trainees over a seven-month period, capturing the experiences and insights of those involved. This data was triangulated with monitoring information and surveys collated by QSA.


Workshop participants reported a number of benefits, including: increased confidence in dealing with their finances; an improved ability to save money, prioritise spending and budget effectively; and changed attitudes towards borrowing. Participants also learned how to talk about money with partners and family members in an honest and constructive way.

For QSA, the evaluation demonstrated that the Moneytalk programme could be adapted to work in different contexts (it was first run in London) and that local partnership arrangements were central to the success of the project. The evaluation provided both learning to inform the development of the programme, and a solid evidence base for its effectiveness.