Wednesday, February 22, 2017

Independent evaluation of the Essex Multi Systemic Therapy Social Impact Bond

For the past three years OPM has undertaken an independent evaluation of the Essex County Council Multi Systemic Therapy (MST) Social Impact Bond (SIB). The Social Impact Bond delivers MST to children and young people at risk of being taken into local authority care. The Social Impact Bond has been managed by Children’s Support Services Ltd, a Special Purpose Vehicle established by Social Finance.

The purpose of OPM’s work has been to evaluate the potential added value that local authorities and other commissioners can achieve by using Social Impact Bonds as a mechanism for financing the delivery of new services. Specifically, we examined whether the use of a SIB affected the implementation of MST and whether it added significant value to outcomes or performance.

The evaluation highlights many instances where the use of the SIB has been seen to add value to systems and processes, and indirectly to overall performance. It also draws attention to some additional costs and complexities which may result from operating through a SIB, and how these can be mitigated. Drawing on a variety of quantitative and qualitative evidence and insights, OPM has now summarised its findings and used these to inform a set of recommendations applicable to organisations that may be considering developing SIBs of their own.

In addition, OPM has drawn on the evaluation findings to develop a new guide, ‘Top Tips for developing and implementing a Social Impact Bond’ – an interactive document for commissioners, providers, funders and managers. While it summarises the Essex experience and includes examples from Essex to illustrate specific points, the guide also brings in themes from the wider evidence base, with the aim of distilling lessons that have wider applicability.

The Essex SIB was the first local-authority-commissioned SIB to be established in the UK, and the experience and findings reflect its innovative nature. It was launched in 2013 and will be operational for five years, concluding in 2018. Two teams from Action for Children have delivered an MST intervention to approximately 260 young people to date, resulting in an approximately 80% rate of successful care diversion. MST is an evidence-based programme that seeks to improve parenting and rebuild positive family relationships, enabling families to manage future crisis situations themselves.

Friday, January 27, 2017

Evaluation report – Learning into practice project

OPM was commissioned to produce an evaluation report on the Learning into Practice Project (LiPP), which was funded under the DfE’s Innovation Programme and run by the NSPCC and SCIE.

The LiPP was testing a proof of concept – aiming to establish what is needed on an ongoing and sustainable basis to improve the quality and use of Serious Case Reviews (SCRs) in England. The LiPP consisted of four main workstreams:

We were asked to explore:

The evaluation involved 63 qualitative interviews with those involved in LiPP activities, and an online survey aimed at non-participants in the LiPP activities to explore wider views on the proposals; 126 people completed the survey. Alongside the external evaluation, the project team conducted an internal evaluation of the LiPP, focussing on describing the mechanisms being tested and the emerging learning from these.

Please find this report and other evaluation reports under the same Innovation Programme on DfE’s website.

Tuesday, September 20, 2016

Reflections on evaluating the Wandsworth Care Coordination Centre

We recently completed our evaluation of the Wandsworth Care Coordination Centre, based at Royal Trinity Hospice, a pilot service providing coordination and care to people approaching the end of life.

Overall we found the pilot to have been a success, with positive impacts for people at the end of life, their families and carers, as well as for other professionals working in the local end of life care system. Rather than repeating all our detailed findings here, I wanted to reflect on a few points which I personally found particularly interesting about this project and what the evaluation discovered.

The holistic view

Firstly, one of the real strengths of the Centre is how, through coordinating people’s end of life care support, it has an overview of people’s experiences and needs, and brings together all the services that help them in their last months and days. We found that this was positive for people and their families, as well as representing a more effective way of working for other health and social care professionals involved in end of life care. The crucial point here is that the Centre is able to take a holistic view of a person at the end of life and their care, which is invaluable in the complex end of life care landscape. The importance of this holistic view in end of life care is now widely recognised – not least as a key quality principle in NICE’s guidance for end of life care.

Secondly, what is true for the individual at the end of life, in terms of taking a holistic view, also proved true for the local end of life care system as a whole. It’s important to remember that the success of the Centre is linked to how well it interacts with other parts of Wandsworth’s local system, from district nurses and GPs, via Royal Trinity Hospice’s other services, to physiotherapists and specialised CNSs, relaying information back and forth, and coordinating the efforts and dedication of all the other professionals involved. But through coordinating all this activity, the Centre also gains a kind of holistic perspective on the local system overall; and in occupying this central, coordinating position, the Centre is able to identify systemic blocks in local processes and try to have them resolved. In this way, it was able to improve pathways – ultimately improving care provision for people. This represents an added benefit of having a coordinating function in a system as complex as end of life care.

Contributing to an under-researched field

The last point that really piqued my interest was methodological. We knew that the role of coordination in end of life care had been under-researched and under-evaluated. Particularly with regard to economic evaluation approaches for end of life care and care coordination, the evidence base is still rather slim (with some exceptions, such as the NAO’s study and the economic evaluation of EPaCCS).

In our own small-scale, local economic evaluation of the Centre, we mainly worked with the Centre’s existing data, but were nonetheless able to demonstrate monetisable benefits from the Centre’s work. Much of this hinges on assigning a value to the reduction in avoidable admissions – a reduction that is notoriously difficult to quantify – and attribution is tricky in a system with multiple interventions. Nonetheless, we were able to run through scenarios for the kind of monetisable benefits the Centre may be achieving, suggesting that there is value in coordinating functions for end of life care (something we also found in a previous study we undertook for Macmillan).
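
To make the logic of these scenarios concrete, here is a minimal sketch of the kind of calculation involved, written in Python. Every figure in it – the unit cost of an admission, the number of admissions avoided, and the share attributed to the Centre – is invented for illustration and is not taken from the Wandsworth evaluation or the Centre’s data.

```python
# Illustrative scenario calculation for monetisable benefits from avoided
# hospital admissions. All numbers below are hypothetical placeholders,
# not figures from the Wandsworth evaluation.

UNIT_COST_PER_ADMISSION = 2500  # assumed cost of one avoided emergency admission (GBP)

# scenario name: (admissions avoided per year, share attributable to the coordination centre)
scenarios = {
    "conservative": (50, 0.25),
    "central": (100, 0.50),
    "optimistic": (150, 0.75),
}

for name, (avoided, attribution) in scenarios.items():
    gross_benefit = avoided * UNIT_COST_PER_ADMISSION
    attributed_benefit = gross_benefit * attribution
    print(f"{name:>12}: gross £{gross_benefit:,.0f}, attributed £{attributed_benefit:,.0f}")
```

Running through a range of scenarios like this, rather than relying on a single point estimate, is one way of acknowledging how uncertain both the admission counts and the attribution share are.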

There’s clearly still a lot more research to be done, but it’s been great to contribute to the development of knowledge in this field through our evaluation of the Centre. Of course, no two models are exactly the same, with local context a huge factor influencing variations in service provision; but hopefully the lessons we identified from Wandsworth can help other commissioners and service providers in considering whether such a model would work for them elsewhere.

The full report from our evaluation can be found here.

If you want to talk to us about how we can support you to evaluate services in end of life care, please contact Chih Hoong Sin.

Monday, September 19, 2016

Our evaluation finds end of life coordination centre a success

Our evaluation looking at the impact of a new End of Life Care Coordination Centre based at Royal Trinity Hospice has found that patients using this centre are less likely to be admitted to hospital or to die in hospital at the end of their lives.

The Wandsworth Care Coordination Centre was designed to address confusion among patients and their families about who to contact for help and support, due to the range of organisations involved in caring for someone approaching the end of their life.

It comprises a 7 day nurse-led coordination team and helpline for patients, families and professionals based at Trinity; a dedicated St George’s End of Life Community Nurse; and a team of Marie Curie Health and Personal Care Assistants who can offer specialised hands-on care at home for people with any terminal illness. It was commissioned by Wandsworth CCG as a two year pilot in February 2015.

We found the centre resulted in avoided admissions and faster discharge from hospital, which could reduce deaths in hospital. The centre was also found to free up clinical time for local healthcare professionals. The evaluation calculated these benefits could amount to potential savings of almost £350,000 to the NHS in the first year of the pilot, although these have to be offset against the cost of delivering the model.

The evaluation also found the centre improved the quality of end of life care through patients and families feeling supported at home and feeling reassured that they will be looked after. The quality of care arranged and provided by the centre was also deemed to be improved.

Dallas Pounds, Chief Executive Officer at Trinity, said, “Most people want to be cared for at home at the end of their lives rather than spending their final days in hospital. We now have evidence to show that the Wandsworth End of Life Care Coordination Centre is helping this become a reality for more people.

“We hope other hospices and healthcare providers are able to draw from this evaluation so more people can achieve their final wishes at the end of life.”

The final evaluation report can be found here.

Monday, August 22, 2016

Reflect, adjust and share learning: how to measure soft skill development in children and young people

So far in this blog series, we’ve talked about how to manage expectations and understand findings from baseline and exit surveys with children and young people. We’ll now continue this conversation with a focus on the challenges of measuring soft skill development, and share some useful tools and ideas from our recent work.

The challenges of measuring soft skills and getting honest and accurate answers when working with children and young people are well documented. Common reasons for this include:

This blog explores how we adjusted our approach to overcome these issues when evaluating the Reading Agency’s Reading Hack and Greater London Authority (GLA)’s Stepping Stones programmes.

Lessons from Reading Hack evaluation

Reading Hack is an innovative national programme that brings together volunteering, reading advocacy and reading-inspired activities for children and young people. Our three-year evaluation was designed to measure impact and capture programme learning, while contributing to the evidence base of ‘what works’ in getting young people to read more.

The first year of research involved surveys, observation, focus groups, and in-depth interviews with young people and staff. Our fieldwork focused on four case study library authorities, while our baseline and exit surveys were co-produced with the Reading Agency and rolled out to participants across the country.

When the survey results came back to us, we found they told a very different story from the qualitative fieldwork. We knew from interviews, and from reflective questions included in the exit survey, that participants felt the programme was having a positive impact on them in lots of ways, but the comparison of the baseline and exit survey results suggested that little change had taken place for young people over the course of the year. The first issue was that although young people had generally ranked their skill development highly in the exit survey, many had also ranked themselves so highly in the baseline survey that there was comparatively little change. The second issue was that the baseline and exit surveys were not linked, due to concerns about collecting personal data from this age group, and Reading Hack is a rolling programme with young people joining and leaving at different times across the year. This meant there was no way to know for sure whether the same young people who completed the survey at the start of the programme were those who completed it at the end.

To address these challenges, we looked at what had worked well in understanding impact so far in this evaluation. Building on these strengths, and together with the Reading Agency, we adjusted our approach to the survey in Year Two. The traditional baseline and exit survey was replaced with a reflective survey to be filled out at the end of the participants’ journey. This type of survey encourages self-reflection on the ‘distance travelled’ and acknowledges that participants’ awareness of the skills in question will be more developed at this stage in their journey. This means the young people are likely to be more able to reflect on where they think they were at the start and how this compares to where they feel they are now. Secondly, a reflective survey means we can confidently capture a young person’s complete journey, solving our challenge of the unlinked baseline and exit surveys. We anticipate that this adapted approach will better capture the extent of the changes we heard about through our qualitative research.
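
For readers interested in the mechanics, the sketch below shows how ‘distance travelled’ might be scored from a reflective survey of this kind. The skill areas and responses are invented for illustration rather than taken from Reading Hack data; the key point is that the retrospective ‘at the start’ rating and the ‘now’ rating come from the same young person in the same sitting, so the two are always linked.

```python
# Hypothetical 'distance travelled' scoring from a reflective (retrospective
# pre/post) survey. Each pair is (retrospective 'at the start' score, 'now'
# score) on a 1-5 scale, given by one respondent in a single sitting.
from statistics import mean

responses = {
    "confidence speaking to groups": [(2, 4), (3, 4), (1, 3)],
    "working with others": [(3, 5), (4, 4), (2, 4)],
}

for skill, pairs in responses.items():
    distances = [now - start for start, now in pairs]
    print(f"{skill}: mean distance travelled = {mean(distances):+.2f} points")
```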

Lessons from Stepping Stones evaluation

GLA’s Stepping Stones programme is another example of where we have recently addressed some of the challenges of evaluating soft skill development with children and young people. Here we are combining formative, summative and economic elements to understand whether the GLA’s pilot model of peer mentoring and life skills lessons does indeed support at-risk children to have smoother, more successful transitions from primary to secondary school.

In order to set an honest and accurate baseline, we are working with the GLA’s Peer Outreach Workers and using their ‘virtual young Londoner’ tool. This tool creates a fictional character and allows young people to reflect on what needs, successes and challenges their character faces. Through projection, participants are able to speak more openly about their aspirations, behaviour and attitudes without feeling overly vulnerable or exposed, especially when speaking in peer groups. This method also removes any judgment as to where participants set their baseline and exit responses. Instead of asking young people to rank themselves as ‘good’ or ‘bad’, it asks for examples of how their character might feel, react or change in different situations.

We are in the early stages of the Stepping Stones evaluation but we are taking all this learning on board as we design the next steps. As in the Reading Hack evaluation, we are also taking a multi-method approach so that we can compare findings to uncover and explore patterns and discrepancies as they arise.

The importance of sharing learning and reflection

Although the challenges discussed in this blog are not new, our aim is that through reflection and sharing our learning, tools and methods we can work together to keep uncovering what works, why, and how. This blog series is part of accelerating this conversation and strives to share approaches for capturing more meaningful findings to support work with children and young people.

Be sure to check out the next blog in this series to learn more about flexible approaches to qualitative research with children and young people that can help in this endeavour.

Monday, August 15, 2016

Handle with care: a case study in using baseline and follow up surveys

The new orthodoxy

Baseline and follow-up methodologies have become the new orthodoxy when assessing the impact of interventions on programme participants – particularly when evaluating programmes with children and young people. In a context where experimental approaches are rarely seen as workable or even ethical, collecting baseline and follow-up data is increasingly the default option expected by funders and commissioners. They are also relatively cheap to deliver – which, in current times, is appealing.

The evaluator will typically convert a programme’s anticipated outcomes into a set of indicators, use these to form the basis of a survey, and then invite participants to complete the survey at the start and end of the programme. They then hope for the best, based on an assumption that the programme will see a positive shift against the indicators over the course of the intervention.
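
As a rough illustration of what this default analysis looks like in practice, the short sketch below compares mean baseline and exit scores per indicator. The indicator names and scores are invented and are not drawn from any of the evaluations discussed in this post.

```python
# Naive baseline vs exit comparison of mean scores per indicator.
# All indicator names and scores are hypothetical.
from statistics import mean

baseline_scores = {"self-confidence": [4, 5, 3, 4], "happiness": [5, 4, 5, 4]}
exit_scores = {"self-confidence": [4, 5, 4, 5], "happiness": [4, 4, 4, 3]}

for indicator in baseline_scores:
    shift = mean(exit_scores[indicator]) - mean(baseline_scores[indicator])
    print(f"{indicator}: mean shift = {shift:+.2f}")
```

In this invented example the ‘happiness’ score actually falls – exactly the kind of result discussed below.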

However, when the results come back there are not always neat improvements in the follow-up. This is where disappointment and panic can set in. A surface-level analysis of the data might suggest that the programme has had no – or indeed a negative – impact on participants. This is not good for busy practitioners who are juggling delivery with the pressure of proving the impact of their work, particularly when they rely on good outcomes for future funding!

Drawing on our recent experience of evaluating a range of complex social interventions, we have found that while baseline and follow-up approaches are not necessarily the wrong tool for the job, the results need to be carefully interpreted and contextualised and – where possible – cross-checked against other strands of evidence. Only then can you begin to draw out sensible conclusions about the impact that might have been achieved.

A case study: evaluating Body & Soul Beyond Boundaries programme

We recently conducted an impact evaluation of Beyond Boundaries, an innovative peer mentoring programme for the HIV charity Body & Soul. This used a baseline and follow-up survey against key indicators. When the results came in, there was positive progression against the majority of the indicators: young people who had been on the programme were more likely to communicate openly and honestly about their health and their HIV status by the end of the programme. However, respondents scored some wellbeing indicators, around self-respect and general state of happiness, less positively.

It would have been easy to assume that the programme had had little impact on these areas of wellbeing. However, further scrutiny of the different strands of data allowed us to develop an alternative analysis.

1) Beyond Boundaries did not operate in isolation from other factors that influenced participants’ lives. Programme staff emphasised that the particular cohort they were working with led very chaotic lives and tended to experience a myriad of socioeconomic issues. It could reasonably be argued that participation in the programme may have been helping them to maintain a certain level of wellbeing, and possibly even prevented a more severe decline against these indicators.

2) As a consequence of receiving support and building trust with the service, there were examples where participants increased their emotional literacy and became more willing and able to answer openly and honestly about how they feel. This could explain how they became more able to appraise their personal sense of wellbeing in a more critical way. This finding is consistent with wider evidence suggesting that young people in particular are likely to overstate their levels of wellbeing at baseline and then provide more open and critical scores in the follow-up.

A further challenge was that, for a whole host of reasons, there was a high rate of attrition between participants completing the baseline and the follow-up. This meant that the survey data in isolation did not produce a robust understanding of the impact of the programme. However, when this data was taken in combination with other strands of the evaluation, it was possible to make stronger claims about impact. Triangulating the findings from the baseline and follow-up survey with case study and focus group data also allowed us to better understand participants’ journeys and to explore the impact of the programme on the most vulnerable and hard-to-reach participants, who are often the least willing to take part in evaluation activities.

Focus on why and how, not before and after

Collecting detailed qualitative feedback from programme staff and participants helped us to explore those crucial “why?” and “how?” questions, shedding light on the pathways to outcomes. This was essential when exploring the impact of a complex service used by participants with equally complex lives.

Monday, June 20, 2016

Reading Hack evaluation interim report

Reading Hack is an innovative Reading Agency programme funded by the Paul Hamlyn Foundation. Launched in 2015, the programme works with young people aged 13-24 across England to create opportunities for reading-inspired activity, volunteering roles and peer-to-peer reading advocacy.

This is the interim report from a three year evaluation of Reading Hack to understand its impact on the young people and organisations taking part, as well as exploring more widely what works when engaging young people with reading.

Monday, June 20, 2016

Reading Hack is a fun way for young people to recognise their self-worth

Reading Hack is an innovative Reading Agency programme funded by the Paul Hamlyn Foundation. Launched in 2015, the programme works with young people aged 13-24 across England to create opportunities for reading-inspired activity, volunteering roles and peer-to-peer reading advocacy.

OPM has been commissioned to undertake a three year evaluation of Reading Hack to understand its impact on the young people and organisations taking part, as well as exploring more widely what works to engage young people with reading.

Our findings in the first year show that Reading Hack is popular and growing quickly, offering a unique opportunity for young people to access year-round volunteering, skills development and engagement in one place.

“I feel like I am taking part in the community – not someone on top of the community but within it helping looking after it.” Reading Hack volunteer

In its first year of delivery, 53 library authorities signed up as delivery partners and delivered approximately 1,800 Reading Hack events in 621 local libraries. Of these events, 26% happened in areas of social deprivation. Across these authorities, 5,686 young people took part as ‘Reading Hackers’ with 9,619 young people participating in events. That equates to an average of 107 volunteers per Local Authority area engaging an average of 181 other young people in activities.

Reading Hackers spoke about how fun the experience was, as well as how positive it felt to be productive and be contributing to ‘something useful.’ The programme’s focus on young people’s leadership, from developing an idea through to implementing and delivering it, has helped participants to recognise their self-worth and feel more confident in their ability to take initiative.

You can find out more information about Reading Hack here and download the interim report with our learning and recommendations here.

Thursday, June 16, 2016

Evaluation of Moneytalk Bournemouth, Quaker Social Action

Background

In August 2014 Quaker Social Action (QSA) commissioned OPM to evaluate the Moneytalk programme in Bournemouth. Moneytalk is a financial education programme, delivered over six workshop sessions, which aims to equip people with the information, knowledge and tools they need to cope with financial difficulties. The evaluation sought to assess the impact of the programme on participants and the local organisations involved in delivery, and to explore what worked well and less well about the programme’s design and delivery, and why.

What we did

OPM conducted a qualitative evaluation, incorporating quantitative data collected and analysed by QSA into the report. We conducted in-depth telephone interviews with workshop participants, referral partners and facilitator trainees over a seven-month period, capturing the experiences and insights of those involved. This data was triangulated with monitoring information and surveys collated by QSA.

Impact

Workshop participants reported a number of benefits, including: increased confidence in dealing with their finances; improved ability to save money, prioritise spending and budget effectively; and changes in attitude towards borrowing. Participants also learned how to talk about money with partners and family members in an honest and constructive way.

For QSA, the evaluation demonstrated that the Moneytalk programme could be adapted to work in different contexts (it was first run in London) and that local partnership arrangements were central to the success of the project. The evaluation provided both learning to inform the development of the programme, and a solid evidence base for its effectiveness.


Tuesday, June 7, 2016

New evaluation of STEM Learning Triple Science Support Programme

We have just completed a two-year evaluation project for STEM Learning (a national body providing schools with continuous professional development in science, technology, engineering and maths), looking at the impact of the Triple Science Support Programme (TSSP) in 2014-2016. The TSSP supports schools to develop their triple science provision at GCSE level. In our evaluation we spoke to teachers and delivery staff, and found that the TSSP has helped schools make considerable improvements in their triple science provision. The programme had a range of positive impacts, including raising subject teachers’ confidence; improving science departments’ capacity and capability to teach triple science; and improving student outcomes such as motivation, progress and attainment in triple science. Download the full report here.