Monday, September 19, 2016

Southend Vision for 2030 – Southend on Sea Borough Council

Background

Following an initial phase of community and stakeholder engagement carried out by Southend-on-Sea Borough Council, the OPM Group was commissioned by the council to deliver a community engagement programme to support the Borough and Council in setting their community vision. The aim of the project was to facilitate a series of conversations to explore and develop the collective aspirations of different groups within the community for the Borough, as well as identifying how these groups would work together to achieve that vision against a backdrop of reducing resources.

What we did

In order to ensure that conversations were grounded in a detailed understanding and appreciation of the challenges and opportunities facing the borough, the initial stage of the project centred on in-depth research and scoping. This comprised community and stakeholder mapping, a rapid document review of recent public engagement and service monitoring data for the borough and a series of telephone scoping interviews.

The central element of our approach focused on participation and engagement, harnessing the energy and enthusiasm of communities to ensure buy-in to the vision being developed. We carried out a series of workshops involving a wide cross-section of Southend’s communities, from businesses to grass-roots local organisations to residents more widely.

Outcome

These events comprised smaller, targeted focus-group style discussions with specific groups as well as larger open-invitation workshops, and enabled local people to contribute thoughts and ideas to the ‘narrative’ being developed so that it reflected a genuine community-wide, collaborative discussion.

Monday, September 19, 2016

Our evaluation finds end of life coordination centre a success

Our evaluation of the impact of a new End of Life Care Coordination Centre based at Royal Trinity Hospice has found that patients using the centre are less likely to be admitted to hospital or to die in hospital at the end of their lives.

The Wandsworth Care Coordination Centre was designed to address confusion among patients and their families about who to contact for help and support, due to the range of organisations involved in caring for someone approaching the end of their life.

It comprises a 7-day nurse-led coordination team and helpline for patients, families and professionals based at Trinity; a dedicated St George’s End of Life Community Nurse; and a team of Marie Curie Health and Personal Care Assistants who can offer specialised hands-on care at home for people with any terminal illness. It was commissioned by Wandsworth CCG as a two-year pilot in February 2015.

We found the centre resulted in avoided admissions and faster discharge from hospital, both of which can reduce deaths in hospital. The centre was also found to free up clinical time for local healthcare professionals. The evaluation calculated that these benefits could amount to potential savings of almost £350,000 for the NHS in the first year of the pilot, although these savings have to be offset against the cost of delivering the model.

The evaluation also found the centre improved the quality of end of life care, with patients and families feeling supported at home and reassured that they would be looked after. The quality of care arranged and provided by the centre was also deemed to have improved.

Dallas Pounds, Chief Executive Officer at Trinity, said, “Most people want to be cared for at home at the end of their lives rather than spending their final days in hospital. We now have evidence to show that the Wandsworth End of Life Care Coordination Centre is helping this become a reality for more people.

“We hope other hospices and healthcare providers are able to draw from this evaluation so more people can achieve their final wishes at the end of life.”

The final evaluation report can be found here.

Monday, August 15, 2016

Unlocking Local Capacity – four years on

Let’s start with the scene-setting – the introduction you hear at every local government conference you’ve been to for the last five years. Money is getting tighter and tighter. Demand is growing – particularly in areas like adult social care. Five years from now – short of some miraculous windfall – councils won’t be able to deliver many of the services they do at present, at least in the way they’re used to, and maybe not at all.

Next, you hear something about citizens and communities. Wherever we look for solutions – localism, behaviour change, channel shift, technology, better partnerships and so on – sooner or later relationships with communities crop up as crucial to making at least some of this work. That could mean getting better at co-designing services with citizens, as opposed to calling them out to dull consultation events every few years. At the other end of the spectrum, it could mean local organisations – or even just groups of residents – taking on a service or an aspect of a service that otherwise would no longer be sustained. Both of these activities are happening already, of course, in different places and in response to different challenges.

This much we know – and we have known it, talked about it and predicted work around it for over five years. But what we know less well is how far everyone has actually got in doing something about it. And that is what I want to find out.

So, if you work in a local authority, what is your organisation doing to build, nurture or unlock the capacity in your communities? How have you been trying to genuinely, deeply involve local people in redesigning services, or in helping them to change their lives/neighbourhoods for the better in ways that might not involve traditional council services at all? We put these questions to 30 local authorities in 2011-12 when we researched our publication ‘Unlocking Local Capacity: why active citizens need active councils’. We made the case that empowering citizens didn’t just mean councils ‘getting out of the way’; on the contrary, it demanded that councils play a very direct, active role – just working in a different way than many had been used to.

Four years on, we want to revisit those same questions and take stock of what councils are doing or planning now. For some, the constant pressure on budgets and ever-increasing demand will have put innovation around community involvement firmly on the back-burner. For others, those same challenges have been a spur to action, driven by the ambition of certain members, senior managers, officers at the coalface or other local partners to try new things. Are we seeing real, tangible results, or is it all still a work in progress?

Over the next few months I’ll be holding a series of telephone interviews with strategy and policy leads in local authorities to hear about their successes, frustrations, ambitions and plans to build local capacity and move into new, dynamic and impactful collaborations with community partners. I would love to hear from people delivering different things in a range of local authorities across England, to build up a picture of what’s happening and what works. So if you’d like to add to the debate, please do get in touch.

Monday, August 15, 2016

Handle with care: a case study in using baseline and follow up surveys

The new orthodoxy

Baseline and follow up methodologies have become the new orthodoxy when assessing the impact of interventions on programme participants – particularly when evaluating programmes with children and young people. In a context where experimental approaches are rarely seen as workable or even ethical, collecting baseline and follow up data is increasingly the default option expected by funders and commissioners. They are also relatively cheap to deliver, which, in current times, is appealing.

The evaluator will typically convert a programme’s anticipated outcomes into a set of indicators, use these to form the basis of a survey, and then invite participants to complete the survey at the start and end of the programme. They then hope for the best, based on an assumption that the programme will see a positive shift against the indicators over the course of the intervention.
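To make the mechanics concrete, here is a minimal sketch of that naive pre/post comparison (the indicator names and scores are hypothetical, invented for illustration rather than drawn from any real evaluation):

```python
# Naive baseline/follow-up comparison: mean shift per indicator.
# Indicator names and scores are hypothetical, for illustration only.

indicators = ["confidence", "openness_about_health", "general_happiness"]

# Mean score per indicator (e.g. on a 1-5 scale), keyed by respondent ID
# so that baseline and follow-up responses can be matched up.
baseline = {
    "r01": {"confidence": 2.0, "openness_about_health": 1.5, "general_happiness": 4.0},
    "r02": {"confidence": 3.0, "openness_about_health": 2.0, "general_happiness": 3.5},
}
follow_up = {
    "r01": {"confidence": 3.5, "openness_about_health": 3.0, "general_happiness": 3.0},
    "r02": {"confidence": 3.5, "openness_about_health": 3.5, "general_happiness": 3.0},
}

# Restrict to respondents who completed both surveys; attrition
# typically makes the follow-up group a subset of the baseline group.
matched = sorted(baseline.keys() & follow_up.keys())

for ind in indicators:
    pre = sum(baseline[r][ind] for r in matched) / len(matched)
    post = sum(follow_up[r][ind] for r in matched) / len(matched)
    print(f"{ind}: {pre:.2f} -> {post:.2f} (shift {post - pre:+.2f})")
```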

However, when the results come back there are not always neat improvements in the follow-up. This is where disappointment and panic can set in. A surface level analysis of the data might suggest that the programme has had no – or indeed a negative – impact on participants. This is not good for busy practitioners who are juggling delivery with the pressure of proving the impact of their work, particularly when they rely on good outcomes for future funding!

Drawing on our recent experience of evaluating a range of complex social interventions, we have found that while baseline and follow-up approaches are not necessarily the wrong tool for the job, the results need to be carefully interpreted and contextualised and – where possible – cross-checked against other strands of evidence. Only then can you begin to draw out some sensible conclusions about the impact that might have been achieved.

A case study: evaluating Body & Soul’s Beyond Boundaries programme

We recently conducted an impact evaluation of Beyond Boundaries, an innovative peer mentoring programme for HIV charity Body & Soul. The evaluation used a baseline and follow up survey against key indicators. When the results came in, there was positive progression against the majority of the indicators: young people who had been on the programme were more likely to communicate openly and honestly about their health and their HIV status by the end of it. However, respondents scored less positively on some wellbeing indicators around self-respect and general state of happiness.

It would have been easy to assume that the programme had had little impact on these areas of wellbeing. However, further scrutiny of the different strands of data allowed us to develop an alternative analysis.

1) Beyond Boundaries did not operate in isolation from other factors that influenced participants’ lives. Programme staff emphasised that the particular cohort they were working with led very chaotic lives and tended to experience a myriad of socioeconomic issues. It could reasonably be argued that participation in the programme may have helped them to maintain a certain level of wellbeing, and possibly even prevented a more severe decline against these indicators.

2) As a consequence of receiving support and building trust with the service, there were examples where participants increased their emotional literacy and became more willing and able to answer openly and honestly about how they felt. This could explain why they appraised their personal sense of wellbeing more critically. The finding is consistent with wider evidence suggesting that young people in particular are likely to overstate their levels of wellbeing at baseline and then provide more open and critical scores at follow up.

A further challenge was that, for a whole host of reasons, there was a high rate of attrition between participants completing the baseline and the follow up. This meant that the survey data in isolation did not produce a robust understanding of the impact of the programme. However, when this data was taken in combination with other strands of the evaluation, it was possible to make stronger claims about impact. Triangulating the findings from the baseline and follow up survey with case study and focus group data also allowed us to better understand participants’ journeys and to explore the impact of the programme on the most vulnerable and hard-to-reach participants, who are often the least willing to take part in evaluation activities.

Focus on why and how, not before and after

Collecting detailed qualitative feedback from programme staff and participants helped us to explore those crucial “why?” and “how?” questions, shedding light on the pathways to outcomes. This was especially important when exploring the impact of a complex service used by participants who have equally complex lives.

Tuesday, August 9, 2016

Introducing our new blog series – engaging children and young people in research and evaluation

At OPM we jump at the opportunity to work on projects involving children and young people. Not only are they tremendously fun to work with, but we passionately believe in the importance of their voices being heard in issues that affect them.

From researching young people’s views on police stop and search powers to evaluating their experiences of major government policies like the SEN and disability pathfinders, I have been privileged to meet some amazing and inspiring young people in the course of my work here. And we are constantly developing our practice in this area to make sure our research combines meaningful participation by young people, with compelling evidence generated for our clients.

OPM Group colleagues have written a series of thoughtful blogs on this topic. First we have two linked pieces on challenges around measuring impact – both highlighting examples of our recent work with young people, but with plenty of relevance for research with adults too.

Tim Vanson’s blog draws on a recent impact evaluation of an innovative peer mentoring programme for HIV charity Body & Soul to explore the current emphasis on baseline and follow up methodologies, and the challenges of using surveys in this way to demonstrate impact. When surveys don’t show the results you might expect, how do you make sense of this and present the findings so that they actually tell us something meaningful?

Caitilin McMillan and Bethan Peach focus on soft skills, which many programmes and interventions aim to improve, particularly those involving children and young people. And many succeed in doing so, but it’s notoriously hard to prove this with any accuracy or consistency. In OPM’s evaluation of the GLA’s Stepping Stones programme, we are taking a fresh approach to this using the ‘virtual young Londoner’ tool. Read about how this enables participants to speak more openly about their aspirations, behaviour and attitudes, without feeling overly vulnerable or exposed – especially when speaking in peer groups.

Finally, Bethan Peach shares her 7 top tips for conducting qualitative research with children and young people, based on her recent experiences during a project for Essex County Council evaluating the impact of early help for families, children and young people. A sneak peek: don’t assume you know what’s going to work with any given group of young people, because they’ll probably surprise you. So be prepared for anything to happen, and enjoy it when it does!

If you’d like to get in touch to discuss any of this work, or the other research we have done with children and young people, then drop me an email.

Tuesday, August 9, 2016

Our work with children and young people is informing best practice

We are delighted that our evaluation of the vInspired 24/24 programme has been included in the Early Intervention Foundation (EIF) Social and Emotional Learning Programme Assessment systematic review, contributing to the evidence base of best practice in this area.

The EIF has been funded by the Joseph Rowntree Foundation to undertake a programme of work to understand what works in developing the relational, social and emotional capabilities and assets of children and families either in, or at risk of, poverty. This work builds upon and extends an existing review carried out by EIF in 2015: ‘Social and Emotional Learning: Skills for life and work’.

The 24/24 programme, managed by vInspired and funded by the Department for Education (DfE) and the Jack Petchey Foundation, was a structured volunteering and social action intervention designed to help young people facing challenging circumstances to improve their life choices. The evaluation was designed to measure the progress of individuals during the programme and to evidence its impact on young people’s outcomes and against key performance indicators. It also identified the critical success factors and limitations of the delivery model and made recommendations for future delivery.

The evaluation report can be accessed here.

Wednesday, July 27, 2016

New report on Motor Neurone Disease models of care

The Motor Neurone Disease Association (MND Association) has published our report exploring the different ways MND care is organised across the UK. It examines the practical arrangement of MND care, such as how multi-disciplinary teams are constituted and the relationship between hospital and community services. The report provides advice for the NHS and social services on how to organise MND care effectively.

A summary of the work can be found on the MND Association website and the full report can be accessed here.

Thursday, July 21, 2016

Focus on water: PR19 engagement

Starting to think about the 2019 Price Review period? So are we! Ofwat recently released a policy paper on their expectations for the PR19 period, emphasising the importance of customer engagement and linking it clearly to pricing. To hear more about how we can help you put in place a strategy that meets regulatory requirements and delivers value for your company, get in touch with our water specialist Amelie Treppass or check out what we offer.


Thursday, June 23, 2016

Not waving, but drowning… in data? Managing data and Social Impact Bonds

This is the third blog in the series that summarises key points in a speech I delivered at the 2016 Social Investing and Corporate Social Responsibility Forum, held at Meiji University in Tokyo. Here, I reflect on the following issue:

The importance of data

SIBs are outcomes-focussed. In the first blog of this series, I wrote about outcome metrics, and I think it is sufficiently appreciated that outcome measurement is critical to the success of SIBs. It is also widely reported that the ‘data burden’ of SIBs can be considerable. Our first interim evaluation report of the Essex County Council SIB, for example, pointed out that “information and reporting requirements of the SIB have felt onerous for all partners”.

While outcomes are vital, SIBs require more than outcome data to work. If stakeholders are not careful, they may find themselves in a position of expending disproportionate amounts of time and resources in collecting, analysing and reporting data.

We should not, however, simply accept this at face value as a ‘fact’ of SIBs. If we are to take the outcomes-focussed principle to its logical conclusion, then we must surely also be clear about the desired outcomes for data collection and its use. Starting with this helps streamline data collection and use, ensuring that we know why we are collecting certain types of data, and how we are going to use them.

As SIBs involve multiple players, each must develop clarity about data for their specific needs. In addition, the different players need to work together to minimise duplication and ensure that information is shared and that there are systems in place to support collaborative interpretation and scrutiny.

Shared approaches to data categorisation and collection

Different though SIB stakeholders may be, their approaches to data categorisation and collection are surprisingly similar.

When categorising data, there are particular ‘headings’ to which data relate. Data can be categorised as regular performance management data, process data, impact data, and cost-benefit data. This points to the fact that delivering a SIB effectively requires parties to monitor ongoing operational matters; constantly assess and review the implementation of the intervention(s); ascertain the degree to which implementation may be leading to the desired outcomes; and assure themselves that transactions represent good value for money.
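To make this concrete, here is a minimal sketch of how SIB partners might tag the data they collect under a shared categorisation scheme (the four category names follow the list above; the example data items and collectors are invented for illustration):

```python
# A minimal sketch of a shared data categorisation scheme for SIB partners.
# The four categories follow the list above; example items are invented.
from enum import Enum

class DataCategory(Enum):
    PERFORMANCE_MANAGEMENT = "regular performance management data"
    PROCESS = "process data"
    IMPACT = "impact data"
    COST_BENEFIT = "cost-benefit data"

# Tagging each item with its category and collector makes duplication
# across partners visible, and therefore easier to strip out.
data_items = [
    ("monthly caseload numbers", DataCategory.PERFORMANCE_MANAGEMENT, "provider"),
    ("fidelity checks on delivery of the intervention", DataCategory.PROCESS, "provider"),
    ("primary outcome measurements", DataCategory.IMPACT, "independent assessor"),
    ("cost per outcome achieved", DataCategory.COST_BENEFIT, "outcome payer"),
]

for name, category, collector in data_items:
    print(f"{name} [{category.value}] - collected by {collector}")
```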

Indeed, different SIB players often collect and/or require similar, if not identical, data. This immediately alerts us to the fact that the various players need to work collaboratively to ensure they do not duplicate efforts; share data where relevant; and streamline processes to reduce the overall burden of collecting, analysing and reporting data. Not doing so can lead to unintended additional costs for all parties, as our second Essex SIB interim evaluation report has shown.

Different stakeholders use data differently

While the data required can be identical across the various players, the use of the same data can be quite different.

Outcome payers scrutinise data as part of due diligence, which can be heightened in the case of SIBs. They need to show that they have subjected the data to robust scrutiny to justify paying out to investors. They also look at data from the point of view of assessing performance against the original business case for the SIB, and to see how the SIB way of doing things compares with more conventional ways of commissioning services.

Service providers look at the data in terms of understanding the effectiveness of implementation and the efficacy of the intervention. This may be particularly true if they are not delivering a strongly evidence-based intervention, and/or if their intervention is flexible and adaptive. In addition, service providers will wish to be clear about the true cost of delivering a service under a SIB model and how it compares with other ways of ‘selling’ services. They may be interested in ‘going to market’ more widely through a SIB model, and such information is therefore crucial in helping to price appropriately and competitively. Needless to say, most if not all service providers have a strong focus on outcomes for their service users.

Social investors, from our experience, tend to look at data with an eye on what can be improved. They are always looking at how they might redirect resources or adjust inputs and approach to give the SIB the best chance of success. After all, payment is linked to success. They also look at the data to assess return on investment, how it compares with other forms of investment, and how it compares with their investments in other SIBs.

Evaluators, of course, look at the bigger picture in terms of what the impact of the SIB has been and whether it adds value, over and above the operational concerns of individual SIB players. This is where I would encourage evaluators of SIBs to place emphasis on understanding the impact of the SIB, as opposed to the impact of the intervention per se. There is a real gap in our collective knowledge base in terms of how and whether SIBs add value, and whether particular models of SIBs may be more or less effective in different contexts, policy areas, or target groups.

Reflection

Just as SIBs are focussed on outcomes, the exercise of collecting and analysing data for a SIB should equally be outcomes-focussed. Many commentators have noted that SIBs can be overly complex, and data requirements are often part of this complexity. Equally, commentators have pointed out that for SIBs to flourish and achieve the desired degree of spread and scale, it is vital that we work together to find ways of simplifying and streamlining core SIB components so as to reduce transaction costs.

There will always be a degree of bespoke tailoring required in specific contexts, but there are core generic components that may be simplified or made consistent. The information collection and reporting requirement seems to be one of these ‘design features’ of SIBs (to use the terminology of Bridges Ventures) that may be amenable to this, thereby contributing towards reducing the transaction costs of SIBs.

Thursday, June 16, 2016

Social impact bonds in Japan – it’s not always about saving money

In the previous blog in this series, I shared some key messages from a recent speech I delivered at the 2016 Social Investing and Corporate Social Responsibility Forum, held at Meiji University in Tokyo. In this blog, I summarise some of the thinking shared with Japanese colleagues on a second topic they had asked me to touch on:

Can you structure outcome metrics in a way that is not motivated directly by budgetary savings but by social wellbeing?

Level of outcomes

We can look at outcomes at the individual level, the group level and the system level. At the individual level, outcomes are obviously about achieving and improving individual wellbeing. At the group level, we may aim for improving social wellbeing. At the system level, there can be policy or political priorities that focus on social wellbeing. There can also be economic benefits being aimed for, as well as budgetary ‘cashable’ savings.

Meeting the priorities for outcomes at one level may not always lead to (or support) achievement of priorities at another level. Outcomes for individuals, for instance, may not always lead to savings for the system. An intervention aimed at reducing the social isolation of a socially excluded group may uncover previously unmet needs, resulting in these people being put in touch with other services. While this is a good thing, it does mean there is a direct service use cost, at least in the immediate short term, that was not previously there.

Wellbeing, rather than savings, as a driver

If we are to take an approach that prioritises the achievement of individual and social wellbeing over the achievement of budgetary savings, there are different ways of structuring outcome metrics in support of this goal.

(a) Taking a wider definition of outcomes and accounting for their value: This recognises that there are different perspectives on the value of the same outcome. For example, when we talk about the ‘cost of crime’, members of the public do not really think of the cost of policing, the cost of incarceration, etc. Instead, they are likely to think about the economic, social and emotional harm inflicted on the victim and communities experiencing crime. A wellbeing perspective will therefore need to take these into account, over and above the system costs.

(b) Acknowledging public ‘willingness to pay’: The public can value things even when the financial implications of those things may not be clear. This ‘willingness to pay’ is underpinned by what we as a society think of as ‘right’ and ‘of value’. It can be influenced by cultural, ethical or moral norms. For example, there can be a strong moral case for supporting war veterans.

Public ‘willingness to pay’ is subjective, contextual, and can change. For example, prior to the UK Government announcing its Comprehensive Spending Review (CSR) at the end of 2015, there was widespread expectation that the government would cut policing budgets significantly. In fact, the government had hinted that this would happen. The CSR took everyone by surprise when it was announced that policing budgets would be protected. Why had this happened? In a nutshell, the global terrorism threat had become increasingly significant. Public expectations around policing and security had gone up the political agenda, and it would have been politically untenable to cut policing budgets in such a context.

How might this look in terms of structuring outcomes?

First, there can be situations where the primary outcome metric that triggers payment is savings-related, but a range of other wellbeing outcomes are also monitored and tracked. For example, in the Essex County Council SIB, the primary outcome metric that triggers payment is ‘care days avoided’. There are a range of secondary outcomes, such as strength of family functioning, emotional wellbeing and educational outcomes. Data on all of these are analysed and discussed by the Programme Board on a regular basis. The seriousness with which the various players consider outcomes holistically is demonstrated by the fact that the social investors have committed to re-contacting families who have received Multi-Systemic Therapy some time after the intervention has ended, to check whether a range of outcomes are sustained beyond the duration of formal outcomes monitoring required by the SIB.

Second, there can be outcome payers who are prepared to pay for outcomes even when they are not linked directly to savings. These outcomes may be paid for as stand-alone outcomes. One of the outcome metrics that triggers payment in the Lisbon Junior Code Academy SIB, for example, is ‘logical thinking’. Logical thinking, obviously, is not something that leads directly to budgetary savings, but it is a life skill that will serve an individual well throughout his or her life and in very different situations.

Third, rather than paying for stand-alone wellbeing outcomes, these may be blended with other types of outcomes that are more savings-related. The Benevolent Society Social Benefit Bond in New South Wales, Australia, shows how this may be done. Payment is triggered by a Performance Percentage: a weighted average of three separate measures. This provides a model for how wellbeing outcomes can be woven in with others to create a composite outcome that triggers payment.
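As an illustrative sketch of such a composite (the measure names, scores, weights and payment threshold below are invented for illustration, not the Benevolent Society’s actual metrics or terms), a weighted-average outcome can be computed and used as a payment trigger like this:

```python
# Illustrative sketch of a composite outcome metric: a weighted average of
# separate measures that together trigger payment. The measure names,
# scores, weights and threshold are hypothetical, not the actual bond's.

# Each measure scored as the proportion of its target achieved (0.0 - 1.0).
measures = {
    "family_preservation": 0.80,
    "child_safety_improvement": 0.75,
    "family_wellbeing": 0.60,
}

# Weights reflect the relative priority of each measure and sum to 1,
# letting wellbeing measures sit alongside savings-related ones.
weights = {
    "family_preservation": 0.50,
    "child_safety_improvement": 0.25,
    "family_wellbeing": 0.25,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"

performance_percentage = sum(score * weights[name] for name, score in measures.items())
print(f"Performance Percentage: {performance_percentage:.1%}")

# A hypothetical payment rule: pay out only above an agreed threshold.
PAYMENT_THRESHOLD = 0.65
if performance_percentage >= PAYMENT_THRESHOLD:
    print("Outcome payment triggered.")
```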

Reflection

As many of the initial SIBs are underpinned by a savings logic, it is easy to think that SIBs can only be structured with this objective in mind. The outcomes focus of SIBs should challenge us to be clear about ‘outcomes for whom?’ Shifting the gaze away from system-level outcomes onto outcomes for others requires us to think of different ways to identify and structure outcome metrics to support the achievement of individual and social wellbeing.