Monday, August 14, 2017

Demonstrating the impact and value of vision rehabilitation – a Report to the RNIB

Vision rehabilitation services are crucial to ensuring blind and partially sighted people remain as independent as possible. Now, new independent research commissioned by RNIB, with support from the Department of Health, has identified that the cost of providing vision rehabilitation services is dwarfed by the financial benefits.

Independent research by the Office for Public Management (OPM), based on a case study of services provided by Sight for Surrey, has shown that the financial benefits of good vision rehabilitation services significantly outweigh the costs of delivering them. In the case study site, over £3.4 million of health and social care costs were avoided, reduced or deferred annually by a service that cost an estimated £900,000 a year to deliver.
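As a rough illustration of the headline arithmetic, the two reported totals imply a simple benefit-cost ratio. The sketch below is our own back-of-envelope calculation, not the methodology used in the OPM study:

```python
# Back-of-envelope sketch using the two headline figures reported above.
# Illustrative only; not the OPM study's own cost-benefit method.
costs_avoided = 3_400_000  # annual health/social care costs avoided, reduced or deferred (GBP)
service_cost = 900_000     # estimated annual cost of delivering the service (GBP)

ratio = costs_avoided / service_cost
print(f"Roughly GBP {ratio:.1f} of costs avoided for every GBP 1 spent")  # ~3.8
```

On these figures the service returns nearly four pounds in avoided costs for every pound spent, which is the core of the economic case put to commissioners.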

Building on the work of our See, Plan and Provide campaign, we are now working to ensure that commissioners and other decision makers understand the economic value of providing effective vision rehabilitation services, and the long-term costs avoided, reduced or deferred for the health and social care system.

Wednesday, July 19, 2017

What’s your Corporate Responsibility value? What, how and why to measure impact

How can you know what impact your corporate responsibility activity has made? Can you know what has changed because of what you're doing?

These questions can lead many down a rabbit hole of thinking programme evaluation and impact measurement is daunting – if not impossible.

As a research consultancy, we've heard many concerns about the limitations of evaluating and measuring programmes and outcomes.

We get it. When corporate responsibility isn’t your day job, or even if it is, the process of designing research tools, talking to the right beneficiaries and analysing your data might be pushed down the long To-Do list.

But, as many participants were likely reminded of at the Heart of the City’s workshop on “Data, Information and Insight: understanding your impact” last week, measuring the value of a programme develops insights that can substantively make a case for your corporate responsibility story: to the Finance Director, the CEO and clients.

This is important. Whether we’ve come to corporate responsibility deliberately or accidentally, it’s hard to ignore the benefits a well-designed programme has on staff, communities and (eco)systems.

But what are these benefits? And, how do we know this? What can be improved?

Understanding your impact supports your organisation in demonstrating how its values and mission are being met and lets your team and organisation improve activities and programmes for the future.

However, while most organisations are great at communicating what they want to do:

“We will reduce poverty in the local area our office operates in by December.”

“We will reduce our carbon footprint by 5% by composting tea bags over 12 months.”

In our experience, far fewer are as practised at asking three further questions:

  1. What will change (for the organisation or communities) by doing what you want to do?
  2. Why will that change occur?
  3. What data do we already have that can generate this insight?

But – how can organisations ask these questions differently – or better?

One way is a tool called Theory of Change. This might be the first time you’re hearing of it, or maybe your organisation is already using the tool. But, so we’re on the same page, a recap:

The tool and thinking process to develop it:

Getting started

Last week, Tarran MacMillan and I were thrilled to work with fellow Heart of the City newcomers during our workshop, “What’s your value: how to know what, how and why to measure impact.”

During the session, we led participants through a Theory of Change exercise using our experience delivering workshops with the tool and evaluating our own corporate responsibility activity that we recently completed for the Migration Museum Project, a UK-based charity.

Example questions

  1. Action: what would your organisation like to achieve with its corporate responsibility programme?
  2. Activities: what does your organisation need to meet this goal?
  3. Change: what will happen as a result of your organisation’s corporate responsibility activities?
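The three questions above map naturally onto a simple structure. The sketch below is our own shorthand, using a hypothetical programme; it is not an official Theory of Change template:

```python
# Hypothetical example: the programme, field names and values are our own
# illustration of the Action / Activities / Change questions above.
theory_of_change = {
    "action": "improve local youth employability",  # what we want to achieve
    "activities": ["monthly CV workshops", "mentoring sessions"],
    "changes": ["young people gain interview skills",
                "staff report higher engagement"],
}

def summarise(toc: dict) -> str:
    """One-line summary of the kind a workshop facilitator might hand out."""
    return (f"To {toc['action']}, we will run {len(toc['activities'])} activities "
            f"and track {len(toc['changes'])} expected changes.")

print(summarise(theory_of_change))
```

Writing the three answers down in one place like this makes it obvious which expected changes have no activity behind them, and which activities have no expected change attached.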

A Theory of Change can be produced at any point, though it’s most useful when developed collaboratively at the outset of a programme and referred to and adapted throughout as the programmes or goals change.

Programme and activity measurement can feel like a task to be done later, but the Theory of Change reminds us that starting at the outset helps you track useful data from the beginning that you can use to demonstrate the great work you and your corporate responsibility team are doing.

Have you tried it?

If you’re interested to learn more, contact Dr Peter Welsh.


Tuesday, June 20, 2017

Accelerated Non-Medical Endoscopist Training Programme – Year 1 Evaluation (Report to Health Education England)

The Office for Public Management (OPM) was commissioned by Health Education England (HEE) to conduct an evaluation of the Non-Medical Endoscopist (NME) accelerated training pilot. The NME training pilot aimed to recruit and successfully train 40 NMEs across two cohorts. The first cohort started the programme in late January 2016 and the second cohort started the programme in mid-April 2016.

The evaluation aimed to produce both formative and summative findings about the impact and effectiveness of the training pilot. The evaluation activities consisted of:


Tuesday, May 2, 2017

The impact of learning and sharing on the development of Social Impact Bonds

In this third blog of my 2017 series inspired by my advisory visit to Japan, I reflect on the importance of international learning and sharing for improving Social Impact Bonds (SIBs). While honoured to have been an expert advisor to colleagues in Japan over the past three years, helping the country take its first steps to develop SIBs, I have also benefitted hugely from the opportunity to learn from them and others.

Here I reflect on the impact of international learning and sharing on two specific areas, based on my Japanese experience.

Role of government

In a previous blog, I argued that governments have key roles to play in supporting the growth of SIBs (and social investment more widely). As I shared the UK lessons during the Social Impact Forum at Yokohama City, I also heard from Australian colleagues who put forward a similar view. What was notable was that the New South Wales Government in Australia has actually issued a social investment policy committing to two SIB transactions per year. While the UK Government has been hugely supportive of SIBs, its support has been enacted in different ways: we have no policy committing us to a set number of SIBs per year. As Australian colleagues noted, such a policy really focusses minds and has mobilised everyone to work together. The machinery of government has been aligned to support this, for example by building in evaluation; by developing policy reviews and analyses; and by assessing the effectiveness of known interventions in priority policy areas.

Japanese colleagues, reflecting on their (still very recent) experience, observed that while the Japanese government has made certain overtures indicating interest in SIBs, they have been far less proactive and engaged in stimulating growth, compared with Australia and the UK. It has been very challenging to engage with central government, leaving local governments and their non-profit organisation partners to try lobbying for change while attempting to make things happen on a very small scale.

This comparative approach enabled us to work closely with Japanese colleagues to share specific recommendations for engaging with central government, while also drawing in lessons from related developments, such as Climate Bonds, and how these have successfully captured the imagination of governments.

The purpose of SIBs

Another area where the comparative approach surfaced important issues for scrutiny is the motivation behind SIBs. While much of the discourse in the UK, US and Australia is underpinned by a strong ‘savings’ narrative, Japan seems to be more minded to develop SIBs that are focussed squarely on improving wellbeing even when this may not lead to any discernible savings for the public purse.

In challenging the dominant discourse around SIBs, Japanese colleagues tapped into a creative seam of thinking around constructing SIBs on a very different foundation. We were able to share specific models of how this may be done, proceeding to advise Japanese colleagues about the implications for outcome metric selection and outcome modelling. At the same time, this re-focussing enabled us to build a stronger narrative and practice around more meaningful user-defined outcomes in the UK, counter-balancing the more dominant system-defined outcomes approach. I have certainly woven this into my work with Northern, Eastern and Western Devon Clinical Commissioning Group on their SIB to tackle alcohol dependency.

Conclusion

One of the downsides of working in the SIB field is that although we all assert that “things change very quickly”, we have yet to demonstrate willingness to share experiences, learning and data. Indeed, I have often encountered strong opposition towards sharing, under the guise of “commercial and/or political sensitivity”.

At the same time, we all call for transaction costs to be reduced as the current high costs make it difficult for SIBs to be sustainable. Surely one of the ways to bring down transaction costs is for better sharing of information and experiences so that others do not have to reinvent the wheel every time a new SIB is being developed. This glaring contradiction does not escape me and many others. It is time that we have the courage and humility to learn and share more widely.

Dr Chih Hoong Sin, Director, Innovation and Social Investment

Monday, April 24, 2017

Unleash the creative potential of Social Impact Bonds

In a previous blog based on my latest advisory trip to Japan, I noted that Japan is currently ‘translating’ the SIB model in order for it to be implemented in a way that is appropriate to its specific social, economic, political and cultural context. I have encouraged Japanese colleagues not to simply think that SIBs are and always will be what they currently look like. Instead, they should approach it creatively, making it work better for all. There is a risk that an innovation, such as SIBs, may be abandoned because of disillusionment with early versions of it, which may not have fulfilled the creative potential that may be on offer. Here, I describe four reasons why I think current SIBs have only scratched the surface of what may be possible.

Outcome payers

In the UK, we have become rather lazy with our terminology. I often hear people swap ‘commissioner’ for ‘outcome payer’. In doing so, there is a real risk that we limit the way we think about who outcome payers can or should be. Apart from public bodies, who else may be interested in paying for outcomes? What types of outcomes may they be interested in paying for?

If outcome payers are ever only going to be public sector commissioners, then we need to question whether SIBs are indeed channelling ‘new’ or ‘different’ funds. After all, if all outcome payments to investors are ultimately made by public sector commissioners, then the monies will only ever come from direct taxation.

Transaction costs

Read any publication or attend any conference on SIBs, and you will hear the refrain: "SIBs have high transaction costs". Again, rather than simply accept this as an immutable fact, I challenge the market to design these costs out of future SIBs. We have good evidence that this is possible, at least for some types of transaction costs. For example, we know that vested commercial interests can cause some intermediary organisations to develop SIBs that are unnecessarily complicated. Similarly, our evaluation of the Essex County Council SIB found that when the various players are more concerned about minimising risks to themselves, they can end up with a contract that is too complicated and therefore imposes ongoing costs. As our experience in Essex shows, these can be designed out of a SIB even after it has gone 'live'.

Who bears the risks?

An attraction of SIBs is that they introduce a new group of stakeholders, 'social investors', who have higher risk appetites and are socially minded. However, the fact that 7 out of the 10 SIBs in the US have over half of their value guaranteed by philanthropic organisations should make us question who is really bearing the risk. Are we attracting the 'right' types of investors into the market, or are we distorting the market to suit certain types of investors? In addition, there is evidence that some social investors try to pass on some of the risks to service providers, for example by providing part of the capital as a loan rather than as revenue. We must therefore be clear about who bears what risks, and whether these models are true to the ideals of the SIB model.

Is it all about savings?

In a blog I wrote last year, I argued that SIBs do not have to be about savings, and showed different ways of constructing alternatives. SIBs are about social outcomes. To reduce social outcomes to only those that generate financial savings for the public purse is highly limiting. This, again, draws attention to the limitations of thinking of ‘outcome payers’ only as ‘commissioners’. Even amongst commissioners, financial savings do not have to be the only motivator. We need to ask ourselves the question: “Outcomes for whom?” when we design SIBs. If we never ask service users what success looks or feels like to them, then what message are we sending out about SIBs? Whose interests do they serve?

Conclusion

Current SIBs have barely scratched the surface of what may be possible. Rather than allowing them to ossify into what they currently look like, we should challenge ourselves to keep pushing the creative potential of the idea of a SIB. In the process of doing so, we must never lose sight of outcomes and how they can be meaningfully defined.

Dr Chih Hoong Sin, Director, Innovation and Social Investment


Monday, October 10, 2016

Care Quality Commission Annual Public Awareness and Sentiment Tracking Survey 2016

Background

OPM Group was commissioned by Care Quality Commission (CQC) to carry out the latest iteration of its Annual Public Awareness and Sentiment Tracking Survey. This is a nationally representative online and telephone survey of 1,000 members of the public, designed to provide CQC with a clear understanding of how its brand, reputation and mission are perceived by the public.

What we did

The survey questions, developed in discussion with the client, had a dual purpose: some were repeated from previous years, allowing data to be compared, while others sought new information and insights.

Fieldwork was carried out in April 2016 using a mixture of telephone and online surveys. Participant contact data was purchased from an accredited consumer data supplier, to parameters that enabled us to survey a representative sample of adults (18+) living in England.

For the telephone surveys, researchers were provided with a survey script and data was collected using an online system, providing time and cost efficiencies.

Our approach allowed us to complete the field work one week ahead of schedule.

Outcome

We provided rigorous statistical analysis, allowing CQC to understand how its reputation is perceived by different demographic groups, by those with different levels of experience of the healthcare system, and in different geographical regions.

The findings were summarised in a PowerPoint report that was commended for its engaging and user-friendly presentation.

Tuesday, August 30, 2016

So what works? Our top 7 tips for qualitative research with children and young people

Previously in this blog series we have looked at challenges and ideas when using baseline surveys and how to measure soft skills when working with children and young people. In this post I talk through some ideas around approaching qualitative interview-based research and the importance of being creative and flexible in this work.

Working with children and young people calls for a more bespoke approach than the traditional research methods we tend to use with adults. I have recently been putting this into practice in a project for Essex County Council, where we are evaluating the impact of early help for families, children and young people who need support with emerging difficulties, to prevent those problems getting worse.

Here are a few things we have built in to the project so that children and young people feel comfortable engaging with the research.

1) Find out how the children and young people like to communicate

The popularity of smartphone apps and communication tools changes quickly these days, and children and young people are at the forefront of these technological developments. So it’s important not to assume that what you knew about their communication preferences still holds true, and to remember that what might be true in one location may not be in another.

For this project we arranged a workshop with a group of young people in Essex to find out what methods of communication they and their peers prefer to use, what they feel uncomfortable with, what barriers we might come across, and what methods might encourage young people to engage with our research. This gave us a great starting point for designing our approach.

2) Find out what settings or surroundings the children and young people feel comfortable in

In this project we are working across a range of different provider organisations and types of early help, so before starting the research with children and young people, we arranged to speak with the practitioners delivering the support.

We asked them what they thought the children and young people they tend to work with would prefer, for example, speaking on the phone, meeting face-to-face on a one-to-one basis, or getting a group together for an interactive session. We also asked them what surroundings they might feel most comfortable in, for example at school, at the provider’s premises, or in their homes. This has all given us a good steer as to how to set up the research to be most effective.

3) One size doesn’t fit all

Based on all this information we have been able to take a bespoke approach to designing the qualitative research, working in different ways with children and young people of different ages, in different locations, and using a variety of methods and interactions. These decisions are informed by the insights above, but ultimately the approach we take is guided by each individual, within a clear set of ethical research parameters.

4) Take your time

Depending on their experiences of interacting with adults, young people can sometimes feel suspicious or defensive when meeting someone for the first time. In these situations building rapport is a vital part of the process, so always build in additional time for this before diving straight into interview questions. To an extent this is a standard research approach but it’s especially important for working with children and young people.

5) Think about creative incentives

Typically, research incentives take the form of gift vouchers, and these can be really effective. But there are all sorts of other incentives that might encourage children and young people to take part. As part of this project we’re planning to produce videos, either as case studies that follow a few personal journeys, or in animated form. We will involve some of the children and young people in these final products, to help them see that they have contributed to something tangible and engaging, and how this might improve the help that others receive in the future.

6) Focus on a small number of questions

With young children in particular, it’s vital not to get too attached to your interview topic guide. To build trust, the interaction needs to be led by them and this is likely to mean that the conversation takes some ‘interesting’ directions before finding its way back to what you’re trying to understand for the research! Keep your research aims simple so that you can stay clear about where you’re trying to get, without worrying about exactly how you end up getting there.

7) Have plenty of materials up your sleeve

Again, for young children in particular, have plenty of options to hand to encourage conversation based on what they feel comfortable with. Children often feel uncomfortable with one-to-one face-to-face conversation – it’s not how they interact with the world. But give them a game to play, something to build, or some paper and pens and they start to relax and tell you their stories. Having said that, what works for one may not work for everyone, so always have a few options available.

It’s all worth it

This might all seem like a lot of work just to do a few interviews, but the benefits really outweigh the effort. For me the two most important benefits are:

  1. The children and young people feel comfortable and enjoy the process.
  2. We get more valuable and relevant insights because the children and young people feel at ease and in control.

And even if it takes a bit more time and effort to get there, it’s still worth it every time.

Monday, August 15, 2016

Handle with care: a case study in using baseline and follow up surveys

The new orthodoxy

Baseline and follow up methodologies have become the new orthodoxy when assessing the impact of interventions on programme participants – particularly when evaluating programmes with children and young people. In a context where experimental approaches are rarely seen as workable or even ethical, collecting baseline and follow up data is increasingly the default option expected by funders and commissioners. Such surveys are also relatively cheap to deliver, which, in current times, is appealing.

The evaluator will typically convert a programme’s anticipated outcomes into a set of indicators, use these to form the basis of a survey, and then invite participants to complete the survey at the start and end of the programme. They then hope for the best, based on an assumption that the programme will see a positive shift against the indicators over the course of the intervention.
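The mechanics can be illustrated with a toy example. The indicator names and scores below are entirely made up, not data from any evaluation:

```python
# Made-up scores on a 1-5 scale; a positive shift = apparent improvement.
baseline  = {"openness": 3.1, "self_respect": 3.9, "happiness": 4.0}
follow_up = {"openness": 4.2, "self_respect": 3.6, "happiness": 3.8}

# Per-indicator shift over the course of the intervention.
shift = {k: round(follow_up[k] - baseline[k], 2) for k in baseline}
print(shift)  # {'openness': 1.1, 'self_respect': -0.3, 'happiness': -0.2}
```

A surface reading of the negative shifts would suggest the programme harmed wellbeing, which is exactly the kind of result the case study below shows needs careful interpretation.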

However, when the results come back there are not always neat improvements in the follow up. This is where disappointment and panic can set in. A surface-level analysis of the data might suggest that the programme has had no – or indeed a negative – impact on participants. This is not good for busy practitioners who are juggling delivery with the pressure of proving the impact of their work, particularly when they rely on good outcomes for future funding!

Drawing on our recent experience of evaluating a range of complex social interventions, we have found that while baseline and follow-up approaches are not necessarily the wrong tool for the job, the results need to be carefully interpreted and contextualised and, where possible, cross-checked against other strands of evidence. Only then can you begin to draw out sensible conclusions about the impact that might have been achieved.

A case study: evaluating Body & Soul's Beyond Boundaries programme

We recently conducted an impact evaluation of Beyond Boundaries, an innovative peer mentoring programme run by the HIV charity Body & Soul. This used a baseline and follow up survey against key indicators. When the results came in, there was positive progression against the majority of the indicators: young people who had been on the programme were more likely to communicate openly and honestly about their health and their HIV status by the end of it. However, responses were less positive on some wellbeing indicators around self-respect and general happiness.

It would have been easy to assume that the programme had had little impact on these areas of wellbeing. However, further scrutiny of the different strands of data allowed us to develop an alternative analysis.

1) Beyond Boundaries did not operate in isolation from other factors that influenced participants' lives. Programme staff emphasised that the particular cohort they were working with led very chaotic lives and tended to experience a myriad of socioeconomic issues. It could reasonably be argued that participation in the programme may have been helping them to maintain a certain level of wellbeing, and possibly even prevented a more severe decline against these indicators.

2) As a consequence of receiving support and building trust with the service, there were examples where participants increased their emotional literacy and became more willing and able to answer openly and honestly about how they felt. This could explain how they became more able to appraise their personal sense of wellbeing in a more critical way. This finding is consistent with wider evidence suggesting that young people in particular are likely to overstate their levels of wellbeing at baseline and then provide more open and critical scores in the follow up.

A further challenge was that, for a whole host of reasons, there was a high rate of attrition between participants completing the baseline and the follow up. This meant that the survey data in isolation did not produce a robust understanding of the impact of the programme. However, when this data was taken in combination with other strands of the evaluation, it was possible to make stronger claims about impact. Triangulating the findings from the baseline and follow up survey with case study and focus group data also allowed us to better understand participants' journeys, and to explore the impact of the programme on the most vulnerable and hard to reach participants, who are often the least willing to take part in evaluation activities.
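The attrition point can be shown with a toy example (the participant IDs below are hypothetical): only people present at both waves can be compared, so high attrition quickly shrinks the analysable sample.

```python
# Hypothetical participant IDs for the two survey waves.
baseline_ids  = {"p01", "p02", "p03", "p04", "p05", "p06"}
follow_up_ids = {"p02", "p03", "p05"}

# Only the matched cases support a before-and-after comparison.
matched = baseline_ids & follow_up_ids
attrition_rate = 1 - len(matched) / len(baseline_ids)
print(f"{attrition_rate:.0%} attrition; only {len(matched)} matched cases to analyse")
```

With half the baseline sample missing at follow up, the survey alone cannot carry the impact claims, which is why triangulating with case studies and focus groups mattered.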

Focus on why and how, not before and after

Collecting detailed qualitative feedback from programme staff and participants helped us to explore the crucial "why?" and "how?" questions, shedding light on the pathways to outcomes. This was vital when exploring the impact of a complex service used by participants who have equally complex lives.

Tuesday, August 9, 2016

Qualitative research into Registration Assessment performance among Black-African candidates

This General Pharmaceutical Council (GPhC) report explores the experiences and performance of Black-African candidates in the Registration Assessment. For this work we held in-depth qualitative interviews and focus groups with Black-African trainees and recently registered pharmacists, as well as interviews with people involved in pharmacist education and training.

The research concludes by suggesting some actions which may help improve the experience and performance of some Black-African trainees and at the same time offer benefits to all pharmacy students and trainees, and ultimately to the pharmacy profession.