
A cautionary tale of survey sampling

Wednesday 27 January 2016

It’s rare that survey sampling gets much attention outside of methodological textbooks. But recently it has been getting just this attention – in the aftermath of the surprise result of the 2015 general election, a result that the major polling organisations failed to predict.

It’s also rare that survey sampling is considered to sway history on a particularly grand scale. But look at how hypothetical scenarios of ‘what if the polls had been different’ are cascading out, now that the Polling Inquiry into what went wrong for the pollsters in May 2015 has shared its first findings.

Although the Polling Inquiry won’t report its final findings and conclusions until March, it’s already worth reflecting on what this means for research practice.

At OPM we run research surveys with large, nationally representative samples, but also, very differently, on a much smaller scale with specific groups of respondents. These could be users of a pilot service; frontline professionals delivering particular interventions; or stakeholders in a particular project or area. Often this will involve sending surveys only to a limited set of specific people.

It may seem that what’s going on in the world of large-scale nationally representative polling has little bearing on these smaller-scale, targeted survey projects. But I’d argue that there is a cautionary tale here for practitioners of both types of surveys, which could be summed up as: ‘know your sample!’

This simple maxim has quite wide-ranging implications that affect every stage of the survey process. And if sampling isn’t carefully considered at each of these stages, quality can suffer. Below are some very basic pointers, which may seem like no-brainers but are worth considering every time you put together a survey.

  • Know who your sample is, and make sure you are asking the right people. If not, the results will be inaccurate or irrelevant; this is essentially what the pollsters got wrong in 2015, when their samples under-represented Conservative voters in the run-up to the election.
  • Know your sample’s perspective, so you can design questions that make sense to the people you want to answer them: the terminology used, the level of knowledge respondents can be assumed to have already, and the questions they can constructively respond to. Where possible, it’s worth testing this out with a smaller group before launching the full survey.
  • Know your sample’s behaviour and preferences when it comes to filling in responses, so you can use the most appropriate survey format, timing, and method.
  • Know the differences within your sample, so you can distinguish relevant subgroups. Overall results can hide very significant variations in opinion between different types of respondent (the sketch after this list shows this with invented data).
  • Know your sample’s context when analysing and reporting on findings. Some responses only make sense in light of what we know about respondents’ involvement in a service, organisation or field.
  • Know when your sample is biased. Sometimes this can be adjusted for through weighting (also shown in the sketch below); but for very targeted surveys, a degree of bias may be unavoidable. Being aware of this upfront is vital if you are to analyse the results with the appropriate pinch of salt.
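To make the last two pointers concrete, here is a minimal sketch in Python. The respondents, age groups, voting options and population shares are all invented for illustration; it shows, under those assumptions, how a subgroup breakdown can reveal variation that the headline figure hides, and how a simple post-stratification weight can adjust for a known bias (here, an over-representation of younger respondents).

```python
# A minimal, hypothetical sketch: the respondent data, population
# shares, and option labels are invented for illustration only.
from collections import Counter

# Each (hypothetical) respondent: (age_group, stated_voting_intention)
respondents = [
    ("18-34", "A"), ("18-34", "A"), ("18-34", "B"),
    ("18-34", "A"), ("18-34", "A"), ("18-34", "B"),
    ("35-54", "B"), ("35-54", "A"), ("35-54", "B"),
    ("55+",   "B"), ("55+",   "B"), ("55+",   "A"),
]

# Known population shares for each age group (e.g. from census data).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# 1. Unweighted headline figure: skewed if one group is over-represented.
raw = Counter(intention for _, intention in respondents)
print("Unweighted:", {k: v / len(respondents) for k, v in raw.items()})

# 2. Subgroup breakdown: the overall figure can hide large variations.
for group in population_share:
    answers = [i for g, i in respondents if g == group]
    share_a = answers.count("A") / len(answers)
    print(f"{group}: {share_a:.0%} for A (n={len(answers)})")

# 3. Post-stratification: weight each respondent so the sample's
#    age profile matches the population's (population / sample share).
sample_share = Counter(g for g, _ in respondents)
weights = {g: population_share[g] / (sample_share[g] / len(respondents))
           for g in population_share}

weighted_a = sum(weights[g] for g, i in respondents if i == "A")
total_weight = sum(weights[g] for g, _ in respondents)
print(f"Weighted share for A: {weighted_a / total_weight:.0%}")
```

In this invented example the unweighted figure is 50% for option A, but the subgroups split 67% / 33% / 33%, and weighting the over-represented younger respondents down shifts the headline to about 43% – the same mechanism, in miniature, by which under-sampling one group of voters skewed the 2015 polls.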

Particularly with the advent of online surveys, it may seem that the rulebook for survey practice needs to be rewritten. But I’d argue that this maxim matters no matter the method, sample, or scale. And it’s always better to check twice beforehand than to notice your mistake only afterwards – just ask the pollsters from May 2015.