Communicate the method as well as the madness when measuring the political pulse
Friday 22 May 2015
By:
- Lawrence Finkle
If a week is considered a long time in politics, then the previous two will have felt like an eternity for some in the polling industry.
The pollsters will be relieved to see the back of the last fortnight. Traumatised by the release of the now-infamous exit poll that sent shock waves around the country at 10pm on election night, they found the following 48 hours no less nightmarish, as the British Polling Council announced an independent inquiry into their performance. Many have spent the days since desperately scouring raw data for clues, and trying to explain to clients, the public and angry politicians how the prevailing wisdom of the campaign failed to sense a Conservative majority. Some of the big players in the political narrative are now even calling for regulation of the polls in the final stages of election campaigns.
Ultimately the polling industry finds itself in such an uncomfortable position because it has a lot to answer for. Never had its services been in such high demand as in the run-up to this month’s election, when polls dominated the day-to-day framing of debate by analysts and commentators in the media. The parties themselves danced to the tune of ‘neck and neck’ and a hung parliament, shaping their strategies accordingly. Newspapers succumbed to the apparent deadlock by commissioning their own daily measurements of the nation’s political pulse – and in doing so dedicated discussion not to policy or manifestos but to electoral and Parliamentary arithmetic. Yet in practice this apparent dead heat bore no resemblance to reality and was never the prospect it appeared to be. The influence of the polling industry over political opinion during this period – and, it could be argued, over the eventual outcome – was unprecedented. What proportion of the electorate voted tactically out of fear that an opposition party was within touching distance of victory?
Yet at its most influential the polling industry was also at its most vulnerable – there are more pitfalls than ever to avoid in the exercise of polling itself. This explosion of political polling comes at a time when people lead busier lives than ever and much of the public is increasingly reluctant to answer any kind of survey – not least one seeking political opinions when engagement is supposedly at an all-time low – and is further complicated by the unusual fluidity of the electorate amid a new, multi-party landscape. Why the pollsters got it so wrong, and how wrong each individual organisation got it, is now under investigation by the psephologists. My principal concern is with the communication of their research results and methods to the electorate, because I think they got that wrong too.
The numbers themselves may be subject to critical scrutiny by the British Polling Council, but the messages that accompany the numbers clearly were not. It should have been more important than ever to ensure that the nature of polling – that polls are samples, not forecasts – and the methods by which conclusions are drawn – extrapolating from the people who can be reached to those who can’t – were communicated as convincingly to the public as the poll results themselves. At OPM we’re acutely aware of the importance of effectively communicating how our research findings have been reached, and of making the public dialogues we run – sometimes exploring complex scientific concepts – accessible, allowing meaningful participation in debate. The same principles should be adopted in the communication of election poll results, helping to shape a more politically literate public.
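The point that polls are samples, not forecasts, is easy to make concrete. A back-of-the-envelope sketch – assuming a simple random sample, which real quota or panel-based polls are not, and using an illustrative 34% vote share – shows why two parties separated by a point or two in a poll of 1,000 people are statistically indistinguishable:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures only: a party on 34% in a poll of 1,000
moe = margin_of_error(0.34, 1000)
print(f"±{100 * moe:.1f} points")  # prints "±2.9 points"
```

A roughly three-point margin either way means a reported one-point lead is well within sampling noise – before any of the harder-to-quantify errors from non-response or weighting are even considered.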
Health warnings, disclaimers – in other words, providing a context – may reduce the impact of headlines, but any loss will be far outweighed by the damage to the industry’s reputation if voting behaviour continues to be misrepresented on the scale it has been this month. It is simply not realistic to expect members of the public to appreciate, by glancing at the latest headlines, how each polling organisation weights or models its raw data to arrive at an estimate of voting intentions. What the questions were (and how they were worded), how they were answered (typically by phone or via the internet), and who answered them – in other words, the methodology – must feature as boldly as the headlines.
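To see why that weighting step matters, here is a minimal sketch of demographic weighting – one common approach, not any particular organisation’s method. All parties, age groups and counts are invented for illustration; the idea is that respondents from under-represented groups are scaled up so the sample’s profile matches the population’s:

```python
# Hypothetical raw sample: respondent counts by (age group, stated intention)
sample = {
    ("18-34", "Party A"): 150,
    ("18-34", "Party B"): 100,
    ("55+",   "Party A"): 200,
    ("55+",   "Party B"): 550,
}

# Illustrative population profile (e.g. from census data)
population_share = {"18-34": 0.40, "55+": 0.60}

total = sum(sample.values())
sample_share = {
    group: sum(n for (g, _), n in sample.items() if g == group) / total
    for group in population_share
}

# Under-represented groups get a weight above 1, over-represented below 1
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted = {}
for (group, party), n in sample.items():
    weighted[party] = weighted.get(party, 0) + n * weights[group]

weighted_total = sum(weighted.values())
estimates = {party: round(100 * n / weighted_total) for party, n in weighted.items()}
print(estimates)  # prints {'Party A': 40, 'Party B': 60}
```

In this invented sample, the raw figures put Party A on 35%; weighting the under-sampled younger group up shifts the estimate to 40%. A five-point swing from a single modelling choice is exactly the kind of detail that deserves to sit alongside the headline numbers.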
I’m not suggesting that the accurate representation of polls is the sole responsibility of the polling organisations – it’s a matter for collective consideration. But polls will no longer be taken on trust alone. The pollsters have no choice but to remedy this confusion over how results are reported and interpreted if they are to recover from what was widely considered their worst night for 23 years. Otherwise, the last two weeks may be only the beginning.