Why bigger isn’t always better

Political campaigners can often be heard complaining that opinion polls do not reflect what they are hearing ‘on the doorstep’. Arguing that they have spoken to many more people than the 1,000 or so typically interviewed for a poll, they claim the polls must be biased or just plain wrong. In Scotland, the Radical Independence Campaign has carried out several ‘mass canvasses’ in which its activists have contacted over 5,000 households. After undecided voters are excluded, they report a majority for Yes of around 60% to 40%. This is almost the polar opposite of the picture presented by the polls – the most recent ‘poll of polls’ (based on an average of the last 6 published polls) puts Yes support at 43% and No at 57%.

So who is right? Well, to understand why bigger isn’t always better, it’s worth telling the story of George Gallup and the Literary Digest. In 1936, the Literary Digest magazine carried out a straw poll of 2.4 million people to find out how they planned to vote in the Presidential election. It confidently predicted that Alfred Landon would win by some margin. Meanwhile, George Gallup’s American Institute of Public Opinion predicted that Roosevelt would win, based on a much smaller sample of around 50,000. In the end, Gallup was proved correct – Roosevelt won with 61% of the vote – and his success was partly responsible for the wider adoption of modern opinion polling techniques. The Literary Digest, meanwhile, went out of business shortly afterwards.

The reason Gallup got it right and the Literary Digest got it wrong, in spite of its far bigger sample size, lay in the nature of the two samples. The Literary Digest primarily polled its own readers, as well as people on automobile registration lists. As a result, its sample was heavily biased towards those on higher incomes. The response rate to the survey was also very low – over 10 million mock ballot papers were sent out to achieve 2.4 million responses. Gallup, meanwhile, set quotas for the number of interviews with, for example, men and women, people on low and high incomes, and so on, based on what was known about the actual profile of the population as a whole. This meant that the final sample was far more representative – it ‘looked like’ the population whose views it was meant to reflect.
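To make the quota idea concrete, here is a minimal sketch in Python. The population shares and sample size are invented for illustration; they are not Gallup’s actual figures.

```python
# A minimal sketch of quota sampling: derive interview quotas from
# known population proportions. All figures below are invented for
# illustration and are not historical data.

population_profile = {
    ("male", "lower income"): 0.28,
    ("male", "higher income"): 0.22,
    ("female", "lower income"): 0.30,
    ("female", "higher income"): 0.20,
}

target_sample_size = 1_000  # roughly the size of a typical modern poll

# Each group's quota is its population share times the target size,
# so the achieved sample 'looks like' the population on these traits.
quotas = {
    group: round(share * target_sample_size)
    for group, share in population_profile.items()
}

for (gender, income), quota in quotas.items():
    print(f"{gender}, {income}: interview {quota} people")
```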

So what of the methods used by modern opinion surveys and polls? Arguably, the gold standard of survey research remains probability sampling (as used on the Scottish and British Social Attitudes surveys), whereby participants are selected at random from a list that includes all (or almost all) of the people your sample is meant to represent. Other opinion polls currently being conducted in Scotland use methods such as quota sampling (TNS BMRB), Random Digit Dialling (RDD – Ipsos MORI), and what might be termed stratified ‘volunteer’ sampling (YouGov, Panelbase, Survation and ICM, who all conduct online polls). RDD involves selecting and contacting landline numbers at random, with quotas usually set to ensure the achieved sample includes people with specific characteristics that reflect the population as a whole. Stratified volunteer samples are used for online surveys: the sample is drawn from a large database of people who have volunteered to take part, and the issued sample is typically stratified to ensure a balance of respondents of different ages, genders and so on. Most surveys and polls then apply weighting to the achieved sample to correct for any remaining imbalance in, for example, its age and gender profile. For a more detailed discussion of the methods currently being used by political polling companies in Scotland, see John Curtice’s recent article in Scottish Affairs.
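As a hedged illustration of that final weighting step, the sketch below (Python; all figures invented) computes a simple post-stratification weight for each age group as the ratio of its population share to its share of the achieved sample. Real polls weight on several variables at once, but the principle is the same.

```python
# Illustrative post-stratification weighting: each respondent in a group
# gets weight = population share / achieved-sample share, so that the
# weighted sample matches the population profile. Figures are invented.

population_share = {"16-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Suppose the achieved sample under-represents younger people:
sample_counts = {"16-34": 180, "35-54": 370, "55+": 450}
n = sum(sample_counts.values())

weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Younger respondents get weights above 1, older ones below 1,
# pulling the weighted age profile back towards the population's.
```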

It is true that none of these methods is immune to criticism. Probability samples suffer from lower response rates than they once achieved (although most surveys based on other kinds of samples do not quote any response rate at all). Quota samples might control for certain characteristics, such as age, gender and past vote, but can easily over- or under-represent characteristics for which quotas are not set – resulting, for example, in a sample that is more politically interested than the population as a whole. Volunteer samples may diverge even further from the population as a whole – by definition, volunteer panels consist of people who are more interested in taking part in surveys than people selected at random from the population. However, whatever your view of the methods of a specific survey or poll, the Literary Digest/Gallup story clearly illustrates the merits of a more scientific approach to sampling, and the dangers of assuming that simply by speaking to lots of people you will get an accurate measure of what a population as a whole is thinking.

Information about the ‘methods’ adopted by political canvassers is often thin on the ground. Typically, however, election (or referendum) canvasses involve sending as many partisan activists as are available to particular streets or areas and encouraging them to knock on as many doors as possible. The aim is usually to identify supporters who may need encouragement or help to get to a polling station on voting day. Canvassers are not set quotas that would help them achieve a sample that is representative in terms of age, gender, working status, region, urban or rural location, and so on. We might also ask whether people are likely to give a truthful answer to someone who may be wearing a badge or T-shirt that makes their own voting intentions clear. Interviewers for polling and survey companies, by contrast, are trained to be scrupulously neutral and to avoid giving anything away about their own views.

Mass canvasses may well be a useful tool for mobilising campaigners to get their messages across. But all of these issues mean that, as a mechanism for gauging the state of opinion in the population as a whole, they are far less reliable. In interpreting their findings, we should remember that a badly designed large sample tells you far less than a well-designed small sample. In other words, bigger is not always better.
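A quick simulation makes the point vividly. In the sketch below (Python; the population share and response rates are invented, loosely echoing the figures above), a small random sample gets close to the true figure, while a sample a hundred times larger, drawn with a built-in response bias, misses it badly:

```python
# Simulation: small random sample vs. large biased sample.
# Hypothetical population: 43% Yes. In the biased sample, Yes
# supporters are twice as likely to respond, as might happen when
# a partisan canvass does the asking. All numbers are invented.
import random

random.seed(1)
TRUE_YES = 0.43

def random_sample(n):
    """Every member of the population is equally likely to be asked."""
    return sum(random.random() < TRUE_YES for _ in range(n)) / n

def biased_sample(n, yes_boost=2.0):
    """Yes supporters are 'yes_boost' times as likely to take part."""
    p_yes = TRUE_YES * yes_boost / (TRUE_YES * yes_boost + (1 - TRUE_YES))
    return sum(random.random() < p_yes for _ in range(n)) / n

print(f"True Yes share:            {TRUE_YES:.1%}")
print(f"Random sample, n=1,000:    {random_sample(1_000):.1%}")
print(f"Biased sample, n=100,000:  {biased_sample(100_000):.1%}")
# Typical run: the random sample lands within a couple of points of
# 43%, while the far larger biased sample sits near 60% Yes, close
# in spirit to the canvass/poll gap described above.
```

Taking more biased samples only pins down the wrong answer more precisely: extra size shrinks random error, but does nothing to remove systematic bias.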

This blog was originally posted on The Conversation website.

About the author

Rachel Ormston is a Senior Research Director at ScotCen Social Research and co-director of the Scottish Social Attitudes survey. She regularly writes and presents on social and political attitudes and has a particular interest in attitudes to devolution and independence in Scotland.