The Long Odds of U.K. Opinion Polls

Despite their close following in the media, opinion polls remain an unreliable art.
5/5/2010
LONDON—Hardly a week has passed over the last couple of months without an opinion poll soothsaying the outcome of the U.K. elections. Despite their close following in the media, opinion polls remain an unreliable art—less reliable than the fortune-telling power of the bookies’ odds.

Past elections are littered with failed opinion polls. Even when pollsters have picked the winner, they have often been far off the mark in predicting the level of support.

In the 1997 general election, although the change of power from John Major’s Conservatives to Tony Blair’s New Labour was anticipated, the degree of the swing in voting was overestimated.

Final numbers by Gallup, Harris, MORI, and NOP were out by 6 to 8 percentage points. Another big pollster, ICM Research, came quite close to the 13-point gap between the parties, close enough for ICM's Nick Sparrow to confidently publish an assessment of his industry's failure.

But by identifying the winning party, the 1997 polls largely obscured their numerical failings and escaped the derision that greeted the 1992 predictions.

In the 1992 U.K. general election, final pollster figures were off by an average of more than 8 percentage points: the Conservative vote was forecast about 4 points lower than it turned out to be, and Labour support was overestimated by a similar amount. The Tory victory was not forecast.

Since that time, says Dominic Lawson of the Times Online, the polling organizations have sought to counter what is perceived by some to be an endemic underrepresentation of the Tory vote in polling samples.

Writing in The Independent, Mr. Lawson said, “[All] the polling organizations have followed ICM by introducing various ‘adjustments,’ which seek to make allowances for the fact that, for one reason or another, many of those who ultimately vote Conservative disguise this fact from people who ask them about their intentions.”

But Mr. Lawson speculates that this year the silent supporters may be sitting on the other side of the fence. With Labour now the most fashionable party to denigrate as hopeless and incompetent, he suggests, the reluctance of its supporters to own up could put a large dent in the pollsters' calculations.

“What if the pollsters are busily allowing for silent Tories in their methodology for polling in the 2010 campaign, only to discover too late that they should instead have been concerned that those intending to vote Labour were reluctant to admit it?”

Another uncertain factor is what critics of the opinion polls call the bandwagon effect, a kind of inverse self-fulfilling prophecy. After Ted Heath overturned pollsters' predictions and pulled off a surprise victory over Harold Wilson in 1970, the pollsters suggested the turnaround had been caused by their own polls. They argued that many Labour voters believed the polls so strongly that they thought they would not need to turn out to vote, and that Labour leader Harold Wilson would automatically walk back into Number 10.

Work by Simon Jackman, an Australian political scientist at Stanford University, indicates that the margin of error around a single poll should be acknowledged to be as large as 5 to 6 percentage points.

One British pollster, Populus, states: “Since there is no demonstrable ‘right way’ to poll on political attitudes—and views differ significantly and honorably about the best approach ... the only guarantor of integrity is openness: clearly showing the ways in which we weight poll data and why, so people can put the findings of each company in context.”

With this in mind, Populus, together with ICM and other polling companies, launched the British Polling Council (BPC) in 2004. The BPC is concerned only with polls “designed to measure the views of a specific group—for example a country’s electors (for most political polls) or parents or trade union members.”

However, the data is usually taken from a sample of one or two thousand people. The BPC’s website states: “Many polls are non-random, and response rates are often ... well below 50 percent in many countries for polls conducted over just a few days.”
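The gap between the headline precision and the real uncertainty can be sketched with the textbook formula for a proportion from a simple random sample, which the BPC's own caveat suggests is already an optimistic assumption for real polls:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of n respondents. p = 0.5 is the worst case and the
    figure pollsters conventionally quote."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 2000):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
# n=1000 gives roughly +/- 3.1 points; n=2000 roughly +/- 2.2
```

And since that margin applies to each party's share separately, the uncertainty on the lead of one party over another is roughly double, which is consistent with the 5 to 6 point buffer Jackman describes.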

Jackman says the best way to deal with this is to poll the polls—combining multiple opinion polls together to determine the true mood of the electorate.

This is generally considered good advice: averaging across polls smooths out the house effects of individual firms and shrinks the effective margin of error.
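In its simplest form, a poll of polls is just a sample-size-weighted average. The sketch below uses made-up vote shares for illustration, not real 2010 figures:

```python
def poll_of_polls(polls):
    """Combine poll results into one estimate, weighting each poll by
    its sample size. `polls` is a list of (share, sample_size) pairs."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

# Hypothetical shares for one party from three polls.
polls = [(0.36, 1000), (0.34, 1500), (0.35, 2000)]
print(f"pooled estimate: {poll_of_polls(polls):.3f}")
# pooled estimate: 0.349
```

Serious aggregators go further, adjusting for each firm's historical house effect and for how recently each poll was taken, but the weighted average is the core of the idea.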

There is some evidence to suggest that bookies are a more reliable predictor of election outcomes, as they produce a kind of statistical aggregate of the population's perception of who is likely to win, a perception that can take in subtleties and factors beyond the pollsters' assessment.

“Opinion polls tend to apply uniform national swing,” Richard Royal of Ladbrokes told U.K. Channel 4 News. “They’re in large part depending on people just ticking a box to say who they’re going to vote for whereas with political betting it’s about people putting their money where their mouth is. And of course, people behave very differently when it’s their own money at stake.”
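The market's collective judgment can be read off the odds directly: each price implies a win probability, once the bookmaker's built-in margin (the "overround") is stripped out. The odds below are hypothetical, for illustration only, not Ladbrokes' actual 2010 prices:

```python
def implied_probabilities(decimal_odds):
    """Convert bookmakers' decimal odds to implied win probabilities.
    Raw reciprocals sum to more than 1 (the bookie's margin, or
    'overround'); dividing through by that sum removes the margin."""
    raw = {name: 1 / odds for name, odds in decimal_odds.items()}
    overround = sum(raw.values())
    return {name: p / overround for name, p in raw.items()}

# Hypothetical decimal odds on most seats.
odds = {"Conservative": 1.25, "Labour": 6.0, "Lib Dem": 15.0}
for party, p in implied_probabilities(odds).items():
    print(f"{party}: {p:.1%}")
```

On these assumed prices, a 1.25 quote on the Conservatives corresponds to an implied probability of about 77 percent after the margin is removed.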

As a Huffington Post headline on the 2008 U.S. Presidential election summed it up, “The Most Accurate Election Forecast? Hardcore Gamblers.”