Although those with a keen interest in the federal election pay close attention to polls, political scientists and pollsters say poll questions, available answers, and skewed headlines can mislead voters.
Continual opinion polls throughout election campaigns help candidates, parties, and political strategists to gauge support and adjust their approach accordingly. However, poll results released to the public may also affect what voters themselves do, according to Tom Flanagan, a political science academic who has led various candidate and party campaigns.
“Polling results can have an effect by encouraging strategic voting in multi-party races,” Flanagan told The Epoch Times.
“Current polls showing the Conservatives in the lead may well encourage some [NDP supporters] to hold their noses and vote Liberal. I’m sure there is some political science literature on this, but strategic voting is hard to study because people don’t like to give straight answers about voting for a second choice.”
The professor emeritus from the University of Calgary said it’s in pollsters’ financial interest to have accurate poll results because credibility leads to clients. Even so, pollsters may fail.
“Pollsters are human beings with political views of their own. No matter how hard they try to be objective, their personal commitments might colour their choice of questions and the wording. But I don’t think they consciously aim to distort their findings; there’s just no payoff in it.”
University of Lethbridge political science professor Geoffrey Hale agrees that most pollsters try to get it right, but when they do fail, it's hard to pinpoint why.
“Pollsters can display a ‘house bias,’ although how much is methodological and how much is ‘torque’ is open to interpretation by those who study such things,” Hale said in an interview.
“There have been a few companies this fall that haven’t been too careful in this regard. Poll aggregation … may have methodologies that project certain assumptions about the unfolding of a race, which may or may not pan out in reality but which can [influence voter perception and behaviour] when combined with the editorial stance of a particular newspaper.”
Given how far off the polls were regarding the Progressive Conservatives’ upset win in Nova Scotia last month, “pollsters who want to protect their reputations should stay in the field as close as possible to election day to catch last-minute shifts,” Hale said.
Challenges for Pollsters
Mario Canseco, president of Research Co., told The Epoch Times that large regional fluctuations are more likely when the sample size is small. He said a poll of 1,000 voters achieves a reasonable +/- 3.1 percent margin of error at a national level, but does less well regionally.
“Let’s say you have 150 people in B.C. and about 110 in Alberta, that will bring a margin of error that is closer to seven or eight percent, depending on the population,” Canseco told The Epoch Times.
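The numbers Canseco cites track the standard worst-case margin-of-error formula for a simple random sample at 95 percent confidence. A minimal sketch (assuming simple random sampling, which real poll designs only approximate, so the regional figures land near but not exactly on his seven-to-eight-percent estimate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error at 95% confidence for a simple
    random sample of size n (p=0.5 maximizes the variance term)."""
    return z * math.sqrt(p * (1 - p) / n)

# National sample of 1,000 voters: roughly +/- 3.1 points
print(round(margin_of_error(1000) * 100, 1))  # 3.1
# Regional subsamples widen fast: 150 respondents in B.C., 110 in Alberta
print(round(margin_of_error(150) * 100, 1))   # 8.0
print(round(margin_of_error(110) * 100, 1))   # 9.3
```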
Sean Simpson, vice president of public affairs at Ipsos, said asking the right people in the right numbers is foundational to accurate results.
“The first key is in reducing sources of sampling error, which is making sure that we are talking to a sample that reflects the population that we’re studying,” he said.
Simpson said Ipsos’s national polls are careful to maintain representative counts of regions, sex, and educational level. Pollsters who fail, he says, get “samples that are highly skewed, and then which require very significant weighting in order to essentially manufacture the representative sample.”
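The weighting Simpson describes is commonly done by post-stratification: each respondent's weight is their group's population share divided by its sample share, so an under-sampled group counts for more. A toy sketch (all regional shares and counts below are invented for illustration; weights far from 1.0 are the "very significant weighting" he warns about):

```python
# Hypothetical population shares and raw sample counts by region
population_share = {"BC": 0.135, "Alberta": 0.115, "Ontario": 0.385, "Rest": 0.365}
sample_counts = {"BC": 150, "Alberta": 110, "Ontario": 420, "Rest": 320}

n = sum(sample_counts.values())  # 1,000 respondents in total
# weight = population share / sample share for each region
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

A weight of 0.90 for B.C. here means B.C. respondents were slightly over-sampled and are scaled down; a badly skewed sample would need weights several times larger or smaller than 1.0, amplifying noise from the under-represented groups.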
Even a demographically accurate and large sample can be challenged by varying rates of voter turnout, Simpson said, with this election having “the added complication of COVID-19.”
“We know from our polling for Global News that one in four Canadians say that they don’t feel safe going to vote in person.”
Canseco said some clients approach pollsters with very specific ideas on the question to be asked and the answers to be made available, but a pollster with integrity must insist on a proper approach.
“It has to be the right question. … You shouldn’t really load the questionnaire in a way that makes people side with whatever you’re saying.”
He recalls that at a previous employer, a political opponent of then-Conservative federal finance minister Jim Flaherty proposed a poll question on the 2006 decision to tax income trusts.
“I remember saying, ‘There’s no way we can ask this. It’s a very lengthy preamble, the question is wrong. You’re essentially putting a bunch of words in people’s mouths,’” he said.
Two weeks later, the client found another company to ask the question.
“Sometimes you turn people down, and … data collection agencies will say, ‘That’s fine. We’ll ask whatever you want because we have a quota to fill.’ And it’s bad for the reputation of the entire industry.”
Some pollsters say the media can also misuse poll results in how they interpret them for the public. Simpson finds political pundits to be especially prone to this problem.
“If one party’s polling more and everybody [else] is down two points, then they’re making firm conclusions trying to explain why we see that, [yet] it really doesn’t matter whether a [political] person did this or said that,” he explains.
“Effectually speaking, it’s all a fluctuation. … Sometimes it’s just a random sampling error because you’re never conducting the perfect poll.”
Canseco said partisan voters sometimes protest the headline pollsters use in their press releases. What bothers him more is when the media combines a poor poll question with a misleading headline.
“There’s people out there who are well known who will put out a question that doesn’t have undecideds or allows people to just say A or B. And then the headline will say, ‘70 percent of Canadians feel this way.’ Well, what happened to the other 30 [percent]? Are they undecided? Did they choose something different?” he said.
“If the data is not properly gathered, then it gets reported as if it’s real, then it’s definitely problematic.”