Polls Can Shape Elections, but They Are Flawed by Design

9/16/2016 | Updated: 9/24/2016

Election campaigns have come to live and die by the polls, to the extent that it’s no longer possible to imagine election coverage devoid of the constant din of the race caller.

It has become so intrinsic to the process that whether a candidate is eligible to join presidential debates is based entirely upon how they fare in the polls. The Commission on Presidential Debates (CPD) has ruled that a candidate must average at least 15 percent support in five selected national opinion polls to earn a spot on the stage.
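For the technically inclined, that eligibility rule amounts to simple arithmetic. Here is a minimal sketch in Python, using made-up poll numbers rather than any actual candidate’s figures:

```python
# Minimal sketch of the CPD's eligibility rule: a candidate must average
# at least 15 percent support across five selected national polls.
# The poll numbers below are hypothetical, for illustration only.

THRESHOLD = 15.0

def debate_eligible(poll_results):
    """Return True if the average of the given poll results meets the CPD threshold."""
    return sum(poll_results) / len(poll_results) >= THRESHOLD

# Hypothetical support figures from five selected polls:
candidate_polls = [13.0, 12.0, 14.0, 13.5, 12.5]
print(debate_eligible(candidate_polls))  # False: the average is 13.0, short of 15
```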

And yet, despite their significant influence, polls are imperfect. How imperfect depends on the design. And because of that imperfection, any poll can be dissected and disputed, particularly by parties not partial to the results.

In this environment, it can be hard for the public to know what to make of a given poll. But to be a truly literate electorate, it’s important to understand polls beyond the numbers. In aid of that, here are some lessons from 2016 so far.

Role of Polls

Polls serve elections in many ways. They allow the public to weigh in with their opinions about a candidate or position. They provide talking points for media and campaigns. In August alone, 65 different agencies conducted 258 national and state polls (an average of 8.3 per day), all feeding into the political zeitgeist.

Polls are also essential to campaign strategists whose job it is to know what messages resonate with the electorate, then adjust accordingly.  

For instance, Hillary Clinton’s campaign reacted to Donald Trump’s low poll numbers among moderate Republicans by releasing a video of Republicans denouncing Trump.

Trump reacted to Clinton’s low ratings on trust by amping up his attacks on her record as secretary of state, the email scandal, and the handling of money at the Clinton Foundation.

Moreover, polling is a battle over perception. Far from being passive reflections of preference, polls can affect political fortunes via the “bandwagon effect.” A candidate who polls better will be taken more seriously, which can attract more voters, interest groups, funders, and would-be endorsers.

Ben Carson joined the GOP primaries as a soft-spoken, retired neurosurgeon with no political experience and a tiny campaign budget. There was nothing to indicate he would be a contender, but when he started polling second, then first, last fall, suddenly he was treated as a serious candidate. This lasted until he receded in the polls in early December.

Methodology Fundamentals

Presidential nominee Hillary Clinton during the Democratic National Convention in Philadelphia on July 28. (TIMOTHY A. CLARY/AFP/Getty Images)

There are two kinds of agencies that conduct polls: independent agencies and partisan agencies hired by a party or candidate. The latter are considered less reliable, particularly if results are made public, since it’s all too easy to get desired outcomes by tweaking the question wording or skewing the sample group.

“The more transparent a poll, the more reliable,” said Doug Schwartz, director of the Quinnipiac University Poll, referring to disclosure about polling methodology. This includes sample size and composition, how respondents were contacted, when they were contacted, margin of error, weighting of results, and the questions asked.
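One of those disclosed figures, the margin of error, follows directly from the sample size. The sketch below shows the textbook calculation for a simple random sample at roughly 95 percent confidence; the sample sizes are illustrative, not those of any particular poll:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error, in percentage points, for a sample proportion
    at ~95 percent confidence (z = 1.96). p = 0.5 is the worst case,
    giving the widest margin."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Illustrative sample sizes:
for n in (400, 1000, 1500):
    print(f"n={n}: +/- {margin_of_error(n):.1f} points")
# n=400: +/- 4.9 points
# n=1000: +/- 3.1 points
# n=1500: +/- 2.5 points
```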

Each polling agency has its own preferred methodology. Major news networks conduct some of the most respected polls in the business, largely because they have the funds to invest in rigorous methods.

All five polls selected by the CPD in 2016 to decide who earns a spot in the presidential debates are conducted by major media outlets: ABC News-Washington Post, CBS News-New York Times, CNN-Opinion Research Corporation, Fox News, and NBC News-Wall Street Journal.

To assess a poll, you have to understand what the pollster has done.

“First thing: Was it a random sample, and does everyone in the population have a chance of being represented?” said Schwartz.

Some types of polls cannot produce random samples.

For example, online polls—where the sample group is often limited to people who visit a particular website—are the cheapest to conduct and the least reliable.

Robocalling (automated dialing) skews results because federal regulations don’t allow robots to call cellphones, only landlines. If the 47 percent of U.S. adults who only own cellphones—and who tend to be younger, less educated, earn lower incomes, and live in urban areas—are left out, the poll cannot capture a representative sample of the population, according to Pew Research Center.

Many pollsters use this method anyway, since it costs about half as much as manual dialing with live interviewers, or less.

The Critique Game

Republican presidential candidate Donald Trump during the Republican National Convention in Cleveland on July 21. (Joe Raedle/Getty Images)
Republican presidential candidate Donald Trump during the Republican National Convention in Cleveland on July 21. (Joe Raedle/Getty Images)

Polls are never perfect; they are educated guesses at future events. Every methodological choice can affect the results of a poll, which is why it’s easy, especially in a media soundbite, to find holes in virtually any poll.

When a CNN/ORC poll released on Sept. 6 became the first major poll since the conventions to show Trump ahead of Clinton—45 to 43 percent—liberal-leaning MSNBC host Chuck Todd said it had oversampled whites without a college degree, which is a strong demographic for Trump.

By “unskewing” results (adjusting based on voter composition in 2012), Todd came up with new numbers—46 percent for Clinton versus 42 percent for Trump.
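Mechanically, that kind of “unskewing” is a re-weighting exercise: take a candidate’s support within each subgroup and average it under a different assumed electorate. The sketch below uses invented shares and subgroup numbers, not the actual CNN/ORC figures:

```python
# Re-weighting a poll topline to a different assumed electorate.
# All numbers are hypothetical, for illustration only.

def reweight(subgroup_support, shares):
    """Weighted average of subgroup support under the given demographic shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(subgroup_support[g] * share for g, share in shares.items())

# Hypothetical candidate support within two subgroups (percent):
support = {"white_non_college": 55.0, "everyone_else": 38.0}

# Electorate shares assumed by the poll vs. a 2012-based model:
poll_shares = {"white_non_college": 0.4, "everyone_else": 0.6}
model_2012 = {"white_non_college": 0.3, "everyone_else": 0.7}

print(f"as polled:  {reweight(support, poll_shares):.1f}%")  # ~44.8%
print(f"'unskewed': {reweight(support, model_2012):.1f}%")   # ~43.1%
```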

Whether Trump will bring out more of the white, non-college-educated vote this year is something no poll can predict.

On the other side of the aisle, conservatives have critiqued polls that show Clinton ahead. The main criticism is that the polls are biased because they sample more registered Democrats than Republicans.

For example, an ABC/Washington Post national survey released June 26 showed Clinton leading Trump 51 percent to 39 percent. Conservative-leaning Fox News pundit Sean Hannity pointed out that 36 percent of respondents were Democrats and only 24 percent were Republicans.

In 2012, Republicans made a similar argument when Mitt Romney was running against President Barack Obama, but Pew Research Center and others defend the practice of not standardizing the distribution of party affiliation.

“This would unquestionably be the wrong thing to do,” writes Pew, explaining that it’s because party identification is an attitude, not a demographic.

“Party identification is one of the aspects of public opinion that our surveys are trying to measure, not something that we know ahead of time, like the share of adults who are African-American, female, or who live in the South.”

The Michigan Mistake

It’s always easier to look back at a “bad” poll to see where it went so wrong.

In the Democratic primary in Michigan, Bernie Sanders beat Clinton after trailing badly in the polls. FiveThirtyEight’s average of 20 polls put him 21 points behind.

The last poll before the March 8 primary, by Fox-Mitchell Research and Communications, had Clinton leading by 37 points. That was just two days before Sanders won the state by 1.5 percentage points.

And while Michigan proved to be an exception to an otherwise accurately polled primary season, the reason for the miscue raised questions.

One critical error attributed to the Fox-Mitchell poll was that it called only landlines. As a result, it predicted that voters under 50 would make up less than a quarter of the electorate, when they actually accounted for over half of the vote.

A similar poll by the Huffington Post that included cellphones showed a smaller margin for Clinton—a 13-point lead—but was still far off the mark, suggesting other problems with the polling model.

Renowned pollster Ann Selzer, whose company conducted polls for the Des Moines Register, said the mistake in Michigan was due to a misinterpretation of the historical model from the 2008 Democratic primary.

“I happened to poll Michigan for the 2008 primary, which was unique,” Selzer commented via email.

“The Iowa Democratic Party asked candidates to sign a pledge that they would not campaign in states that jumped the line to hold primaries/caucuses before the date assigned to them by the DNC. Michigan jumped to hold an earlier primary.

“As a consequence, those candidates who had signed the pledge did not have their name on the Michigan ballot—including Barack Obama and other major players.”

Clinton was the only major contender who ended up on the ballot, which made for an unprecedented turnout on Election Day.  

“The turnout was weird—unlike anything in the past,” she said. “So pollsters who thought the 2016 primary electorate would resemble the 2008 primary electorate were either dreaming or just didn’t know the history.

“It’s that kind of pitfall that awaits anyone trying to use the past to predict the future,” Selzer concluded.

Social-Desirability Bias

DEWEY DEFEATS TRUMAN President Harry S. Truman holds up an election day edition of the Chicago Daily Tribune on Nov. 4, 1948, which, based on polling results, mistakenly announced his loss in the election he won. (AP photo/Byron Rollins)

It’s been suggested that Trump’s non-traditional campaign, which relies on support from groups who don’t regularly vote, will yield unexpected results in the coming general election.

A recent telephone survey (using robocalling) by Rasmussen Reports suggested that “17 percent of likely Republican voters are less likely this year to let others know how they intend to vote compared to previous presidential campaigns.”

Pollsters and marketers call this a “social desirability bias,” which refers to the fact that people may give inaccurate answers on sensitive topics to put themselves in a good light.

The prospect of this bias has raised the question of whether there’s a “silent majority” of Trump supporters who either don’t answer polls or don’t answer them truthfully, but will support Trump in November.

The term “silent majority” refers to a statement by President Richard Nixon in a speech on Nov. 3, 1969: “And so tonight—to you, the great silent majority of my fellow Americans—I ask for your support,” he said, speaking to Americans who supported U.S. military intervention in Vietnam.

The Trump campaign has made a point of invoking this phrase, with one of the popular signs seen at his rallies reading, “The Silent Majority Stands with Trump.”

Looking to the past, Schwartz doesn’t think this silent majority exists. “There isn’t going to be a hidden Trump vote, because the primaries, on average, got what was predicted,” he said.

Only time will tell, of course, if the pollsters got it all wrong.

Outlier Effect

“Outlier” polls, in which a single poll’s results contradict the majority of other polls, always come under close scrutiny.

For example, the first poll to show Trump ahead of Clinton following the conventions was the Aug. 21 USC Dornsife/Los Angeles Times tracking poll, which gave Trump the edge—45 percent to 43 percent (the same poll has since shown Trump maintaining that slight lead).

The poll was widely criticized on several fronts.

“The L.A. Times poll is a non-traditional approach,” said Selzer. “They weight their data according to how respondents say they voted in 2012. Ask any—any—pollster who has asked that question if he/she thinks they get a valid answer—meaning that the answer the respondent gives is actually the way they voted—and that pollster will say, ‘No.’”

The poll also asked the “horse race” question—pitting the two presidential candidates against each other—in two separate questions rather than the conventional single question.

Moreover, instead of random sampling, the L.A. Times surveys the same panel of roughly 3,000 likely voters repeatedly throughout the campaign.

“In social science, we worry about the ‘instrument effect,’ that the mere act of measuring something changes it,” said Selzer. “Like measuring the pressure in your tires. To measure it, you release some air, thus lowering the pressure inside.

“Someone asking repeatedly how you will vote could make you attend to the campaign differently, and decide differently from the way you would if you were not part of the poll’s panel.”

Irrespective of the criticism, Nate Silver, founder and chief editor of FiveThirtyEight, argues the poll shouldn’t be discounted.

“The trend from L.A. Times poll still provides useful information, even if the level is off,” Silver wrote in an article titled “Election Update: Leave The LA Times Poll Alone!”

Silver advocates for weighting polls based on a “house effects adjustment.” The house effect is a “persistent lean toward one candidate or another, relative to other polls,” writes Silver.

This doesn’t necessarily indicate a partisan bias; it may just be an artifact of the particular methodology.

“For example, Public Policy Polling, a Democratic polling firm, has a very mild pro-Trump house effect this year,” according to Silver. 
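Conceptually, a house-effects adjustment subtracts each firm’s persistent lean from its reported margin before comparing polls. The sketch below shows only that core idea, with invented lean values; FiveThirtyEight’s actual model is far more elaborate:

```python
# Adjusting poll margins for house effects. Margins are expressed as
# (candidate A minus candidate B) in percentage points, and a positive
# house effect means the firm persistently leans toward candidate A.
# All numbers are hypothetical.

raw_margins = {
    "Pollster X": 4.0,   # A ahead by 4
    "Pollster Y": -1.0,  # B ahead by 1
}

house_effects = {
    "Pollster X": 2.5,
    "Pollster Y": -1.5,
}

adjusted = {name: m - house_effects[name] for name, m in raw_margins.items()}
print(adjusted)  # {'Pollster X': 1.5, 'Pollster Y': 0.5}
```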

Poll Averaging

15 PERCENT THRESHOLD Libertarian presidential candidate Gary Johnson in Salt Lake City on Aug. 6. Johnson is the first third-party candidate on all 50 states' ballots since 1996. At about 13 percent in the polls, he's 2 points shy of being eligible to join the presidential debates. (George Frey/Getty Images)

One approach statisticians use to mitigate the shortcomings of any one poll is to combine it with the larger pool of polls to produce a single, comprehensive number.

FiveThirtyEight has a complex algorithm: Each polling agency is graded and weighted based on the reliability of its methodology; factors such as house effects, trends, and whether third-party candidates were included are also taken into account.

RealClearPolitics, a political news and polling data aggregator, applies a simpler version of the same concept. Instead of weighting the polls, it picks the ones it considers most reliable and averages them to estimate where each candidate stands relative to their opponent.
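In code, the two approaches differ mainly in whether every poll gets an equal say. Here is a minimal sketch of both, with hypothetical poll numbers and quality weights; FiveThirtyEight’s real algorithm considers far more than a single weight per poll:

```python
# Two ways to combine polls into one number. All figures are hypothetical.

polls = [
    # (candidate support in percent, quality weight assigned to the pollster)
    (45.0, 1.0),
    (48.0, 0.6),
    (44.0, 0.9),
    (50.0, 0.3),
]

# RealClearPolitics-style: a plain average of the selected polls.
simple_avg = sum(support for support, _ in polls) / len(polls)

# FiveThirtyEight-style (greatly simplified): weight each poll by quality.
weighted_avg = sum(s * w for s, w in polls) / sum(w for _, w in polls)

print(f"simple average:   {simple_avg:.1f}%")    # 46.8%
print(f"weighted average: {weighted_avg:.1f}%")  # ~45.9%
```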

But averaging also has its critics.

Schwartz of the Quinnipiac University Poll (which earns an A- grade from FiveThirtyEight) doesn’t think aggregating solves anything, because there are high-quality polls and low-quality polls.

“Averaging polls uses both and it’s better to dismiss low-quality polls as not reliable,” he said.

Schwartz thinks RealClearPolitics includes too many low-budget polls. He allowed that FiveThirtyEight’s method of weighting polls is likely more accurate, but he remains skeptical.

As an alternative to polling averages, he suggests that “it’s easier to track one poll you consider reliable.”

Thermometers, Not Crystal Balls

Despite the many criticisms, polls now play a pivotal role in political life, particularly during election season, so they can’t be ignored.

What’s important to remember is that polling is just a tool to take a temperature reading of opinions in the present, using modeling from the past, in the hopes of predicting the future. Polls are not crystal balls, no matter how they’re presented.