Can We Trust the Polls Now?

Analysis Ahead of the 2018 Midterms

[Image: CNN]

Understanding Polls

On the surface, political and opinion polling is simple. Stripped of their complexity and their faults, polls are nothing more than surveys conducted on a sample of the public. Once the results are in, the data is used to deliver a forecast (or prediction) of sorts; the remaining results are generally supposed to offer a clearer picture of the current state of events. These results are meant to operate as a compass, showing us where the arrows are pointing. However, matters easily get convoluted as a sample grows and as the polling subjects grow in number.

In the modern political sphere, polling is regarded as the standard method of taking the temperature. Yet there is an obvious, festering flaw in the reputation of polls in recent times: the 2016 election. Millions of Americans are still stumped by the polling leading up to that presidential election, and a lingering question remains: “What happened there?” It would be fair to say that a solid majority of the electorate holds reservations about trusting the polls after the previous presidential voting cycle. And those trust issues are valid. The Left was prematurely tasting victory. The Right was complaining about a “rigged system.” Although the Republicans won the election, both political parties largely emerged winless from the polls; most predictions were incorrect. Now that we are just five weeks shy of the midterm elections, confidence in polling has dimmed. A moderate trust in the polls has been replaced with a massive question mark.

Before we ask the tough questions, we have to know the subjects in question. The different types of polls we’ll be looking at are as follows: benchmark, opinion, entrance, exit, tracking, and straw.

  1. Benchmark Poll: generally the first poll taken in a campaign, often before (yet sometimes immediately after) a candidate announces his or her bid for public office.
  2. Opinion Poll: an assessment of public opinion obtained by questioning a representative sample.
  3. Entrance Poll: a poll of voters taken at the polling stations, before they have cast their votes.
  4. Exit Poll: a poll of voters taken immediately after they have left the polling stations, asking for whom they actually voted.
  5. Tracking Poll: a type of poll repeated periodically with the same group of people to check and measure changes of opinion or knowledge.
  6. Straw Poll: an unofficial vote conducted as a test of opinion; an ad-hoc or unofficial vote.

The most utilized forms of data collection include polls administered via telephone, mail, email, personal in-home surveys, personal mall or street intercepts, and/or combinations of these methods.

“The choice between administration modes is influenced by 1) cost, 2) coverage of target population, 3) flexibility of asking questions, 4) respondents’ willingness to participate, and 5) response accuracy.” — Lumen Learning

Once a poll is administered, the results (the data) file in, and a climate/prediction/answer is derived directly from the collective response. These results, like the questions, come in different forms. Lumen Learning, again, gives us a standard, neat breakdown.

[Image: Lumen Learning]

Advantages of Polling

The pros of polling outweigh the cons; there’s a reason polls have stuck around for so long in American politics. We would have to go back to Pennsylvania and Delaware in 1824 to find the first presidential poll. The Harrisburg Pennsylvanian surveyed a group of citizens in Wilmington (DE), inquiring about their presidential favorites. The poll showed Andrew Jackson in a commanding lead, ahead of John Quincy Adams, Henry Clay, and William Crawford. Yet John Quincy Adams was sworn in as President in 1825: Jackson won the most popular and electoral votes, but without a majority, the House of Representatives decided the contest in Adams’s favor.

Advantages:

  • A large collective opinion can be gathered without a formal referendum via polling.
  • Elections can be costly; administering a poll is usually cost-efficient.
  • Polls can determine the public perception of government, rendering them a salient tool to ensure a healthy democracy — allowing the public to express views on various issues.
  • Random sampling helps the accuracy of the responses.
  • Polls are relatively simple to conduct.
  • Polls are obviously a better way to predict elections than a blind bet.
  • A wide range of information can be collected (e.g. values, beliefs, ideologies, attitudes, etc.)

Where Polls Go Wrong

Disadvantages and Imperfections:

  • Selection Bias: when the people selected to take part in a poll are not representative of the entire population.
  • Sampling Error: this can mean different things. Public opinion researcher Gary Langer has defined it as the calculation of how closely the results reflect the attitudes or characteristics of the population that’s been sampled. A sampling error can also simply result from people being unresponsive to the poll. An example: “If a pollster is conducting a sidewalk survey, there is a strong possibility several people will refuse to take part in it. If the poll was about attitudes toward public opinion polls, for example, a very significant portion of the population may not be represented.”
  • Non-Response Bias: this is what happens when there is a significant difference between those who responded to your survey and those who did not. Some reasons for not participating include refusal to participate, forgetting to return or answer a survey, poorly constructed surveys, and surveys not reaching all members of the sample. (A minimal simulation of this effect appears just after this list.)
  • Question Design/Answer Design: answer choices can lead to vague data sets because they are sometimes relative only to a personal notion of the strength of a choice. An example would be the answer choice “moderately agree.” Moderately agreeing with a statement may well mean different things to different people, and to anyone analyzing the data for correlations.
  • Dishonesty in people answering. This is important because polls largely rely on people answering honestly.
  • Failing memory in the people answering.
  • Bandwagon and Underdog Effects: these disadvantages emerge only when respondents have been exposed to the latest results and standings. The bandwagon effect arises when someone responding to a poll sides with the subject or candidate that is ahead, simply because of the lead. The underdog effect is the inverse: the respondent sides with the person or subject that is lagging, simply because it is behind. Both effects cause inaccuracies in collected data.
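To make the non-response problem concrete, here is a minimal simulation, assuming a purely hypothetical electorate that is split 50/50 but where one side is less likely to answer a survey. None of the numbers refer to any real poll.

```python
import random

random.seed(42)

POPULATION = 100_000   # hypothetical electorate, split 50/50 between A and B
RESPONSE_A = 0.10      # A's supporters answer the survey 10% of the time
RESPONSE_B = 0.06      # B's supporters answer only 6% of the time

responses = []
for i in range(POPULATION):
    supports_a = i < POPULATION // 2            # first half supports A
    rate = RESPONSE_A if supports_a else RESPONSE_B
    if random.random() < rate:                  # did this person respond?
        responses.append("A" if supports_a else "B")

share_a = responses.count("A") / len(responses)
print(f"{len(responses)} respondents, A's apparent share: {share_a:.1%}")
# The true race is 50/50, but A appears to lead with roughly 62%
# purely because B's supporters respond less often.
```

The poll isn’t “wrong” about anyone who answered; it simply never hears from a representative slice of the other side.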

Margin of Error

The margin of error is typically a small allowance made for miscalculation or changing circumstances. It’s the plus-or-minus in percentage points that accompanies most polls, i.e. the wiggle room. The margin of error serves as a mild warning and ironic disclaimer when looking at a poll, which makes it a funny form of insurance among pollsters. Let’s say an election that had been previously polled ends in contrast with the poll’s results, but still within the margin of error. The poll would still be “correct.” By this standard, it would be fair to say that an actual (non-poll) result outside the margin of error gives the poll a failing grade. Pew Research Center best defines the margin of sampling error as “how close we can reasonably expect a survey result to fall relative to the true population value.” Pew offers an easy example: “A margin of error of plus or minus 3 percentage points at the 95% confidence level means that if we fielded the same survey 100 times, we would expect the result to be within 3 percentage points of the true population value 95 of those times.”
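For a simple random sample, that plus-or-minus can be approximated with the textbook formula z · sqrt(p(1 − p) / n), where n is the sample size, p is the observed proportion, and z ≈ 1.96 at the 95% confidence level. A minimal sketch (the sample sizes below are illustrative):

```python
import math

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a simple random sample.

    proportion=0.5 is the conservative (worst-case) choice, and z=1.96
    corresponds to the standard 95% confidence level.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical national poll of about 1,000 respondents:
print(f"n=1000: +/- {margin_of_error(1000):.1%}")  # roughly +/- 3.1 points
print(f"n=500:  +/- {margin_of_error(500):.1%}")   # roughly +/- 4.4 points
```

This is why so many national polls report a margin of error near plus or minus 3 points: that is roughly what a sample of 1,000 buys you.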

The confidence level (closely tied to the confidence interval in general statistics) is the expression of confidence with which a projection may be made.

The Los Angeles Times wonderfully explains the relationship between the margin of sampling error and the confidence interval.

“The margin of sampling error and the confidence interval are the expressions of the confidence with which that projection may be made. Typically, a sample is analyzed with a standard confidence level of 95%, meaning that 95% of the time the actual number will lie within the margin of sampling error. (This is such a standard measure that we usually don’t even mention it.) The margin of sampling error, then, is the range of numbers surrounding the projected figure, such that we can be 95% confident that the actual number lies within that range.”
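Pew’s “95 of those times” framing can be checked directly by simulation. Below is a minimal sketch assuming a true population value of 52% and repeated independent fieldings of the same 1,000-person survey; the specific numbers are illustrative.

```python
import math
import random

random.seed(0)

TRUE_P = 0.52     # assumed true population value
N = 1_000         # respondents per simulated poll
TRIALS = 10_000   # number of repeated fieldings of the survey
MOE = 1.96 * math.sqrt(0.5 * 0.5 / N)   # ~3.1 points at 95% confidence

covered = 0
for _ in range(TRIALS):
    # One poll: N independent respondents, each a coin flip weighted by TRUE_P.
    hits = sum(random.random() < TRUE_P for _ in range(N))
    if abs(hits / N - TRUE_P) <= MOE:
        covered += 1

print(f"Estimates that landed within the margin of error: {covered / TRIALS:.1%}")
# Prints roughly 95%, matching the Pew description above.
```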

The 2016 Election

[Image: 270toWin]

The polls were wrong. There’s no way around it. The election was held between the two most unpopular candidates seeking the presidency. And due to a slew of factors, the race got tighter the closer we all inched toward election day. Still, the New York Times forecast and Nate Silver were predicting a win for Hillary Clinton. Here is how the polls stood on November 7, 2016 — the day before the election.

  • Bloomberg: Clinton +3
  • CBS News: Clinton +4
  • Fox News: Clinton +4
  • Reuters/Ipsos: Clinton +3
  • ABC/Washington Post Tracking: Clinton +4
  • Monmouth: Clinton +6
  • Economist/YouGov: Clinton +4
  • Rasmussen Reports: Clinton +2
  • NBC News/SM: Clinton +6

What (the hell) went wrong?

Once the dust settled, pollsters came forward with their “autopsy reports.” Business Insider largely blamed state-level polling that underestimated Trump’s support in three key states: Pennsylvania, Michigan, and Wisconsin.

It’s worth noting that Michigan and Wisconsin were specifically targeted by Russian operatives and bots online, via social media networks. The goal was to disadvantage the Clinton campaign, thus putting Trump in a more favorable light. Russia maneuvered through highly trafficked social media channels (Facebook, Twitter, and YouTube) to influence voters, often with baseless information and inflammatory rhetoric. The Council on Foreign Relations stated: “The CIA, FBI, and National Security Agency jointly stated with ‘high confidence’ that the Russian government conducted a sophisticated campaign to influence the recent election.” Mark Zuckerberg, the CEO of Facebook, testified before Congress in early April of this year to defend his company and answer questions about Facebook ads and the Cambridge Analytica fiasco. (Cambridge Analytica, the now-defunct political consulting firm that did work for the Trump campaign, harvested data from up to 87 million Facebook profiles, a figure disclosed by Facebook.) If you are still in denial about Russian interference in the 2016 election, the Washington Post and the New York Times offer scores of articles on the hacking and interference. Back to what exactly happened with the polls.

The aforementioned Business Insider piece also points to three other factors as to why the polls were incorrect.

  1. A change in vote preference during the final days leading to the election.
  2. A failure to properly adjust for an over-representation of college graduates.
  3. Many Trump voters failing to reveal their preferences until after the election.

Patrick Murray, the head of Monmouth University’s polling institute, theorized that “non-response among a major core of Trump voters” was a reason the polls proved flawed.

NPR also pointed to non-response as playing a big role in the upset, among other reasons. Claudia Deane, vice president of research at the Pew Research Center, said, “The problem is if you get what pollsters call non-response bias, people are less likely to take your call or stay on the phone with you.” NPR continued by citing the existence of a “secret Trump vote,” i.e. people weren’t being honest with the pollsters; voters didn’t want to tell strangers they were voting for Donald Trump.

The day after the election, FiveThirtyEight simply stated that Trump had outperformed his polls in the swing states.

Andrew Gelman, the statistician and professor of statistics at Columbia University, also pointed to non-response and last-minute changes in vote preference, as well as a third-party collapse: “Final polls had [Gary] Johnson at 5% of the vote. He actually got 3%, and it’s reasonable to guess that most of this 2% went to Trump.” Gelman also reasons that people were dissuaded from voting by long lines and other measures making it more difficult to vote. He adds that Trump supporters had higher enthusiasm than Clinton and Democratic voters, and that the higher enthusiasm may have translated into higher turnout.

The major anomaly, given how incorrect the state polls were, is the fact that the national polls weren’t all that wrong. Most pointed to Clinton receiving more votes, and she did, winning the popular vote by roughly two points.

Polls Ahead of the Midterms

[Image: WSJ]

There are 35 Senate seats on the ballot in November, and all 435 seats in the House of Representatives will be contested. Pollsters and the media have dubbed the recent liberal optimism “The Blue Wave.” The name is somewhat fair. On one hand, Democrats are running and winning in places once thought to be Republican strongholds, or at least “safe” for conservative candidates. On the other hand, claiming victory before the main contest is outright foolish.

News outlets and publications including NBC, Fox, ABC, CNN, the New York Times, the Wall Street Journal, and the Washington Post all fall under what is considered the mass media, or simply “The Media.” Most of these administer and/or publish their own polls. For several years now, the media has been subject to attacks and questions of reliability. President Trump frequently calls CNN “fake news” and the NYT “The Failing New York Times.” He discredits the Washington Post and slams NBC News. He has called the media “the enemy of the people.” Moreover, Trump gets personal with the attacks, sometimes naming the journalists he disagrees with at rallies, in brief Q&As with the press, and, of course, on Twitter. Under this vitriol, the media’s image of honesty has suffered, which means the integrity of its polls has been thrust into question as well.

Earlier this year, an Axios/SurveyMonkey poll discovered that 92% of Republicans think the media intentionally reports fake news. The same poll found that 72% of Americans believe “traditional major news sources report news they know to be fake, false, or purposely misleading.” The Hill covered the figures: “almost two-thirds of those polled say fake news ‘is usually reported because people have an agenda.’ About one-third of those polled say false information is reported because of ‘poor fact-checking’ or laziness.” In June of this year, a Gallup poll determined that 62% of Americans believe that news is biased and 44% say it’s inaccurate.

With a tarnished image and a flawed record, who is accurate in polling? Nate Silver published a piece on FiveThirtyEight (titled “Which Pollsters to Trust in 2018”) that answers the question. An updated pollster rating, which evaluates the performance of individual polling firms based on their methodology and past accuracy, also includes an interesting statistic called “Advanced Plus-Minus” that further evaluates pollster performance: “It compares a poll’s accuracy to other polls of the same races and the same types of election. Advanced Plus-Minus also adjusts for a poll’s sample size and when the poll was conducted. Negative plus-minus scores are good and indicate that the pollster has had less error than other pollsters in similar types of races.” The data is displayed in the table below.

[Image: FiveThirtyEight]
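As a rough illustration of the idea behind a plus-minus score (a simplified sketch with made-up numbers, not FiveThirtyEight’s actual calculation, which also adjusts for sample size and timing): compare each pollster’s error to the error of other polls of the same races.

```python
# Hypothetical data: (pollster, race, absolute error in points).
polls = [
    ("Pollster A", "Race 1", 2.0), ("Pollster B", "Race 1", 5.0),
    ("Pollster A", "Race 2", 3.0), ("Pollster C", "Race 2", 4.0),
    ("Pollster B", "Race 3", 6.0), ("Pollster C", "Race 3", 3.0),
]

def plus_minus(pollster: str) -> float:
    """Simplified plus-minus: the pollster's average error minus the average
    error of *other* polls of the same races. Negative scores are better."""
    diffs = []
    for name, race, error in polls:
        if name != pollster:
            continue
        others = [e for n, r, e in polls if r == race and n != pollster]
        if others:
            diffs.append(error - sum(others) / len(others))
    return sum(diffs) / len(diffs)

for p in ("Pollster A", "Pollster B", "Pollster C"):
    print(f"{p}: {plus_minus(p):+.1f}")
# Pollster A: -2.0 (beat the field), Pollster B: +3.0 (trailed it),
# Pollster C: -1.0.
```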

The numbers indicate accuracy, a tangible result. Trust, however, isn’t anything we can grasp. Trust is earned over time. Trust is fragile, and trust is hard to lend after it has been broken. Whether anyone can and should trust the polls before the 2018 midterms is entirely up to them and their own personal judgment. Even if the majority of forecasts prove correct in 2018, fully restoring faith in polls would be naive. Polls predict; they do not promise. That, when looking at polls, is the fatal deception.

Hugo is a writer of politics, culture, humor, and fiction. Follow him on Twitter (@hugosaysgo) for recommended reading and on Instagram (@hugosnaps) for photography. Happy reading.
