There has been a lot of discussion over the past week on polls and why polling for the 2014 elections was so bad. As Sean Trende of RealClearPolitics said:
As Tuesday night got going, most people suspected that Republicans were headed for a good night. This, after all, was what the polls strongly suggested. Instead, they had a great night.
What is odd is that, while there was a cluster of conversations suggesting we might see Republican gains of around six-to-eight Senate seats, six-to-nine House seats and the loss of a few governorships — and a secondary cluster analyzing why Republicans might be disappointed (mostly citing the possibility that polls could be skewed away from Democrats by demographic undersampling) — there was no cluster of conversations suggesting that the public polls might understate Republican performance. As the New York Times’ Nate Cohn put it: “I’m not aware of any evidence that polls underestimate Republicans. Please show me the data.”
For much of this election cycle, Democrats complained the polls were biased against them. They said the polls were failing to represent enough minority voters and applying overly restrictive likely-voter screens. They claimed early-voting data was proving the polls wrong. They cited the fact that polls were biased against Democrats in 2012.
The Democrats’ complaints may have seemed more sophisticated than the “skewed polls” arguments Republicans made in 2012, but in the end they were just as wrong. The polls did have a strong bias this year: toward Democrats, not against them.
The overall result, a bias of D +4 in Senate race polls and D +3.4 in statehouse race polls taken within the last 21 days of the campaign, actually masks the true state of affairs. For instance:
It is difficult, if not impossible, to explain any of these results without using the phrase “bad guess” at some point. In governor’s races, the polls missed Maryland’s result by nearly 12 points and Kasich’s romp in Ohio by 10.
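For the curious, a bias figure like “D +4” is just an average of signed polling errors across races. A minimal sketch, with entirely made-up numbers:

```python
# Sketch of how an average polling bias (e.g. "D +4") is computed.
# Margins are Democrat minus Republican, in points; all numbers hypothetical.
races = [
    # (final polling-average margin, actual result margin)
    (-4.0, -8.5),   # polls said R +4, result was R +8.5
    (+2.0, -2.5),   # polls said D +2, result was R +2.5
    (-1.0, -6.0),   # polls said R +1, result was R +6
]

# Error = poll margin minus result margin; positive means the polls
# overstated the Democrat in that race.
errors = [poll - result for poll, result in races]
bias = sum(errors) / len(errors)
print(f"Average bias: D {bias:+.1f}")
```

Note that averaging signed errors is what distinguishes *bias* from mere inaccuracy: polls that miss big in both directions can average out to zero bias, which is exactly what did not happen in 2014.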
In the aftermath, the campaigns’ pollsters closed ranks, with many asserting that the internal campaign polls had been much closer to the mark:
Contrary to the many public opinion polls that showed Democrats and Republicans deadlocked heading into Election Day, most internal campaign surveys were correctly forecasting the GOP rout.
In interviews Thursday, strategists on both sides of the aisle told the Washington Examiner they weren’t surprised by the GOP’s Senate takeover, padding of its House majority and other gains made up and down the ticket in red and blue territory across the country.
Properly predicting the correct partisan and demographic turnout model was the difference. Campaigns and party committees got it right, while many, though not all, of the public polls were wrong.
It is hard not to call bull**** on this, both because we haven’t seen these polls and because Nate Silver has examined internal campaign polls before. This is from the aftermath of 2012:
Perhaps these Republicans shouldn’t have been so surprised. When public polls conducted by independent organizations clash with the internal polls released by campaigns, the public polls usually prove more reliable.
Take, for example, the gubernatorial recall election in Wisconsin earlier this year. Independent polls had the Republican incumbent, Scott Walker, favored to retain his office by about six percentage points. A series of polls conducted for Democratic groups showed a roughly tied race instead.
Mr. Walker in fact won by seven points: the independent polls called the outcome almost exactly, while the internal polls were far from the mark.
To understand why the polls went so wrong, it is first necessary to understand that polling, like political or social science generally, is an art to which mathematics is harnessed. Were polling actually a quantitative science, two key principles would apply. First, the results obtained by different polls of the same piece of geography would be very, very close. Second, the polls would be correct. If a chemistry class, for instance, ran an experiment involving water and got a wide variety of answers, the instructor wouldn’t average the results to find the ratio of hydrogen to oxygen in water. The instructor would grade any answer that wasn’t “2 : 1” as “wrong.”
But neither of those statements applies to polling. There are wild variations between polls of the same states and congressional districts. There are similarly wild fluctuations in voter preferences over time absent any seminal event that could be deemed to have had an effect (George W. Bush’s DWI, George Allen’s “macaca” comment).
I have a simpler and more defensible thesis. Polls are developed to underpin a media narrative and the weighting of the turnout models is more driven by the desired narrative (young voters, Soccer moms, Angry White Males) than it is related to reality.
To evaluate polls I think there are underlying fundamentals to keep in mind:
This is the standard model polls work from. They are assumed to be correct until they are proven wrong (like this year, or the infamous 2004 exit polls that declared John Kerry had been elected president). But this is really a post hoc ergo propter hoc logical fallacy. The polls are not forecasting the results; rather, the results are used to prove the veracity of the polls. Back in 2000, Zogby was the flavor of the day because it called the election correctly. Then Rasmussen was the anointed one. Logic tells you that if skill, not luck, were involved in calling elections, there would be a cluster of pollsters who are always right, in every race they call. This is not the case.
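The luck-versus-skill point lends itself to a quick back-of-the-envelope calculation. Assuming (purely hypothetically) that calling a genuine toss-up race is a coin flip, chance alone produces the occasional one-cycle “flavor of the day,” but almost never a repeat performer:

```python
# Back-of-the-envelope check on the "lucky pollster" argument.
# Assumes (hypothetically) a toss-up call is a fair coin flip.

def p_perfect(n_races, p=0.5):
    """Probability one pollster calls every one of n_races toss-ups correctly."""
    return p ** n_races

n_races = 10
n_pollsters = 50  # hypothetical number of polling outfits

# By luck alone, a perfect record is rare but not impossible in one cycle...
expected_perfect = n_pollsters * p_perfect(n_races)
print(f"Chance any single pollster is perfect: {p_perfect(n_races):.4%}")
print(f"Expected perfect records across the field: {expected_perfect:.3f}")

# ...but the chance the SAME pollster repeats it next cycle collapses:
print(f"Chance of back-to-back perfect cycles: {p_perfect(2 * n_races):.6%}")
```

Which is consistent with what we observe: individual pollsters have their moment, and no stable cluster of always-right pollsters exists.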
“Late breakers go to the challenger.” Ever heard that one?
“What ended up happening very simply was that due to the national wave, all of the undecided voters broke for the Republican candidates,” said a Democratic strategist with regular access to internal polling.
This is not science. This is not art. This is a shibboleth. By reciting this line, pollsters inoculate themselves against their own error by baking in a reason for it. How does one identify these “late breakers”? Does anyone who isn’t engaged enough to form an opinion on the candidates a week or so out from Election Day actually pull themselves out of their Barcalounger or away from their crack pipe and vote? How do you prove this? Where is the comprehensive, time-series polling, if you will, of late-deciding voters that demonstrates this to be true? The short answer is that with turnout of less than 60% in presidential elections and about 36% this year, there are no late-deciding voters.
While every pollster wants to be right, being right only happens once. What is more important is being the poll that finds something new, interesting, different, etc., that creates a buzz about your brand… hopefully such buzz does not emanate from flies over a carcass or lump of ordure.
If you watch the course of the polls, you are led to the inescapable conclusion that early in the cycle the field is dominated by two types of polls:
1) trying to create a horse race where none exists
The best example of this is the Georgia races. Virtually no one familiar with Georgia politics thought the races were as close as the polls indicated. However, it was important for the national narrative pushed by the media that a deep red state like Georgia was poised to go Democrat. The same was attempted with Wendy Davis in Texas, but nothing could make that race look close.
2) burying candidates the media do not want elected
Maryland’s governor’s race is a classic here. Larry Hogan was not considered a serious candidate because Maryland is a congenitally liberal state that went for Obama by 26 points in 2012. Hogan won by nearly 5. This was not the result of a meltdown by the Democrat; the polls simply did not pick up on the shift. A GOP challenger leading a Democrat in a deep blue state would write a narrative that no one in the media was interested in hearing. Hence the surprise “wave” election.
Why would they do this? Because it sells papers and attracts eyeballs on the internet. Because these early results are not provable. If you wonder why races “tighten at the end,” you can stop wondering. Decades of research into how humans make decisions tells you that people do not change their position easily, if at all, once they have decided. Rather, in the last six weeks to a month of a campaign, pollsters have every incentive to be accurate, and then the mystery challengers begin failing, leading to more articles on how the insurgent campaign was poised to win, but didn’t.
What really shows that polling is nothing approaching a science is the fact that each pollster has their own “secret sauce” for modeling voter turnout. This alone should be a deal breaker for anyone with respect for the scientific method.
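To see why the turnout model is the whole game, here is a minimal sketch (all numbers hypothetical) of how the same raw interviews yield different toplines depending on the electorate the pollster assumes:

```python
# Hypothetical raw sample: Democratic support by age group.
# Two pollsters using these same interviews but different assumed
# turnout mixes will publish different toplines.
sample_dem_support = {
    "18-34": 0.60,
    "35-64": 0.48,
    "65+":   0.42,
}

def topline(turnout_model):
    """Weight each group's candidate support by its assumed turnout share."""
    return sum(turnout_model[g] * sample_dem_support[g] for g in sample_dem_support)

# Two assumed electorates (shares sum to 1.0 in each):
youth_heavy_model = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}
older_electorate  = {"18-34": 0.15, "35-64": 0.45, "65+": 0.40}

print(f"Youth-heavy turnout model: D {topline(youth_heavy_model):.1%}")
print(f"Older-electorate model:    D {topline(older_electorate):.1%}")
```

In this toy example the identical interviews move the Democrat from roughly 50% to roughly 47% purely on the turnout assumption, which is exactly the lever the “secret sauce” controls.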
Pollsters are people. They generally want to be respected by their peers. No one wants their colleagues sniggering at their work product. As a result, they talk to each other and they talk to the campaigns. For instance:
A senior Republican strategist who helped direct the GOP’s midterm strategy speculated that media outlets weren’t willing to spend enough on their polls to produce a product that was as detailed as the private, more accurate partisan surveys. This Republican also believes that public pollsters in general were “vulnerable and susceptible” to the arguments Democrats were making about expanding the electorate, given what the party achieved in 2012.
This statement alone tells you that polls are generally in the business of writing a narrative (utterly awesome Democrat turnout machine, David Plouffe’s penis is thi-i-i-i-i-i-i-s big!) rather than forecasting outcomes. The turnout that the Democrats were talking about had to be apparent in the polling, or it simply didn’t exist. You can’t statistically adjust people into the voting booth because the cool campaign guys told you to.
Election after election we go through the drill of conducting an elaborate post mortem on why the polls were wrong. The answer is obvious. Polls are correct through chance. But mere chance cannot account for the poll results in 2014. Polls have been harnessed to the service of creating news to the extent that they are no longer quasi-scientific instruments focused on forecasting outcomes but rather vehicles for setting a media narrative that governs the coverage of a particular race or even a national election.
I have no faith in polls and my skepticism will continue until the top polling organizations start calling most races, for the winning candidate, within their own margin of error.
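For reference, the margin of error that standard invokes is a simple function of sample size. A sketch of the conventional 95% formula for a simple random sample:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    p=0.5 is the worst case (widest interval); z=1.96 is the 95% z-score.
    """
    return z * sqrt(p * (1 - p) / n)

# Typical statewide poll sample sizes:
for n in (400, 600, 1000):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f} points")
```

Note that this applies to a single candidate’s share; the uncertainty on the *margin* between two candidates is roughly twice as large, a distinction often lost when a race is declared “within the margin of error.”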