This is the third and final part of my three-part polling post-mortem. Part I here looks at the national and state polls, and Part II at the likely voter screens and the electorate.
V. Polls vs. Non-Poll Tools
One of my premises in reviewing projections of turnout was that other items of information besides the polls were worth reviewing. Many of these same indicators favoring Romney in 2012 had forecast the rise of Obama in 2008. Why did so many of them prove useless or misleading?
The Bush era saw a modern-historic rise in the partisan component of the electorate, i.e., the percentage of votes that were either Democrats voting Democrat or Republicans voting Republican - and while the partisan component has become more Democratic, the trend has not significantly abated under Obama:
As long as that remains the case, knowing the partisan composition of the electorate remains critical. Yet non-poll data on the topic proved elusive. The data point I stressed that failed most spectacularly was the Rasmussen and Gallup surveys of party ID. I don't regret looking at those; they had a proven track record in the past of being right, and I like looking at data that has a proven track record in the past of being right. Rasmussen's surveys had been right in presidential and non-presidential election years, in years before and during/after the rise of Obama, and had never before overstated GOP turnout. And the Rasmussen survey in particular is based on an enormous sample of something like 15,000 interviews a month. But both proved to be way off the mark: Gallup had the electorate at R+1, Rasmussen's final survey R+6. Despite their record of accuracy before 2012, I will almost certainly put no stock in those surveys again. It really was different this time.
On the other hand, I still stand by my scorn for TPM's party ID survey average; it was useless slop that failed once again. It got 2010 wrong, and 2012 too: even if you adjust the numbers upward proportionally from the 90.7% of the population it purported to survey, it projects a D+8 electorate of D 36/R 28/I 36, when the exits told us it was D 38/R 32/I 30. The survey assumed significantly more independent voters than Republican voters, but Republicans outnumbered independents at the polls, just as they have every year since 1980.
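The proportional adjustment described above is simple arithmetic. A minimal sketch, with the caveat that the raw TPM shares shown here are back-computed assumptions chosen to be consistent with the 90.7% coverage and the D 36/R 28/I 36 projection stated above:

```python
# Proportionally rescale party-ID shares whose total falls short of 100%.
# The raw shares below are hypothetical, back-computed to match the 90.7%
# coverage and the D 36 / R 28 / I 36 projection discussed in the text.
raw = {"D": 32.7, "R": 25.4, "I": 32.6}  # sums to 90.7
total = sum(raw.values())

# Scale each share up by 100/total so the adjusted shares sum to ~100.
adjusted = {party: round(100 * share / total) for party, share in raw.items()}
print(adjusted)                        # {'D': 36, 'R': 28, 'I': 36}
print(adjusted["D"] - adjusted["R"])   # 8, i.e. a D+8 electorate
```

Even after the most generous rescaling, the projected D+8 electorate remains two points too Democratic and four points short on Republicans versus the D 38/R 32/I 30 exits.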
That said, as in 2008 and unlike in the off-year electorate of 2010, Republicans were at a recent-historically low share of the vote relative to independents, suggesting that their turnout problem was not solely one of high Democratic turnout - this chart computes the GOP share of (Republicans+Independents), and the Democratic share of (Democrats+Independents), so as to avoid letting one party's turnout cloud estimates of the other's:
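The chart's method can be stated as a one-line formula; here is a sketch applying it to the D 38/R 32/I 30 exit-poll figures cited above:

```python
# Each party's share of (own partisans + independents), the method the
# chart above uses to keep one party's turnout from clouding estimates
# of the other's.
def share_vs_independents(party_pct: float, indep_pct: float) -> float:
    return 100 * party_pct / (party_pct + indep_pct)

# Using the 2012 exit-poll figures of D 38 / R 32 / I 30:
gop_share = share_vs_independents(32, 30)  # ~51.6% of (R + I)
dem_share = share_vs_independents(38, 30)  # ~55.9% of (D + I)
```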
No two ways about it: there were not enough Republicans at the polls. The question for the GOP going forward is how to bring the people who stayed home or left the party back into the fold and the voting booth.
Then there are voter registrations; I relied on a bunch of studies showing that Democrats were registering new voters at a slower rate than 2008 and suffering a net decline in voter registration in key battleground states, while Republican registrations were up slightly and independent registrations were up dramatically. This hard data told the same story as the national party ID surveys and the voter enthusiasm self-reporting. While Democrats said they could just turn out the voters they'd registered in 2008, I was skeptical on two grounds: voters age 22-25 were likely to have moved since 2008, and voters age 18-21 could not have been registered then.
I may have overrated these problems. It would seem that OFA's digital outreach must have kept a handle on transient recent college grads. And we have yet to see final voter-registration figures; while 2008 featured yearlong registration drives, it's still possible that the Obama campaign just registered a whole lot of people in October and/or the day they voted. I'll be very surprised if we do not see, in the data that comes out after the fact, a surge in last-minute registrations.
There was also early voting and absentee ballot data; I didn't have systematic data, but lots of individual hard data points, especially from Colorado and Ohio, showed that early voting and absentee ballot requests were up in Republican areas or among registered Republicans, and down with Democrats, at least compared to 2008. Many of these data points came from official state records; they were not just the usual vaporous campaign emissions about how many doors they knocked on. Yet again, all this data turned out to be misleading. For example, the Colorado Secretary of State at one point was showing an R+2 electorate in the state after 62% of the state had voted early. Coming from official records, that seemed to me a non-crazy reason to think the electorate would be pretty good for the GOP, given that early voting is more of a Democratic strength in most states. Exit polls showed the Colorado electorate ended up D+4.
Relatedly, one of the realities the GOP has to come to grips with is the extent to which early voting has changed both the process of turning out voters and the process of polling even as compared to a decade ago - early voting makes it easier to turn out less-motivated voters, but also harder to use traditional tools to figure out who will turn out. Many of the polls in October hugely oversampled early voters (you'd get samples that were around 40% early voters when about 20% of the state, according to official records, had voted early) - but of course, with voter turnout overall below 60% of the voting-age population, you probably do need to oversample people who you now know are 100% certain to vote, if your sample is going to reflect final turnout. I suspect that at least in some states, the polls taking a turn towards Obama at the end reflected, not a change in public opinion, but a change in the poll samples as more of Obama's early vote got locked in. That suggests that past patterns in how the polls moved at the end of a race in the days before early voting may be a poor guide to how they will move in years to come.
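The weighting logic at issue can be sketched in a few lines. The figures here are purely hypothetical and are not drawn from any actual poll; they only illustrate how a pollster might down- or up-weight respondents when the sample's early-voter share diverges from the share expected in the final electorate:

```python
# Hypothetical illustration of sample weighting: if 40% of a raw sample
# has already voted early, but the pollster expects early votes to be,
# say, 35% of the final electorate, each early-voter response gets
# down-weighted and everyone else gets up-weighted. All figures assumed.
def respondent_weight(target_share: float, sample_share: float) -> float:
    return target_share / sample_share

early_weight = respondent_weight(0.35, 0.40)  # 0.875: early voters down-weighted
other_weight = respondent_weight(0.65, 0.60)  # ~1.083: remaining voters up-weighted
```

The catch, of course, is that the target share is itself a guess, and as more of one candidate's vote gets locked in early, re-guessing it can move the topline with no change in opinion at all.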
Another indicator I factored in, from within the polls, was polls of self-reported voter enthusiasm. Many, many polls reported GOP voters more enthusiastic about voting. Such polls have been indicative of an "enthusiasm gap" borne out on Election Day in the past, including in 2010; they were not this time. Ditto the less scientific indicator of the large, enthusiastic crowds Romney and Ryan drew on the trail. By contrast, one thing I didn't put a ton of stock in, small-dollar donations, favored Obama, and in retrospect it was probably a sign of the effectiveness of his digital outreach (the much-mocked three-a-day fundraising emails), and a proxy for real base enthusiasm just as it had been for Bush in 2004. Romney never really did particularly well with small donors.
I also failed to consider that Dick Morris predicted a Romney win, which should have set the probability of a Romney win to zero all by itself.
All of which does make me wonder whether, despite my longstanding philosophy of wanting to use external sources as a sanity check on the polls, there are any left we can trust. If relative or in some cases absolute advantages in voter registration, early voting, absentee balloting, party identification, and self-reported voter enthusiasm are not worth anything, we may be stuck trusting the pollsters' hunches - and may be blindsided the next time they are wrong.
VI. Presidential vs. Off Year Polling
Many of us quite reasonably thought that 2010 proved the GOP had recovered from its 2006 and 2008 wipeouts, and that we should expect an electorate in 2012 that looked at least as much like 2010 as like 2008; at a minimum, a midway point between the two, which would be D+3.5. After all, 2010 was the more recent sample, and both parties had contested it vigorously. But one of the real emerging lessons of 2012 is that we are in an age where turnout in mid-term elections is genuinely not predictive of the electorate that will show up in a presidential election, and vice versa. As with many things in the Age of Obama, it remains to be seen if this effect will persist after Obama is gone - but it is clearly with us now, and suggests both that (1) Democrats on the ballot in 2014 should not count on the 2012 electorate showing up and (2) even a strongly Republican-tilted electorate in 2014, if one resurfaces, will not tell us much about the 2016 electorate. Right now, I would not want to be Mark Warner facing the electorate that voted in Bob McDonnell by 19 points, or Ron Johnson facing the electorate that re-upped Obama by 7.
This chart shows each party's swing between the off year elections and the prior and subsequent general election:
As you can see, Republican turnout in the era from 1984 to 2000 was extremely steady every two years, in both general and off-year elections, around 35% of the vote. Democrats would go up and down relative to independents, but the GOP share was a constant. But since 2000, that has fluctuated much more wildly, with high GOP turnout in the 2002, 2004 and 2010 elections and low turnout in 2006, 2008 and 2012. That volatility is even higher than the volatility of the Democrats. What it suggests is, more or less, that there are a lot more casual Democratic voters than casual Republican voters - the GOP's determined base turns out rain or shine every two years except in a real washout like 2006, but the extra people who come out only every four years are (at present) composed more heavily of Democrats. That's terrible news if you're a Democratic candidate for Senator or Governor in 2014 (even aside from the usual carnage that attends a president's sixth-year elections), but it's also frightening news for Republicans considering the long-term strength of the party.
VII. Models vs. Averages
My criticism, and that of other informed skeptics on the Right, of Nate Silver's 538 model was on three grounds. First, most of the major controversies in this election cycle centered around how much faith to place in the state polling averages, a debate for now largely resolved in favor of the state polling averages. Since the 538 model runs on those averages, it successfully called the election - but so did the averages themselves, without the assistance of the model.
Second, the model has been oversold. This really has nothing to do with the model itself, and everything to do with making people understand that it was only as good as its inputs. That criticism still stands: as noted in Part II, the polls had to make some very unscientific adjustments to keep up with the electorate this year, and there are significant reasons to question their ability to do so in the future. If the pollsters' "hunches" are wrong next time, the model contains no mechanism to avoid failing just as it has failed in virtually every past instance where the state polls were wrong. If you view the 538 model as a way of aggregating imperfect inputs - like the RCP average, but with some additional bells and whistles - you can get value from it as an informed consumer. If you view it as an infallible Oracle to be obeyed, you are likely to sooner or later be disappointed.
Third, the most questionable part of the model is its projections of the likelihood of how late-deciding voters will break, which by definition is the part not anchored to the polls. (You can read Nate Silver's breakdown of past incumbent-challenger races here, and while as he notes it suffers from the usual small-sample-size problems of any presidential poll analysis, you can also see that the challenger has traditionally tended to gain more ground than the incumbent as compared to his standing in the October polls). This is an area where others in this field have done more work than I have, so I won't repeat the controversies, but one of my prior concerns was Ted Frank's point that the 538 model was placing heavy emphasis on the 2000 election in projecting that voters were less likely to break against an incumbent party when the Democrats are in office than the Republicans. Ted's point was that Bush's DUI story was an unusual end-of-race event not likely to recur here (I had a good deal of confidence that Mitt Romney had never been busted for DUI). But we did, yet again, have an end-of-the-race late-October surprise, in the form of Hurricane Sandy, and we did, yet again, have voters break towards Obama right at the end. Unless you place a lot of value on the ability of the media to spin a late-breaking story in the Democrats' favor, however (not a factor in Bush's case, since the story was self-explanatory), it's hard to see how you build a credible mathematical model that assumes this sort of thing will happen with regularity.
The model's usefulness in presidential polling is also not necessarily translatable to other races, especially in off-years when the electorate is not as predictable. There was no running 538 forecast this year, at all, for the Democrats' chances of re-taking the House (which they did not). In 2010, the 538 forecast in August 2010 gave Republicans only around a 60 percent chance of taking the House, and still had Democrats with about a 20% chance of holding their House majority as late as Election Day - a much higher chance than the model gave Romney of winning this year. But of course, the Democrats got shellacked in a landslide, far worse by historic House standards than Romney's loss by historic presidential standards.
As to the parts of the 538 model that go beyond just plugging in the state poll averages, I continue to take Bill James' view of expert and expertise:
"[G]etting the answers right" had almost nothing to do with the success of my career. My reputation is based entirely on finding the right questions to ask - that is, in finding questions that have objective answers, but to which no one happens to know what the objective answer is...When I do that, it makes almost no difference whether I get the answer right, or whether I get it a little bit wrong. Of course I do my very best to get the answers right, out of pride and caution, but it doesn't actually matter.
Because if I don't get the answer right, somebody else will. It is called "science."
...[T]he scientific method has been the greatest ally of my career. Basically, what I know about the scientific method would fit onto a bumper sticker, and, that being the case, I might as well read you the bumper sticker. We design tests to see whether an assertion is compatible or incompatible with the evidence. When you do that, someone else will always figure out some way to do another test, and a better test. When that happens, it is my responsibility to acknowledge that the other person's research is better than mine or is an advancement from mine. What is necessary to the advancement of knowledge, then, is humility - the capacity to recognize that other people have accomplished something that I have not been able to accomplish. That, then, is the bumper sticker: what is necessary to the advancement of knowledge is humility.
When you go to an expert and you say that, "I don't think that what you are saying is true," that will be perceived as arrogance. Who are you to challenge the experts? But it is not arrogance, at all; it is grounded in the understanding that we are all floating in a vast sea of ignorance, and that much of what we all believe to be true will later be shown to be nonsense. To recognize this is not arrogance; it is humility.
When I was in Elementary School in the early 1960s, our principal was fond of telling us that, when he was a young man just after World War One, he took a college chemistry class, in which the professor told the students that they were studying science at the ideal time, because all of the important discoveries had been made now. Everything that there was to be known about chemistry or biology or physics, he suggested, was pretty much known now.
I add to that my own prior view of experts:
[T]he expert who learns that the recitation of jargon and the appeal to authority effectively exempts him from moral or social scrutiny has made the most dangerous discovery known to man: the ability to get away with virtually anything. Because if people will let you talk your way into money and influence with good science on the grounds that they do not understand it or have no right to obstruct it, what is to stop the expert from using bad science to accomplish the same end, if the layman isn't equipped to tell the difference between the two?
We have not arrived now at the End of History or the End of Science. The polling controversies of past election cycles forced pollsters and poll analysts to learn important lessons. The polling controversies of this election cycle have, in my view, done the same. I wish my conclusions had carried the day this time, but I make no apology for challenging assumptions that were being treated as Holy Writ by liberals merely because, on this occasion, those assumptions proved correct. That's what I do in my day job as a lawyer, where I often encounter two contending experts with irreconcilable conclusions: probe their competing assumptions to expose what each side's conclusions assume to be true. There will always be a role for a Socrates, asking well-compensated analysts and pundits to explain themselves and put their assumptions on the line to be judged. The day we stop asking those questions is the day we let the "experts" know they can get away with anything just by hanging some numbers on it.
For the reasons explained in Part II, the old model of what kind of voter represents the swinging center of the electorate didn't work in 2012 - in fact, the center wasn't the decisive factor at all, but rather the huge margins in one corner of the electorate matched against a party that saw falling turnout among its natural base. Here in a single chart is the winning candidate's share of the two-party vote among five groups traditionally thought of as swing voters since 1972 - independents, suburbanites, voters age 30 and up, white women, white Catholics:
Mitt Romney's coalition among these five groups would have been the foundation of a clear national majority throughout the political era that ran from Harry Truman to George W. Bush, and the polling practices that grew up in that era would have captured that majority's formation. Pollsters have had to unlearn a lot of what they knew about polling in order to stay ahead of those changes in the electorate, and poll analysis does as well. The harder question will be how we can tell, other than after the fact, if the pollsters' guesses don't capture future shifts. Merely appealing to the idea that the majority of pollsters will always be right is an unsatisfying answer, especially given the follow-the-herd tendencies in the industry.
So the state poll averages were right, and really nothing that contradicted their narrative was. Does all of this mean that the state poll averages, and the models that run on them, will always be right in the future? Of course not. Anybody who has followed gambling prognosticators or stock pickers knows that winning streaks of a couple of cycles do not always equal omniscience, even when backed by facially impressive-looking math. Unquestioning faith in mathematical models still has not been adequately called to account for its role in the 2008 financial crisis, for example. (On the other hand, climate models can only dream of the predictive success record of the state poll averages). Just because somebody gives you a prediction with numbers on it and is right a few times in a row doesn't mean they always will be. If you look at this as a science, you have to recognize that we don't have nearly enough data from presidential elections to constitute a meaningful sample size. And the fact that the poll averages were right because the pollsters changed the way they poll - in a world of ongoing technological and demographic change, and via methods that are themselves far from scientific - leaves us with a lot of uncertainties about whether they will make the right guesses again next time. Tom Jensen's next "hunch" could be wrong. There might be elections in the future in which polls using likely voter screens are more accurate than polls that all but abandon the project. Skeptical examination of the assumptions behind the polls' turnout forecasts will not go away, and should not go away.
But all that said, we're conservatives; we learn from experience, and even when the process is questionable, results talk. The case for trusting state poll averages over all other indicators, at least in the stretch run of presidential elections, has been strengthened a good deal by a third consecutive cycle of those averages calling the result right in 48 or 49 states out of 50. The case for treating other indicators as predictive of turnout has been weakened and in some specific cases pretty badly discredited. And while I remain a little less firmly convinced of the value added by modeling of how undecided voters will break at the end - over and above the value of the poll averages themselves - the 538 model had a good election in that regard.
A final word. While I've been following elections for a long time, I really cut my teeth reading polls in the 2002 and 2004 elections. I recall well from those races seeing a lot of polls that were registered-voter polls or polls with D-heavy samples very favorable to Democrats, yet the end results were much more favorable to Republicans. In the presidential elections of 2000, 2004 and even 2008 (before the financial crisis), we repeatedly saw the polls shift towards the GOP when we got past Labor Day and most pollsters started using likely voter screens. I learned a lot from that experience, some of which is clearly still true, and some not.
But I also saw a lot about human nature that is eternally true. A lot of Republican pundits and poll-readers looked like geniuses in that period by projecting Republican wins in a lot of the competitive races. The lesson, then as now, is that it is easy to look smart when your own side is winning all the close ones. It doesn't make you a bad or dishonest advocate for your side if you are better at predicting your side's victories than its losses. But it means you are still one side's advocate – and while I work hard to call things as I see them (I genuinely believed every word I wrote in this race), I make no bones about being an advocate.
But you have not really made it as a neutral arbiter of presidential polling - let alone a scientific one - until you have given both sides news they desperately do not want to hear. We will know Nate Silver has really made it as a presidential pollster when people on his own ideological side are screaming in terror at his conclusions, and not before.