Nate Silver has all the liberals calmed down these days. He is the Xanax of an activist left nervous about Tuesday. In an article HotAir linked to earlier today about rich, white San Francisco residents so upset by the thought of a Romney win they can't use their home gyms, Mary Katharine Ham found this hilarious quote about a liberal who watched the first debate:
“After I read Nate Silver the other day, I felt better,” Blume said.
It is as if the former Daily Kos blogger who went on to be the lefty pollster du jour at the New York Times is some sort of magician. Actually, he's not. He's a smart guy to be sure, but my buddy Sean Davis has a great must-read about how you too can be the next Nate Silver with a cheap Microsoft Excel spreadsheet plugin, whether or not you've ever had a Daily Kos account.
I spent a few hours re-building Nate Silver’s basic Monte Carlo poll simulation model from the ground up. It is a simplified version, lacking fancy pollster weights and economic assumptions and state-by-state covariance factors, but it contains the same foundation of state poll data that supports Nate Silver’s famous FiveThirtyEight model. That is, they are both built upon the same assumption that state polls, on average, are correct.
After running the simulation every day for several weeks, I noticed something odd: the winning probabilities it produced for Obama and Romney were nearly identical to those reported by FiveThirtyEight. Day after day, night after night. For example, based on the polls included in RealClearPolitics’ various state averages as of Tuesday night, the Sean Davis model suggested that Obama had a 73.0% chance of winning the Electoral College. In contrast, Silver’s FiveThirtyEight model as of Tuesday night forecast that Obama had a 77.4% chance of winning the Electoral College.
So what gives? If it’s possible to recreate Silver’s model using just Microsoft Excel, a cheap Monte Carlo plug-in, and poll results that are widely available, then what real predictive value does Silver’s model have?
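The model Davis describes is simple enough to sketch in a few dozen lines. The core assumption, as he says, is that the state poll averages are correct on average; each simulated election then draws a random polling error around each state's average, awards that state's electoral votes to whoever comes out ahead, and tallies how often Obama reaches 270. The state margins, error sizes, and safe-state electoral vote counts below are illustrative placeholders I've invented, not Davis's actual inputs:

```python
import random

def simulate_election(battlegrounds, safe_dem_ev, trials=10000, seed=0):
    """Monte Carlo electoral-college simulation in the spirit of the
    Davis/Silver approach described above.

    battlegrounds: dict of state name -> (electoral votes,
                   poll-average margin for Obama in points,
                   assumed std. dev. of polling error in points).
    safe_dem_ev:   electoral votes assumed safe for Obama; the rest
                   of the battleground map goes to Romney when lost.
    Returns Obama's estimated probability of reaching 270.
    """
    rng = random.Random(seed)
    dem_wins = 0
    for _ in range(trials):
        dem_ev = safe_dem_ev
        for ev, margin, sd in battlegrounds.values():
            # Core assumption: the poll average is an unbiased estimate
            # of the true margin, with normally distributed error.
            if rng.gauss(margin, sd) > 0:
                dem_ev += ev
        if dem_ev >= 270:
            dem_wins += 1
    return dem_wins / trials

# Hypothetical battleground margins for illustration only --
# not actual RealClearPolitics averages.
BATTLEGROUNDS = {
    "Ohio":          (18,  2.0, 3.5),
    "Florida":       (29, -1.0, 3.5),
    "Virginia":      (13,  0.5, 3.5),
    "Colorado":      (9,   1.0, 3.5),
    "Iowa":          (6,   2.0, 3.5),
    "Nevada":        (6,   2.5, 3.5),
    "New Hampshire": (4,   1.5, 3.5),
    "Wisconsin":     (10,  4.0, 3.5),
}

if __name__ == "__main__":
    # 237 safe Democratic electoral votes is a placeholder figure.
    p = simulate_election(BATTLEGROUNDS, safe_dem_ev=237)
    print(f"Obama win probability: {p:.1%}")
```

Note what the sketch omits, just as Davis says his does: pollster weights, economic fundamentals, and correlated errors across states. Each state's polling error is drawn independently, which is the simplification that makes the whole thing fit in a spreadsheet plugin.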
What might make the difference between the Sean Davis model and the Nate Silver model? Well, in 2008 Silver had access to internal Obama polling that he never disclosed. Maybe he has access to some this time as well.
In other words, it is worth noting to those on the right beating up Silver that the data is shaping his model. He is not shaping the data.
But it is also worth noting that for all the super-special pride attributed to Nate Silver and his predictive powers, you too can do it with a cheap Excel plugin.
Lastly, as a number of people have noted, including Sean Davis in this piece, the big issue this year is whether or not the state polls are wrong. If they are, then Silver is wrong. If they are right, Silver is right. It's that simple. As Sean notes about 2010:
State polling averages were wrong in Alaska (they said Joe Miller would be elected), wrong in Colorado (they said Ken Buck would be elected), and embarrassingly wrong in Nevada (they said Harry Reid would be involuntarily retired). FiveThirtyEight incorrectly forecast the winner in each of those states, perfectly reflecting the inaccurate information contained in the state polls.
Thus, of the five major state races in which polls were wrong over the last four years, Silver only got one right. I’m no baseball scout, but batting .200 when it counts won’t get you into the big leagues, let alone the All-Star game.