Before he moved over to his new home at Mother Jones, Kevin Drum revisited a topic at his old Washington Monthly blog that I too have raised before: criticizing reporters who declare a “statistical dead heat” whenever the polls show that the difference between voters’ preferences for two candidates falls within the margin of error.
In other words, if the polls show 46% for Obama and 43% for McCain with a 3% margin of error, then the race is reported as a “statistical tie” or some such thing, giving the impression that it is a toss-up as to who is ahead. This is simply not true.
Drum consulted two professors of mathematics and statistics at California State University, Chico, and they provided the formulas that enabled him to prepare a handy little chart giving the actual chance that someone is ahead, even though the preferences fall within the margin of error.
[Chart omitted: the probability that the leading candidate is actually ahead, for margins of error from 1% to 6%.]
So if a candidate has a 3% lead with a 3% margin of error, far from being a dead heat, it is highly likely (an 84% chance) that the candidate is actually ahead. Even if the candidate has only a slim 1% lead and the margin of error is a whopping 5%, it is still not a ‘dead heat’: the candidate still has a 58% chance of actually being in the lead.
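These numbers can be reproduced with a standard normal approximation. This is a sketch under the usual assumptions (the reported margin of error is a 95% interval for a single candidate’s share near 50%, and the lead is the difference of the two candidates’ shares), not necessarily the exact formula Drum’s sources used:

```python
from math import erf, sqrt

def prob_actually_ahead(lead, moe):
    """Chance (0-1) that the leader is really ahead, given the reported
    lead and the poll's 95% margin of error, both in percentage points.

    Assumes the margin of error equals 1.96 standard errors of one
    candidate's share near 50%; the lead (difference of the two shares)
    then has a standard error of roughly twice that of a single share.
    """
    se_lead = 2 * moe / 1.96             # standard error of the lead
    z = lead / se_lead                   # how many SEs the leader is ahead
    return 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF at z
```

With these assumptions, a 3-point lead with a 3% margin of error comes out to roughly an 84% chance of truly being ahead, and a 1-point lead with a 5% margin of error to roughly 58%, matching the figures cited above.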
Like Drum, I do not have much hope that reporters will ever change their misleading reporting because they have a vested interest in continuing to talk this way. Races that are close generate more interest and thus more viewers and readers, so reporters will always try to make them seem closer than they are.
Talking of polls, there seems to have been an explosion in the number of polling organizations out there, and their results differ. This can cause some confusion in the public mind. When one poll gives one result one day and the media report quite different results from another poll the next day, people may get the impression that the race is highly volatile or that some polling organizations are biased in favor of one candidate or another.
But that need not be true. There is something called the ‘house effect’ that can skew the results in particular ways without any intention of misleading. Charles Franklin over at Pollster.com explains what is going on:
Who does the poll affects the results. Some. These are called “house effects” because they are systematic effects due to survey “house” or polling organization. It is perhaps easy to think of these effects as “bias” but that is misleading. The differences are due to a variety of factors that represent reasonable differences in practice from one organization to another.
For example, how you phrase a question can affect the results, and an organization usually asks the question the same way in all their surveys. This creates a house effect.

Another source is how the organization treats “don’t know” or “undecided” responses. Some push hard for a position even if the respondent is reluctant to give one. Other pollsters take “undecided” at face value and don’t push. The latter get higher rates of undecided, but more important they get lower levels of support for both candidates as a result of not pushing for how respondents lean.

And organizations differ in whether they typically interview adults, registered voters or likely voters. The differences across those three groups produce differences in results. Which is right? It depends on what you are trying to estimate – opinion of the population, of people who can easily vote if they choose to do so or of the probable electorate. Not to mention the vagaries of identifying who is really likely to vote.

Finally, survey mode may matter. Is the survey conducted by random digit dialing (RDD) with live interviewers, by RDD with recorded interviews (“interactive voice response” or IVR), or by internet using panels of volunteers who are statistically adjusted in some way to make inferences about the population?
Given all these and many other possible sources of house effects, it is perhaps surprising the net effects are as small as they are. They are often statistically significant, but rarely are they notably large.
One way to avoid mistaking inter-poll variability for voter volatility is to track the results of just one poll. In other words, only compare the results of one poll with the earlier results of the same poll conducted using the same methods and questions.
Another way is to do what the outfit Real Clear Politics does. It tries to take some of the inter-poll variability out by giving the averages of the major polls as a function of time.
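The idea behind such averaging can be sketched in a few lines. The poll numbers, dates, and polling houses below are made up for illustration, and Real Clear Politics’ actual method (which polls it includes, how its time window is chosen) differs; the point is only that averaging across houses lets systematic house effects partially cancel instead of showing up as day-to-day swings:

```python
from collections import defaultdict

# Hypothetical poll results: (date, polling house, candidate's support in %).
polls = [
    ("2008-08-25", "Gallup", 46), ("2008-08-25", "Rasmussen", 44),
    ("2008-08-26", "Gallup", 47), ("2008-08-26", "Rasmussen", 45),
    ("2008-08-26", "Hotline", 43),
]

def daily_average(polls):
    """Average all polls released on the same date, smoothing out
    house-to-house differences in the reported support level."""
    by_date = defaultdict(list)
    for date, house, support in polls:
        by_date[date].append(support)
    return {date: sum(vals) / len(vals) for date, vals in sorted(by_date.items())}
```

In this made-up example, Gallup runs consistently a couple of points above Hotline, but the daily averages stay flat, which is exactly the stability a single volatile-looking poll sequence can hide.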
To paraphrase Jon Stewart, elections are god’s way of teaching Americans statistics.
POST SCRIPT: Mike Huckabee on Colbert Report
It was refreshing to listen to Mike Huckabee being interviewed on the Colbert Report about his reaction (after just the first two days) to the Democratic Convention. Huckabee was one of the most interesting primary candidates on the Republican side but the attacks on him from the Republican Party establishment were quite vicious.
Although I disagree with many of his views, there was something engaging and honest about him that I found likeable. He also has a sense of humor. All these positive characteristics are reflected in the interview. His closing comments on Obama and the role of race in America seemed genuine and heartfelt.