The ‘statistical tie’ fallacy


As I said earlier, journalists will use every device to persuade us that elections are closer than they are, in order to keep interest high. One of the things they do is to assert that if the difference between the predicted votes for two candidates is smaller than the margin of error of the poll, then the two candidates are in a 'statistical tie'. This gives the impression that it is a toss-up, i.e., a 50-50 chance, as to who is ahead. This is simply not true, and it is worth reiterating during election season, as I did around this time back in 2008.

The margin of error of a poll is found by calculating 100/√N, where N is the sample size. It says that if (say) the poll uses a sample size of 1,000 (hence a margin of error of about 3.2%) and it shows 45% support for a candidate, then there is a 95% probability that the candidate's actual support lies somewhere between 42% and 48%. (This is why you have to be cautious when people dig into the data to look at the so-called 'internals', i.e., the results pertaining to a specific demographic subset such as those who are over 65 or Hispanic or whatever. The margin of error can be quite large if the number of respondents in that group is small. If there were only 100 people in that group, then the margin of error would be 10%.)
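For the record, here is a short Python sketch of that rule of thumb. The 1.96 factor and the worst-case assumption of 50% support are what lie behind the 100/√N formula; the exact numbers differ slightly from the rounded ones above.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a poll of sample size n.

    Uses the normal approximation z * sqrt(p * (1 - p) / n); with p = 0.5
    and z = 1.96 this reduces to roughly 100 / sqrt(n).
    """
    return 100 * z * sqrt(p * (1 - p) / n)

print(margin_of_error(1000))  # ~3.1 points, the familiar "3 percent" poll
print(margin_of_error(100))   # ~9.8 points: why small subgroups are so noisy
print(100 / sqrt(1000))       # ~3.2, the quick rule-of-thumb version
```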

But just because the ranges for the two candidates overlap does not mean that the race is a statistical tie. It is possible to calculate the probability that one candidate is ahead, and Kevin Drum gives a helpful little table.

According to this, if the margin of error is 3% and one candidate leads the other by 3 points, then rather than the result being a toss-up, there is actually an 84% probability that that candidate is ahead. If the lead is 2 points and the margin of error is a whopping 5%, the candidate still has a 65% chance of actually being ahead. You could make quite a bit of money by betting on election outcomes with people who do not know this.
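Those figures can be checked with a few lines of Python. The calculation below makes the usual simplifying assumption of a two-candidate race, in which case the standard error of the lead (the gap between the candidates) is twice the single-candidate standard error, which in turn is the 95% margin of error divided by 1.96:

```python
from math import erf, sqrt

def prob_ahead(lead, moe):
    """Probability that the nominally leading candidate is really ahead.

    Assumes a two-candidate race, so the standard error of the lead is
    twice the single-candidate standard error moe / 1.96. Both arguments
    are in percentage points.
    """
    se_lead = 2 * moe / 1.96
    z = lead / se_lead
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

print(prob_ahead(3, 3))  # ~0.84: 3-point lead, 3-point margin of error
print(prob_ahead(2, 5))  # ~0.65: 2-point lead, 5-point margin of error
```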

When there are multiple polls surveying the same question, one can use other statistical techniques that further reduce the margin of error, which is why the predictions of poll aggregators like Sam Wang or Nate Silver tend to be more accurate than those of individual polls. This is also why it is unfair to compare the accuracy of any single polling outfit unfavorably with that of the poll aggregators, since the latter have a built-in advantage. If Gallup (say) could aggregate its own poll with all the other polls, its predictions would be more accurate too. But of course they don't (can't? won't?) do that.
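Aggregators use far more elaborate models than this (house effects, time weighting, and so on), but the basic reason pooling helps follows from the rule of thumb above: combining k independent polls is roughly like running one poll with the combined sample, so with equal sizes the margin of error shrinks by about √k. A minimal illustration, using made-up sample sizes:

```python
from math import sqrt

def moe(n):
    """Rule-of-thumb 95% margin of error, in points, for sample size n."""
    return 100 / sqrt(n)

polls = [1000, 800, 1200, 1500]  # hypothetical sample sizes of four polls

print([round(moe(n), 1) for n in polls])  # individual MOEs: 3.2, 3.5, 2.9, 2.6
print(round(moe(sum(polls)), 1))          # pooled MOE: ~1.5
```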

Comments

  1. slc1 says

    It should also be pointed out that some pollsters use 1 standard deviation as their margin of error. Considering that the Physical Review requires 5 standard deviations to claim an effect is statistically significant, that’s pretty lame.

  2. Mano Singham says

Yes, that’s right. And they don’t use 5 sigma for everything, but major results do provoke extra scrutiny.

  3. Uri says

Reminds me of when the first statistical study of excess deaths in Iraq after the US invasion came out in The Lancet, which put the number of deaths at 98,000 but had a large 95% confidence interval of 8,000-194,000. Some media reported it as though the 8,000 figure was as likely as the 98,000 figure, although of course there would be only a 2.5% chance that the figure was lower than 8,000, and a 50% chance that it was higher than 98,000.

  4. richardrobinson says

    Isn’t the confidence level required also proportional to the noise-to-signal ratio?
