Tentative Higgs sightings


Reports are emerging from the Large Hadron Collider at CERN of tentative evidence for the Higgs boson at a mass of about 125 GeV, or roughly 133 times the mass of a proton.

Why are these reports so tentative? It is because the standards within the high-energy particle physics community for claiming the discovery of a new effect are very high. In many areas of research, a 5% probability of the result occurring by chance is considered small enough to be worth publishing as a potential real effect. But when it comes to discoveries of new particles, the results must meet the five-sigma threshold, which means that the probability of seeing a signal at least this strong purely by chance, if there were no real effect, is just 0.000028%. Apparently the current level of confidence in the data is only at the three-sigma level, which corresponds to a 0.13% probability of the signal arising by chance.
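
For the numerically inclined, these sigma levels are just tail probabilities of a standard normal distribution, under the one-sided convention usually quoted in particle physics. A minimal Python sketch reproduces the percentages above:

    import math

    def one_sided_p(sigma):
        """Probability of a standard normal fluctuation beyond `sigma`."""
        return 0.5 * math.erfc(sigma / math.sqrt(2))

    for sigma in (3, 4, 5):
        p = one_sided_p(sigma)
        print(f"{sigma} sigma: p = {p:.2e} ({100 * p:.7f}%)")

    # 3 sigma: p = 1.35e-03 (0.1349898%)   -- the 0.13% figure
    # 4 sigma: p = 3.17e-05 (0.0031671%)   -- the 0.0032% figure quoted below
    # 5 sigma: p = 2.87e-07 (0.0000287%)   -- the 0.000028% discovery threshold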

Two independent experiments at the LHC are seeking the Higgs. It is reported that each one is approaching the four-sigma (0.0032% chance) level, and once they do, combining their data into a larger set, and thus improving the statistics, may push the result past the five-sigma threshold.
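
How combining helps can be seen with a naive back-of-the-envelope combination of two independent, equally sensitive results (an equal-weight average of the z-scores; the real ATLAS and CMS combination is done with full likelihoods, so treat this only as a sketch):

    import math

    def one_sided_p(sigma):
        return 0.5 * math.erfc(sigma / math.sqrt(2))

    def combine(z1, z2):
        # Equal-weight combination of two independent z-scores
        return (z1 + z2) / math.sqrt(2)

    z = combine(4.0, 4.0)          # two hypothetical 4-sigma results
    print(f"combined: {z:.2f} sigma, p = {one_sided_p(z):.1e}")
    # combined: 5.66 sigma, p = 7.7e-09 (under these idealized assumptions)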

Comments

  1. jamessweet says

    You’re in a better position to explain this than I am, Mano, being a physicist and all, but I think it’s worth a brief blurb explaining why particle physicists have such a high threshold. It’s not like they are particularly more keen than those in other disciplines to avoid being wrong, and even if they were, that alone would not be enough to justify a dissatisfaction with a 3- or 4-sigma level of confidence.

    I’ll take a crack at it, but correct me if I get anything wrong: It’s basically the Green Jelly Beans Cause Acne problem, writ large. If you perform 20 separate experiments, and only one of them has p < 0.05 (and it’s just barely under), then that means nothing. Because of the amount of data that high energy particle physicists trawl through, it’s like they are performing elebenty gazillion different experiments all at the same time. If one of them comes up with p < 0.001… well, so freakin’ what? They performed so many experiments that odds are that a non-trivial handful of them would get such an unlikely result just by chance. In a nutshell: Because their dataset is so huge, it is a certainty that some really unlikely trends will appear in it just out of sheer chance. So you need to have a tremendously unlikely trend before you believe it.

    Is that about right?
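
    (A quick simulation makes this concrete; the number of independent searches below is made up purely for illustration.)

        import math, random

        random.seed(1)

        def one_sided_p(z):
            return 0.5 * math.erfc(z / math.sqrt(2))

        n_searches = 10_000              # hypothetical number of independent places to look
        flukes = sum(1 for _ in range(n_searches)
                     if one_sided_p(random.gauss(0.0, 1.0)) < 0.001)
        print(f"{flukes} 'signals' below p = 0.001 out of {n_searches} searches of pure noise")
        # Roughly 10 such flukes are expected, which is why a lone 3-sigma excess
        # means little when many mass bins are being examined.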

  2. unbound says

    To be honest, I think it is just that particle physicists are more thorough in general. There are a lot of problems with using statistical evidence as proof of something (statistical analysis does a far better job of refuting things), and all disciplines should be pushing for much better than the 5% threshold.

    Heck, most of the nutritional studies I’ve read (that bother to publish the actual statistics) barely reach the statistical significance threshold…yet most of those studies claim victory. This is why health advice keeps changing so often (I know everyone likes to blame the media, but for nutritional studies, the scientists are actually more at fault).

  3. Mano Singham says

    I am not really sure what the reasons are, so your guess is as good as mine, but I suspect that both reasons given so far are at play.

    In general, the hard work in particle physics is in setting up the experiment. Once you have it, you can run it over and over (given limits on the time and money needed to run the accelerators) and get lots of data, and this abundance of data enables one to set a high bar for confidence limits. Because the initial set up is so expensive, repeating another person’s experiment is not easy, so people are not likely to invest the time and money to do so unless they are pretty sure that it is not a wild goose chase.

    To be fair to the people in the health sciences, it is very expensive to increase the data set, since each clinical trial has complicated protocols and finding patients who match the required profile is not easy, and so the bar is lower. But as unbound says, the serious downside is that this can lead to a high frequency of false positives.

  4. F says

    I would also throw into this mix, at least for high-energy particle physics, the consideration that what is actually detected and measured is frequently a proxy (of a proxy, etc.). That is, you get to measure decay products (of decay products, perhaps) and must rule out, with great confidence, the possibility that some other particle interaction produced what was detected. And you are looking all along a mass curve, where ‘if the Higgs has mass x, then expect decay products blah blah blah’.

    Quantum mechanics is extremely precise and accurate, so if you are going to throw a new particle in, it has to be extremely well-defined. (Just the fact that modern electronics actually work regularly is a testament to how insanely on-the-button QM is.)
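
    (For instance, in the two-photon channel what is actually histogrammed is an invariant mass reconstructed from the measured photon energies and opening angle; a Higgs would appear only as a small bump on a smooth background. A minimal sketch of that reconstruction, with invented numbers:)

        import math

        def diphoton_mass(e1_gev, e2_gev, opening_angle_deg):
            """Invariant mass (GeV) of two massless photons from their energies
            and opening angle: m^2 = 2 * E1 * E2 * (1 - cos(theta))."""
            theta = math.radians(opening_angle_deg)
            return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(theta)))

        # Hypothetical event: two 70 GeV photons separated by 127 degrees
        print(f"m_gamma_gamma = {diphoton_mass(70.0, 70.0, 127.0):.1f} GeV")   # ~125 GeV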

  5. ACN says

    Mano,

    I was actually really disappointed with this WIRED piece. The only source they actually cite in the article (other than their own) is a crazy person at the vixra-blog, which I can only assume is because the writer was too lazy to figure out which of arxiv or vixra was the crazy one 🙂

    At any rate, it seems like we’re grasping at straws until 7/4:
    http://press.web.cern.ch/press/PressReleases/Releases2012/PR16.12E.html

  6. says

    Medical studies give a prime example of why not to trust 95% statistical significance. It seems that in real life, they are only accurate about 50% of the time.

    The thing to remember is that we are only talking about statistical significance. There are SO MANY other ways for experiments to screw up; using the 5 sigma mark helps compensate for that.
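
    (A rough back-of-the-envelope calculation shows how a p < 0.05 threshold can end up near coin-flip reliability; the prior fraction of true hypotheses and the statistical power below are invented for illustration.)

        # Of all "statistically significant" findings, what fraction reflect a real effect?
        alpha = 0.05        # significance threshold: false positive rate when there is no effect
        power = 0.5         # assumed chance of detecting an effect that is really there
        prior_true = 0.10   # assumed fraction of tested hypotheses that are actually true

        true_pos = prior_true * power
        false_pos = (1 - prior_true) * alpha
        print(f"fraction of 'significant' results that are real: {true_pos / (true_pos + false_pos):.0%}")
        # ~53% with these invented numbers, close to the "about 50% of the time" figure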

  7. Graham says

    James, you are right in principle, but the CERN researchers are not doing a gazillion experiments. They are effectively doing quite a lot, because they have been searching for the Higgs over a wide range of masses, but it’s not a vast number, and it can be accounted for. Here’s how Nature (7 March 2012) reported it:

    In February CMS announced a possible signature of the Higgs at the statistical level of 1.5 sigma (sometimes quoted as 3.1, a figure that would assume the team looked only in the region where they saw the largest excess, when in fact they searched for an excess of events appearing anywhere in a wide mass range), and its counterpart experiment at CERN, ATLAS, reported one at 2.2 sigma (sometimes quoted as 3.5). Both experiments have since refined their analyses, resulting in slightly weaker signals being reported at Moriond today.

    I wonder if the final announcement at the 5 sigma level (if there is one) will take this into account?

    Second point. The 5 sigma level translates into a 0.000028% probability if you assume the errors are distributed normally. There is little reason to suppose they are. Studies of empirical data from all areas of science show that real-world data usually has a longer tailed distribution than the normal. Some recommend using a t distribution with 5-10 degrees of freedom, or a logistic. These give probabilities around 0.01% to 0.07%. Still very small but a lot less impressive than 0.000028%. (I suspect that physicists use the sigma formulation because they don’t want to express an opinion as to what the true error distribution is.)
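
    (The direction of this effect is easy to check, assuming scipy is available; the exact numbers depend on how the heavier-tailed distributions are scaled, so the sketch below illustrates the comparison rather than reproducing the figures above.)

        import math
        from scipy import stats

        z = 5.0
        print(f"normal:       {stats.norm.sf(z):.2e}")         # ~2.9e-07, i.e. the 0.000028% figure
        print(f"t, 10 d.o.f.: {stats.t.sf(z, df=10):.2e}")     # heavier tails give larger probabilities
        print(f"t, 5 d.o.f.:  {stats.t.sf(z, df=5):.2e}")
        logistic_scale = math.sqrt(3) / math.pi                 # logistic rescaled to unit variance
        print(f"logistic:     {stats.logistic.sf(z, scale=logistic_scale):.2e}")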

    Two more factors need to be taken into account before deciding whether to make an official public announcement. The first is how extraordinary or unlikely the claim seems to be. This is subjective, but that’s inevitable. The second factor is the cost of getting it wrong. What is the cost of making an announcement which turns out to be false? What is the cost of NOT making an announcement which turns out to be true?

    An example. On 26 May 2011, German health officials announced that cucumbers from Spain were identified as a source of the E. coli outbreak in Germany. This turned out to be false, but the cost of not making the announcement had it been true would have been high.
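
    (Those two costs can be folded into a toy expected-cost comparison; every number below is invented purely to show the structure of the decision.)

        def expected_costs(p_true, cost_false_alarm, cost_missed_warning):
            """Expected cost of announcing versus staying silent, for a claim
            that is true with probability p_true (toy model, invented numbers)."""
            announce = (1 - p_true) * cost_false_alarm      # you pay only if the claim is wrong
            stay_silent = p_true * cost_missed_warning      # you pay only if the claim was right
            return announce, stay_silent

        # Even a modest chance of being right favours announcing when a missed
        # warning is far more costly than a false alarm, as in the E. coli case.
        announce, silent = expected_costs(p_true=0.3, cost_false_alarm=1.0, cost_missed_warning=20.0)
        print(f"expected cost if we announce: {announce:.1f}, if we stay silent: {silent:.1f}")   # 0.7 vs 6.0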

  8. says

    Because the initial set up is so expensive, repeating another person’s experiment is not easy

    Seems that’d be a big piece of it. It’s not like someone is going to go independently build their own LHC. Now, the interesting question is whether, if they did, the results would be the same. The “faster than light neutrinos” come to mind as an example of how a flaw in the experimental apparatus can be a big deal. Since refuting an important result throws everything into chaos, a flawed experiment that appears to contradict special relativity is no minor error.

  9. Mano Singham says

    What they do with the LHC is to have two (or more) independent groups of experimenters set up their own detectors at different locations along the beam path. So although they are using the same extremely expensive beam production system, they are also competing with each other to be first, as well as checking on each other’s results. This is often the practice in particle physics.

    What this does not do is eliminate systematic errors that might affect both experiments and which cannot be reduced by simply repeating the experiment many times. In the case of the neutrino result, their statistics were good but a systematic error was the problem.
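
    (A short simulation illustrates the difference, with made-up numbers: averaging more and more measurements shrinks the random scatter, but a shared offset in the apparatus survives untouched.)

        import random, statistics

        random.seed(0)
        true_value = 0.0
        systematic_offset = 0.5     # hypothetical bias built into the apparatus
        noise_sigma = 2.0           # random statistical scatter per measurement

        for n in (10, 1_000, 100_000):
            data = [true_value + systematic_offset + random.gauss(0.0, noise_sigma)
                    for _ in range(n)]
            print(f"n = {n:>6}: mean = {statistics.fmean(data):+.3f}"
                  f"  (statistical error ~ {noise_sigma / n ** 0.5:.3f})")
        # The mean settles near +0.5, not 0: more data shrinks the statistical
        # error but does nothing about the systematic offset.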

  10. jamessweet says

    That would be a good justification to get up to 3- or maybe 4-sigma, but going all the way to 5-sigma seems to demand some sort of explanation.

    I see Mano has a new post up about it, so I’ll take a look…
