James Beacham’s LHC Nightmare: A Love Story


So, doop-de-doo, I’m doing daily hobby-reading, in this case about why the Large Hadron Collider has been such a bust. I mean, sure, it found the Higgs, but we were expecting that at these energies it would reveal new physics, not merely confirm that last little bit of the Standard Model.

As a lay person entirely uneducated in the really interesting bits of physics (no, F=ma doesn’t count), I’ve been surprised that they’ve been running this multi-billion dollar machine for years at a time and yet we haven’t gotten any blockbuster, completely-misleading NYTimes stories about Holy Freud, Everything We Thought We Knew Is Wrong! since they explained that the mass of massive particles isn’t something inherent to the particles themselves, but instead comes from their interaction with the Higgs field.

So why isn’t the LHC producing terrifically new particles? Why isn’t it finding any super-symmetric particles, for instance? I mean, not finding anything can limit the number of crazy ideas that might be right, but still…

It turns out that this very well could be a function of the algorithms used to collect data from the LHC’s collisions. You see, these events happen at such fine time scales that we need to “gate” the information collected, limiting what gets piped through data transport and saved in data storage to something remotely manageable. The vast majority of the detection data isn’t saved, but since most of it isn’t relevant to the work they’re doing, this is perfectly acceptable.
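(If it helps to see the shape of the idea, here’s a toy Python sketch of “gating” that I made up myself. It is not real LHC trigger code; the actual system is a multi-stage affair of custom hardware and big software farms, and the threshold, the fake events, and every number below are invented purely for illustration.)

    import random

    # Toy stand-in for a trigger "gate": keep an event only if it clears some
    # crude threshold, and throw everything else away forever.
    ENERGY_THRESHOLD = 60.0  # arbitrary units, chosen so most events fail

    def fake_event(n_hits=20):
        """A random stand-in for one collision's worth of detector readout."""
        return [random.expovariate(1 / 2.0) for _ in range(n_hits)]

    def passes_trigger(hit_energies):
        """Keep the event only if its summed energy looks 'interesting'."""
        return sum(hit_energies) > ENERGY_THRESHOLD

    events = [fake_event() for _ in range(100_000)]
    kept = [e for e in events if passes_trigger(e)]
    print(f"kept {len(kept)} of {len(events)} events; the rest are gone for good")

The point is just that whatever fails the test never makes it to disk, so no amount of later cleverness can recover it.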

The problem arises because, while we can all agree that the vast majority of the signals generated by detectors abutting the collider are irrelevant junk, it seems there’s some significant disagreement over exactly how large the remaining fraction of useful data might be. The situation resembles the history of “Junk DNA” (a term much loathed by many who know more than I do about biology): the vast majority of DNA was categorized as non-functional, and only later did we discover that a small amount of it does have a function, albeit not directly coding for proteins or regulating their production, which were originally assumed to be the only functions of “functional” DNA. In the same way, it turns out that some signals screened out by the processors that initially sift the raw detector data might actually be the direct products of high-energy collisions. But because these (as yet hypothetical) signals would result from new physics, we didn’t program the processors to retain that data, thinking it was likely to be spurious.

Now we are in a quandary. We may be screening out the very data that would prove the existence of new physics because that data doesn’t conform to the principles of physics already known to operate during particle collisions. To quote ScienceMag.org:

CMS and ATLAS, however, were designed to detect particles that decay instantaneously. … Particles that fly even a few millimeters before decaying would leave unusual signatures: kinked or offset tracks, or jets that emerge gradually instead of all at once.

Standard data analysis often assumes such oddities are mistakes and junk, notes Tova Holmes, … from the University of Chicago in Illinois who is searching for the displaced tracks.

Think about this for a moment and it makes perfect sense. In order to protect the delicate detectors, they aren’t right in the beam line. They are, literally, decimeters if not meters away. Particles that appear to reach the detector traveling in a line that passes directly through the heart of the collision zone are assumed to be relevant to studying the collision. We discard data from random particles that might have passed through Earth’s atmosphere and simply passed near the collision zone before striking a detector. In this way, we can avoid overloading our computers with noise from hypersensitive detectors picking up stray photons (or other particles) that only rarely make it through the shielding, but exist in such numbers that even shielding out the vast majority isn’t enough to prevent data overload.
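(Again, purely to make the idea concrete, here’s my own two-dimensional toy version of that “does it point back at the collision?” test. Real track reconstruction is vastly more sophisticated, and the 0.5 mm tolerance is a number I made up.)

    import math

    MAX_MISS_DISTANCE = 0.5  # mm; how far a track's line may "miss" the collision point

    def miss_distance(point_on_track, direction):
        """Perpendicular distance from the collision point (at the origin)
        to the straight line the track travels along."""
        px, py = point_on_track
        dx, dy = direction
        # |cross product| / |direction| = distance from the origin to the line
        return abs(px * dy - py * dx) / math.hypot(dx, dy)

    def keep_track(point_on_track, direction):
        return miss_distance(point_on_track, direction) <= MAX_MISS_DISTANCE

    # A track pointing straight back through the collision zone is kept...
    print(keep_track((100.0, 100.0), (1.0, 1.0)))   # True
    # ...while one that merely passes a few millimeters away is dropped as noise.
    print(keep_track((100.0, 103.0), (1.0, 1.0)))   # False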

But what if, instead of decaying nearly instantaneously the way that the particles we have studied in the past have done, a collision produced a particle with serious staying power? At the energy levels we’re talking about, the ability to travel millimeters represents astonishing stability relative to other collision products. And yet, super-symmetric particles might just have stability of this order of magnitude:

Giovanna Cottin, a theorist at National Taiwan University in Taipei, explained:

“Almost all the frameworks for beyond-the-standard-model physics predict the existence of long-lived particles,” she says. For example, … some [super-symmetric particles] could be long-lived. Long-lived particles also emerge in “dark sector” theories that envision undetectable particles that interact with ordinary matter only through “portal” particles…

But what happens when a collision product flies a couple millimeters before decaying into the photons and other decay products that ultimately reach the detectors? Well, when the “stable” collision product bursts apart into more familiar (and detectable) decay products, that creates its own “explosion” of sorts a couple millimeters away from the original collision. The decay products race away from the center of this secondary explosion instead of racing away from the center of the proton collision. The detectors’ data processing then sees that these decay products aren’t traveling in a line that leads directly back to the collision and assumes, because of our experience with existing physics, that these detections are spurious, that they are unrelated to the physics happening at the heart of the collision.
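(Run that same made-up pointing test against a hypothetical particle that drifts a few millimeters before decaying and the problem jumps right out. As before, the two-dimensional setup and every number here are mine, not anything from the actual experiments.)

    import math

    MAX_MISS_DISTANCE = 0.5  # mm, the same made-up tolerance as before

    def miss_distance(point_on_track, direction):
        """Perpendicular distance from the collision point (at the origin)
        to the straight line the track travels along."""
        px, py = point_on_track
        dx, dy = direction
        return abs(px * dy - py * dx) / math.hypot(dx, dy)

    # A hypothetical long-lived particle born in the collision decays 3 mm away.
    decay_vertex = (3.0, 0.0)

    # Its decay products fly off from that displaced point in their own directions.
    for direction in [(1.0, 2.0), (1.0, -1.5), (1.0, 0.05)]:
        d = miss_distance(decay_vertex, direction)
        verdict = "kept" if d <= MAX_MISS_DISTANCE else "discarded as junk"
        print(f"direction {direction}: misses the collision by {d:.2f} mm -> {verdict}")

Only the decay product that happens to fly off almost exactly along its parent’s original flight path still looks like it came from the collision; everything else from our hypothetical new particle gets binned as junk.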

So could it be that our detectors have been catching the hard evidence of new particles from their decay products but the detections were not saved as data because of the sifting algorithms programmed into the data processors that are designed to prevent overload? The answer is, terrifyingly, yes. We may have built this multi-billion dollar machine to detect exotic physics and then programmed it not to tell us when new physics is detected. Yikes!

In response to growing awareness that this lack of data might not be data indicating a lack of new particles, scientists are now arguing both for expensive (but much cheaper than a new collider) detectors designed specifically to catch the decay products of exotic new particles and for capturing a bit more data from our existing detectors. This second strategy might capture more noise as it loosens the tight screens on which data are worth inspecting, but the consequences of doing nothing are frightening for James Beacham of Duke University to imagine:

The nightmare scenario is that in 20 years, Jill Theorist says, ‘The reason you didn’t see anything is you didn’t keep the right events and do the right search.’

I had to do a double-take because at first I thought “Jill Theorist” was the legal name of an actual researcher. After a brief delay, however, my uncertainty collapsed and I realized that this was Beacham’s feminist variant of “Joe Scientist”.

And so today I learned both what’s up (at least potentially what’s up) with the relatively slow pace of discovery at LHC and also that I Freuding love James Beacham.


It is, of course, entirely possible that I’ve gotten some significant aspects of this story wrong. If I have, I hope to be lucky enough that Mano or some other knowledgeable person will so inform me.

Comments

  1. Rob Grigjanis says

    I don’t think you got anything significant wrong. There have been mumblings about this in the last couple of years.

    I’d only take exception to calling the LHC a bust. If there was a cheaper way to find the Higgs, I don’t know about it. And finding it was fucking huge, as the French say. Didn’t (yet) find SUSY or evidence for M-theory? Wah-wah.

  2. says

    The story sounds very plausible, although I wouldn’t characterize it as a failure exactly. The data selection process is something that needs to be fine-tuned just like any other part of the experiment. And if it turns out it wasn’t sufficiently fine-tuned from the get-go, that’s not really a failure, it’s an important step.

  3. another Stewart says

    You have a biology error. Junk DNA was always more narrowly defined than the definition you give – telomeres, centromeres, rRNA and tRNA genes, etc., were never considered junk DNA.
