Starts With A Bang

Will The Large Hadron Collider ‘Break’ The Standard Model?

We need more and better data to know, but that’s exactly what’s coming.

Over the past few decades, a number of important advances have helped revolutionize our picture of the Universe. The astrophysical evidence for dark matter is overwhelming, teaching us that the majority of mass in our Universe doesn’t arise from any of the particles we know. The Universe’s expansion is accelerating, revealing the existence of a new type of energy — dark energy — that seems inherent to empty space. We’ve discovered high-temperature superconductors, found every fundamental particle in the Standard Model (including the elusive Higgs boson), revealed the massive nature of the neutrino, and made atomic clocks so precise that they can measure the difference in the rate at which time passes between two locations separated in elevation by as little as one foot (30 cm).

And yet, in many ways, our picture of what makes up the Universe hasn’t advanced significantly in some 40 years. No particles outside of the Standard Model have shown up at any of our colliders — at high or low energies — and our largest data sets of all time have revealed no robust, repeatable surprises for fundamental physics. Importantly, many of our greatest ideas, including supersymmetry, extra dimensions, leptoquarks, technicolor, and string theory, have made no predictions that have been borne out by experiment. Yet still, many are excited about a possible hint of new physics at the Large Hadron Collider (LHC). Even if you’re optimistic, it’s important to be skeptical. Here’s why.

The particles and antiparticles of the Standard Model of particle physics are exactly in line with what experiments require, with only massive neutrinos providing a difficulty and requiring beyond-the-standard-model physics. Dark matter, whatever it is, cannot be any one of these particles, nor can it be a composite of these particles. (E. SIEGEL / BEYOND THE GALAXY)

Most of us, when we think of the Standard Model, think of the indivisible particles that exist in our Universe. There are the quarks and gluons: the fundamental constituents of protons, neutrons, and all of their heavier and lighter cousins. There are the leptons, including the electron, muon, and tau, plus all of the neutrinos. There are the antiparticles: the antimatter counterparts of the quarks and leptons. And also, there are the weak bosons — the W+, W-, and Z0 — as well as the photon, mediator of the electromagnetic force, and the Higgs boson.

But the Standard Model is also a whole lot more than a framework for the fundamental particles that exist (and can exist) within our Universe. It also provides a complete description of all the quantum fields that exist between these particles, encapsulating how every particle that exists interacts with every other. The proton’s mass depends on quark-gluon and gluon-gluon couplings that include even massive particles like the top quark; if we were to change any of the parameters of the Standard Model, including rest masses or couplings, there would be many consequences that would experimentally reveal themselves to us.

A proton isn’t just three quarks and gluons, but a dense sea of particles and antiparticles inside. The more precisely we look at a proton, and the greater the energies at which we perform deep inelastic scattering experiments, the more substructure we find inside the proton itself. There appears to be no limit to the density of particles inside. (JIM PIVARSKI / FERMILAB / CMS COLLABORATION)

Over many decades, theorists have proposed extension after extension to the Standard Model. Perhaps there are extra fields that arise as a consequence of Grand Unification. Perhaps there are extra particles that arise from additional symmetries. Perhaps there are new decays or couplings that could show themselves at high energies or with the production of large numbers of rare, unstable particles. We know there are many puzzles that are not resolvable with physics as we know it, from dark matter to why there’s more matter than antimatter to why particles have the mass values they do, among others. Yet the Standard Model, no matter how we tweak it, offers no viable solutions on its own.

The original hope of many was that the Large Hadron Collider (LHC) at CERN — the most powerful particle accelerator in human history — would reveal not only the Higgs boson, but some clues about many of these unsolved mysteries. The way it does so is brilliant: by producing large numbers of high-energy collisions, exotic, unstable particles are created in great numbers. Those events are then tracked and recorded by the world’s largest particle detectors, identifying the energy, momentum, electric charges, and many other properties of everything that comes out.

The CMS detector, shown here prior to final assembly, is one of the largest, densest detectors ever constructed. The particles that collide in the center will make tracks and leave debris that deposits energy into the detector, enabling scientists to reconstruct the properties and energies of any particles that were created during the process. (CERN / MAXIMILIEN BRICE)

If the Standard Model — all of its particles and interactions — were legitimately all that were out there, we could calculate precisely what we’d see. There would be new particles created with particular probabilities that corresponded to the particular parameters of each collision. The new particles that came into existence would then decay in a particular set of ways:

  • with particular lifetimes,
  • into sets of particles that are permitted,
  • with particular ratios,
  • and not into other groups of particles which are forbidden,

all according to the Standard Model’s rules.

What we’re basically doing is testing the Standard Model to incredible precision, and looking for any possible deviations. Most of the ideas we initially examined didn’t pan out at the LHC: the Higgs isn’t a composite particle, there are no low-energy supersymmetric particles, the evidence for large or warped extra dimensions isn’t there, and there appears to be just one Higgs particle instead of many. But that doesn’t mean everything we’ve seen is in perfect agreement with the Standard Model’s predictions.

A candidate Higgs event in the ATLAS detector. Note how even with the clear signatures and transverse tracks, there is a shower of other particles; this is because protons are composite particles. The Higgs gives mass to the fundamental constituents that compose these particles, and at high enough energies, the currently most-fundamental particles known may yet turn out to be divisible themselves. (THE ATLAS COLLABORATION / CERN)

Anytime you collide large numbers of particles at high energies, you’re going to create heavy, rare, unstable particles so long as they’re allowed by Einstein’s most famous equation: E = mc². Those particles will live for a short while and then decay. If you can create enough of them, you can actually test the Standard Model with some level of mathematical rigor. Because there are explicit predictions for how often any particle you create should decay in a particular fashion, measuring the frequency of these decays precisely, by creating enormous numbers of these particles, puts the Standard Model to the test.
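This counting logic can be sketched in a few lines of code. Every number here (the particle count, the branching ratio, the decay channel) is invented purely for illustration, not a real LHC measurement; the point is only how a predicted decay fraction turns into an expected count with a Poisson-sized uncertainty:

```python
import math
import random

random.seed(42)

# Hypothetical, illustrative numbers only -- not real LHC values.
n_produced = 1_000_000     # unstable particles of some species created
branching_ratio = 0.02     # Standard-Model prediction for one decay channel

expected = n_produced * branching_ratio

# Simulate what a detector might count if the Standard Model is exactly right:
# each particle independently decays via this channel with the predicted rate.
observed = sum(1 for _ in range(n_produced) if random.random() < branching_ratio)

# A simple significance estimate: how many standard deviations separate the
# observed count from the prediction? (Poisson fluctuation ~ sqrt(expected))
z = (observed - expected) / math.sqrt(expected)
print(f"expected {expected:.0f}, observed {observed}, deviation {z:+.2f} sigma")
```

Because the simulation obeys the assumed rules exactly, any deviation it shows is a pure statistical fluctuation, which is exactly the possibility a real analysis has to rule out.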

And there are many, many ways that we genuinely believe physics must, somehow, go beyond the Standard Model. For example, gravity is not treated as a quantum interaction, but rather as a classical, unchanging background by the Standard Model. Neutrinos are predicted to be massless by the Standard Model, and there’s no dark matter nor dark energy. The Standard Model doesn’t explain everything we see about our Universe, and we fully anticipate that, at some level, there may be additional fields, particles, interactions, dimensions, or even physics from beyond our observable Universe that could be affecting us.

The Standard Model particles and their supersymmetric counterparts. Slightly under 50% of these particles have been discovered, while just over 50% have never shown a trace of their existence. Supersymmetry is an idea that hopes to improve on the Standard Model, but it has yet to make successful predictions about the Universe in attempting to supplant the prevailing theory. If there is no supersymmetry at all energies, string theory must be wrong. (CLAIRE DAVID / CERN)

Of course, the grave danger — and we’ve done this many times in the past — is that we might see something unexpected and leap to an incorrect conclusion. We know how the probabilities ought to break down and what to expect, but observing anything different doesn’t necessarily mean there’s new physics showing up here. Sometimes, there’s just an unlikely statistical fluctuation.

In this particular instance, we see B-mesons, particles that contain bottom quarks (the second-heaviest quark, behind the top), decaying to either an electron/positron pair or a muon/anti-muon pair. In theory, these two decays should occur at the same rate; in practice, the measured ratio of muon decays to electron decays departs slightly from that expectation.

But in terms of statistical significance — where we ask, “how confident are we that this isn’t just an unlikely but perfectly normal outcome?” — the answer is not very good: we’re only about 99.8% sure this is out of the ordinary.

A decaying B-meson, as shown here, may decay more frequently to one type of lepton pair than the other, contradicting Standard Model expectations. There has been suggestive evidence of this for many years, but it still has not risen above the threshold necessary to declare a robust discovery. (KEK / BELLE COLLABORATION)

You might be incredulous: if we’re 99.8% sure, statistically, that something’s out of the ordinary, why would we consider that “not very good?” I like to think about it in terms of coin flips. If you flipped a coin ten times in a row and got identical results all ten times — either 10 heads or 10 tails, consecutively — you would declare that to be extremely unlikely. In fact, the odds of that happening are just 1 in 512, or about 0.2%: about the same odds as getting the outcome that the LHC saw with these decaying B-mesons.

But think about what would happen if, instead of ten flips, you flipped the coin 1000 times. Now, what are the odds that somewhere in that succession of 1000 coin tosses, you’d get a string where you saw either 10 heads or 10 tails consecutively? Perhaps surprisingly, only about 38% of the time would you never see a string of 10 identical outcomes in a row; more often than not, at least one such string appears somewhere. On average, you’d expect about one run of 10 identical results per 1000 tosses: sometimes more, sometimes less.
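This kind of "look-elsewhere" arithmetic doesn't have to be estimated; it can be computed exactly with a short dynamic program. The function below is a sketch of my own, tracking only the length of the current run of identical flips:

```python
def prob_no_run(n_flips: int, run_length: int) -> float:
    """Probability that n_flips fair coin flips contain no string of
    run_length identical outcomes in a row."""
    # state[k] = probability of surviving so far with a current run of
    # length k+1 (runs of length run_length are the "failure" we track)
    state = [0.0] * (run_length - 1)
    state[0] = 1.0  # after the first flip, the run has length 1
    for _ in range(n_flips - 1):
        alive = sum(state)
        new = [0.0] * (run_length - 1)
        new[0] = 0.5 * alive             # next flip differs: run resets to 1
        for k in range(run_length - 2):
            new[k + 1] = 0.5 * state[k]  # next flip matches: run grows by 1
        state = new                      # runs reaching run_length drop out
    return sum(state)

p = prob_no_run(1000, 10)
print(f"P(no run of 10 identical in 1000 flips) ≈ {p:.3f}")
```

Keeping only the current run length makes the state tiny, and the same function answers the question for any run length or any number of flips.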

Ten random coin flips can result in any of 1024 possibilities, all of which have equal probability. While this exact sequence, of HHTTTHHHHH has the same probability as any other, the fact that it has five heads in a row is a feature that is relatively unlikely. Whether the coin is biased or not cannot be determined from this single trial. (© 1998–2020 RANDOM.ORG)

At the LHC, we have many different classes of “unlikely outcomes” that we’re searching for. As it stands, the LHC has discovered more than 50 new composite particles, and has created hundreds of different types of particles that were already known to exist. Each one typically has a handful or two of ways it can decay, some of which are extremely rare and others of which are far more likely. It’s no stretch to say that there are literally thousands of ways that new physics could potentially show up at the LHC, and we’re looking for every single one of them that we know how to look for.

That’s why, when we look at data that doesn’t line up with the Standard Model’s predictions, we want to make sure that it’s crossed an unambiguous threshold of confidence. We want to be so certain that it isn’t an unlikely statistical fluctuation we’re seeing that we aren’t impressed by 95% confidence (a two-sigma result), by 99.7% confidence (a three-sigma result, which is what this latest announcement is), or even by 99.99% confidence (a four-sigma result). Instead, in particle physics — to avoid fooling ourselves in exactly this fashion, like we’ve done many times throughout history — we demand that there be just a 1-in-3.5 million chance that a discovery is a fluke. Only when we cross that threshold of significance can we declare that we’ve made a robust discovery.
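All of these confidence thresholds come from the tail of the Gaussian distribution. Here is a minimal sketch using the one-sided convention that particle physicists typically quote for discovery significance:

```python
import math

def p_value(n_sigma: float) -> float:
    """One-sided tail probability of a Gaussian beyond n_sigma, the
    convention used for particle-physics discovery thresholds."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# How unlikely must a pure-noise fluctuation be at each threshold?
for n in (2, 3, 4, 5):
    p = p_value(n)
    print(f"{n}-sigma: p = {p:.2e}  (about 1 in {1 / p:,.0f})")
```

For five sigma, this gives p ≈ 2.9 × 10⁻⁷, which is the "1-in-3.5 million chance that a discovery is a fluke" quoted above.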

The first robust, 5-sigma detection of the Higgs boson was announced a few years ago by both the CMS and ATLAS collaborations. But the Higgs boson doesn’t make a single ‘spike’ in the data, but rather a spread-out bump, owing largely to the detectors’ finite energy resolution. Its mean mass value of 125 GeV/c² is a puzzle for theoretical physics, but experimentalists need not worry: it exists, we can create it, and now we can measure and study its properties as well. (THE CMS COLLABORATION, “OBSERVATION OF THE DIPHOTON DECAY OF THE HIGGS BOSON AND MEASUREMENT OF ITS PROPERTIES”, 2014)

What’s frustrating about the current situation is that many commentators are passing judgment on whether this result is likely to hold up or not, when that’s not something we have the necessary information to conclude. It could be evidence for a novel particle, like a leptoquark or a Z’ (pronounced zee-prime) particle. It could signal a novel coupling in the lepton sector. It could even help explain the matter-antimatter asymmetry in the Universe, or be indicative of a sterile neutrino.

But it could also just be a statistical fluctuation. And without more data — and it’s coming, as the LHC has so far only collected about 2% of the data it will collect over its lifetime — we have no way of telling these scenarios apart. Over its history, the LHC has seen many somewhat unexpected decays involving bottom-quark-containing particles; just recently, the LHCb collaboration (where the “b” indicates its focus on bottom-quark-containing particles) announced a completely different decay that could challenge the Standard Model’s expectations. What we’ll have to do, as we gather more data, is look at all of these various anomalies together. Only when their combined significance crosses that “gold standard” will we get an announcement of a discovery as confident as the Higgs was.
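One simple recipe for combining separate anomalies is Stouffer's method, sketched below. It assumes the measurements really are independent (real experimental combinations use full likelihood fits, not this shortcut), but it illustrates why several modest hints can add up to something decisive:

```python
import math

def sigma_to_p(n_sigma: float) -> float:
    # one-sided Gaussian tail probability
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

def stouffer_combined_sigma(sigmas: list[float]) -> float:
    """Combine independent z-scores (Stouffer's method): the sum of k
    independent standard normals has a standard deviation of sqrt(k)."""
    return sum(sigmas) / math.sqrt(len(sigmas))

# Hypothetical example: three independent anomalies, each near 3 sigma.
anomalies = [3.1, 2.8, 3.0]
combined = stouffer_combined_sigma(anomalies)
p = sigma_to_p(combined)
print(f"combined significance: {combined:.2f} sigma (p = {p:.1e})")
```

Three genuinely independent ~3-sigma hints would combine to beyond 5 sigma, which is why the community watches the whole ensemble of anomalies rather than any single one.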

The observed Higgs decay channels vs. the Standard Model agreement, with the latest data from ATLAS and CMS included. The agreement is astounding, and yet frustrating at the same time. Still, with 50 times as much data headed our way, even tiny deviations from the Standard Model predictions could be game-changing. (ANDRÉ DAVID, VIA TWITTER)

Right now, the LHC is undergoing a high-luminosity upgrade, which should significantly increase the rate of collisions that appear in our detectors. We should keep in mind that many unexpected bumps in the data have appeared — a diboson excess, a diphoton bump, unexpected ratios of Higgs decays — and disappeared as we subsequently collected more data. We cannot know how this experiment will turn out, and that’s why we have to perform it.


Many physicists are excited about the possibilities, while others are more pessimistic. The most important aspect, however, is that everyone is appropriately cautious, practicing responsible science instead of prematurely declaring a new discovery. There are many hints of new physics out there, but we cannot be sure which ones will hold up and which will turn out to be mere statistical flukes. The only way forward is to take as much data as we can and to examine the full, synthesized suite of all of it. The only way we’ll ever reveal the secrets of nature is to put the question to the Universe itself, and listen to whatever it says. With every new collision we create in our detectors, we get closer to that inevitable but critical moment that physicists all over the world are awaiting.

Starts With A Bang is written by Ethan Siegel, Ph.D., author of Beyond The Galaxy, and Treknology: The Science of Star Trek from Tricorders to Warp Drive.

