LHC on the road to discovery

2011-03-16

The most energetic proton–proton collisions ever observed are now being produced at CERN's Large Hadron Collider. In the first of two articles, Tommaso Dorigo explains what drives the collider's researchers to sit awake eagerly taking data through the small hours of the night.

 
[Image: awake and eager]

The walls are plastered with banks of computer screens. Most show bland-looking information, constantly streaming in, about the status of the accelerator or the sub-detector elements. But if you walk in while the collider is taking data, what stands out most is the least useful piece of hardware: a large, colourful flat-screen display set up high in front of the shift leader's seat, where snapshots of the reconstructed tracks and energy deposits of particles produced in the collisions are continually broadcast in a 3D orgy of colours (see "What the pixels show" below).

The screen refreshes every few seconds with a new collision, so it is tough luck if you wanted to spend more time examining the last one: it will have been recorded in a data file somewhere, but the chances are you will never see it again. Millions of such "events" – the term used to describe particle collisions, as well as the corresponding megabyte or so of recorded data – are logged every day in the huge on-site centre known as "Tier zero", where tape robots handle and store the precious raw data, and thousands of CPUs perform the first full reconstructions of particle trajectories and energies. The one you paid attention to on screen was nothing special in itself – it was merely raised to ephemeral glory by a random choice of the event-display program.

Welcome to the control room of the Compact Muon Solenoid (CMS), one of the four experiments running at the CERN particle-physics lab just outside Geneva (figure 1). Here, and in three other command centres, researchers work shifts, spending busy days and sleepless nights in front of computer screens running monitoring programs that take the pulse of detector components, data-collection hardware and electronics of all kinds. Nights are better than days for data taking: everybody is more focused; phones do not ring; and data quietly accumulate in the storage hardware.

 
[Figure 1: deep underground]

The valuable strings of zeroes and ones do not lie undisturbed for long. A formidable network of computers continuously copies the reconstructed data to different regional centres around the world, where huge parallel sets of CPUs reprocess the information, producing skimmed datasets that are then broadcast around the globe to their final destinations – an array of smaller regional centres. There, the data files get sucked in by avid programs that researchers deploy in long queues. Like people politely queuing, the programs silently await their turn to spin the data disks and produce distilled information for the analysers who designed them.

The gigantic effort of machines and brains that converts hydrogen atoms into violent proton–proton collisions, and then turns these into data-analysis graphs, is surprisingly seamless and remarkably fast. There is overt competition between the Large Hadron Collider (LHC) experiments and those at the Tevatron, the US's proton–antiproton collider at Fermilab in Illinois, despite the latter's imminent demise. Having run for 25 glorious years and due to be decommissioned at the end of this year, the Tevatron is unwilling to leave the scene to its younger and more powerful European counterpart just yet, and is trying to catch a first faint glimpse of the Higgs boson before its CERN rival discovers it. Keener still is the in-family competition between the two main LHC experiments, ATLAS and CMS. The challenge for these two large collaborations is not only to find the signal of a Higgs boson; perhaps even more excitingly, they will also try to figure out which of the "new physics" scenarios already on the blackboards of theorists is the follow-up to the "Standard Model" of particle physics. The quest is on to fill the blank pages of our fascinating story of the infinitely small.

Fundamental matter

Through a century of investigations and a string of experimental observations, particle physicists have amassed a precise knowledge of how matter at the shortest length scales consists of a small number of elementary bodies acted upon by four forces (figure 2). We know that matter is composed of two dozen fermions – six leptons and 18 quarks (six flavours, each coming in three colour states) – interacting by the exchange of a dozen bosons (the eight gluons, the two W bosons, the Z and the photon); the odd player is a single additional particle, the Higgs boson, which characterizes the excitations of the vacuum in which particles live. The LHC can generate enough energy to "shake" this vacuum and so could finally observe those "Higgs vibrations" that were hypothesized more than 40 years ago but which have so far escaped experimental confirmation.

The LHC experiments have been designed with the explicit aim of finding that one missing block. Yet even with a Higgs boson, as pleasing and tidy as the Standard Model looks, it is necessarily incomplete. Like Newton's theory of classical mechanics, which we now understand to be the low-speed approximation of Einstein's theory of relativity, the Standard Model is believed to be what we call an effective theory – one that works well only in a restricted range of energies. The energy at which the theory starts to break down and new phenomena become evident is unknown, but theoretical arguments suggest that it is well within reach of the new accelerator.

Acting like speleologists confined in a small corner of a huge unknown cavern, researchers have scrupulously explored all the territory they could light up with the available technology; their fantasies of what lies beyond, however, have never ceased. The LHC is a powerful new lamp, capable of illuminating vast swaths of unexplored land. Where light is cast, we hope we will finally see features at odds with our low-energy effective theory. These new phenomena should provide us with the crucial hints we need in order to answer some nagging questions and widen our understanding of nature. For example, why is it that there are only three generations of matter fields, and not four, or five, or 10? Or is there, perhaps, a host of "supersymmetric" particles mirroring the ones that we already know about? Maybe these particles have not been discovered yet only because they are too massive and thus impossible to materialize with the collisions created by less-powerful accelerators. And is space–time really 4D, or can we produce particles that jump into other dimensions? These and other crucial questions can only find an experimental answer if we continue to widen our search.

Casting new light

The new lamp is now finally turned on, but it was not a painless start. The celebrations for the LHC's start-up on 10 September 2008 were halted only eight days later by a fault in an electrical connection between two of its 1232 dipole magnets: the heat produced vaporized six tonnes of liquid helium, the blast from which created a shock wave that damaged a few hundred metres of the 27 km underground ring and forced a one-year delay in the accelerator programme. A total of 53 magnets had to be brought to the surface, repaired or replaced by spares, and reinstalled in the tunnel. A full investigation of the causes of the accident was carried out, and safety systems were designed to prevent similar catastrophes in the future.

Since the LHC restarted in November 2009 at an energy of 0.45 TeV per beam, it has been working impeccably; but a cautious ramping up in stored energy was still required. Bit by bit, and with great patience, the physicists and engineers who operate the accelerator have raised the energy of the circulating protons, as well as their number, while painstakingly searching for the best "tunes": orbit parameters that avoid electromagnetic resonances of the beams with the machine that would otherwise cause instabilities and decrease the lifetime of the beams.

 
[Figure 2: matter of fact?]

Although the quality of the beams has consistently exceeded expectations, only about a tenth of the design number of protons per beam has so far been circulated, and the total collision energy of 7 TeV now being produced is half the design goal of 14 TeV. Still, 7 TeV is more than three times what has been achieved at the Tevatron, allowing the investigation of large swaths of new, unexplored territory. The latest schedule is for the LHC to remain at 7 TeV until the end of 2012, when an upgrade to 8 TeV or more will be possible. Then, after a year-long shutdown in 2013 to finalize the commissioning of extra safety systems, the machine will gradually be brought up to its 14 TeV maximum.

Particle physicists need higher energy to see deeper, but they also need more intense light and observation time to resolve what they are illuminating more clearly; for them, energy and intensity – or time if you think about how long it takes to build up an intense signal – are two sides of the same coin. In November 2009, as news of the first collisions was broadcast worldwide, it was easy to find curious non-physicists asking what the outcome of the experiment had been, but hard to explain to them why it is likely to last at least another two decades. The signal of a new particle or unknown effect is not expected to spew out as soon as a switch is flicked and collisions take place: it will appear at first as a small departure of the observed data from what the models predict, and only the accumulation of more data will turn it into clear evidence of a new phenomenon.
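
A rough back-of-the-envelope estimate makes the point. If a new process adds S signal events on top of B background events, a simple measure of its visibility is S/√B, which grows only with the square root of the accumulated data: quadrupling the data merely doubles the significance. The short Python sketch below illustrates this scaling (the event rates are invented for illustration and stand in for no particular analysis):

    import math

    sig_rate = 5.0    # hypothetical signal events per inverse femtobarn
    bkg_rate = 400.0  # hypothetical background events per inverse femtobarn

    for lumi in (1, 4, 25, 100):   # integrated luminosity in fb^-1
        s = sig_rate * lumi        # expected signal events so far
        b = bkg_rate * lumi        # expected background events so far
        z = s / math.sqrt(b)       # simplest significance estimate
        print(f"{lumi:4d} fb^-1: S = {s:6.0f}, B = {b:8.0f}, S/sqrt(B) = {z:.2f} sigma")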

What is more, any evidence of new physics had better be rock solid if it is to be published. Despite being more than 40 years old, the Standard Model has only required tweaking once: it was initially thought that neutrinos (the partners of the charged leptons e, μ and τ, see figure 2) were massless, but in 1998 long-awaited experimental proof showed that they have a small but non-zero mass. Since its conception in the early 1970s, the Standard Model has withstood such detailed and precise tests that no physicist is going to take a claim of its inability to describe an observed phenomenon lightly. Indeed, the thousands of researchers working on the LHC experiments will provide a deep level of internal scrutiny to any scientific result claiming new physics. By the time they let it be submitted for final publication, the chosen journal's peer-review process will be like the bored glance of a ticket inspector in comparison.

The search for evidence

The typical modus operandi of the search for a new particle signal or a new phenomenon involves several steps. First, it must be verified that the detector's response to known phenomena is well understood and matches what is expected from computer simulations. Test particles include electrons, muons, photons and neutrinos, as well as the collimated streams of particles, or "jets", that originate from the emission of energetic quarks or gluons by colliding particles (see "What the pixels show" below). The processes being sought may produce a combination of these objects, and simulations are needed in order to accurately predict what signal they will yield in the detector.
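
To get a flavour of what this verification means in practice, the toy Python snippet below compares a binned distribution "observed" in the detector with the simulated expectation, using a simple chi-square goodness-of-fit measure. All the counts are invented, and a real detector-validation study examines many such distributions with far more care:

    # Hypothetical event counts per bin in data and in the simulation
    observed  = [102, 230, 310, 241, 117]
    simulated = [ 95, 240, 300, 250, 115]

    # Chi-square per degree of freedom: values near 1 indicate that the
    # simulation models the detector response well in this distribution.
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, simulated))
    print(f"chi2/ndof = {chi2:.2f}/{len(observed)}")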

The second step involves selecting events that contain the specific signature being searched for; for example, if the goal is to find a massive particle believed to yield a pair of quarks when it disintegrates, then one may choose to analyse only events where exactly two energetic jets are observed (again, see "What the pixels show" below). Third, researchers usually fine-tune the selection requirements: events are chosen for which the produced particles were emitted orthogonally to the beam, or thereabouts, as these are the most interesting events. Particles that are produced at a small angle to the beam do not undergo much of a momentum change and are therefore more likely to originate from background processes. The point of this step is to discard physical processes that we already understand with the Standard Model (which in effect constitute an unwanted background noise in the search) while retaining as many events as possible that may contain the new particle signal. The less background that remains in the final sample, the more likely it is that some small anomaly caused by the new process will become visible.
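
In code, such a selection is little more than a filter applied event by event. The sketch below is purely illustrative – the Jet class, the thresholds and the events are all invented, and real analyses work on far richer data structures – but it captures the logic of keeping only events with exactly two energetic jets emitted roughly orthogonally to the beam (i.e. with high transverse momentum and small pseudorapidity):

    from dataclasses import dataclass

    @dataclass
    class Jet:
        pt: float    # transverse momentum in GeV
        eta: float   # pseudorapidity; eta = 0 means orthogonal to the beam

    def select(jets, pt_min=100.0, eta_max=1.5):
        """Keep events with exactly two energetic, central jets."""
        good = [j for j in jets if j.pt > pt_min and abs(j.eta) < eta_max]
        return len(good) == 2

    events = [
        [Jet(250.0, 0.3), Jet(240.0, -0.5)],                    # two central hard jets: kept
        [Jet(180.0, 2.8), Jet(40.0, 0.1)],                      # forward/soft jets: rejected
        [Jet(150.0, 0.2), Jet(140.0, 0.9), Jet(120.0, -0.4)],   # three hard jets: rejected
    ]
    print(sum(select(e) for e in events), "of", len(events), "events pass")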

In the final step of a search for new physics one typically uses statistics to infer whether a signal is caused by a real effect or just some random variation. The observed size and features of the selected data are compared to two different hypotheses: the "null" and the "alternate". According to the null hypothesis, the data result exclusively from known Standard Model processes; according to the alternate hypothesis, the data also contain a new particle signal. If there is a significant disagreement between the data and the null hypothesis, and a much better agreement with the alternate one, researchers then estimate the probability that such a phenomenon occurred by sheer chance. They usually convert this into units of "standard deviations" – commonly labelled by the Greek letter sigma (σ). A "3σ significance" effect would be produced by background fluctuations alone (i.e. without any signal contribution) only once or twice if the whole experiment were repeated a thousand times. Such an occurrence is said to constitute evidence for a possible signal, though a statistical fluctuation usually remains the most likely cause. A "5σ significance" instead describes effects where the chance of random occurrence is smaller than a few parts in tens of millions, and is agreed to be enough to claim the observation of a new particle or phenomenon.
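
The conversion between sigmas and probabilities is just the tail area of a Gaussian distribution, and the figures quoted above can be checked in a few lines of Python (assuming, as is common in these searches, the one-sided convention):

    import math

    def p_value(z_sigma):
        """One-sided tail probability of a standard Gaussian beyond z_sigma."""
        return 0.5 * math.erfc(z_sigma / math.sqrt(2))

    for z in (3, 5):
        p = p_value(z)
        print(f"{z} sigma -> p = {p:.2e}, i.e. about 1 in {1 / p:,.0f} repetitions")

    # 3 sigma -> p = 1.35e-03: once or twice per thousand experiments ("evidence")
    # 5 sigma -> p = 2.87e-07: a few parts in ten million ("observation")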

Unfortunately for Nobel-hungry particle seekers, most of the searches result in no new signal: the data fit reasonably well to the null hypothesis; standard deviations remain close to zero; and that flight to Stockholm can be put on hold. Still, even a negative result contains useful information: the consolation prize is then a publication in a journal. From the level of disagreement of the data with the alternate hypothesis one can in fact extract and publish a "95% confidence-level upper limit" on the rate at which an LHC collision may create the new particle being sought. This means that when no signal is found, physicists conclude that either the particle does not exist (and its rate of creation in LHC collisions is thus zero) or that it is produced too rarely: too few of them would then be contained in the data for their presence to be detectable. These limits are a useful guide for theorists, whose models need to avoid predicting new particles that are produced in collisions at a rate already excluded by experimental searches.
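
To see how such a limit arises, consider the simplest possible counting experiment, sketched below. It is deliberately simplified – the background is assumed perfectly known, systematic uncertainties are ignored, and the construction is a naive classical one rather than the more sophisticated methods the experiments actually use – but it shows the idea: the excluded signal rate is the smallest one that would make the observed (unremarkable) event count a less-than-5% fluke.

    import math

    def poisson_cdf(n, mu):
        """Probability of observing n or fewer events for a Poisson mean mu."""
        return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n + 1))

    def upper_limit(n_obs, bkg, cl=0.95, step=0.001):
        """Smallest signal mean excluded at the given confidence level."""
        s = 0.0
        while poisson_cdf(n_obs, bkg + s) > 1.0 - cl:
            s += step
        return s

    # Hypothetical outcome: 4 events observed, 3.2 background events expected.
    print(f"95% CL upper limit: {upper_limit(4, 3.2):.1f} signal events")
    # The textbook zero-observed, zero-background case gives the familiar ~3 events:
    print(f"Zero-event case:    {upper_limit(0, 0.0):.1f} signal events")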

The LHC is now casting light further into the unknown. If there is anything to discover out there, many are betting that it will be reported by the CMS and ATLAS collaborations this year. The excitement for these new searches is as great as ever, and the internal meetings of the two collaborations, where the status of ongoing analyses is presented, are packed with researchers constantly balancing their primeval scepticism with their childlike enthusiasm for anything that looks like a potential new find. Will the LHC finally prove its worth, 20 years after its original design? A description of the discoveries that might hit the news in the next few months is offered in the accompanying article, "Signatures of new physics" (pp26–30 in print and digital editions).

About the author

Tommaso Dorigo is a research scientist at Padova University, Italy, and a member of the Compact Muon Solenoid and the Collider Detector at Fermilab collaborations. He writes about particle-physics news for non-experts on his blog, A Quantum Diaries Survivor.

Source: physicsworld.com