Wednesday, 23 July 2014

Higgs Recap

On the occasion of summer conferences the LHC experiments dumped a large number of new Higgs results. Most of them have already been advertised on blogs, see e.g. here or here or here. In case you missed anything, here I summarize the most interesting updates of the last few weeks.

1. Mass measurements.
Both ATLAS and CMS recently presented improved measurements of the Higgs boson mass in the diphoton and 4-lepton final states. The errors shrink to 400 MeV in ATLAS and 300 MeV in CMS. The news is that Higgs has lost some weight (the boson, not Peter). A naive combination of the ATLAS and CMS results yields the central value 125.15 GeV. The profound consequence is that, for another year at least,  we will call it the 125 GeV particle, rather than the 125.5 GeV particle as before ;)  

While the central values of the Higgs mass combinations quoted by ATLAS and CMS are very close, 125.36 vs 125.03 GeV, the individual inputs are still a bit apart from each other. Although the consistency of the ATLAS measurements in the diphoton and 4-lepton channels has improved, these two independent mass determinations still differ by 1.5 GeV, which corresponds to a 2 sigma tension. Furthermore, the central values of the Higgs mass quoted by ATLAS and CMS differ by 1.3 GeV in the diphoton channel and by 1.1 GeV in the 4-lepton channel, which also amounts to 2-sigmish discrepancies. This could be just bad luck, or maybe the systematic errors are slightly larger than the experimentalists think.
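To see where the naive combination and the quoted tensions come from, here is a minimal inverse-variance weighting sketch. The inputs are the combined ATLAS and CMS values quoted above; `combine` and `tension` are illustrative helpers, not the experiments' actual statistical treatment (which accounts for correlated systematics):

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, error) pairs."""
    w = [1.0 / err**2 for _, err in measurements]
    mean = sum(v * wi for (v, _), wi in zip(measurements, w)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    return mean, err

def tension(m1, m2):
    """Difference between two independent measurements, in sigmas."""
    return abs(m1[0] - m2[0]) / math.sqrt(m1[1]**2 + m2[1]**2)

atlas = (125.36, 0.40)   # GeV, combined ATLAS value
cms = (125.03, 0.30)     # GeV, combined CMS value
mean, err = combine([atlas, cms])
print(f"naive combination: {mean:.2f} +/- {err:.2f} GeV")
print(f"ATLAS vs CMS combined values: {tension(atlas, cms):.1f} sigma")
```

The combined values agree well; the 2-sigma tensions quoted above arise at the level of the individual channel measurements, whose errors are smaller.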

2. Diphoton rate update.
CMS finally released a new value of the Higgs signal strength in the diphoton channel. This CMS measurement was a bit of a roller-coaster: initially they measured an excess, then with the full dataset they reported a small deficit. After more work and more calibration they settled on the value 1.14+0.26-0.23 relative to the standard model prediction, in perfect agreement with the standard model. Meanwhile ATLAS is also revising the signal strength in this channel towards the standard model value. The number 1.29±0.30 quoted on the occasion of the mass measurement is not yet the final one; there will soon be a dedicated signal strength measurement with, most likely, a slightly smaller error. Nevertheless, we can safely announce that the celebrated Higgs diphoton excess is no more.
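As a rough illustration of how close these numbers sit to the standard model, one can symmetrize the CMS error and compute naive pulls and a naive combination. This ignores correlations and the asymmetric likelihood, so it is only a sketch:

```python
import math

# Crudely symmetrize the asymmetric CMS error: 1.14 +0.26 -0.23 -> 1.14 +/- 0.245
cms_mu, cms_err = 1.14, 0.5 * (0.26 + 0.23)
atlas_mu, atlas_err = 1.29, 0.30

# Pull of each measurement away from the standard model prediction mu = 1
for name, mu, err in [("CMS", cms_mu, cms_err), ("ATLAS", atlas_mu, atlas_err)]:
    print(f"{name}: {(mu - 1.0) / err:+.1f} sigma from SM")

# Naive inverse-variance combination of the two channels
w1, w2 = 1 / cms_err**2, 1 / atlas_err**2
mu = (cms_mu * w1 + atlas_mu * w2) / (w1 + w2)
err = math.sqrt(1 / (w1 + w2))
print(f"combined diphoton signal strength: {mu:.2f} +/- {err:.2f}")
```

Both pulls are below one sigma, which is the quantitative content of "the excess is no more".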

3. Off-shell Higgs.
Most of the LHC searches are concerned with an on-shell Higgs, that is, one whose 4-momentum squared is very close to its mass squared. This is where the Higgs is most easily recognizable, since it can show up as a bump in invariant mass distributions. However the Higgs, like any quantum particle, can also appear as a virtual particle off mass shell and influence, in a subtler way, the cross section or differential distributions of various processes. One place where an off-shell Higgs may visibly contribute is the pair production of on-shell Z bosons. In this case, the interference between the gluon-gluon → Higgs → ZZ process and the non-Higgs one-loop Standard Model contribution to gluon-gluon → ZZ can influence the cross section in a non-negligible way. At the beginning, these off-shell measurements were advertised as a model-independent Higgs width measurement, although it is now recognized that the "model-independent" claim does not stand. Nevertheless, measuring the ratio of off-shell and on-shell Higgs production provides qualitatively new information about the Higgs couplings and, under some specific assumptions, can be interpreted as an indirect constraint on the Higgs width. Both ATLAS and CMS now quote constraints on the Higgs width at the level of 5 times the Standard Model value. Currently, these results are not very useful in practice. Indeed, it would require a tremendous conspiracy to reconcile the current data with a Higgs width larger than 1.3 times the Standard Model one. But a new front has been opened, and one hopes for much more interesting results in the future.
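The logic behind the width interpretation can be compressed into one line: on the peak the rate scales as couplings^4/width, while far off shell the width drops out. So if the couplings are scaled to keep the on-shell signal strengths SM-like, the off-shell rate grows with the width. A toy sketch under exactly those assumptions (all names and units are illustrative):

```python
# With production/decay couplings of combined strength g2 = g_p * g_d,
# the on-shell rate scales as g2^2 / Gamma, the off-shell rate as g2^2.
def on_shell_rate(g2, width):
    return g2**2 / width        # arbitrary units

def off_shell_rate(g2):
    return g2**2                # no width dependence

gamma_ratio = 5.0               # test width, in units of Gamma_SM
g2 = gamma_ratio**0.5           # chosen so the on-shell rate stays SM-like
mu_on = on_shell_rate(g2, gamma_ratio) / on_shell_rate(1.0, 1.0)
mu_off = off_shell_rate(g2) / off_shell_rate(1.0)
print(f"on-shell rate x{mu_on:.0f}, off-shell rate x{mu_off:.0f}")
```

This is why the off-shell/on-shell ratio tracks the width, and also why the interpretation collapses once one allows the couplings to run differently between the two regimes.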


4. Tensor structure of Higgs couplings.
Another front that is being opened as we speak is constraining higher-order Higgs couplings with a different tensor structure. So far, we have been given the so-called spin/parity measurements. That is to say, the LHC experiments imagine a 125 GeV particle with a different spin and/or parity than the Higgs, with couplings to matter consistent with that hypothesis. Then they test whether this new particle or the standard model Higgs better describes the observed differential distributions of Higgs decay products. This has some appeal to the general public and Nobel committees but little practical meaning. That's because the current data, especially the Higgs signal strength measured in multiple channels, clearly show that the Higgs is, in the first approximation, the standard model one. New physics, if it exists, may only be a small perturbation on top of the standard model couplings. The relevant question is how well we can constrain these perturbations. For example, possible couplings of the Higgs to the Z boson are
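One standard parametrization (a sketch; normalization conventions for the a's vary between papers):

```latex
\mathcal{L}_{hZZ} \supset \frac{h}{v}\Big[\,(1+a_1)\,m_Z^2\, Z_\mu Z^\mu
  \;+\; a_2\, Z_{\mu\nu} Z^{\mu\nu} \;+\; a_3\, Z_{\mu\nu} \tilde{Z}^{\mu\nu}\,\Big]
```

where Z_{μν} = ∂_μ Z_ν − ∂_ν Z_μ and the tilde denotes its dual.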

In the standard model only the first type of coupling is present in the Lagrangian, and all the a coefficients are zero. New heavy particles coupled to the Higgs and Z bosons could be indirectly detected by measuring non-zero a's. In particular, a3 violates the parity symmetry and could arise from mixing of the standard model Higgs with a pseudoscalar particle. The presence of non-zero a's would show up, for example, as a modification of the lepton momentum distributions in the Higgs decay to 4 leptons. This was studied by CMS in this note. At this point, the obtained limits are weak, but as more statistics are accumulated in the next runs this will be an important probe of new physics.

5. Flavor violating decays.
In the standard model, the Higgs couplings conserve flavor, in both the quark and the lepton sectors. This is a consequence of the assumption that the theory is renormalizable and that only one Higgs field is present. If either of these assumptions is violated, the Higgs boson may mediate transitions between different generations of matter. Earlier, ATLAS and CMS searched for top quark decay to charm and Higgs. More recently, CMS turned to lepton flavor violation, searching for Higgs decays to τμ pairs. This decay cannot occur in the standard model, so the search is a clean null test. At the same time, the final state is relatively simple from the experimental point of view, so this decay may be a sensitive probe of new physics. Amusingly, CMS sees a 2.5 sigma excess corresponding to an h→τμ branching fraction of order 1%. So we can entertain the possibility that the Higgs holds the key to new physics and flavor hierarchies, at least until ATLAS comes out with its own measurement.

Saturday, 19 July 2014

Weekend Plot: Prodigal CRESST

CRESST is one of the dark matter direct detection experiments seeing an excess which may be interpreted as a signal of a fairly light (order 10 GeV) dark matter particle.  Or it was... This week they posted a new paper reporting on new data collected last year with an upgraded detector. Farewell CRESST signal, welcome CRESST limits:
The new limits (red line) exclude most of the region of the parameter space favored by the previous CRESST excess: M1 and M2 in the plot. Of course, these regions have never been taken at face value because they are excluded by orders of magnitude by the LUX, Xenon, and CDMS experiments. Nevertheless, the excess was pointing to a similar dark matter mass as the signals reported by DAMA, CoGeNT, and CDMS-Si (the other color stains), which prompted many to speculate about a common origin of all these anomalies. Now the excess is gone. Instead, CRESST emerges as an interesting player in the race toward the neutrino wall. Their target material - CaWO4 crystals - contains oxygen nuclei which, due to their small masses, are well suited for detecting light dark matter. The kink in the limits curve near 5 GeV is the point below which dark-matter-induced recoil events would be dominated by scattering on oxygen. Currently, CRESST has the world's best limits for dark matter masses between 2 and 3 GeV, beating DAMIC (not shown in the plot) and CDMSlite (dashed green line).
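Why do light nuclei help with light dark matter? It's simple non-relativistic kinematics: the maximum recoil energy is E_R = 2μ²v²/m_N, with μ the dark-matter-nucleus reduced mass. A back-of-envelope sketch (the maximal speed v ≈ 770 km/s and the illustrative masses are my assumptions, not CRESST's numbers):

```python
# Maximum nuclear recoil energy for elastic dark matter scattering.
V_MAX = 770e3 / 3e8            # maximal DM speed in the halo, in units of c

def max_recoil_keV(m_dm, m_nucleus):
    """E_R^max = 2 mu^2 v^2 / m_N, masses in GeV, result in keV."""
    mu = m_dm * m_nucleus / (m_dm + m_nucleus)   # reduced mass
    return 2 * mu**2 * V_MAX**2 / m_nucleus * 1e6  # GeV -> keV

m_dm = 3.0                            # a 3 GeV dark matter particle
e_O = max_recoil_keV(m_dm, 14.9)      # oxygen nucleus
e_W = max_recoil_keV(m_dm, 171.0)     # tungsten nucleus
print(f"oxygen: {e_O:.1f} keV, tungsten: {e_W:.2f} keV")
```

For a few-GeV particle, recoils on oxygen carry several times more energy than recoils on tungsten, so they are the ones that survive the detector threshold; hence the kink near 5 GeV.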

Sunday, 15 June 2014

Weekend Plot: BaBar vs Dark Force

BaBar was an experiment studying 10 GeV electron-positron collisions. The collider is long gone, but interesting results keep appearing from time to time. Obviously, this is not a place to discover new heavy particles. However, thanks to the large luminosity and clean experimental environment, BaBar is well equipped to look for light and very weakly coupled particles that can easily escape detection in bigger but dirtier machines like the LHC. Today's weekend plot shows the new BaBar limits on dark photons:

The dark photon is a hypothetical spin-1 boson that couples to other particles with a strength proportional to their electric charges. Compared to the ordinary photon, the dark one is assumed to have a non-zero mass mA' and a coupling strength suppressed by a factor ε. If ε is small enough, the dark photon can escape detection even if mA' is very small, in the MeV or GeV range. The model was conceived long ago, but in the previous decade it gained wider popularity as the leading explanation of the PAMELA anomaly. Now, as PAMELA is getting older, she is no longer considered convincing evidence of new physics. But the dark photon model remains an important benchmark - a sort of spherical-cow model for light hidden sectors. Indeed, in the simplest realization, the model is fully described by just two parameters: mA' and ε, which makes it easy to present and compare the results of different searches.

In electron-positron collisions one can produce a dark photon in association with an ordinary photon, in analogy to the familiar process of e+e- annihilation into 2 photons. The dark photon then decays to a pair of electrons or muons (or heavier charged particles, if they are kinematically available). Thus, the signature is a spike in the e+e- or μ+μ- invariant mass spectrum of γl+l- events. BaBar performed this search to obtain the world's best limits on dark photons in the mass range 30 MeV - 10 GeV, with the upper limit on ε in the 0.001 ballpark. This does not have direct consequences for the explanation of the PAMELA anomaly, as the model works with a smaller ε too. On the other hand, the new results close in on the parameter space where the minimal dark photon model can explain the muon magnetic moment anomaly (although one should be aware that the tension can be reduced with a trivial modification of the model, by allowing the dark photon to decay into the hidden sector).
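The kinematics of the signature is the same as for e+e- → 2γ with one photon made massive: in a two-body final state the recoil photon is monochromatic, and the lepton pair reconstructs m(l+l-) = mA'. A one-line sketch (the mA' value is an arbitrary test point):

```python
# In e+e- -> gamma + A' at fixed sqrt(s), energy-momentum conservation gives
# a monochromatic recoil photon: E_gamma = (s - m_A'^2) / (2 sqrt(s)),
# while the l+l- invariant mass peaks at m_A'.
sqrt_s = 10.58                 # GeV, the Upsilon(4S) energy BaBar ran at
m_A = 1.5                      # GeV, an arbitrary test mass for the dark photon
E_gamma = (sqrt_s**2 - m_A**2) / (2 * sqrt_s)
print(f"recoil photon energy for m_A' = {m_A} GeV: {E_gamma:.2f} GeV")
```

So the search amounts to scanning the l+l- mass spectrum for a narrow spike on top of the smooth QED background.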

So, no luck so far; we need to search further. What one should retain is that finding new heavy particles and finding new light weakly interacting particles seem equally probable at this point :)

Monday, 2 June 2014

Another one bites the dust...

...though it's not BICEP2 this time :) This is a long overdue update on the forward-backward asymmetry of the top quark production.
Recall that, in a collision of a quark and an anti-quark producing a top quark together with its antiparticle, the top quark is more often ejected in the direction of the incoming quark (as opposed to the anti-quark). This effect can be most easily studied at the Tevatron, which collided protons with antiprotons, so that the directions of the quark and the anti-quark could be easily inferred. Indeed, the Tevatron experiments observed the asymmetry at a high confidence level. In the leading-order approximation, the Standard Model predicts zero asymmetry, which boils down to the fact that the gluons mediating the production process couple with the same strength to left- and right-handed quark polarizations. Taking into account quantum corrections at 1 loop leads to a small but non-zero asymmetry.
Intriguingly, the asymmetry measured at the Tevatron appeared to be large, of order 20%, significantly more than the value predicted by the Standard Model loop effects. On top of this, the distribution of the asymmetry as a function of the top-pair invariant mass, and the angular distribution of leptons from top quark decays, strongly deviated from the Standard Model expectation. All in all, the ttbar forward-backward anomaly has been considered, for many years, one of our best hints for physics beyond the Standard Model. The asymmetry could be interpreted, for example, as being due to new heavy resonances with the quantum numbers of the gluon, which are predicted by models where quarks are composite objects. However, the story has been getting less and less exciting lately. First of all, no other top quark observables (like e.g. the total production cross section) showed any deviations, neither at the Tevatron nor at the LHC. Another worry was that the related asymmetry was not observed at the LHC. At the same time, the Tevatron numbers have been evolving in a worrisome direction: as the Standard Model computation was refined, the prediction went up; on the other hand, the experimental value steadily went down as more data were added. Today we are close to the point where the Standard Model and experiment finally meet...
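For concreteness, the observable in question is simply a counting asymmetry, A_FB = (N_F − N_B)/(N_F + N_B), where N_F (N_B) counts tops ejected along (against) the incoming quark direction. A toy sketch with made-up counts:

```python
import math

def forward_backward_asymmetry(n_forward, n_backward):
    """A_FB = (N_F - N_B) / (N_F + N_B), with a binomial statistical error."""
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n
    err = math.sqrt((1 - a**2) / n)   # var(A) = (1 - A^2) / N for counts
    return a, err

# toy counts: 6000 tops in the quark direction vs 5000 against it
a, err = forward_backward_asymmetry(6000, 5000)
print(f"A_FB = {a:.3f} +/- {err:.3f}")
```

The real measurements additionally unfold detector and acceptance effects, which is where most of the systematic error comes from.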

The final straw came with two recent updates from the Tevatron's D0 experiment. Earlier this year, D0 published the measurement of the forward-backward asymmetry of the direction of the leptons from top quark decays. The top quark sometimes decays leptonically, to a b-quark, a neutrino, and a charged lepton (e+, μ+). In this case, the momentum of the lepton is to some extent correlated with that of the parent top, thus the top quark asymmetry may come together with a lepton asymmetry (although some new physics models affect the top and lepton asymmetries in completely different ways). The previous D0 measurement showed a large, more than 3 sigma, excess in that observable. The new refined analysis using the full dataset reaches a different conclusion: the asymmetry is Al=(4.2 ± 2.4)%, in good agreement with the Standard Model. As can be seen in the picture, none of the CDF and D0 measurements of the lepton asymmetry in several final states shows any anomaly at this point. Then came the D0 update of the regular ttbar forward-backward asymmetry in the semi-leptonic channel. Same story here: the number went down from 20% to Att=(10.6 ± 3.0)%, compared to the Standard Model prediction of 9%. CDF got a slightly larger number, Att=(16.4 ± 4.5)%, but taken together the results are not significantly above the Standard Model prediction.
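One can check the "not significantly above" statement with a naive inverse-variance combination of the two semi-leptonic numbers quoted above (ignoring correlations, so this is only a sketch):

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of (value, error) pairs."""
    w = [1 / e**2 for _, e in measurements]
    mean = sum(v * wi for (v, _), wi in zip(measurements, w)) / sum(w)
    return mean, math.sqrt(1 / sum(w))

# D0 and CDF semi-leptonic asymmetries quoted in the text, in percent
att, err = combine([(10.6, 3.0), (16.4, 4.5)])
sm = 9.0   # Standard Model prediction, in percent
print(f"combined A_tt = {att:.1f} +/- {err:.1f} %, "
      f"{(att - sm) / err:.1f} sigma above the SM")
```

The combination sits well within 2 sigma of the Standard Model prediction, which is why the anomaly is declared dead.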

So, all the current data on the top quark, both from the LHC and from the Tevatron,  are perfectly consistent with the Standard Model predictions. There may be new physics somewhere at the weak scale, but we're not gonna pin it down by measuring the top asymmetry. This one is a dead parrot:



Graphics borrowed from this talk

Sunday, 25 May 2014

Weekend plot: BICEP limits on tensor modes

 The insurgency gathers pace. This weekend we contemplate a plot from the recent paper of Michael Mortonson and Uroš Seljak:

It shows the parameter space of inflationary models in the plane of the spectral index ns vs. the tensor-to-scalar ratio r. The yellow region is derived from Planck CMB temperature and WMAP polarization data, while the purple regions combine those with the BICEP2 data. Including BICEP gives a stronger constraint on the tensor modes, rather than a detection of r≠0.

The limits on r from Planck temperature data are dominated by the large angular scale (low-l) data, which themselves display an anomaly, so they should be taken with a grain of salt. The interesting claim here is that BICEP alone does not hint at r≠0, after using the most up-to-date information on galactic foregrounds and marginalizing over current uncertainties. In this respect, the paper by Michael and Uroš reaches similar conclusions to the analysis of Raphael Flauger and collaborators. The BICEP collaboration originally found that the galactic dust foreground can account for at most 25% of their signal. However, judging from scavenged Planck polarization data, it appears that BICEP underestimated the dust polarization fraction by roughly a factor of 2. As this enters squared in the B-mode correlation spectrum, dust can easily account for all the signal observed in BICEP2. The new paper adds a few interesting details to the story. One is that not only the normalization but also the shape of the BICEP spectrum can be reasonably explained by dust if it scales as l^-2.3, as suggested by Planck data. Another is the importance of gravitational lensing effects (neglected by BICEP) in extracting the signal of tensor modes. Although lensing dominates at high l, it also helps to fit the low-l BICEP2 data with r=0. Finally, the paper suggests that it is not at all certain that the forthcoming Planck data will clean up the situation. If the uncertainty on the dust foreground in the BICEP patch is of order 20%, which looks like a reasonable figure, the improvement over the current sensitivity to tensor modes may be marginal. So, BICEP may remain a Schrödinger cat for a little while longer.
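The factor-of-2 remark deserves one line of arithmetic: the B-mode power spectrum from dust scales as the square of the polarization fraction p, so halving your estimate of p quarters your estimate of the dust power. A toy sketch (the spectrum shape and numbers are illustrative, not a fit to any data):

```python
# Toy dust B-mode spectrum: C_ell ~ A * p^2 * ell^slope, with p the dust
# polarization fraction and slope ~ -2.3 as suggested by Planck data.
def dust_bb_power(amplitude, p, ell, slope=-2.3):
    return amplitude * p**2 * ell**slope

# Doubling the assumed polarization fraction quadruples the dust power:
ratio = dust_bb_power(1.0, 0.10, 100) / dust_bb_power(1.0, 0.05, 100)
print(f"factor-2 change in p -> factor {ratio:.0f} in B-mode power")
```

That factor of 4 is the whole difference between "dust is at most 25% of the signal" and "dust can account for all of it".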

Friday, 16 May 2014

Follow up on BICEP

The BICEP2 collaboration claims the discovery of the primordial B-mode in the CMB at a very high confidence level. Résonaances recently reported on the Chinese whispers that cast doubts on the statistical significance of that result. They were based in part on the work of Raphael Flauger and Colin Hill, rumors of which were spreading through email and coffee-time discussions. Today Raphael gave a public seminar describing this analysis; see the slides and the video.

The familiar number r=0.2 for the CMB tensor-to-scalar ratio is based on the assumption of zero foreground contribution in the region of the sky observed by BICEP. To argue that foregrounds should not be a big effect, the BICEP paper studied several models to estimate the galactic dust emission. Of those, only the data-driven models DDM1 and DDM2 were based on actual polarization data inadvertently shared by Planck. However, even these models suggest that foregrounds are not completely negligible. For example, subtracting the foregrounds estimated via DDM2 brings the central value of r down to 0.16 or 0.12, depending on how the model is used (cross-correlation vs. auto-correlation). If, instead, the cross-correlated BICEP2 and Keck Array data are used as input, the tensor-to-scalar ratio can easily be below 0.1, in agreement with the existing bounds from Planck and WMAP.

Raphael's message is that, according to his analysis, the foreground emissions are larger than estimated by BICEP, and that systematic uncertainties on that estimate (due to incomplete information, modeling uncertainties, and scraping numbers from pdf slides) are also large. If that is true, the statistical significance of the primordial B-mode  detection is much weaker than what is being claimed by BICEP.

In his talk, Raphael described an independent attempt, the most complete to date, to extract the foregrounds from existing data. Apart from using the same Planck polarization fraction map as BICEP, he also included the Q and U all-sky maps (the letters refer to how polarization is parameterized), and models of polarized dust emission based on HI maps (the 21cm hydrogen line emission is supposed to track the galactic dust). One reason for the discrepancy with the BICEP estimates could be that the effect of the Cosmic Infrared Background - mostly unpolarized emission from faraway galaxies - is non-negligible. The green band in the plot shows the polarized dust emission obtained from the CIB-corrected DDM2 model, and compares it to the original BICEP estimate (blue dashed line).

The analysis then goes on to extract the foregrounds starting from several different premises. All available datasets (polarization reconstructed via HI maps, the information scraped from existing Planck's polarization maps) seem to say a similar story: galactic foregrounds can be large in the region of interest and uncertainties are large.  The money plot is this one:

Recall that the primordial B-mode signal should show up at moderate angular scales, l∼100 (the high-l end is dominated by non-primordial B-modes from gravitational lensing). Given the current uncertainties, the foreground emission may easily account for the entire BICEP2 signal in that region. Again, this does not prove that tensor modes cannot be there. The story may still reach a happy ending, much like that of the discovery of accelerated expansion (where serious doubts about systematic uncertainties were also raised after the initial announcement). But the ball is in BICEP's court to convincingly demonstrate that the foregrounds are under control.

Until that happens, I think their result does not stand.

Monday, 12 May 2014

Is BICEP wrong?

The BICEP claim of detecting the primordial B-mode in the polarization of the Cosmic Microwave Background was huge news. If confirmed, it would be evidence of gravity waves produced during cosmic inflation, and would open a window on physics at an incredibly high energy scale of order 10^16 GeV. Possible implications were described in detail in some 300 papers triggered by the BICEP announcement. But, amid this understandable excitement, we have been aware that the signal has to be confirmed by other experiments before the discovery is established. Back then, Résonaances precisely estimated the chances of the signal being true at `fifty-fifty'. It appears it's the latter fifty that's gaining the upper hand...

Barring a loose cable, the biggest worry about the BICEP signal is that the collaboration may have underestimated the galactic foreground emission. BICEP2 performed its observations at only one frequency, 150 GHz, which is very well suited to studying the CMB, but less so for polarized dust or synchrotron emission. As for dust, more can be learned by going to higher frequencies, while combining maps at different frequencies allows one to separate the galactic and CMB components. Although the patch of the sky studied by BICEP is well away from the galactic plane, the recently published 353 GHz polarized map from Planck demonstrates that there may be significant emission even from these parts of the sky (in that paper the BICEP patch is conveniently masked, so one cannot draw any quantitative conclusions). Once the dust from the BICEP announcement had settled, all eyes were thus on precision measurements of the galactic foreground. The rumors arriving from the Planck camp were not encouraging, as they were not able to confirm the primordial B-mode signal. It seems that experts have now put a finger on what exactly went wrong in BICEP.
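The multi-frequency separation works because, in thermodynamic temperature units, the CMB looks the same at every frequency while dust grows steeply with frequency. With two maps one can solve for the two components pixel by pixel. A minimal sketch; the dust scaling factor and pixel values are made-up illustrations, not Planck's numbers:

```python
# Two-band component separation: per pixel, solve the linear system
#   m150 = cmb + d            (dust amplitude d at 150 GHz)
#   m353 = cmb + ratio * d    (dust is much brighter at 353 GHz)
def separate(map_150, map_353, dust_ratio=20.0):
    dust_150 = (map_353 - map_150) / (dust_ratio - 1.0)
    cmb = map_150 - dust_150
    return cmb, dust_150

cmb, dust = separate(100.0, 290.0)   # toy pixel values, arbitrary units
print(f"CMB: {cmb:.0f}, dust at 150 GHz: {dust:.0f}")
```

With only one frequency, as in BICEP2, this inversion is impossible, and the dust contribution has to be taken from external models; hence the whole controversy.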

To estimate polarized emission from the galactic dust, BICEP digitized an unpublished 353 GHz map shown by the Planck collaboration at a conference.  However, it seems they misinterpreted the Planck results: that map shows the polarization fraction for all foregrounds, not for the galactic dust only (see the "not CIB subtracted" caveat in the slide). Once you correct for that and rescale the Planck results appropriately, some experts claim that the polarized galactic dust emission can account for most of the BICEP signal. The rumor is that the BICEP team has now admitted to the mistake [Update: this last statement is disputed and outwardly denied].

Note that we should not conclude that there are no observable tensor modes in the CMB. Indeed, a tensor-to-scalar ratio of order 0.1 is probably consistent with the existing experimental data, and may be responsible for part of the B-mode signal detected by BICEP. New data from Planck, POLARBEAR, ACTPol, and the Keck Array should clarify the situation within a year from now. However, at this point, there seems to be no statistically significant evidence for primordial B-modes of inflationary origin in the CMB.