Notes: it is customary in high energy particle/nuclear physics for authors to be listed in alphabetical order. Experimental collaborations are often large (hundreds to thousands of members!), and it is typical to list everyone as an author. The papers and notes listed below are only the ones with substantial group contribution. The publications below are ordered by date of arXiv posting (or equivalent).
The Fundamental Limit of Jet Tagging
J. Geuskens, N. Gite, M. Krämer, V. Mikuni, A. Mück, B. Nachman, H. Reyes-González
e-Print: 2411.02628
@article{2411.02628,
author = "J. Geuskens and N. Gite and M. Krämer and V. Mikuni and A. Mück and B. Nachman and H. Reyes-González",
title = "{The Fundamental Limit of Jet Tagging}",
eprint = "2411.02628",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
Identifying the origin of high-energy hadronic jets ('jet tagging') has been a critical benchmark problem for machine learning in particle physics. Jets are ubiquitous at colliders and are complex objects that serve as prototypical examples of collections of particles to be categorized. Over the last decade, machine learning-based classifiers have replaced classical observables as the state of the art in jet tagging. Increasingly complex machine learning models are delivering increasingly effective tagger performance. Our goal is to address the question of convergence: are we getting close to the fundamental limit on jet tagging, or is there still potential for computational, statistical, and physical insights for further improvements? We address this question using state-of-the-art generative models to create a realistic, synthetic dataset with a known jet tagging optimum. Various state-of-the-art taggers are deployed on this dataset, showing that there is a significant gap between their performance and the optimum. Our dataset and software are made public to provide a benchmark task for future developments in jet tagging and other areas of particle physics.
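The role of the known optimum can be sketched with a toy example of our own (one-dimensional Gaussians standing in for the paper's generative-model dataset): when both densities are known, the Neyman-Pearson lemma makes the likelihood ratio the best possible tagger, so the gap between any classifier and that ceiling becomes directly measurable.

```python
import random

random.seed(0)

# Toy jet feature: background ~ N(0, 1), signal ~ N(1, 1).
bkg = [random.gauss(0.0, 1.0) for _ in range(1000)]
sig = [random.gauss(1.0, 1.0) for _ in range(1000)]

def auc(score, sig, bkg):
    """Probability that a random signal event outscores a random background event."""
    s = [score(x) for x in sig]
    b = [score(x) for x in bkg]
    wins = sum((si > bi) + 0.5 * (si == bi) for si in s for bi in b)
    return wins / (len(s) * len(b))

# Neyman-Pearson optimum: the log-likelihood ratio, which is x - 1/2 here.
auc_opt = auc(lambda x: x - 0.5, sig, bkg)
# A deliberately poor observable, for contrast: distance from the background peak.
auc_cut = auc(lambda x: -abs(x), sig, bkg)
print(auc_opt, auc_cut)
```

With a synthetic dataset built this way, any tagger's AUC can be compared against `auc_opt` exactly, which is the benchmarking logic the paper scales up to realistic jets.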
Generative Unfolding with Distribution Mapping
A. Butter, S. Diefenbacher, N. Huetsch, V. Mikuni, B. Nachman, S. Palacios Schweitzer, T. Plehn
e-Print: 2411.02495
@article{2411.02495,
author = "A. Butter and S. Diefenbacher and N. Huetsch and V. Mikuni and B. Nachman and S. {Palacios Schweitzer} and T. Plehn",
title = "{Generative Unfolding with Distribution Mapping}",
eprint = "2411.02495",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
Machine learning enables unbinned, highly-differential cross section measurements. A recent idea uses generative models to morph a starting simulation into the unfolded data. We show how to extend two morphing techniques, Schrödinger Bridges and Direct Diffusion, in order to ensure that the models learn the correct conditional probabilities. This brings distribution mapping to a similar level of accuracy as the state-of-the-art conditional generative unfolding methods. Numerical results are presented with a standard benchmark dataset of single jet substructure as well as for a new dataset describing a 22-dimensional phase space of Z + 2-jets.
Rejection Sampling with Autodifferentiation -- Case study: Fitting a Hadronization Model
N. Heller, P. Ilten, T. Menzo, S. Mrenna, B. Nachman, A. Siodmok, M. Szewc, A. Youssef
e-Print: 2411.02194
@article{2411.02194,
author = "N. Heller and P. Ilten and T. Menzo and S. Mrenna and B. Nachman and A. Siodmok and M. Szewc and A. Youssef",
title = "{Rejection Sampling with Autodifferentiation -- Case study: Fitting a Hadronization Model}",
eprint = "2411.02194",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
We present an autodifferentiable rejection sampling algorithm termed Rejection Sampling with Autodifferentiation (RSA). In conjunction with reweighting, we show that RSA can be used for efficient parameter estimation and model exploration. Additionally, this approach facilitates the use of unbinned machine-learning-based observables, allowing for more precise, data-driven fits. To showcase these capabilities, we apply an RSA-based parameter fit to a simplified hadronization model.
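The reweighting side of this workflow can be illustrated with a toy exponential model of our own (the autodifferentiation step is replaced here by a simple parameter scan, so this is a caricature of the fitting logic rather than RSA itself):

```python
import math, random

random.seed(1)

def sample_exp(lam, n):
    """Rejection sampling: draw Exp(lam) restricted to [0, 10] with a uniform proposal."""
    out = []
    while len(out) < n:
        x = random.uniform(0.0, 10.0)
        if random.random() * lam <= lam * math.exp(-lam * x):  # accept w.p. f(x)/f_max
            out.append(x)
    return out

def w(x, lam_from, lam_to):
    """Per-event weight that morphs a sample generated at lam_from into lam_to."""
    return (lam_to * math.exp(-lam_to * x)) / (lam_from * math.exp(-lam_from * x))

lam_true, lam_ref = 1.5, 1.0
data = sample_exp(lam_true, 5000)  # "data" at the unknown true parameter
sim = sample_exp(lam_ref, 5000)    # one simulated sample, generated only once

# Scan candidate parameters: reweight the *same* simulated events to each candidate
# and match the weighted mean of an observable to the data.
target = sum(data) / len(data)

def mismatch(lam):
    ws = [w(x, lam_ref, lam) for x in sim]
    return abs(sum(wi * x for wi, x in zip(ws, sim)) / sum(ws) - target)

best = min((i / 100 for i in range(50, 301)), key=mismatch)
print(best)  # close to lam_true = 1.5
```

The paper's contribution is to make the sampler itself differentiable so that gradients, rather than a scan, drive the fit, and to allow unbinned ML observables in place of the simple mean used here.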
FAIR Universe HiggsML Uncertainty Challenge Competition
W. Bhimji, P. Calafiura, R. Chakkappai, Y. Chou, S. Diefenbacher, J. Dudley, S. Farrell, A. Ghosh, I. Guyon, C. Harris, S. Hsu, E. Khoda, R. Lyscar, A. Michon, B. Nachman, P. Nugent, M. Reymond, D. Rousseau, B. Sluijter, B. Thorne, I. Ullah, Y. Zhang
e-Print: 2410.02867
@article{2410.02867,
author = "W. Bhimji and P. Calafiura and R. Chakkappai and Y. Chou and S. Diefenbacher and J. Dudley and S. Farrell and A. Ghosh and I. Guyon and C. Harris and S. Hsu and E. Khoda and R. Lyscar and A. Michon and B. Nachman and P. Nugent and M. Reymond and D. Rousseau and B. Sluijter and B. Thorne and I. Ullah and Y. Zhang",
title = "{FAIR Universe HiggsML Uncertainty Challenge Competition}",
eprint = "2410.02867",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
The FAIR Universe -- HiggsML Uncertainty Challenge focuses on measuring the physics properties of elementary particles with imperfect simulators due to differences in modelling systematic errors. Additionally, the challenge leverages a large-compute-scale AI platform for sharing datasets, training models, and hosting machine learning competitions. Our challenge brings together the physics and machine learning communities to advance our understanding and methodologies in handling systematic (epistemic) uncertainties within AI techniques.
Multidimensional Deconvolution with Profiling
H. Zhu, K. Desai, M. Kuusela, V. Mikuni, B. Nachman, L. Wasserman
e-Print: 2409.10421
@article{2409.10421,
author = "H. Zhu and K. Desai and M. Kuusela and V. Mikuni and B. Nachman and L. Wasserman",
title = "{Multidimensional Deconvolution with Profiling}",
eprint = "2409.10421",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
In many experimental contexts, it is necessary to statistically remove the impact of instrumental effects in order to physically interpret measurements. This task has been extensively studied in particle physics, where the deconvolution task is called unfolding. A number of recent methods have shown how to perform high-dimensional, unbinned unfolding using machine learning. However, one of the assumptions in all of these methods is that the detector response is accurately modeled in the Monte Carlo simulation. In practice, the detector response depends on a number of nuisance parameters that can be constrained with data. We propose a new algorithm called Profile OmniFold (POF), which works in a similar iterative manner as the OmniFold (OF) algorithm while being able to simultaneously profile the nuisance parameters. We illustrate the method with a Gaussian example as a proof of concept highlighting its promising capabilities.
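The flavor of the iterative reweighting at the heart of OmniFold-style methods can be caricatured in binned, one-dimensional form (our toy; the real algorithms are unbinned, use classifiers for the reweighting steps, and POF additionally profiles nuisance parameters of the response):

```python
import math, random

random.seed(2)

NBINS = 10
def binof(x):  # unit-width bins on [0, 10), clamped at the edges
    return min(NBINS - 1, max(0, int(x)))

def smear(x):  # toy detector response
    return x + random.gauss(0.0, 0.7)

# Paired truth/reco simulation with a flat truth spectrum.
sim_truth = [random.uniform(0.0, 10.0) for _ in range(20000)]
sim_reco = [smear(x) for x in sim_truth]
# "Data": a falling (triangular) truth spectrum pushed through the same response.
data_reco = [smear(10.0 * (1.0 - math.sqrt(random.random()))) for _ in range(20000)]

h_data = [0.0] * NBINS
for x in data_reco:
    h_data[binof(x)] += 1.0

w = [1.0] * len(sim_truth)  # per-event simulation weights
for _ in range(4):
    # Step 1: reweight the simulation at reco level to match the data histogram.
    h_sim = [0.0] * NBINS
    for x, wi in zip(sim_reco, w):
        h_sim[binof(x)] += wi
    pushed = [wi * h_data[binof(x)] / h_sim[binof(x)] for x, wi in zip(sim_reco, w)]
    # Step 2: pull the weights back to truth level by averaging per truth bin
    # (the unbinned algorithm does this smoothing with a classifier).
    num, cnt = [0.0] * NBINS, [0] * NBINS
    for x, wp in zip(sim_truth, pushed):
        num[binof(x)] += wp
        cnt[binof(x)] += 1
    w = [num[binof(x)] / cnt[binof(x)] for x in sim_truth]

unfolded_mean = sum(wi * x for wi, x in zip(w, sim_truth)) / sum(w)
print(unfolded_mean)  # pulled from the flat-spectrum value 5.0 toward the data truth mean 10/3
```

POF's addition is to let parameters of the `smear` step float alongside the weights, constrained by the same data, instead of trusting the simulated response outright.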
Accelerating template generation in resonant anomaly detection searches with optimal transport
M. Leigh, D. Sengupta, B. Nachman, T. Golling
e-Print: 2407.19818
@article{2407.19818,
author = "M. Leigh and D. Sengupta and B. Nachman and T. Golling",
title = "{Accelerating template generation in resonant anomaly detection searches with optimal transport}",
eprint = "2407.19818",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
We introduce Resonant Anomaly Detection with Optimal Transport (RAD-OT), a method for generating signal templates in resonant anomaly detection searches. RAD-OT leverages the fact that the conditional probability density of the target features varies approximately linearly along the optimal transport path connecting two values of the resonant feature. This does not assume that the conditional density itself is linear in the resonant feature, allowing RAD-OT to efficiently capture multimodal relationships, changes in resolution, etc. By solving the optimal transport problem, RAD-OT can quickly build a template by interpolating between the background distributions in two sideband regions. We demonstrate the performance of RAD-OT using the LHC Olympics R&D dataset, where we find comparable sensitivity and improved stability with respect to deep learning-based approaches.
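In one dimension, the optimal transport map between two equal-size samples is simply the pairing of order statistics, so sideband interpolation can be sketched in a few lines (a toy Gaussian example of ours, not the paper's implementation):

```python
import random

random.seed(3)

n = 10000
# Sideband background samples: the feature's mean and width drift with the
# resonant variable m (low sideband at m = 1, high sideband at m = 3).
low = sorted(random.gauss(0.0, 1.0) for _ in range(n))   # m = 1
high = sorted(random.gauss(2.0, 1.5) for _ in range(n))  # m = 3

# Linear interpolation along the 1D optimal-transport path (sorted samples are
# the OT pairing) yields a background template at the signal region, m = 2.
t = 0.5  # (m_sig - m_low) / (m_high - m_low)
template = [(1 - t) * a + t * b for a, b in zip(low, high)]

mean = sum(template) / n
sd = (sum((x - mean) ** 2 for x in template) / n) ** 0.5
print(mean, sd)  # both the mean and the width interpolate between the sidebands
```

For Gaussian sidebands the OT interpolant is again Gaussian, with mean and width interpolated linearly; this is exactly the behavior a naive average of the two sideband histograms would fail to reproduce.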
Moment Unfolding
K. Desai, B. Nachman, J. Thaler
e-Print: 2407.11284
@article{2407.11284,
author = "K. Desai and B. Nachman and J. Thaler",
title = "{Moment Unfolding}",
eprint = "2407.11284",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
Deconvolving ("unfolding") detector distortions is a critical step in the comparison of cross section measurements with theoretical predictions in particle and nuclear physics. However, most existing approaches require histogram binning while many theoretical predictions are at the level of statistical moments. We develop a new approach to directly unfold distribution moments as a function of another observable without having to first discretize the data. Our Moment Unfolding technique uses machine learning and is inspired by Generative Adversarial Networks (GANs). We demonstrate the performance of this approach using jet substructure measurements in collider physics. With this illustrative example, we find that our Moment Unfolding protocol is more precise than bin-based approaches and is as or more precise than completely unbinned methods.
Parnassus: An Automated Approach to Accurate, Precise, and Fast Detector Simulation and Reconstruction
E. Dreyer, E. Gross, D. Kobylianskii, V. Mikuni, B. Nachman, N. Soybelman
e-Print: 2406.01620
@article{2406.01620,
author = "E. Dreyer and E. Gross and D. Kobylianskii and V. Mikuni and B. Nachman and N. Soybelman",
title = "{Parnassus: An Automated Approach to Accurate, Precise, and Fast Detector Simulation and Reconstruction}",
eprint = "2406.01620",
archivePrefix = "arXiv",
primaryClass = "physics.data-an",
year = "2024"}
Detector simulation and reconstruction are a significant computational bottleneck in particle physics. We develop Particle-flow Neural Assisted Simulations (Parnassus) to address this challenge. Our deep learning model takes as input a point cloud (particles impinging on a detector) and produces a point cloud (reconstructed particles). By combining detector simulation and reconstruction into one step, we aim to minimize resource utilization and enable fast surrogate models suitable for application both inside and outside large collaborations. We demonstrate this approach using a publicly available dataset of jets passed through the full simulation and reconstruction pipeline of the CMS experiment. We show that Parnassus accurately mimics the CMS particle flow algorithm on the (statistically) same events it was trained on and can generalize to jet momentum and type outside of the training distribution.
Constraining the Higgs potential with neural simulation-based inference for di-Higgs production
R. Mastandrea, B. Nachman, T. Plehn
e-Print: 2405.15847
@article{2405.15847,
author = "R. Mastandrea and B. Nachman and T. Plehn",
title = "{Constraining the Higgs potential with neural simulation-based inference for di-Higgs production}",
eprint = "2405.15847",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
Determining the form of the Higgs potential is one of the most exciting challenges of modern particle physics. Higgs pair production directly probes the Higgs self-coupling and should be observed in the near future at the High-Luminosity LHC. We explore how to improve the sensitivity to physics beyond the Standard Model through per-event kinematics for di-Higgs events. In particular, we employ machine learning through simulation-based inference to estimate per-event likelihood ratios and gauge potential sensitivity gains from including this kinematic information. In terms of the Standard Model Effective Field Theory, we find that adding a limited number of observables can help to remove degeneracies in Wilson coefficient likelihoods and significantly improve the experimental sensitivity.
Advancing Set-Conditional Set Generation: Diffusion Models for Fast Simulation of Reconstructed Particles
D. Kobylianskii, N. Soybelman, N. Kakati, E. Dreyer, B. Nachman, E. Gross
e-Print: 2405.10106
@article{2405.10106,
author = "D. Kobylianskii and N. Soybelman and N. Kakati and E. Dreyer and B. Nachman and E. Gross",
title = "{Advancing Set-Conditional Set Generation: Diffusion Models for Fast Simulation of Reconstructed Particles}",
eprint = "2405.10106",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2024"}
The computational intensity of detector simulation and event reconstruction poses a significant difficulty for data analysis in collider experiments. This challenge inspires the continued development of machine learning techniques to serve as efficient surrogate models. We propose a fast emulation approach that combines simulation and reconstruction. In other words, a neural network generates a set of reconstructed objects conditioned on input particle sets. To make this possible, we advance set-conditional set generation with diffusion models. Using a realistic, generic, and public detector simulation and reconstruction package (COCOA), we show how diffusion models can accurately model the complex spectrum of reconstructed particles inside jets.
Incorporating Physical Priors into Weakly-Supervised Anomaly Detection
C. Cheng, G. Singh, B. Nachman
e-Print: 2405.08889
@article{2405.08889,
author = "C. Cheng and G. Singh and B. Nachman",
title = "{Incorporating Physical Priors into Weakly-Supervised Anomaly Detection}",
eprint = "2405.08889",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
We propose a new machine-learning-based anomaly detection strategy for comparing data with a background-only reference (a form of weak supervision). The sensitivity of previous strategies degrades significantly when the signal is too rare or there are many unhelpful features. Our Prior-Assisted Weak Supervision (PAWS) method incorporates information from a class of signal models in order to significantly enhance the search sensitivity of weakly supervised approaches. As long as the true signal is in the pre-specified class, PAWS matches the sensitivity of a dedicated, fully supervised method without specifying the exact parameters ahead of time. On the benchmark LHC Olympics anomaly detection dataset, our mix of semi-supervised and weakly supervised learning is able to extend the sensitivity over previous methods by a factor of 10 in cross section. Furthermore, if we add irrelevant (noise) dimensions to the inputs, classical methods degrade by another factor of 10 in cross section while PAWS remains insensitive to noise. This new approach could be applied in a number of scenarios and pushes the frontier of sensitivity between completely model-agnostic approaches and fully model-specific searches.
Unifying Simulation and Inference with Normalizing Flows
H. Du, C. Krause, V. Mikuni, B. Nachman, I. Pang, D. Shih
e-Print: 2404.18992
@article{2404.18992,
author = "H. Du and C. Krause and V. Mikuni and B. Nachman and I. Pang and D. Shih",
title = "{Unifying Simulation and Inference with Normalizing Flows}",
eprint = "2404.18992",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
There have been many applications of deep neural networks to detector calibrations and a growing number of studies that propose deep generative models as automated fast detector simulators. We show that these two tasks can be unified by using maximum likelihood estimation (MLE) from conditional generative models for energy regression. Unlike direct regression techniques, the MLE approach is prior-independent and non-Gaussian resolutions can be determined from the shape of the likelihood near the maximum. Using an ATLAS-like calorimeter simulation, we demonstrate this concept in the context of calorimeter energy calibration.
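A toy version of the MLE idea (a hand-written Gaussian response standing in for a conditional generative model; the 0.95 scale and the 10%/sqrt(E) resolution are invented for illustration):

```python
import math

# Toy detector response: reco energy x ~ Normal(0.95 * E, 0.10 * sqrt(E))
# for true energy E in GeV. In the paper, a conditional generative model
# provides this log-density instead of a closed-form expression.
def log_p(x, e):
    mu = 0.95 * e
    sigma = 0.10 * math.sqrt(e)
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)

def calibrate(x, lo=1.0, hi=200.0, steps=20000):
    """MLE energy estimate plus an interval from the logL = max - 1/2 crossings."""
    grid = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    ll = [log_p(x, e) for e in grid]
    imax = max(range(len(ll)), key=ll.__getitem__)
    inside = [e for e, l in zip(grid, ll) if l > ll[imax] - 0.5]
    return grid[imax], inside[0], inside[-1]

e_hat, e_lo, e_hi = calibrate(47.0)
print(e_hat, e_lo, e_hi)
```

Note that the interval comes from the shape of the likelihood itself, so non-Gaussian or asymmetric resolutions are handled automatically, which is the advantage over direct regression highlighted in the abstract.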
The Landscape of Unfolding with Machine Learning
N. Huetsch, J. Villadamigo, A. Shmakov, S. Diefenbacher, V. Mikuni, T. Heimel, M. Fenton, K. Greif, B. Nachman, D. Whiteson, A. Butter, T. Plehn
e-Print: 2404.18807
@article{2404.18807,
author = "N. Huetsch and J. Villadamigo and A. Shmakov and S. Diefenbacher and V. Mikuni and T. Heimel and M. Fenton and K. Greif and B. Nachman and D. Whiteson and A. Butter and T. Plehn",
title = "{The Landscape of Unfolding with Machine Learning}",
eprint = "2404.18807",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
Recent innovations from machine learning allow for data unfolding without binning and with correlations across many dimensions. We describe a set of known, upgraded, and new methods for ML-based unfolding. The performance of these approaches is evaluated on the same two datasets. We find that all techniques are capable of accurately reproducing the particle-level spectra across complex observables. Given that these approaches are conceptually diverse, they offer an exciting toolkit for a new class of measurements that can probe the Standard Model with an unprecedented level of detail and may enable sensitivity to new phenomena.
OmniLearn: A Method to Simultaneously Facilitate All Jet Physics Tasks
V. Mikuni, B. Nachman
e-Print: 2404.16091
@article{2404.16091,
author = "V. Mikuni and B. Nachman",
title = "{OmniLearn: A Method to Simultaneously Facilitate All Jet Physics Tasks}",
eprint = "2404.16091",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2024"}
Machine learning has become an essential tool in jet physics. Due to their complex, high-dimensional nature, jets can be explored holistically by neural networks in ways that are not possible manually. However, innovations in all areas of jet physics are proceeding in parallel. We show that specially constructed machine learning models trained for a specific jet classification task can improve the accuracy, precision, or speed of all other jet physics tasks. This is demonstrated by training on a particular multiclass classification task and then using the learned representation for different classification tasks, for datasets with a different (full) detector simulation, for jets from a different collision system ($pp$ versus $ep$), for generative models, for likelihood ratio estimation, and for anomaly detection. Our OmniLearn approach is thus a foundation model and is made publicly available for use in any area where state-of-the-art precision is required for analyses involving jets and their substructure.
Anomaly detection with flow-based fast calorimeter simulators
C. Krause, B. Nachman, I. Pang, D. Shih, Y. Zhu
e-Print: 2312.11618
@article{2312.11618,
author = "C. Krause and B. Nachman and I. Pang and D. Shih and Y. Zhu",
title = "{Anomaly detection with flow-based fast calorimeter simulators}",
eprint = "2312.11618",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Recently, several normalizing flow-based deep generative models have been proposed to accelerate the simulation of calorimeter showers. Using CaloFlow as an example, we show that these models can simultaneously perform unsupervised anomaly detection with no additional training cost. As a demonstration, we consider electromagnetic showers initiated by one (background) or multiple (signal) photons. The CaloFlow model is designed to generate single photon showers, but it also provides access to the shower likelihood. We use this likelihood as an anomaly score and study the showers tagged as being unlikely. As expected, the tagger struggles when the signal photons are nearly collinear, but is otherwise effective. This approach is complementary to a supervised classifier trained on only specific signal models using the same low-level calorimeter inputs. While the supervised classifier is also highly effective at unseen signal models, the unsupervised method is more sensitive in certain regions and thus we expect that the ultimate performance will require a combination of approaches.
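The likelihood-as-anomaly-score idea can be caricatured with a toy density in place of CaloFlow (all numbers below are invented for illustration):

```python
import math, random

random.seed(4)

# Toy shower feature: the background-only generative model is a unit Gaussian,
# and its log-likelihood doubles as an anomaly score (low likelihood = anomalous).
def log_p_bkg(x):
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

bkg = [random.gauss(0.0, 1.0) for _ in range(5000)]
sig = [random.gauss(3.5, 0.5) for _ in range(5000)]  # a well-separated "signal"

# Flag the events the background model finds least likely:
# choose the threshold that keeps 1% of the background.
cut = sorted(log_p_bkg(x) for x in bkg)[len(bkg) // 100]
eff_sig = sum(log_p_bkg(x) < cut for x in sig) / len(sig)
print(eff_sig)
```

The zero-extra-cost aspect in the paper is that a trained flow already exposes this log-likelihood; the toy also makes the failure mode plain: a "signal" whose feature overlaps the background density (as with nearly collinear photons) would receive an unexceptional score.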
Integrating Particle Flavor into Deep Learning Models for Hadronization
J. Chan, X. Ju, A. Kania, B. Nachman, V. Sangli, A. Siodmok
e-Print: 2312.08453
@article{2312.08453,
author = "J. Chan and X. Ju and A. Kania and B. Nachman and V. Sangli and A. Siodmok",
title = "{Integrating Particle Flavor into Deep Learning Models for Hadronization}",
eprint = "2312.08453",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Hadronization models used in event generators are physics-inspired functions with many tunable parameters. Since we do not understand hadronization from first principles, there have been multiple proposals to improve the accuracy of hadronization models by utilizing more flexible parameterizations based on neural networks. These recent proposals have focused on the kinematic properties of hadrons, but a full model must also include particle flavor. In this paper, we show how to build a deep learning-based hadronization model that includes both kinematic (continuous) and flavor (discrete) degrees of freedom. Our approach is based on Generative Adversarial Networks, and we show its performance in the context of the cluster hadronization model within the Herwig event generator.
Non-resonant Anomaly Detection with Background Extrapolation
K. Bai, R. Mastandrea, B. Nachman
e-Print: 2311.12924
@article{2311.12924,
author = "K. Bai and R. Mastandrea and B. Nachman",
title = "{Non-resonant Anomaly Detection with Background Extrapolation}",
eprint = "2311.12924",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Complete anomaly detection strategies that are both signal sensitive and compatible with background estimation have largely focused on resonant signals. Non-resonant new physics scenarios are relatively under-explored and may arise from off-shell effects or final states with significant missing energy. In this paper, we extend a class of weakly supervised anomaly detection strategies developed for resonant physics to the non-resonant case. Machine learning models are trained to reweight, generate, or morph the background, extrapolated from a control region. A classifier is then trained in a signal region to distinguish the estimated background from the data. The new methods are demonstrated using a semi-visible jet signature as a benchmark signal model, and are shown to automatically identify the anomalous events without specifying the signal ahead of time.
Safe but Incalculable: Energy-weighting is not all you need
S. Bright-Thonney, B. Nachman, J. Thaler
e-Print: 2311.07652
@article{2311.07652,
author = "S. Bright-Thonney and B. Nachman and J. Thaler",
title = "{Safe but Incalculable: Energy-weighting is not all you need}",
eprint = "2311.07652",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Infrared and collinear (IRC) safety has long been used as a proxy for robustness when developing new jet substructure observables. This guiding philosophy has been carried into the deep learning era, where IRC-safe neural networks have been used for many jet studies. For graph-based neural networks, the most straightforward way to achieve IRC safety is to weight particle inputs by their energies. However, energy-weighting by itself does not guarantee that perturbative calculations of machine-learned observables will enjoy small non-perturbative corrections. In this paper, we demonstrate the sensitivity of IRC-safe networks to non-perturbative effects, by training an energy flow network (EFN) to maximize its sensitivity to hadronization. We then show how to construct Lipschitz Energy Flow Networks (L-EFNs), which are both IRC safe and relatively insensitive to non-perturbative corrections. We demonstrate the performance of L-EFNs on generated samples of quark and gluon jets, and showcase fascinating differences between the learned latent representations of EFNs and L-EFNs.
Designing Observables for Measurements with Deep Learning
O. Long, B. Nachman
e-Print: 2310.08717
@article{2310.08717,
author = "O. Long and B. Nachman",
title = "{Designing Observables for Measurements with Deep Learning}",
eprint = "2310.08717",
archivePrefix = "arXiv",
primaryClass = "physics.data-an",
year = "2023"}
Many analyses in particle and nuclear physics use simulations to infer fundamental, effective, or phenomenological parameters of the underlying physics models. When the inference is performed with unfolded cross sections, the observables are designed using physics intuition and heuristics. We propose to design optimal observables with machine learning. Unfolded, differential cross sections in a neural network output contain the most information about parameters of interest and can be well-measured by construction. We demonstrate this idea using two physics models for inclusive measurements in deep inelastic scattering.
Full Phase Space Resonant Anomaly Detection
E. Buhmann, C. Ewen, G. Kasieczka, V. Mikuni, B. Nachman, D. Shih
e-Print: 2310.06897
@article{2310.06897,
author = "E. Buhmann and C. Ewen and G. Kasieczka and V. Mikuni and B. Nachman and D. Shih",
title = "{Full Phase Space Resonant Anomaly Detection}",
eprint = "2310.06897",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Physics beyond the Standard Model that is resonant in one or more dimensions has been a longstanding focus of countless searches at colliders and beyond. Recently, many new strategies for resonant anomaly detection have been developed, where sideband information can be used in conjunction with modern machine learning, in order to generate synthetic datasets representing the Standard Model background. Until now, this approach was only able to accommodate a relatively small number of dimensions, limiting the breadth of the search sensitivity. Using recent innovations in point cloud generative models, we show that this strategy can also be applied to the full phase space, using all relevant particles for the anomaly detection. As a proof of principle, we show that the signal from the R&D dataset from the LHC Olympics is findable with this method, opening up the door to future studies that explore the interplay between depth and breadth in the representation of the data for anomaly detection.
The Optimal use of Segmentation for Sampling Calorimeters
F. Acosta, B. Karki, P. Karande, A. Angerami, M. Arratia, K. Barish, R. Milton, S. Morán, B. Nachman, A. Sinha
e-Print: 2310.04442
@article{2310.04442,
author = "F. Acosta and B. Karki and P. Karande and A. Angerami and M. Arratia and K. Barish and R. Milton and S. Morán and B. Nachman and A. Sinha",
title = "{The Optimal use of Segmentation for Sampling Calorimeters}",
eprint = "2310.04442",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
year = "2023"}
One of the key design choices of any sampling calorimeter is how fine to make the longitudinal and transverse segmentation. To inform this choice, we study the impact of calorimeter segmentation on energy reconstruction. To ensure that the trends are due entirely to hardware and not to a sub-optimal use of segmentation, we deploy deep neural networks to perform the reconstruction. These networks make use of all available information by representing the calorimeter as a point cloud. To demonstrate our approach, we simulate a detector similar to the forward calorimeter system intended for use in the ePIC detector, which will operate at the upcoming Electron Ion Collider. We find that for the energy estimation of isolated charged pion showers, relatively fine longitudinal segmentation is key to achieving an energy resolution that is better than 10% across the full phase space. These results provide a valuable benchmark for ongoing EIC detector optimizations and may also inform future studies involving high-granularity calorimeters in other experiments at various facilities.
Flows for Flows: Morphing one Dataset into another with Maximum Likelihood Estimation
T. Golling, S. Klein, R. Mastandrea, B. Nachman, J. Raine
e-Print: 2309.06472
@article{2309.06472,
author = "T. Golling and S. Klein and R. Mastandrea and B. Nachman and J. Raine",
title = "{Flows for Flows: Morphing one Dataset into another with Maximum Likelihood Estimation}",
eprint = "2309.06472",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Many components of data analysis in high energy physics and beyond require morphing one dataset into another. This is commonly solved via reweighting, but there are many advantages of preserving weights and shifting the data points instead. Normalizing flows are machine learning models with impressive precision on a variety of particle physics tasks. Naively, normalizing flows cannot be used for morphing because they require knowledge of the probability density of the starting dataset. In most cases in particle physics, we can generate more examples, but we do not know densities explicitly. We propose a protocol called flows for flows for training normalizing flows to morph one dataset into another even if the underlying probability density of neither dataset is known explicitly. This enables a morphing strategy trained with maximum likelihood estimation, a setup that has been shown to be highly effective in related tasks. We study variations on this protocol to explore how far the data points are moved to statistically match the two datasets. Furthermore, we show how to condition the learned flows on particular features in order to create a morphing function for every value of the conditioning feature. For illustration, we demonstrate flows for flows for toy examples as well as a collider physics example involving dijet events.
Improving Generative Model-based Unfolding with Schrödinger Bridges
S. Diefenbacher, G. Liu, V. Mikuni, B. Nachman, W. Nie
e-Print: 2308.12351
@article{2308.12351,
author = "S. Diefenbacher and G. Liu and V. Mikuni and B. Nachman and W. Nie",
title = "{Improving Generative Model-based Unfolding with Schrödinger Bridges}",
eprint = "2308.12351",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Machine learning-based unfolding has enabled unbinned and high-dimensional differential cross section measurements. Two main approaches have emerged in this research area: one based on discriminative models and one based on generative models. The main advantage of discriminative models is that they learn a small correction to a starting simulation while generative models scale better to regions of phase space with little data. We propose to use Schrödinger Bridges and diffusion models to create SBUnfold, an unfolding approach that combines the strengths of both discriminative and generative models. The key feature of SBUnfold is that its generative model maps one set of events into another without having to go through a known probability density as is the case for normalizing flows and standard diffusion models. We show that SBUnfold achieves excellent performance compared to state-of-the-art methods on a synthetic Z+jets dataset.
Refining Fast Calorimeter Simulations with a Schrödinger Bridge
S. Diefenbacher, V. Mikuni, B. Nachman
e-Print: 2308.12339
Cite Article
@article{2308.12339,
author="{S. Diefenbacher, V. Mikuni, B. Nachman}",
title="{Refining Fast Calorimeter Simulations with a Schrödinger Bridge}",
eprint="2308.12339",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
year = "2023"}
Refining Fast Calorimeter Simulations with a Schrödinger Bridge
Machine learning-based simulations, especially calorimeter simulations, are promising tools for approximating the precision of classical high energy physics simulations with a fraction of the generation time. Nearly all methods proposed so far learn neural networks that map a random variable with a known probability density, like a Gaussian, to realistic-looking events. In many cases, physics events are not close to Gaussian and so these neural networks have to learn a highly complex function. We study an alternative approach: Schrödinger bridge Quality Improvement via Refinement of Existing Lightweight Simulations (SQuIRELS). SQuIRELS leverages the power of diffusion-based neural networks and Schrödinger bridges to map between samples where the probability density is not known explicitly. We apply SQuIRELS to the task of refining a classical fast simulation to approximate a full classical simulation. On simulated calorimeter events, we find that SQuIRELS is able to reproduce highly non-trivial features of the full simulation with a fraction of the generation time.
CaloScore v2: Single-shot Calorimeter Shower Simulation with Diffusion Models
V. Mikuni, B. Nachman
e-Print: 2308.03847
Cite Article
@article{2308.03847,
author="{V. Mikuni, B. Nachman}",
title="{CaloScore v2: Single-shot Calorimeter Shower Simulation with Diffusion Models}",
eprint="2308.03847",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
CaloScore v2: Single-shot Calorimeter Shower Simulation with Diffusion Models
Diffusion generative models are promising alternatives for fast surrogate models, producing high-fidelity physics simulations. However, the generation time often requires an expensive denoising process with hundreds of function evaluations, restricting the current applicability of these models in a realistic setting. In this work, we report updates on the CaloScore architecture, detailing the changes in the diffusion process, which produces higher quality samples, and the use of progressive distillation, resulting in a diffusion model capable of generating new samples with a single function evaluation. We demonstrate these improvements using the Calorimeter Simulation Challenge 2022 dataset.
The Interplay of Machine Learning--based Resonant Anomaly Detection Methods
T. Golling, G. Kasieczka, C. Krause, R. Mastandrea, B. Nachman, J. Raine, D. Sengupta, D. Shih, M. Sommerhalder
e-Print: 2307.11157
Cite Article
@article{2307.11157,
author="{T. Golling, G. Kasieczka, C. Krause, R. Mastandrea, B. Nachman, J. Raine, D. Sengupta, D. Shih, M. Sommerhalder}",
title="{The Interplay of Machine Learning--based Resonant Anomaly Detection Methods}",
eprint="2307.11157",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
The Interplay of Machine Learning--based Resonant Anomaly Detection Methods
Machine learning--based anomaly detection (AD) methods are promising tools for extending the coverage of searches for physics beyond the Standard Model (BSM). One class of AD methods that has received significant attention is resonant anomaly detection, where the BSM is assumed to be localized in at least one known variable. While there have been many methods proposed to identify such a BSM signal that make use of simulated or detected data in different ways, there has not yet been a study of the methods' complementarity. To this end, we address two questions. First, in the absence of any signal, do different methods pick the same events as signal-like? If not, then we can significantly reduce the false-positive rate by comparing different methods on the same dataset. Second, if there is a signal, are different methods fully correlated? Even if their maximum performance is the same, since we do not know how much signal is present, it may be beneficial to combine approaches. Using the Large Hadron Collider (LHC) Olympics dataset, we provide quantitative answers to these questions. We find that there are significant gains possible by combining multiple methods, which will strengthen the search program at the LHC and beyond.
Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation
F. Acosta, V. Mikuni, B. Nachman, M. Arratia, K. Barish, B. Karki, R. Milton, P. Karande, A. Angerami
e-Print: 2307.04780
Cite Article
@article{2307.04780,
author="{F. Acosta, V. Mikuni, B. Nachman, M. Arratia, K. Barish, B. Karki, R. Milton, P. Karande, A. Angerami}",
title="{Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation}",
eprint="2307.04780",
archivePrefix = "arXiv",
primaryClass = "cs.LG",
year = "2023"}
Comparison of Point Cloud and Image-based Models for Calorimeter Fast Simulation
Score-based generative models are a new class of generative models that have been shown to accurately generate high dimensional calorimeter datasets. Recent advances in generative models have used images with 3D voxels to represent and model complex calorimeter showers. Point clouds, however, are likely a more natural representation of calorimeter showers, particularly in calorimeters with high granularity. Point clouds preserve all of the information of the original simulation, more naturally deal with sparse datasets, and can be implemented with more compact models and data files. In this work, two state-of-the-art score-based models are trained on the same set of calorimeter simulations and directly compared.
Learning to Isolate Muons in Data
E. Witkowski, B. Nachman, D. Whiteson
e-Print: 2306.15737
Cite Article
@article{2306.15737,
author="{E. Witkowski, B. Nachman, D. Whiteson}",
title="{Learning to Isolate Muons in Data}",
eprint="2306.15737",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2023"}
Learning to Isolate Muons in Data
We use unlabeled collision data and weakly-supervised learning to train models which can distinguish prompt muons from non-prompt muons using patterns of low-level particle activity in the vicinity of the muon, and interpret the models in the space of energy flow polynomials. Particle activity associated with muons is a valuable tool for distinguishing prompt muons, those due to heavy boson decay, from muons produced in the decay of heavy flavor jets. The high-dimensional information is typically reduced to a single scalar quantity, isolation, but previous work in simulated samples suggests that valuable discriminating information is lost in this reduction. We extend these studies in LHC collisions recorded by the CMS experiment, where true class labels are not available, requiring the use of the invariant mass spectrum to obtain macroscopic sample information. This allows us to employ Classification Without Labels (CWoLa), a weakly supervised learning technique, to train models. Our results confirm that isolation does not describe events as well as the full low-level calorimeter information, and we are able to identify single energy flow polynomials capable of closing the performance gap. These polynomials are not the same ones derived from simulation, highlighting the importance of training directly on data.
High-dimensional and Permutation Invariant Anomaly Detection
V. Mikuni, B. Nachman
e-Print: 2306.03933
Cite Article
@article{2306.03933,
author="{V. Mikuni, B. Nachman}",
title="{High-dimensional and Permutation Invariant Anomaly Detection}",
eprint="2306.03933",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
High-dimensional and Permutation Invariant Anomaly Detection
Methods for anomaly detection of new physics processes are often limited to low-dimensional spaces due to the difficulty of learning high-dimensional probability densities. Particularly at the constituent level, incorporating desirable properties such as permutation invariance and variable-length inputs becomes difficult within popular density estimation methods. In this work, we introduce a permutation-invariant density estimator for particle physics data based on diffusion models, specifically designed to handle variable-length inputs. We demonstrate the efficacy of our methodology by utilizing the learned density as a permutation-invariant anomaly detection score, effectively identifying jets with low likelihood under the background-only hypothesis. To validate our density estimation method, we investigate the ratio of learned densities and compare to those obtained by a supervised classification algorithm.
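The permutation invariance the abstract refers to can be made concrete with Deep Sets-style pooling: any function of the unordered sum of per-constituent embeddings is invariant under reordering and naturally handles variable-length inputs. Below is a minimal numpy sketch of that property with a toy featurization; it is illustrative only and is not the diffusion-based estimator of the paper.

```python
import numpy as np

# Deep Sets-style permutation-invariant score: embed each constituent with
# phi, sum-pool over the (unordered, variable-length) set, then map the
# pooled vector through rho. The featurization is a toy choice.

def phi(p):
    # per-particle embedding: (pt, eta, phi) -> simple nonlinear features
    pt, eta, ph = p
    return np.array([pt, pt * eta, np.cos(ph), np.sin(ph)])

def score(particles):
    pooled = np.sum([phi(p) for p in particles], axis=0)  # order-independent
    return np.tanh(pooled).sum()  # rho: any function of the pooled vector

jet = [(30.0, 0.1, 0.5), (20.0, -1.2, 2.0), (5.0, 0.7, -1.0)]
shuffled = [jet[2], jet[0], jet[1]]
```

Because the pooling is a sum, reordering the constituents (or dropping to a shorter jet) never changes the functional form, only the inputs.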
Fitting a Deep Generative Hadronization Model
J. Chan, X. Ju, A. Kania, B. Nachman, V. Sangli, A. Siodmok
e-Print: 2305.17169
Cite Article
@article{2305.17169,
author="{J. Chan, X. Ju, A. Kania, B. Nachman, V. Sangli, A. Siodmok}",
title="{Fitting a Deep Generative Hadronization Model}",
eprint="2305.17169",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Fitting a Deep Generative Hadronization Model
Hadronization is a critical step in the simulation of high-energy particle and nuclear physics experiments. As there is no first principles understanding of this process, physically-inspired hadronization models have a large number of parameters that are fit to data. Deep generative models are a natural replacement for classical techniques, since they are more flexible and may be able to improve the overall precision. Proof of principle studies have shown how to use neural networks to emulate specific hadronization models when trained using the inputs and outputs of classical methods. However, these approaches will not work with data, where we do not have a matching between observed hadrons and partons. In this paper, we develop a protocol for fitting a deep generative hadronization model in a realistic setting, where we only have access to a set of hadrons in data. Our approach uses a variation of a Generative Adversarial Network with a permutation invariant discriminator. We find that this setup is able to match the hadronization model in Herwig with multiple sets of parameters. This work represents a significant step forward in a longer term program to develop, train, and integrate machine learning-based hadronization models into parton shower Monte Carlo programs.
Learning Likelihood Ratios with Neural Network Classifiers
S. Rizvi, M. Pettee, B. Nachman
e-Print: 2305.10500
Cite Article
@article{2305.10500,
author="{S. Rizvi, M. Pettee, B. Nachman}",
title="{Learning Likelihood Ratios with Neural Network Classifiers}",
eprint="2305.10500",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Learning Likelihood Ratios with Neural Network Classifiers
The likelihood ratio is a crucial quantity for statistical inference in science that enables hypothesis testing, construction of confidence intervals, reweighting of distributions, and more. Many modern scientific applications, however, make use of data- or simulation-driven models for which computing the likelihood ratio can be very difficult or even impossible. By applying the so-called "likelihood ratio trick," approximations of the likelihood ratio may be computed using clever parametrizations of neural network-based classifiers. A number of different neural network setups can be defined to satisfy this procedure, each with varying performance in approximating the likelihood ratio when using finite training data. We present a series of empirical studies detailing the performance of several common loss functionals and parametrizations of the classifier output in approximating the likelihood ratio of two univariate and multivariate Gaussian distributions as well as simulated high-energy particle physics datasets.
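The "likelihood ratio trick" itself fits in a few lines: a classifier trained with, e.g., binary cross-entropy to separate samples drawn from p1 (label 1) and p0 (label 0) converges to f(x) = p1(x)/(p0(x) + p1(x)), from which the likelihood ratio is recovered as f/(1 - f). The sketch below skips the training step and plugs in the optimal classifier for two unit-width Gaussians (an illustrative choice, matching the univariate Gaussian study in the abstract) to show the algebra is exact at the optimum.

```python
import numpy as np

# Likelihood ratio trick: recover p1/p0 from an (optimal) classifier output.

def gauss(x, mu):
    # standard-width Gaussian density centered at mu
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def optimal_classifier(x, mu0=0.0, mu1=1.0):
    # what a well-trained cross-entropy classifier converges to
    p0, p1 = gauss(x, mu0), gauss(x, mu1)
    return p1 / (p0 + p1)

def ratio_from_classifier(f):
    # invert f = p1 / (p0 + p1)  ->  p1 / p0 = f / (1 - f)
    return f / (1.0 - f)

x = np.linspace(-2.0, 2.0, 9)
lr_learned = ratio_from_classifier(optimal_classifier(x))
lr_exact = gauss(x, 1.0) / gauss(x, 0.0)
```

In practice the classifier is only an approximation of the optimum, which is exactly the finite-training-data effect the paper quantifies.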
ELSA -- Enhanced latent spaces for improved collider simulations
B. Nachman, R. Winterhalder
e-Print: 2305.07696
Cite Article
@article{2305.07696,
author="B. Nachman, R. Winterhalder",
title="{ELSA -- Enhanced latent spaces for improved collider simulations}",
eprint="2305.07696",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
ELSA -- Enhanced latent spaces for improved collider simulations
Simulations play a key role for inference in collider physics. We explore various approaches for enhancing the precision of simulations using machine learning, including interventions at the end of the simulation chain (reweighting), at the beginning of the simulation chain (pre-processing), and connections between the end and beginning (latent space refinement). To clearly illustrate our approaches, we use W+jets matrix element surrogate simulations based on normalizing flows as a prototypical example. First, weights in the data space are derived using machine learning classifiers. Then, we pull back the data-space weights to the latent space to produce unweighted examples and employ the Latent Space Refinement (LASER) protocol using Hamiltonian Monte Carlo. An alternative approach is an augmented normalizing flow, which allows for different dimensions in the latent and target spaces. These methods are studied for various pre-processing strategies, including a new and general method for massive particles at hadron colliders that is a tweak on the widely-used RAMBO-on-diet mapping. We find that modified simulations can achieve sub-percent precision across a wide range of phase space.
Weakly-Supervised Anomaly Detection in the Milky Way
M. Pettee, S. Thanvantri, B. Nachman, D. Shih, M. Buckley, J. Collins
e-Print: 2305.03761
Cite Article
@article{2305.03761,
author="{M. Pettee, S. Thanvantri, B. Nachman, D. Shih, M. Buckley, J. Collins}",
title="{Weakly-Supervised Anomaly Detection in the Milky Way}",
eprint="2305.03761",
archivePrefix = "arXiv",
primaryClass = "astro-ph.GA",
year = "2023"}
Weakly-Supervised Anomaly Detection in the Milky Way
Large-scale astrophysics datasets present an opportunity for new machine learning techniques to identify regions of interest that might otherwise be overlooked by traditional searches. To this end, we use Classification Without Labels (CWoLa), a weakly-supervised anomaly detection method, to identify cold stellar streams within the more than one billion Milky Way stars observed by the Gaia satellite. CWoLa operates without the use of labeled streams or knowledge of astrophysical principles. Instead, we train a classifier to distinguish between mixed samples for which the proportions of signal and background samples are unknown. This computationally lightweight strategy is able to detect both simulated streams and the known stream GD-1 in data. Originally designed for high-energy collider physics, this technique may have broad applicability within astrophysics as well as other domains interested in identifying localized anomalies.
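The CWoLa idea described above can be demonstrated end-to-end in one dimension: train a classifier to separate two mixed samples with different (unknown) signal fractions, and the learned score turns out to separate pure signal from pure background even though no per-event labels were used. The sketch below uses toy Gaussians and a hand-rolled logistic regression; all numbers are illustrative, not from the paper.

```python
import numpy as np

# CWoLa toy: classify mixed sample 1 (30% signal) vs. mixed sample 2 (pure
# background); the score is monotonic in the signal/background likelihood
# ratio, so it works as a signal classifier despite label-free training.

rng = np.random.default_rng(0)
n = 4000

def background(m):
    return rng.normal(0.0, 1.0, m)

def signal(m):
    return rng.normal(2.0, 1.0, m)

m1 = np.concatenate([signal(int(0.3 * n)), background(int(0.7 * n))])
m2 = background(n)
x = np.concatenate([m1, m2])
y = np.concatenate([np.ones(len(m1)), np.zeros(len(m2))])

w, b = 0.0, 0.0
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

# evaluate the trained score on held-out *pure* samples
sig_score = 1.0 / (1.0 + np.exp(-(w * signal(1000) + b)))
bkg_score = 1.0 / (1.0 + np.exp(-(w * background(1000) + b)))
```

The classifier never sees a true label, only which mixture each event came from; that is the entire weak-supervision trick.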
Parton Labeling without Matching: Unveiling Emergent Labelling Capabilities in Regression Models
S. Qiu, S. Han, X. Ju, B. Nachman, H. Wang
e-Print: 2304.09208
Cite Article
@article{2304.09208,
author="S. Qiu, S. Han, X. Ju, B. Nachman, H. Wang",
title="{Parton Labeling without Matching: Unveiling Emergent Labelling Capabilities in Regression Models}",
eprint="2304.09208",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Parton Labeling without Matching: Unveiling Emergent Labelling Capabilities in Regression Models
Parton labeling methods are widely used when reconstructing collider events with top quarks or other massive particles. State-of-the-art techniques are based on machine learning and require training data with events that have been matched using simulations with truth information. In nature, there is no unique matching between partons and final state objects due to the properties of the strong force and due to acceptance effects. We propose a new approach to parton labeling that circumvents these challenges by recycling regression models. The final state objects that are most relevant for a regression model to predict the properties of a particular top quark are assigned to said parent particle without having any parton-matched training data. This approach is demonstrated using simulated events with top quarks and outperforms the widely-used chi-squared method.
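For context, the chi-squared baseline mentioned at the end works by scanning jet assignments and minimizing the distance of the candidate invariant masses from the W and top masses. A minimal numpy sketch, with toy massless four-vectors and illustrative resolution values (not the paper's configuration):

```python
import numpy as np
from itertools import combinations

# Chi-squared jet assignment for a hadronic top: over all jet triplets and
# all W-candidate pairs within each triplet, minimize
#   chi2 = ((m_jjj - m_top)/sigma_top)**2 + ((m_jj - m_W)/sigma_W)**2.

M_TOP, M_W, SIG_TOP, SIG_W = 172.5, 80.4, 20.0, 10.0  # GeV, toy resolutions

def mass(*vecs):
    # invariant mass of a sum of (E, px, py, pz) four-vectors
    E, px, py, pz = np.sum(np.array(vecs), axis=0)
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def best_triplet(jets):
    best = (np.inf, None)
    for trip in combinations(range(len(jets)), 3):
        for wpair in combinations(trip, 2):
            m_w = mass(*[jets[i] for i in wpair])
            m_t = mass(*[jets[i] for i in trip])
            chi2 = ((m_t - M_TOP) / SIG_TOP) ** 2 + ((m_w - M_W) / SIG_W) ** 2
            best = min(best, (chi2, trip))
    return best

# jets 0+1 form an on-shell W; adding jet 2 gives an on-shell top
jets = [
    (56.85, 56.85, 0.0, 0.0),
    (56.85, 0.0, 56.85, 0.0),
    (102.4, 0.0, 0.0, -102.4),
    (40.0, 0.0, -40.0, 0.0),   # extra jets from elsewhere in the event
    (25.0, 17.7, 0.0, 17.7),
]
chi2_min, triplet = best_triplet(jets)
```

The combinatorial scan is exactly what makes this baseline fragile when no assignment is near both mass peaks, which is the failure mode the regression-based approach sidesteps.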
Unbinned Deep Learning Jet Substructure Measurement in High Q2 ep collisions at HERA
H1 Collaboration
e-Print: 2303.13620
Cite Article
@article{2303.13620,
author="H1 Collaboration",
title="{Unbinned Deep Learning Jet Substructure Measurement in High $Q^2$ ep collisions at HERA}",
eprint="2303.13620",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2023"}
Unbinned Deep Learning Jet Substructure Measurement in High Q2 ep collisions at HERA
The radiation pattern within high energy quark- and gluon-initiated jets (jet substructure) is used extensively as a precision probe of the strong force as well as an environment for optimizing event generators with numerous applications in high energy particle and nuclear physics. Looking at electron-proton collisions is of particular interest as many of the complications present at hadron colliders are absent. A detailed study of modern jet substructure observables, jet angularities, in electron-proton collisions is presented using data recorded with the H1 detector at HERA. The measurement is unbinned and multi-dimensional, using machine learning to correct for detector effects. All of the available reconstructed object information of the respective jets is interpreted by a graph neural network, achieving superior precision on a selected set of jet angularities. Training these networks was enabled by the use of a large number of GPUs in the Perlmutter supercomputer at Berkeley Lab. The particle jets are reconstructed in the laboratory frame, using the kT jet clustering algorithm. Results are reported at high transverse momentum transfer and mid-inelasticity. The analysis is also performed in sub-regions of Q2, thus probing scale dependencies of the substructure variables. The data are compared with a variety of predictions and point towards possible improvements of such models.
Machine learning-assisted measurement of azimuthal angular asymmetries in deep-inelastic scattering with the H1 detector
H1 Collaboration
Public note: H1prelim-23-031
Cite Article
@article{H1prelim-23-031,
author="{H1 Collaboration}",
title="{Machine learning-assisted measurement of azimuthal angular asymmetries in deep-inelastic scattering with the H1 detector}",
journal = "H1prelim-23-031",
url = "https://www-h1.desy.de/h1/www/publications/htmlsplit/H1prelim-23-031.long.html",
year = "2023",
}
Machine learning-assisted measurement of azimuthal angular asymmetries in deep-inelastic scattering with the H1 detector
Jet-lepton azimuthal asymmetry harmonics are measured in deep inelastic scattering data collected by the H1 detector using HERA Run II collisions. When the average transverse momentum of the lepton-jet system is much larger than the total transverse momentum of the system, the asymmetry between them is expected to be generated by initial and final state soft gluon radiation and can be predicted using perturbation theory. Quantifying the angular properties of the asymmetry therefore provides a novel test of the strong force and is also an important background to constrain for future measurements of intrinsic asymmetries generated by the proton's constituents through Transverse Momentum Dependent (TMD) Parton Distribution Functions (PDFs). Moments of the azimuthal asymmetries are measured using a machine learning technique that does not require binning and thus does not introduce discretization artifacts.
Unbinned Profiled Unfolding
J. Chan, B. Nachman
e-Print: 2302.05390
Cite Article
@article{2302.05390,
author="J. Chan, B. Nachman",
title="{Unbinned Profiled Unfolding}",
eprint="2302.05390",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2023"}
Unbinned Profiled Unfolding
Unfolding is an important procedure in particle physics experiments which corrects for detector effects and provides differential cross section measurements that can be used for a number of downstream tasks, such as extracting fundamental physics parameters. Traditionally, unfolding is done by discretizing the target phase space into a finite number of bins and is limited in the number of unfolded variables. Recently, there have been a number of proposals to perform unbinned unfolding with machine learning. However, none of these methods (like most unfolding methods) allow for simultaneously constraining (profiling) nuisance parameters. We propose a new machine learning-based unfolding method that results in an unbinned differential cross section and can profile nuisance parameters. The machine learning loss function is the full likelihood function, based on binned inputs at detector-level. We first demonstrate the method with simple Gaussian examples and then show the impact on a simulated Higgs boson cross section measurement.
FETA: Flow-Enhanced Transportation for Anomaly Detection
T. Golling, S. Klein, R. Mastandrea, B. Nachman
e-Print: 2212.11285
Cite Article
@article{2212.11285,
author="T. Golling, S. Klein, R. Mastandrea, B. Nachman",
title="{FETA: Flow-Enhanced Transportation for Anomaly Detection}",
eprint="2212.11285",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022"}
FETA: Flow-Enhanced Transportation for Anomaly Detection
Resonant anomaly detection is a promising framework for model-independent searches for new particles. Weakly supervised resonant anomaly detection methods compare data with a potential signal against a template of the Standard Model (SM) background inferred from sideband regions. We propose a means to generate this background template that uses a flow-based model to create a mapping between high-fidelity SM simulations and the data. The flow is trained in sideband regions with the signal region blinded, and the flow is conditioned on the resonant feature (mass) such that it can be interpolated into the signal region. To illustrate this approach, we use simulated collisions from the Large Hadron Collider (LHC) Olympics Dataset. We find that our flow-constructed background method has competitive sensitivity with other recent proposals and can therefore provide complementary information to improve future searches.
Resonant Anomaly Detection with Multiple Reference Datasets
M. F. Chen, B. Nachman, F. Sala
e-Print: 2212.10579
Cite Article
@article{2212.10579,
author="M. F. Chen, B. Nachman, F. Sala",
title="{Resonant Anomaly Detection with Multiple Reference Datasets}",
eprint="2212.10579",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022"}
Resonant Anomaly Detection with Multiple Reference Datasets
An important class of techniques for resonant anomaly detection in high energy physics builds models that can distinguish between reference and target datasets, where only the latter has appreciable signal. Such techniques, including Classification Without Labels (CWoLa) and Simulation Assisted Likelihood-free Anomaly Detection (SALAD), rely on a single reference dataset. They cannot take advantage of commonly-available multiple datasets and thus cannot fully exploit available information. In this work, we propose generalizations of CWoLa and SALAD for settings where multiple reference datasets are available, building on weak supervision techniques. We demonstrate improved performance in a number of settings with realistic and synthetic data. As an added benefit, our generalizations enable us to provide finite-sample guarantees, improving on existing asymptotic analyses.
Efficiently Moving Instead of Reweighting Collider Events with Machine Learning
R. Mastandrea and B. Nachman
e-Print: 2212.06155
Cite Article
@article{2212.06155,
author="R. Mastandrea and B. Nachman",
title="{Efficiently Moving Instead of Reweighting Collider Events with Machine Learning}",
eprint="2212.06155",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022"}
Efficiently Moving Instead of Reweighting Collider Events with Machine Learning
There are many cases in collider physics and elsewhere where a calibration dataset is used to predict the known physics and/or noise of a target region of phase space. This calibration dataset usually cannot be used out-of-the-box but must be tweaked, often with conditional importance weights, to be maximally realistic. Using resonant anomaly detection as an example, we compare a number of alternative approaches based on transporting events with normalizing flows instead of reweighting them. We find that the accuracy of the morphed calibration dataset depends on the degree to which the transport task is set up to carry out optimal transport, which motivates future research into this area.
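In one dimension, "moving instead of reweighting" reduces to the classic optimal-transport map between two distributions: push each calibration event through F_target^{-1}(F_calib(x)), i.e. match empirical quantiles. The sketch below uses toy Gaussians with illustrative parameters; higher-dimensional transport is where the flows discussed in the paper come in.

```python
import numpy as np

# 1D event morphing by empirical quantile matching (the 1D optimal-transport
# map): each calibration event is moved, with unit weight, to the location of
# the corresponding target quantile.

rng = np.random.default_rng(1)
x_calib = rng.normal(0.0, 1.0, 50_000)   # calibration-region events
x_target = rng.normal(1.0, 2.0, 50_000)  # what the target data look like

def quantile_morph(source, target, x):
    u = np.searchsorted(np.sort(source), x) / len(source)  # empirical CDF
    return np.quantile(target, np.clip(u, 0.0, 1.0))       # target quantiles

moved = quantile_morph(x_calib, x_target, x_calib)
```

Unlike reweighting, every event keeps weight one; only its position changes, which avoids the statistical dilution of large weight variance.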
Efficient quantum implementation of 2+1 U(1) lattice gauge theories with Gauss law constraints
C. Kane, D. M. Grabowska, B. Nachman, C. W. Bauer
e-Print: 2211.10497
Cite Article
@article{2211.10497,
author="C. Kane, D. M. Grabowska, B. Nachman, C. W. Bauer",
title="{Efficient quantum implementation of 2+1 U(1) lattice gauge theories with Gauss law constraints}",
eprint="2211.10497",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022"}
Efficient quantum implementation of 2+1 U(1) lattice gauge theories with Gauss law constraints
The study of real-time evolution of lattice quantum field theories using classical computers is known to scale exponentially with the number of lattice sites. Due to a fundamentally different computational strategy, quantum computers hold the promise of allowing for detailed studies of these dynamics from first principles. However, much like with classical computations, it is important that quantum algorithms do not have a cost that scales exponentially with the volume. Recently, it was shown how to break the exponential scaling of a naive implementation of a U(1) gauge theory in two spatial dimensions through an operator redefinition. In this work, we describe modifications to how operators must be sampled in the new operator basis to keep digitization errors small. We compare the precision of the energies and plaquette expectation value between the two operator bases and find they are comparable. Additionally, we provide an explicit circuit construction for the Suzuki-Trotter implementation of the theory using the Walsh function formalism. The gate count scaling is studied as a function of the lattice volume, for both exact circuits and approximate circuits where rotation gates with small arguments have been dropped. We study the errors from finite Suzuki-Trotter time-step, circuit approximation, and quantum noise in a calculation of an explicit observable using IBMQ superconducting qubit hardware. We find the gate count scaling for the approximate circuits can be further reduced by up to a power of the volume without introducing larger errors.
Geometry Optimization for Long-lived Particle Detectors
T. Gorordo, S. Knapen, B. Nachman, D. J. Robinson, A. Suresh
e-Print: 2211.08450
Cite Article
@article{2211.08450,
author="T. Gorordo, S. Knapen, B. Nachman, D. J. Robinson, A. Suresh",
title="{Geometry Optimization for Long-lived Particle Detectors}",
eprint="2211.08450",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022"}
Geometry Optimization for Long-lived Particle Detectors
The proposed designs of many auxiliary long-lived particle (LLP) detectors at the LHC call for the instrumentation of a large surface area inside the detector volume, in order to reliably reconstruct tracks and LLP decay vertices. Taking the CODEX-b detector as an example, we provide a proof-of-concept optimization analysis that demonstrates the required instrumented surface area can be substantially reduced for many LLP models, while only marginally affecting the LLP signal efficiency. This optimization permits a significant reduction in cost and installation time, and may also inform the installation order for modular detector elements. We derive a branch-and-bound based optimization algorithm that permits highly computationally efficient determination of optimal detector configurations, subject to any specified LLP vertex and track reconstruction requirements. We outline the features of a newly-developed generalized simulation framework, for the computation of LLP signal efficiencies across a range of LLP models and detector geometries.
Statistical Patterns of Theory Uncertainties
A. Ghosh, B. Nachman, T. Plehn, L. Shire, T. M.P. Tait, D. Whiteson
e-Print: 2210.15167
Cite Article
@article{2210.15167,
author="A. Ghosh, B. Nachman, T. Plehn, L. Shire, T. M.P. Tait, D. Whiteson",
title="{Statistical Patterns of Theory Uncertainties}",
eprint="2210.15167",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022"}
Statistical Patterns of Theory Uncertainties
A comprehensive uncertainty estimation is vital for the precision program of the LHC. While experimental uncertainties are often described by stochastic processes and well-defined nuisance parameters, theoretical uncertainties lack such a description. We study uncertainty estimates for cross-section predictions based on scale variations across a large set of processes. We find patterns similar to a stochastic origin, with accurate uncertainties for processes mediated by the strong force, but a systematic underestimate for electroweak processes. We propose an improved scheme, based on the scale variation of reference processes, which reduces outliers in the mapping from leading order to next-to-leading-order in perturbation theory.
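The scale variations studied here follow the conventional 7-point prescription: evaluate the cross section with the renormalization and factorization scales multiplied by factors in {0.5, 1, 2}, drop the two opposite-extreme combinations, and quote the envelope around the central value. A minimal sketch with a toy cross-section function (illustrative, not a real perturbative calculation):

```python
import numpy as np

# 7-point scale variation: all (mu_R, mu_F) factor pairs from {0.5, 1, 2}
# except the anti-correlated extremes (0.5, 2) and (2, 0.5).
SEVEN_POINT = [(1, 1), (2, 2), (0.5, 0.5), (2, 1), (0.5, 1), (1, 2), (1, 0.5)]

def toy_xsec(mu_r, mu_f):
    # toy logarithmic scale dependence of a cross section in some units
    return 100.0 * (1.0 + 0.10 * np.log(mu_r) + 0.05 * np.log(mu_f))

def scale_envelope(xsec):
    values = [xsec(*pt) for pt in SEVEN_POINT]
    central = xsec(1, 1)
    return central, max(values) - central, central - min(values)

central, up, down = scale_envelope(toy_xsec)
```

The paper's point is precisely that this envelope behaves like a sensible stochastic uncertainty for QCD-mediated processes but systematically undershoots for electroweak ones.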
Machine-Learning Compression for Particle Physics Discoveries
J. H. Collins, Y. Huang, S. Knapen, B. Nachman, D. Whiteson
NeurIPS Machine Learning and Physical Sciences (2022) · e-Print: 2210.11489
Cite Article
@article{2210.11489,
author="J. H. Collins, Y. Huang, S. Knapen, B. Nachman, D. Whiteson",
title="{Machine-Learning Compression for Particle Physics Discoveries}",
eprint="2210.11489",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="NeurIPS Machine Learning and Physical Sciences",
year = "2022"}
Machine-Learning Compression for Particle Physics Discoveries
In collider-based particle and nuclear physics experiments, data are produced at such extreme rates that only a subset can be recorded for later analysis. Typically, algorithms select individual collision events for preservation and store the complete experimental response. A relatively new alternative strategy is to additionally save a partial record for a larger subset of events, allowing for later specific analysis of a larger fraction of events. We propose a strategy that bridges these paradigms by compressing entire events for generic offline analysis but at a lower fidelity. An optimal-transport-based β Variational Autoencoder (VAE) is used to automate the compression and the hyperparameter β controls the compression fidelity. We introduce a new approach for multi-objective learning functions by simultaneously learning a VAE appropriate for all values of β through parameterization. We present an example use case, a di-muon resonance search at the Large Hadron Collider (LHC), where we show that simulated data compressed by our β-VAE has enough fidelity to distinguish distinct signal morphologies. ×
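The role of β in a β-VAE is a one-line modification of the usual objective: loss = reconstruction + β · KL(q(z|x) || N(0, I)), where the KL term has a closed form for a diagonal-Gaussian encoder. A sketch of just the loss, with illustrative shapes (the paper additionally parameterizes the network on β itself, which is not shown here):

```python
import numpy as np

# beta-VAE objective: larger beta penalizes latent information more strongly,
# trading reconstruction fidelity for compression.

def kl_to_standard_normal(mu, logvar):
    # closed-form KL(N(mu, diag(exp(logvar))) || N(0, I))
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

def beta_vae_loss(x, x_reconstructed, mu, logvar, beta):
    recon = np.sum((x - x_reconstructed) ** 2)  # e.g. squared-error term
    return recon + beta * kl_to_standard_normal(mu, logvar)

x = np.array([1.0, 2.0])
xr = np.array([0.9, 2.1])            # toy decoder output
mu = np.array([0.5, -0.3])           # toy encoder means
logvar = np.array([-0.2, 0.1])       # toy encoder log-variances
```

Sweeping β then traces out the fidelity/compression trade-off the search sensitivity study relies on.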
The Future of High Energy Physics Software and Computing
V. D. Elvira, S. Gottlieb, O. Gutsche, B. Nachman (frontier conveners), et al.
e-Print: 2210.05822
Cite Article
@article{2210.05822,
author="{V. D. Elvira, S. Gottlieb, O. Gutsche, B. Nachman (frontier conveners), et al.}",
title="{The Future of High Energy Physics Software and Computing}",
eprint="2210.05822",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2022",
}
The Future of High Energy Physics Software and Computing
Software and Computing (S&C) are essential to all High Energy Physics (HEP) experiments and many theoretical studies. The size and complexity of S&C are now commensurate with that of experimental instruments, playing a critical role in experimental design, data acquisition/instrumental control, reconstruction, and analysis. Furthermore, S&C often plays a leading role in driving the precision of theoretical calculations and simulations. Within this central role in HEP, S&C has been immensely successful over the last decade. This report looks forward to the next decade and beyond, in the context of the 2021 Particle Physics Community Planning Exercise ("Snowmass") organized by the Division of Particles and Fields (DPF) of the American Physical Society. ×
Anomaly Detection under Coordinate Transformations
G. Kasieczka, R. Mastandrea, V. Mikuni, B. Nachman, M. Pettee, D. Shih
e-Print: 2209.06225
Cite Article
@article{2209.06225,
author="{G. Kasieczka, R. Mastandrea, V. Mikuni, B. Nachman, M. Pettee, D. Shih}",
title="{Anomaly Detection under Coordinate Transformations}",
eprint="2209.06225",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Anomaly Detection under Coordinate Transformations
There is a growing need for machine learning-based anomaly detection strategies to broaden the search for Beyond-the-Standard-Model (BSM) physics at the Large Hadron Collider (LHC) and elsewhere. The first step of any anomaly detection approach is to specify observables and then use them to decide on a set of anomalous events. One common choice is to select events that have low probability density. It is a well-known fact that probability densities are not invariant under coordinate transformations, so the sensitivity can depend on the initial choice of coordinates. The broader machine learning community has recently connected coordinate sensitivity with anomaly detection and our goal is to bring awareness of this issue to the growing high energy physics literature on anomaly detection. In addition to analytical explanations, we provide numerical examples from simple random variables and from the LHC Olympics Dataset that show how using probability density as an anomaly score can lead to events being classified as anomalous or not depending on the coordinate frame. ×
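The coordinate-dependence issue described in this abstract can be demonstrated with a toy example (a sketch for illustration, not code from the paper): a tail event of a standard normal variable has low probability density in the original coordinate x, but after the invertible transform y = Φ(x) (the CDF of x), every event has density exactly one, so a density-based anomaly score loses all discriminating power.

```python
import math

def pdf_x(x):
    """Standard normal density in the original coordinate."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf_x(x):
    """Standard normal CDF: the coordinate transform y = Phi(x)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def pdf_y(x):
    """Density of y = Phi(x) at the image of x.

    Change of variables: p_Y(y) = p_X(x) / |dy/dx|, and dy/dx = p_X(x),
    so the transformed density is identically 1 on (0, 1).
    """
    return pdf_x(x) / pdf_x(x)

# A 'typical' event (x = 0) and a tail 'anomaly' (x = 3):
for x in (0.0, 3.0):
    print(f"x={x}: p_X={pdf_x(x):.4f}, p_Y={pdf_y(x):.4f}")
# In x-coordinates the tail event looks anomalous (low density);
# in y-coordinates both events have the same density.
```

The same event can therefore be labeled anomalous or not purely depending on the choice of coordinates, which is the point the paper makes analytically and on the LHC Olympics dataset.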
Point Cloud Deep Learning Methods for Pion Reconstruction in the ATLAS Experiment
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2022-040
Cite Article
@article{ATL-PHYS-PUB-2022-040,
author="{ATLAS Collaboration}",
title="{Point Cloud Deep Learning Methods for Pion Reconstruction in the ATLAS Experiment}",
journal = "ATL-PHYS-PUB-2022-040",
url = "https://cds.cern.ch/record/2825379",
year = "2022",
}
×
Point Cloud Deep Learning Methods for Pion Reconstruction in the ATLAS Experiment
The reconstruction and calibration of hadronic final states in the ATLAS detector present complex experimental challenges. For isolated pions in particular, classifying pi0 versus charged pions and calibrating pion energy deposits in the ATLAS calorimeters are key steps in the hadronic reconstruction process. The baseline methods for local hadronic calibration were optimized early in the lifetime of the ATLAS experiment. This note presents a significant improvement over existing techniques using machine learning methods that do not require the input variables to be projected onto a fixed and regular grid. Instead, Transformer, Deep Sets, and Graph Neural Network architectures are used to process calorimeter clusters and particle tracks as point clouds, or a collection of data points representing a three-dimensional object in space. This note demonstrates the performance of these new approaches as an important step towards a low-level hadronic reconstruction scheme that fully takes advantage of deep learning to improve its performance. ×
Constituent-Based Top-Quark Tagging with the ATLAS Detector
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2022-039
Cite Article
@article{ATL-PHYS-PUB-2022-039,
author="{ATLAS Collaboration}",
title="{Constituent-Based Top-Quark Tagging with the ATLAS Detector}",
journal = "ATL-PHYS-PUB-2022-039",
url = "https://cds.cern.ch/record/2825328",
year = "2022",
}
×
Constituent-Based Top-Quark Tagging with the ATLAS Detector
This note presents the performance of constituent-based jet taggers on large radius boosted top quark jets reconstructed from optimized jet input objects in simulated collisions at sqrt(s) = 13 TeV. Several taggers which consider all of the kinematic information of the jet constituents are tested, and compared to a tagger which relies on high-level summary quantities similar to the taggers used by ATLAS in Runs 1 and 2. Several constituent-based taggers are found to outperform the high-level-quantity-based tagger, with the best achieving a factor of two increase in background rejection across the kinematic range. To enable further development and study, the data set described in this note is made publicly available. ×
Overcoming exponential scaling with system size in Trotter-Suzuki implementations of constrained Hamiltonians: 2+1 U(1) lattice gauge theories
D. M. Grabowska, C. Kane, B. Nachman, C. W. Bauer
e-Print: 2208.03333
Cite Article
@article{2208.03333,
author="D. M. Grabowska, C. Kane, B. Nachman, C. W. Bauer",
title="{Overcoming exponential scaling with system size in Trotter-Suzuki implementations of constrained Hamiltonians: 2+1 U(1) lattice gauge theories}",
eprint="2208.03333",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Overcoming exponential scaling with system size in Trotter-Suzuki implementations of constrained Hamiltonians: 2+1 U(1) lattice gauge theories
For many quantum systems of interest, the classical computational cost of simulating their time evolution scales exponentially in the system size. At the same time, quantum computers have been shown to allow for simulations of some of these systems using resources that scale polynomially with the system size. Given the potential for using quantum computers for simulations that are not feasible using classical devices, it is paramount that one studies the scaling of quantum algorithms carefully. This work identifies a term in the Hamiltonian of a class of constrained systems that naively requires quantum resources that scale exponentially in the system size. An important example is a compact U(1) gauge theory on lattices with periodic boundary conditions. Imposing the magnetic Gauss' law a priori introduces a constraint into that Hamiltonian that naively results in an exponentially deep circuit. A method is then developed that reduces this scaling to polynomial in the system size, using a redefinition of the operator basis. An explicit construction, as well as the scaling of the associated computational cost, of the matrices defining the operator basis is given. ×
Morphing parton showers with event derivatives
B. Nachman and S. Prestel
e-Print: 2208.02274
Cite Article
@article{2208.02274,
author="B. Nachman and S. Prestel",
title="{Morphing parton showers with event derivatives}",
eprint="2208.02274",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Morphing parton showers with event derivatives
We develop EventMover, a differentiable parton shower event generator. This tool generates high- and variable-length scattering events that can be moved with simulation derivatives to change the value of the scale Lambda QCD defining the strong coupling constant, without introducing statistical variations between samples. To demonstrate the potential for EventMover, we compare the output of the simulation with electron-positron data to show how one could fit Lambda QCD with only a single event sample. This is a critical step towards a fully differentiable event generator for particle and nuclear physics. ×
Systematic Quark/Gluon Identification with Ratios of Likelihoods
S. Bright-Thonney, I. Moult, B. Nachman, S. Prestel
e-Print: 2207.12411
Cite Article
@article{2207.12411,
author="S. Bright-Thonney, I. Moult, B. Nachman, S. Prestel",
title="{Systematic Quark/Gluon Identification with Ratios of Likelihoods}",
eprint="2207.12411",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Systematic Quark/Gluon Identification with Ratios of Likelihoods
Discriminating between quark- and gluon-initiated jets has long been a central focus of jet substructure, leading to the introduction of numerous observables and calculations to high perturbative accuracy. At the same time, there have been many attempts to fully exploit the jet radiation pattern using tools from statistics and machine learning. We propose a new approach that combines a deep analytic understanding of jet substructure with the optimality promised by machine learning and statistics. After specifying an approximation to the full emission phase space, we show how to construct the optimal observable for a given classification task. This procedure is demonstrated for the case of quark and gluon jets, where we show how to systematically capture sub-eikonal corrections in the splitting functions, and prove that a linear combination of weighted multiplicities is the optimal observable. In addition to providing a new and powerful framework for systematically improving jet substructure observables, we demonstrate the performance of several quark versus gluon jet tagging observables in parton-level Monte Carlo simulations, and find that they perform at or near the level of a deep neural network classifier. Combined with the rapid recent progress in the development of higher order parton showers, we believe that our approach provides a basis for systematically exploiting subleading effects in jet substructure analyses at the Large Hadron Collider (LHC) and beyond. ×
Machine learning-assisted measurement of multi-differential lepton-jet correlations in deep-inelastic scattering with the H1 detector
H1 Collaboration
Public note: H1prelim-22-031
Cite Article
@article{H1prelim-22-031,
author="{H1 Collaboration}",
title="{Machine learning-assisted measurement of multi-differential lepton-jet correlations in deep-inelastic scattering with the H1 detector}",
journal = "H1prelim-22-031",
url = "https://www-h1.desy.de/h1/www/publications/htmlsplit/H1prelim-22-031.long.html",
year = "2022",
}
×
Machine learning-assisted measurement of multi-differential lepton-jet correlations in deep-inelastic scattering with the H1 detector
The lepton-jet momentum imbalance in deep inelastic scattering events offers a useful set of observables for unifying collinear and transverse-momentum-dependent frameworks for describing high energy Quantum Chromodynamics (QCD) interactions. We recently performed a measurement of this imbalance in the laboratory frame using positron-proton collisions from HERA Run II [1]. With a new machine learning method, the measurement was performed simultaneously and unbinned in eight dimensions. The results in Ref. [1] were presented projected onto four key observables. This paper extends those results by showing the multi-differential nature of the unfolded result. In particular, we present lepton-jet correlation observables differentially in kinematic properties of the scattering process, Q^2 and y. We compare these results with parton shower Monte Carlo predictions as well as calculations from perturbative QCD and from a Transverse Momentum Dependent (TMD) factorization framework. ×
Score-based Generative Models for Calorimeter Shower Simulation
V. Mikuni, B. Nachman
e-Print: 2206.11898
Cite Article
@article{2206.11898,
author="{V. Mikuni, B. Nachman}",
title="{Score-based Generative Models for Calorimeter Shower Simulation}",
eprint="2206.11898",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Score-based Generative Models for Calorimeter Shower Simulation
Score-based generative models are a new class of generative algorithms that have been shown to produce realistic images even in high dimensional spaces, currently surpassing other state-of-the-art models for different benchmark categories and applications. In this work we introduce CaloScore, a score-based generative model for collider physics applied to calorimeter shower generation. Three different diffusion models are investigated using the Fast Calorimeter Simulation Challenge 2022 dataset. CaloScore is the first application of a score-based generative model in collider physics and is able to produce high-fidelity calorimeter images for all datasets, providing an alternative paradigm for calorimeter shower simulation. ×
Going off topics to demix quark and gluon jets in $\alpha_S$ extractions
M. LeBlanc, B. Nachman, C. Sauer
e-Print: 2206.10642
Cite Article
@article{2206.10642,
author="{M. LeBlanc, B. Nachman, C. Sauer}",
title="{Going off topics to demix quark and gluon jets in $\alpha_S$ extractions}",
eprint="2206.10642",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Going off topics to demix quark and gluon jets in $\alpha_S$ extractions
Quantum chromodynamics is the theory of the strong interaction between quarks and gluons; the coupling strength of the interaction, $\alpha_S$, is the least precisely-known of all interactions in nature. An extraction of the strong coupling from the radiation pattern within jets would provide a complementary approach to conventional extractions from jet production rates and hadronic event shapes, and would be a key achievement of jet substructure at the Large Hadron Collider (LHC). Presently, the relative fraction of quark and gluon jets in a sample is the limiting factor in such extractions, as this fraction is degenerate with the value of $\alpha_S$ for the most well-understood observables. To overcome this limitation, we apply recently proposed techniques to statistically demix multiple mixtures of jets and obtain purified quark and gluon distributions based on an operational definition. We illustrate that studying quark and gluon jet substructure separately can significantly improve the sensitivity of such extractions of the strong coupling. We also discuss how using machine learning techniques or infrared- and collinear-unsafe information can improve the demixing performance without the loss of theoretical control. While theoretical research is required to connect the extracted topics with the quark and gluon objects in cross section calculations, our study illustrates the potential of demixing to reduce the dominant uncertainty for the $\alpha_S$ extraction from jet substructure at the LHC. ×
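The statistical demixing step can be sketched on a toy binned example in the style of the "jet topics" construction (an illustration with assumed templates and mixture fractions, not the paper's code): given two mixtures of the same two underlying distributions, the reducibility factors are minima of bin-wise ratios, and mutual subtraction recovers the pure distributions whenever each has an anchor bin where the other vanishes.

```python
def demix(f1, f2):
    """Topics-style demixing of two binned, normalized mixtures f1, f2.

    kappa_ij is the reducibility factor: the largest amount of f_j that
    can be subtracted from f_i while keeping every bin non-negative.
    """
    k12 = min(a / b for a, b in zip(f1, f2) if b > 0)
    k21 = min(b / a for a, b in zip(f1, f2) if a > 0)
    t1 = [(a - k12 * b) / (1 - k12) for a, b in zip(f1, f2)]
    t2 = [(b - k21 * a) / (1 - k21) for a, b in zip(f1, f2)]
    return t1, t2

# Toy 'quark' and 'gluon' templates, each with an anchor bin
q = [0.5, 0.5, 0.0]
g = [0.0, 0.5, 0.5]
# Two samples with different (here assumed) quark fractions, 0.8 and 0.3
f1 = [0.8 * a + 0.2 * b for a, b in zip(q, g)]
f2 = [0.3 * a + 0.7 * b for a, b in zip(q, g)]

t1, t2 = demix(f1, f2)
print(t1)  # recovers q = [0.5, 0.5, 0.0] up to rounding
print(t2)  # recovers g = [0.0, 0.5, 0.5] up to rounding
```

The point of the construction is that the pure distributions are obtained operationally from the two mixed samples alone, without knowing the mixture fractions in advance.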
Quantum Anomaly Detection for Collider Physics
S. Alvi, C. Bauer, B. Nachman
e-Print: 2206.08391
Cite Article
@article{2206.08391,
author="{S. Alvi, C. Bauer, B. Nachman}",
title="{Quantum Anomaly Detection for Collider Physics}",
eprint="2206.08391",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Quantum Anomaly Detection for Collider Physics
Quantum Machine Learning (QML) is an exciting tool that has received significant recent attention due in part to advances in quantum computing hardware. While there is currently no formal guarantee that QML is superior to classical ML for relevant problems, there have been many claims of an empirical advantage with high energy physics datasets. These studies typically do not claim an exponential speedup in training, but instead usually focus on an improved performance with limited training data. We explore an analysis that is characterized by a low statistics dataset. In particular, we study an anomaly detection task in the four-lepton final state at the Large Hadron Collider that is limited by a small dataset. We explore the application of QML in a semi-supervised mode to look for new physics without specifying a particular signal model hypothesis. We find no evidence that QML provides any advantage over classical ML. It could be that a case where QML is superior to classical ML for collider physics will be established in the future, but for now, classical ML is a powerful tool that will continue to expand the science of the LHC and beyond. ×
Self-supervised Anomaly Detection for New Physics
B. M. Dillon, R. Mastandrea, B. Nachman
e-Print: 2205.10380
Cite Article
@article{2205.10380,
author="{B. M. Dillon, R. Mastandrea, B. Nachman}",
title="{Self-supervised Anomaly Detection for New Physics}",
eprint="2205.10380",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Self-supervised Anomaly Detection for New Physics
We investigate a method of model-agnostic anomaly detection through studying jets, collimated sprays of particles produced in high-energy collisions. We train a transformer neural network to encode simulated QCD "event space" dijets into a low-dimensional "latent space" representation. We optimize the network using the self-supervised contrastive loss, which encourages the preservation of known physical symmetries of the dijets. We then train a binary classifier to discriminate a BSM resonant dijet signal from a QCD dijet background both in the event space and the latent space representations. We find the classifier performances on the event and latent spaces to be comparable. We finally perform an anomaly detection search using a weakly supervised bump hunt on the latent space dijets, finding again a comparable performance to a search run on the physical space dijets. This opens the door to using low-dimensional latent representations as a computationally efficient space for resonant anomaly detection in generic particle collision events. ×
Bias and Priors in Machine Learning Calibrations for High Energy Physics
R. Gambhir, B. Nachman, J. Thaler
Phys. Rev. D 106 (2022) 036011 · e-Print: 2205.05084
Cite Article
@article{2205.05084,
author="{R. Gambhir, B. Nachman, J. Thaler}",
title="{Bias and Priors in Machine Learning Calibrations for High Energy Physics}",
eprint="2205.05084",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="Phys. Rev. D",
volume="106",
pages="036011",
doi="10.1103/PhysRevD.106.036011",
year = "2022",
}
×
Bias and Priors in Machine Learning Calibrations for High Energy Physics
Machine learning offers an exciting opportunity to improve the calibration of nearly all reconstructed objects in high-energy physics detectors. However, machine learning approaches often depend on the spectra of examples used during training, an issue known as prior dependence. This is an undesirable property of a calibration, which needs to be applicable in a variety of environments. The purpose of this paper is to explicitly highlight the prior dependence of some machine learning-based calibration strategies. We demonstrate how some recent proposals for both simulation-based and data-based calibrations inherit properties of the sample used for training, which can result in biases for downstream analyses. In the case of simulation-based calibration, we argue that our recently proposed Gaussian Ansatz approach can avoid some of the pitfalls of prior dependence, whereas prior-independent data-based calibration remains an open problem. ×
Learning Uncertainties the Frequentist Way: Calibration and Correlation in High Energy Physics
R. Gambhir, B. Nachman, J. Thaler
Phys. Rev. Lett. 129 (2022) 082001 · e-Print: 2205.03413
Cite Article
@article{2205.03413,
author="{R. Gambhir, B. Nachman, J. Thaler}",
title="{Learning Uncertainties the Frequentist Way: Calibration and Correlation in High Energy Physics}",
eprint="2205.03413",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="Phys. Rev. Lett.",
volume="129",
pages="082001",
doi="10.1103/PhysRevLett.129.082001",
year = "2022",
}
×
Learning Uncertainties the Frequentist Way: Calibration and Correlation in High Energy Physics
Calibration is a common experimental physics problem, whose goal is to infer the value and uncertainty of an unobservable quantity Z given a measured quantity X. Additionally, one would like to quantify the extent to which X and Z are correlated. In this paper, we present a machine learning framework for performing frequentist maximum likelihood inference with Gaussian uncertainty estimation, which also quantifies the mutual information between the unobservable and measured quantities. This framework uses the Donsker-Varadhan representation of the Kullback-Leibler divergence -- parametrized with a novel Gaussian Ansatz -- to enable a simultaneous extraction of the maximum likelihood values, uncertainties, and mutual information in a single training. We demonstrate our framework by extracting jet energy corrections and resolution factors from a simulation of the CMS detector at the Large Hadron Collider. By leveraging the high-dimensional feature space inside jets, we improve upon the nominal CMS jet resolution by upwards of 15%. ×
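The Donsker-Varadhan (DV) step that this abstract relies on can be checked in closed form for a Gaussian toy case (a sketch using an assumed linear test-function family T(x) = a·x, not the paper's Gaussian Ansatz): for P = N(mu, 1) and Q = N(0, 1), the DV objective E_P[T] − log E_Q[e^T] reduces analytically to a·mu − a²/2, and a scan over a maximizes it at a = mu, recovering the exact KL divergence mu²/2.

```python
# Donsker-Varadhan lower bound: KL(P||Q) >= E_P[T] - log E_Q[e^T].
# For P = N(mu, 1), Q = N(0, 1), and T(x) = a * x:
#   E_P[a x] = a * mu, and log E_Q[e^{a x}] = a^2 / 2 (Gaussian MGF),
# so the objective is a * mu - a^2 / 2, with supremum mu^2 / 2 at a = mu.

def dv_objective(a, mu):
    return a * mu - a * a / 2

mu = 1.0
grid = [i / 1000 for i in range(-2000, 2001)]
best = max(grid, key=lambda a: dv_objective(a, mu))
bound = dv_objective(best, mu)

print(best)   # optimal slope: a = mu = 1.0
print(bound)  # DV bound: 0.5 = mu^2 / 2, the exact KL(P||Q)
```

In the paper the scan over a parametric family is replaced by training a neural network, and the optimum simultaneously yields the maximum likelihood value, its uncertainty, and the mutual information.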
Multi-differential Jet Substructure Measurement in High Q^2 DIS Events with HERA-II Data
H1 Collaboration
Public note: H1prelim-22-034
Cite Article
@article{H1prelim-22-034,
author="{H1 Collaboration}",
title="{Multi-differential Jet Substructure Measurement in High $Q^2$ DIS Events with HERA-II Data}",
journal = "H1prelim-22-034",
url = "https://www-h1.desy.de/h1/www/publications/htmlsplit/H1prelim-22-034.long.html",
year = "2022",
}
×
Multi-differential Jet Substructure Measurement in High Q^2 DIS Events with HERA-II Data
A measurement of different jet substructure observables in high Q^2 neutral-current deep-inelastic scattering events close to the Born kinematics is presented. Differential and multi-differential cross-sections are presented as a function of the jet's charged constituent multiplicity, momentum dispersion, jet charge, as well as three values of jet angularities. Results are split into multiple Q^2 intervals, probing the evolution of jet observables with energy scale. These measurements probe the description of parton showers and provide insight into non-perturbative QCD. Unfolded results are derived without binning using the machine learning-based method Omnifold. All observables are unfolded simultaneously by using reconstructed particles inside jets as inputs to a graph neural network. Results are compared with a variety of predictions. ×
Exploring the Universality of Hadronic Jet Classification
K. Cheung, Y. Chung, S. Hsu, B. Nachman
e-Print: 2204.03812
Cite Article
@article{2204.03812,
author="{K. Cheung, Y. Chung, S. Hsu, B. Nachman}",
title="{Exploring the Universality of Hadronic Jet Classification}",
eprint="2204.03812",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Exploring the Universality of Hadronic Jet Classification
The modeling of jet substructure significantly differs between Parton Shower Monte Carlo (PSMC) programs. Despite this, we observe that machine learning classifiers trained on different PSMCs learn nearly the same function. This means that when these classifiers are applied to the same PSMC for testing, they result in nearly the same performance. This classifier universality indicates that a machine learning model trained on one simulation and tested on another simulation (or data) will likely be optimal. Our observations are based on detailed studies of shallow and deep neural networks applied to simulated Lorentz boosted Higgs jet tagging at the LHC. ×
Optimizing Observables with Machine Learning for Better Unfolding
M. Arratia, D. Britzger, O. Long, B. Nachman
JINST 17 (2022) P07009 · e-Print: 2203.16722
Cite Article
@article{2203.16722,
author="{M. Arratia, D. Britzger, O. Long, B. Nachman}",
title="{Optimizing Observables with Machine Learning for Better Unfolding}",
eprint="2203.16722",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "JINST",
volume = "17",
pages = "P07009",
doi = "10.1088/1748-0221/17/07/P07009",
year = "2022",
}
×
Optimizing Observables with Machine Learning for Better Unfolding
Most measurements in particle and nuclear physics use matrix-based unfolding algorithms to correct for detector effects. In nearly all cases, the observable is defined analogously at the particle and detector level. We point out that while the particle-level observable needs to be physically motivated to link with theory, the detector-level need not be and can be optimized. We show that using deep learning to define detector-level observables has the capability to improve the measurement when combined with standard unfolding methods. ×
Towards a Deep Learning Model for Hadronization
A. Ghosh, X. Ju, B. Nachman, A. Siodmok
e-Print: 2203.12660
Cite Article
@article{2203.12660,
author="{A. Ghosh, X. Ju, B. Nachman, A. Siodmok}",
title="{Towards a Deep Learning Model for Hadronization}",
eprint="2203.12660",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Towards a Deep Learning Model for Hadronization
Hadronization is a complex quantum process whereby quarks and gluons become hadrons. The widely-used models of hadronization in event generators are based on physically-inspired phenomenological models with many free parameters. We propose an alternative approach whereby neural networks are used instead. Deep generative models are highly flexible, differentiable, and compatible with Graphics Processing Units (GPUs). We make the first step towards a data-driven machine learning-based hadronization model by replacing a component of the hadronization model within the Herwig event generator (cluster model) with a Generative Adversarial Network (GAN). We show that a GAN is capable of reproducing the kinematic properties of cluster decays. Furthermore, we integrate this model into Herwig to generate entire events that can be compared with the output of the public Herwig simulator as well as with electron-positron data. ×
Improving Quantum Simulation Efficiency of Final State Radiation with Dynamic Quantum Circuits
P. Deliyannis, J. Sud, D. Chamaki, Z. Webb-Mack, C. W. Bauer, B. Nachman
Phys. Rev. D 106 (2022) 036007 · e-Print: 2203.10018
Cite Article
@article{2203.10018,
author="P. Deliyannis, J. Sud, D. Chamaki, Z. Webb-Mack, C. W. Bauer, B. Nachman",
title="{Improving Quantum Simulation Efficiency of Final State Radiation with Dynamic Quantum Circuits}",
eprint="2203.10018",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
pages = "036007",
volume = "106",
year = "2022",
doi="10.1103/PhysRevD.106.036007",
}
×
Improving Quantum Simulation Efficiency of Final State Radiation with Dynamic Quantum Circuits
Reference arXiv:1904.03196 recently introduced an algorithm (QPS) for simulating parton showers with intermediate flavor states using polynomial resources on a digital quantum computer. We make use of a new quantum hardware capability called dynamical quantum computing to improve the scaling of this algorithm, significantly improving the method's precision. In particular, we modify the quantum parton shower circuit to incorporate mid-circuit qubit measurements, resets, and quantum operations conditioned on classical information. This reduces the computational depth and the qubit requirements. Using "matrix product state" statevector simulators, we demonstrate that the improved algorithm yields expected results for 2, 3, 4, and 5-steps of the algorithm. We compare absolute costs with the original QPS algorithm, and show that dynamical quantum computing can significantly reduce costs in the class of digital quantum algorithms representing quantum walks (which includes the QPS). ×
Simulation-based Anomaly Detection for Multileptons at the LHC
K. Krzyzanska and B. Nachman
e-Print: 2203.09601
Cite Article
@article{2203.09601,
author="K. Krzyzanska and B. Nachman",
title="{Simulation-based Anomaly Detection for Multileptons at the LHC}",
eprint="2203.09601",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
Simulation-based Anomaly Detection for Multileptons at the LHC
Decays of Higgs boson-like particles into multileptons is a well-motivated process for investigating physics beyond the Standard Model (SM). A unique feature of this final state is the precision with which the SM is known. As a result, simulations are used directly to estimate the background. Current searches consider specific models and typically focus on those with a single free parameter to simplify the analysis and interpretation. In this paper, we explore recent proposals for signal model agnostic searches using machine learning in the multilepton final state. These tools can be used to simultaneously search for many models, some of which have no dedicated search at the Large Hadron Collider. We find that the machine learning methods offer broad coverage across parameter space beyond where current searches are sensitive, at the cost of a performance loss of only about one order of magnitude relative to dedicated searches. ×
Data-Directed Search for New Physics based on Symmetries of the SM
M. Birman, B. Nachman, R. Sebbah, G. Sela, O. Turetz, S. Bressler
Eur. Phys. J. C 82 (2022) 508 · e-Print: 2203.07529
Cite Article
@article{2203.07529,
author="M. Birman, B. Nachman, R. Sebbah, G. Sela, O. Turetz, S. Bressler",
title="{Data-Directed Search for New Physics based on Symmetries of the SM}",
eprint="2203.07529",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Eur. Phys. J. C",
volume="82",
pages="508",
doi="10.1140/epjc/s10052-022-10454-2",
year = "2022",
}
×
Data-Directed Search for New Physics based on Symmetries of the SM
We propose exploiting symmetries (exact or approximate) of the Standard Model (SM) to search for physics Beyond the Standard Model (BSM) using the data-directed paradigm (DDP). Symmetries are very powerful because they provide two samples that can be compared without requiring simulation. Focusing on the data, exclusive selections which exhibit significant asymmetry can be identified efficiently and marked for further study. Using a simple and generic test statistic which compares two matrices already provides good sensitivity, only slightly worse than that of the profile likelihood ratio test statistic which relies on the exact knowledge of the signal shape. This can be exploited for rapidly scanning large portions of the measured data, in an attempt to identify regions of interest. Weakly supervised Neural Networks could be used for this purpose as well. ×
A Holistic Approach to Predicting Top Quark Kinematic Properties with the Covariant Particle Transformer
S. Qiu, S. Han, X. Ju, B. Nachman, H. Wang
e-Print: 2203.05687
Cite Article
@article{2203.05687,
author="S. Qiu, S. Han, X. Ju, B. Nachman, H. Wang",
title="{A Holistic Approach to Predicting Top Quark Kinematic Properties with the Covariant Particle Transformer}",
eprint="2203.05687",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2022",
}
×
A Holistic Approach to Predicting Top Quark Kinematic Properties with the Covariant Particle Transformer
Precise reconstruction of top quark properties is a challenging task at the Large Hadron Collider due to combinatorial backgrounds and missing information. We introduce a physics-informed neural network architecture called the Covariant Particle Transformer (CPT) for directly predicting the top quark kinematic properties from reconstructed final state objects. This approach is permutation invariant and partially Lorentz covariant and can account for a variable number of input objects. In contrast to previous machine learning-based reconstruction methods, CPT is able to predict top quark four-momenta regardless of the jet multiplicity in the event. Using simulations, we show that the CPT performs favorably compared with other machine learning top quark reconstruction approaches. ×
Ephemeral Learning -- Augmenting Triggers with Online-Trained Normalizing Flows
A. Butter et al.
SciPost Phys. 13 (2022) 087 · e-Print: 2202.09375
@article{2202.09375,
author = "A. Butter and others",
title = "{Ephemeral Learning -- Augmenting Triggers with Online-Trained Normalizing Flows}",
eprint = "2202.09375",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "SciPost Phys.",
volume = "13",
pages = "087",
doi = "10.21468/SciPostPhys.13.4.087",
year = "2022",
}
The large data rates at the LHC require an online trigger system to select relevant collisions. Rather than compressing individual events, we propose to compress an entire data set at once. We use a normalizing flow as a deep generative model to learn the probability density of the data online. The events are then represented by the generative neural network and can be inspected offline for anomalies or used for other analysis purposes. We demonstrate our new approach for a toy model and a correlation-enhanced bump hunt.
Calomplification -- The Power of Generative Calorimeter Models
S. Bieringer et al.
JINST 17 (2022) P09028 · e-Print: 2202.07352
@article{2202.07352,
author = "S. Bieringer and others",
title = "{Calomplification -- The Power of Generative Calorimeter Models}",
eprint = "2202.07352",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JINST",
volume = "17",
pages = "P09028",
doi = "10.1088/1748-0221/17/09/P09028",
year = "2022",
}
Motivated by the high computational costs of classical simulations, machine-learned generative models can be extremely useful in particle physics and elsewhere. They become especially attractive when surrogate models can efficiently learn the underlying distribution, such that a generated sample outperforms a training sample of limited size. This kind of GANplification has been observed for simple Gaussian models. We show the same effect for a physics simulation, specifically photon showers in an electromagnetic calorimeter.
SymmetryGAN: Symmetry Discovery with Deep Learning
K. Desai, B. Nachman, J. Thaler
Phys. Rev. D 105 (2022) 096031 · e-Print: 2112.05722
@article{2112.05722,
author = "K. Desai and B. Nachman and J. Thaler",
title = "{SymmetryGAN: Symmetry Discovery with Deep Learning}",
eprint = "2112.05722",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "105",
pages = "096031",
doi = "10.1103/PhysRevD.105.096031",
year = "2022",
}
What are the symmetries of a dataset? Whereas the symmetries of an individual data element can be characterized by its invariance under various transformations, the symmetries of an ensemble of data elements are ambiguous due to Jacobian factors introduced while changing coordinates. In this paper, we provide a rigorous statistical definition of the symmetries of a dataset, which involves inertial reference densities, in analogy to inertial frames in classical mechanics. We then propose SymmetryGAN as a novel and powerful approach to automatically discover symmetries using a deep learning method based on generative adversarial networks (GANs). When applied to Gaussian examples, SymmetryGAN shows excellent empirical performance, in agreement with expectations from the analytic loss landscape. SymmetryGAN is then applied to simulated dijet events from the Large Hadron Collider (LHC) to demonstrate the potential utility of this method in high energy collider physics applications. Going beyond symmetry discovery, we consider procedures to infer the underlying symmetry group from empirical data.
Machine Learning in the Search for New Fundamental Physics
G. Karagiorgi, G. Kasieczka, S. Kravitz, B. Nachman, D. Shih
Nat. Rev. Phys. (2022) · e-Print: 2112.03769
@article{2112.03769,
author = "G. Karagiorgi and G. Kasieczka and S. Kravitz and B. Nachman and D. Shih",
title = "{Machine Learning in the Search for New Fundamental Physics}",
eprint = "2112.03769",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Nat. Rev. Phys.",
doi = "10.1038/s42254-022-00455-1",
year = "2022",
}
Machine learning plays a crucial role in enhancing and accelerating the search for new fundamental physics. We review the state of machine learning methods and applications for new physics searches in the context of terrestrial high energy physics experiments, including the Large Hadron Collider, rare event searches, and neutrino experiments. While machine learning has a long history in these fields, the deep learning revolution (early 2010s) has yielded a qualitative shift in terms of the scope and ambition of research. These modern machine learning developments are the focus of the present review.
There is a growing need for anomaly detection methods that can broaden the search for new particles in a model-agnostic manner. Most proposals for new methods focus exclusively on signal sensitivity. However, it is not enough to select anomalous events - there must also be a strategy to provide context to the selected events. We propose the first complete strategy for unsupervised detection of non-resonant anomalies that includes both signal sensitivity and a data-driven method for background estimation. Our technique is built out of two simultaneously-trained autoencoders that are forced to be decorrelated from each other. This method can be deployed offline for non-resonant anomaly detection and is also the first complete online-compatible anomaly detection strategy. We show that our method achieves excellent performance on a variety of signals prepared for the ADC2021 data challenge.
Computationally Efficient Zero Noise Extrapolation for Quantum Gate Error Mitigation
V. R. Pascuzzi, A. He, C. W. Bauer, W. A. de Jong, B. Nachman
Phys. Rev. A 105 (2022) 042406 · e-Print: 2110.13338
@article{2110.13338,
author = "V. R. Pascuzzi and A. He and C. W. Bauer and W. A. de Jong and B. Nachman",
title = "{Computationally Efficient Zero Noise Extrapolation for Quantum Gate Error Mitigation}",
eprint = "2110.13338",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
journal = "Phys. Rev. A",
volume = "105",
pages = "042406",
doi = "10.1103/PhysRevA.105.042406",
year = "2022",
}
Zero noise extrapolation (ZNE) is a widely used technique for gate error mitigation on near-term quantum computers because it can be implemented in software and does not require knowledge of the quantum computer noise parameters. Traditional ZNE requires a significant resource overhead in terms of quantum operations. A recent proposal using a targeted (or random) instead of fixed identity insertion method (RIIM versus FIIM) requires significantly fewer quantum gates for the same formal precision. We start by showing that RIIM can allow ZNE to be deployed on deeper circuits than FIIM, but requires many more measurements to maintain the same statistical uncertainty. We then develop two extensions to FIIM and RIIM. The List Identity Insertion Method (LIIM) makes it possible to mitigate the error from specific CNOT gates, typically those with the largest error. The Set Identity Insertion Method (SIIM) naturally interpolates between the measurement-efficient FIIM and the gate-efficient RIIM, allowing one to trade fewer CNOT gates for more measurements. Finally, we investigate a way to boost the number of measurements, namely running ZNE in parallel on as many quantum devices as are available. We explore the performance of RIIM in a parallel setting where there is a non-trivial spread in noise across sets of qubits within or across quantum computers.
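Purely as an illustration of the extrapolation step (the identity-insertion circuit constructions themselves are not modeled), the sketch below fits a noisy expectation value as a polynomial in the noise scale factor and reads off the zero-noise intercept. The depolarizing noise model, gate count, and scale factors are invented for this toy.

```python
import numpy as np

def zne_extrapolate(scale_factors, expectations, degree=1):
    """Fit <O>(c) versus the noise scale factor c and return the
    extrapolated zero-noise value <O>(c=0)."""
    coeffs = np.polyfit(scale_factors, expectations, deg=degree)
    return np.polyval(coeffs, 0.0)

# Toy noise model: each CNOT depolarizes the observable by a factor
# (1 - p); inserting identity pairs multiplies the effective gate
# count, so the noise scale factor c enters the exponent.
p, n_cnot, true_value = 0.02, 10, 1.0
scales = np.array([1, 3, 5])          # odd, FIIM-style scale factors
noisy = true_value * (1 - p) ** (n_cnot * scales)

mitigated = zne_extrapolate(scales, noisy, degree=2)
```

With three scale factors and a quadratic fit, the extrapolation is exact interpolation; the mitigated value lands much closer to the true expectation than the raw (c=1) measurement.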
Reconstructing the Kinematics of Deep Inelastic Scattering with Deep Learning
M. Arratia, D. Britzger, O. Long, B. Nachman
e-Print: 2110.05505
@article{2110.05505,
author = "M. Arratia and D. Britzger and O. Long and B. Nachman",
title = "{Reconstructing the Kinematics of Deep Inelastic Scattering with Deep Learning}",
eprint = "2110.05505",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2021",
}
We introduce a method to reconstruct the kinematics of neutral-current deep inelastic scattering (DIS) using a deep neural network (DNN). Unlike traditional methods, it exploits the full kinematic information of both the scattered electron and the hadronic final state, and it accounts for QED radiation by identifying events with radiated photons and event-level momentum imbalance. The method is studied with simulated events at HERA and the future Electron-Ion Collider (EIC). We show that the DNN method outperforms all the traditional methods over the full phase space, improving resolution and reducing bias. Our method has the potential to extend the kinematic reach of future experiments at the EIC, and thus their discovery potential in polarized and nuclear DIS.
Machine learning tools have empowered a qualitatively new way to perform differential cross section measurements whereby the data are unbinned, possibly in many dimensions. Unbinned measurements can enable, improve, or at least simplify comparisons between experiments and with theoretical predictions. Furthermore, many-dimensional measurements can be used to define observables after the measurement instead of before. There is currently no community standard for publishing unbinned data. While essentially no measurements of this type are public yet, unbinned measurements are expected in the near future given recent methodological advances. The purpose of this paper is to propose a scheme for presenting and using unbinned results, which can hopefully form the basis for a community standard to allow for integration into analysis workflows. This is foreseen as the start of an evolving community dialogue that can accommodate future developments in this rapidly evolving field.
Practical considerations for the preparation of multivariate Gaussian states on quantum computers
C. W. Bauer, P. Deliyannis, M. Freytsis, B. Nachman
e-Print: 2109.10918
@article{2109.10918,
author = "C. W. Bauer and P. Deliyannis and M. Freytsis and B. Nachman",
title = "{Practical considerations for the preparation of multivariate Gaussian states on quantum computers}",
eprint = "2109.10918",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
year = "2021",
}
We provide explicit circuits implementing the Kitaev-Webb algorithm for the preparation of multi-dimensional Gaussian states on quantum computers. While asymptotically efficient due to its polynomial scaling, we find that the circuits implementing the preparation of one-dimensional Gaussian states and those subsequently entangling them to reproduce the required covariance matrix differ substantially in terms of both the gates and ancillae required. The operations required for the preparation of one-dimensional Gaussians are sufficiently involved that generic exponentially-scaling state-preparation algorithms are likely to be preferred in the near term for many states of interest. Conversely, polynomial-resource algorithms for implementing multi-dimensional rotations quickly become more efficient for all but the very smallest states, and their deployment will be a key part of any direct multidimensional state preparation method in the future.
A Cautionary Tale of Decorrelating Theory Uncertainties
A. Ghosh and B. Nachman
Eur. Phys. J. C 82 (2022) 46 · e-Print: 2109.08159
@article{2109.08159,
author = "A. Ghosh and B. Nachman",
title = "{A Cautionary Tale of Decorrelating Theory Uncertainties}",
eprint = "2109.08159",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Eur. Phys. J. C",
volume = "82",
pages = "46",
year = "2022",
}
A variety of techniques have been proposed to train machine learning classifiers that are independent of a given feature. While this can be an essential technique for enabling background estimation, it may also be useful for reducing uncertainties. We carefully examine theory uncertainties, which typically do not have a statistical origin. We provide explicit examples of two-point (fragmentation modeling) and continuous (higher-order corrections) uncertainties where decorrelating significantly reduces the apparent uncertainty while the actual uncertainty is much larger. These results suggest that caution is warranted when using decorrelation for these types of uncertainties until a complete decomposition into statistically meaningful components is available.
Classifying Anomalies THrough Outer Density Estimation (CATHODE)
A. Hallin, J. Isaacson, G. Kasieczka, C. Krause, B. Nachman, T. Quadfasel, M. Schlaffer, D. Shih, M. Sommerhalder
e-Print: 2109.00546
@article{2109.00546,
author = "A. Hallin and J. Isaacson and G. Kasieczka and C. Krause and B. Nachman and T. Quadfasel and M. Schlaffer and D. Shih and M. Sommerhalder",
title = "{Classifying Anomalies THrough Outer Density Estimation (CATHODE)}",
eprint = "2109.00546",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2021",
}
We propose a new model-agnostic search strategy for physics beyond the standard model (BSM) at the LHC, based on a novel application of neural density estimation to anomaly detection. Our approach, which we call Classifying Anomalies THrough Outer Density Estimation (CATHODE), assumes the BSM signal is localized in a signal region (defined e.g. using invariant mass). By training a conditional density estimator on a collection of additional features outside the signal region, interpolating it into the signal region, and sampling from it, we produce a collection of events that follow the background model. We can then train a classifier to distinguish the data from the events sampled from the background model, thereby approaching the optimal anomaly detector. Using the LHC Olympics R&D dataset, we demonstrate that CATHODE nearly saturates the best possible performance, and significantly outperforms other approaches that aim to enhance the bump hunt (CWoLa Hunting and ANODE). Finally, we demonstrate that CATHODE is very robust against correlations between the features and maintains nearly-optimal performance even in this more challenging setting.
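The CATHODE workflow (estimate the background density outside the signal region, sample it inside, then discriminate data from the samples) can be caricatured in a few lines. In this toy, a one-dimensional histogram stands in for the conditional neural density estimator, a binned data/background ratio stands in for the trained classifier, and all event counts and distributions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy events: a "mass" m and one auxiliary feature x.
# Background: x ~ N(0,1), independent of m; signal sits in the
# signal region (SR) in m and has x shifted to N(2, 0.5).
m_bkg = rng.uniform(3.0, 4.0, 20000)
x_bkg = rng.normal(0.0, 1.0, 20000)
m_sig = rng.normal(3.5, 0.05, 1000)
x_sig = rng.normal(2.0, 0.5, 1000)

def in_sr(m):
    return (m > 3.4) & (m < 3.6)

# Step 1: learn the background density of x OUTSIDE the SR.
bins = np.linspace(-4, 4, 41)
sideband_x = x_bkg[~in_sr(m_bkg)]
p_bkg, _ = np.histogram(sideband_x, bins=bins, density=True)

# Step 2: sample synthetic background events for the SR from it.
centers = 0.5 * (bins[:-1] + bins[1:])
synth = rng.choice(centers, size=5000, p=p_bkg / p_bkg.sum())

# Step 3: a classifier between SR data and the synthetic sample
# approximates the likelihood ratio; here a binned ratio stands in.
data_x = np.concatenate([x_bkg[in_sr(m_bkg)], x_sig[in_sr(m_sig)]])
n_data, _ = np.histogram(data_x, bins=bins)
n_synth, _ = np.histogram(synth, bins=bins)
score = (n_data + 1) / (n_synth + 1)   # regularized per-bin score

def anomaly_score(x):
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(score) - 1)
    return score[idx]

sig_score = anomaly_score(x_sig[in_sr(m_sig)]).mean()
bkg_score = anomaly_score(x_bkg[in_sr(m_bkg)]).mean()
```

Signal events concentrate where the data exceed the interpolated background, so they receive systematically larger scores than background events in the signal region.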
High-dimensional Anomaly Detection with Radiative Return in electron-positron Collisions
J. Gonski, J. Lai, B. Nachman, I. Ochoa
JHEP 04 (2022) 156 · e-Print: 2108.13451
@article{2108.13451,
author = "J. Gonski and J. Lai and B. Nachman and I. Ochoa",
title = "{High-dimensional Anomaly Detection with Radiative Return in $e^+e^-$ Collisions}",
eprint = "2108.13451",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "04",
pages = "156",
doi = "10.1007/JHEP04(2022)156",
year = "2022",
}
Experiments at a future e+ e− collider will be able to search for new particles with masses below the nominal centre-of-mass energy by analyzing collisions with initial-state radiation (radiative return). We show that machine learning methods based on semisupervised and weakly supervised learning can achieve model-independent sensitivity to the production of new particles in radiative return events. In addition to a first application of these methods in e+ e− collisions, our study is the first to combine weak supervision with variable-dimensional information by deploying a deep sets neural network architecture. We have also investigated some of the experimental aspects of anomaly detection in radiative return events and discuss these in the context of future detector design.
Active Readout Error Mitigation
R. Hicks, B. Kobrin, C. W. Bauer, B. Nachman
e-Print: 2108.12432
@article{2108.12432,
author = "R. Hicks and B. Kobrin and C. W. Bauer and B. Nachman",
title = "{Active Readout Error Mitigation}",
eprint = "2108.12432",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
year = "2021",
}
Mitigating errors is a significant challenge for near-term quantum computers. One of the most important sources of errors is related to the readout of the quantum state into a classical bit stream. A variety of techniques have been proposed to mitigate these errors with post-hoc corrections. We propose a complementary scheme to actively reduce readout errors on a shot-by-shot basis by encoding single qubits, immediately prior to readout, into multi-qubit states. The computational resources of our technique are independent of the circuit depth and fully compatible with current hardware error rates and connectivity. We analyze the potential of our approach using two types of error-correcting codes and, as a proof of principle, demonstrate an 80% improvement in readout error on the IBMQ Mumbai quantum computer.
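The intuition behind the encoding idea can be shown with a purely classical caricature: a repetition code with majority-vote decoding suppresses independent readout flips. The per-qubit error rate, shot count, and code size below are invented for the example and do not model the paper's actual codes or hardware.

```python
import numpy as np

rng = np.random.default_rng(2)

p_read = 0.08          # assumed per-qubit readout-flip probability
shots = 200000

def measure(bit, n_copies):
    """Read out `bit` after copying it onto n_copies qubits and
    decoding by majority vote (classical repetition-code picture)."""
    flips = rng.random((shots, n_copies)) < p_read
    readout = flips.astype(int) ^ bit   # each copy flips independently
    return (readout.sum(axis=1) > n_copies / 2).astype(int)

raw_error = (measure(1, 1) != 1).mean()       # unencoded readout
encoded_error = (measure(1, 3) != 1).mean()   # 3-qubit encoding
```

For independent flips the three-copy error rate is roughly 3p²(1−p) + p³, well below the single-copy rate p, which is why shot-by-shot encoding helps before any post-hoc correction.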
Measurement of lepton-jet correlation in deep-inelastic scattering with the H1 detector using machine learning for unfolding
H1 Collaboration
e-Print: 2108.12376
@article{2108.12376,
author="{H1 Collaboration}",
title="{Measurement of lepton-jet correlation in deep-inelastic scattering with the H1 detector using machine learning for unfolding}",
eprint="2108.12376",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2021",
}
The first measurement of lepton-jet momentum imbalance and azimuthal correlation in lepton-proton scattering at high momentum transfer is presented. These data, taken with the H1 detector at HERA, are corrected for detector effects using the unbinned machine learning algorithm OmniFold, which in this first application considers eight observables simultaneously. The unfolded cross sections are compared to calculations performed within the context of collinear or transverse-momentum-dependent (TMD) factorization in Quantum Chromodynamics (QCD) as well as Monte Carlo event generators. The measurement probes a wide range of QCD phenomena, including TMD parton distribution functions and their evolution with energy in so far unexplored kinematic regions.
Digluon Tagging using sqrt(s) = 13 TeV pp Collisions in the ATLAS Detector
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2021-027
@article{ATL-PHYS-PUB-2021-027,
author="{ATLAS Collaboration}",
title="{Digluon Tagging using $\sqrt{s} = 13$ TeV $pp$ Collisions in the ATLAS Detector}",
journal = "ATL-PHYS-PUB-2021-027",
url = "http://cdsweb.cern.ch/record/2776780",
year = "2021",
}
Jet substructure has played a key role in the development of two-prong taggers designed to identify Lorentz-boosted massive particles. Traditionally, these taggers have focused on Lorentz-boosted W, Z, and Higgs bosons decaying into pairs of quarks. However, there are a variety of models that predict new bosons with two-prong decays at other masses. In particular, light scalar or pseudoscalar particles (a bosons) from extended Higgs sectors or axion-like particle models could result in Lorentz-boosted digluon jets (a to gg). If the mass of the a particle is much less than the mass of the Standard Model Higgs boson, then the two gluons will be collimated inside a single jet. This note studies the properties of digluon jets and investigates advanced techniques based on deep learning to separate them from generic quark and gluon jets.
Neural Conditional Reweighting
B. Nachman and J. Thaler
Phys. Rev. D 105 (2022) 076015 · e-Print: 2107.08979
@article{2107.08979,
author="B. Nachman and J. Thaler",
title="{Neural Conditional Reweighting}",
eprint="2107.08979",
archivePrefix = "arXiv",
journal = "Phys. Rev. D",
volume = "105",
pages = "076015",
doi = "10.1103/PhysRevD.105.076015",
primaryClass = "physics.data-an",
year = "2022",
}
There is a growing use of neural network classifiers as unbinned, high-dimensional (and variable-dimensional) reweighting functions. To date, the focus has been on marginal reweighting, where a subset of features are used for reweighting while all other features are integrated over. There are some situations, though, where it is preferable to condition on auxiliary features instead of marginalizing over them. In this paper, we introduce neural conditional reweighting, which extends neural marginal reweighting to the conditional case. This approach is particularly relevant in high-energy physics experiments for reweighting detector effects conditioned on particle-level truth information. We leverage a custom loss function that not only allows us to achieve neural conditional reweighting through a single training procedure, but also yields sensible interpolation even in the presence of phase space holes. As a specific example, we apply neural conditional reweighting to the energy response of high-energy jets, which could be used to improve the modeling of physics objects in parametrized fast simulation packages.
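The classifier-based reweighting that this work extends rests on one identity: a classifier trained with cross-entropy converges to f(x) = p1(x)/(p0(x)+p1(x)), so its odds ratio f/(1−f) = p1/p0 is the reweighting function. A minimal one-dimensional sketch, with a binned density ratio standing in for the neural network and invented Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two samples of the same observable with different densities.
x0 = rng.normal(0.0, 1.0, 100000)   # "source" to be reweighted
x1 = rng.normal(0.5, 1.0, 100000)   # "target"

# Binned stand-in for the classifier odds ratio w(x) = p1/p0.
bins = np.linspace(-5, 6, 111)
n0, _ = np.histogram(x0, bins=bins)
n1, _ = np.histogram(x1, bins=bins)
w_bin = (n1 + 1) / (n0 + 1)          # regularized likelihood ratio

idx = np.clip(np.digitize(x0, bins) - 1, 0, len(w_bin) - 1)
weights = w_bin[idx]

# The weighted source sample should now match the target mean.
reweighted_mean = np.average(x0, weights=weights)
```

The conditional case in the paper replaces this marginal ratio with one learned as a function of auxiliary (e.g. particle-level) features; that extension is not sketched here.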
New Methods and Datasets for Group Anomaly Detection From Fundamental Physics
G. Kasieczka, B. Nachman, and D. Shih
ANDEA (Anomaly and Novelty Detection, Explanation and Accommodation) Workshop at KDD 2021 · e-Print: 2107.02821
@article{2107.02821,
author = "G. Kasieczka and B. Nachman and D. Shih",
title = "{New Methods and Datasets for Group Anomaly Detection From Fundamental Physics}",
eprint = "2107.02821",
archivePrefix = "arXiv",
primaryClass = "stat.ML",
journal = "ANDEA Workshop at KDD 2021",
year = "2021",
}
The identification of anomalous overdensities in data - group or collective anomaly detection - is a rich problem with a large number of real world applications. However, it has received relatively little attention in the broader ML community, as compared to point anomalies or other types of single instance outliers. One reason for this is the lack of powerful benchmark datasets. In this paper, we first explain how, after the Nobel-prize winning discovery of the Higgs boson, unsupervised group anomaly detection has become a new frontier of fundamental physics (where the motivation is to find new particles and forces). Then we propose a realistic synthetic benchmark dataset (LHCO2020) for the development of group anomaly detection algorithms. Finally, we compare several existing statistically-sound techniques for unsupervised group anomaly detection, and demonstrate their performance on the LHCO2020 dataset.
Measurements of sensor radiation damage in the ATLAS inner detector using leakage currents
ATLAS Collaboration
JINST 16 (2021) P08025 · e-Print: 2106.09287
@article{2106.09287,
author="{ATLAS Collaboration}",
title="{Measurements of sensor radiation damage in the ATLAS inner detector using leakage currents}",
eprint="2106.09287",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal="JINST",
volume="16",
pages="P08025",
doi="10.1088/1748-0221/16/08/P08025",
year = "2021",
}
Non-ionizing energy loss causes bulk damage to the silicon sensors of the ATLAS pixel and strip detectors. This damage has important implications for data-taking operations, charged-particle track reconstruction, detector simulations, and physics analysis. This paper presents simulations and measurements of the leakage current in the ATLAS pixel detector and semiconductor tracker as a function of location in the detector and time, using data collected in Run 1 (2010-2012) and Run 2 (2015-2018) of the Large Hadron Collider. The extracted fluence shows a much stronger |z|-dependence in the innermost layers than is seen in simulation. Furthermore, the overall fluence on the second innermost layer is significantly higher than in simulation, with better agreement in layers at higher radii. These measurements are important for validating the simulation models and can be used in part to justify safety factors for future detector designs and interventions.
Latent Space Refinement for Deep Generative Models
R. Winterhalder, M. Bellegente, B. Nachman
e-Print: 2106.00792
@article{2106.00792,
author = "R. Winterhalder and M. Bellegente and B. Nachman",
title = "{Latent Space Refinement for Deep Generative Models}",
eprint = "2106.00792",
archivePrefix = "arXiv",
primaryClass = "stat.ML",
year = "2021",
}
Deep generative models are becoming widely used across science and industry for a variety of purposes. A common challenge is achieving a precise implicit or explicit representation of the data probability density. Recent proposals have suggested using classifier weights to refine the learned density of deep generative models. We extend this idea to all types of generative models and show how latent space refinement via iterated generative modeling can circumvent topological obstructions and improve precision. This methodology also applies to cases where the target model is non-differentiable and has many internal latent dimensions which must be marginalized over before refinement. We demonstrate our Latent Space Refinement (LaSeR) protocol on a variety of examples, focusing on the combinations of Normalizing Flows and Generative Adversarial Networks.
Preserving New Physics while Simultaneously Unfolding All Observables
P. Komiske, W. P. McCormack, B. Nachman
Phys. Rev. D 104 (2021) 076027 · e-Print: 2105.09923
@article{2105.09923,
author = "P. Komiske and W. P. McCormack and B. Nachman",
title = "{Preserving New Physics while Simultaneously Unfolding All Observables}",
eprint = "2105.09923",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "104",
pages = "076027",
year = "2021",
}
Direct searches for new particles at colliders have traditionally been factorized into model proposals by theorists and model testing by experimentalists. With the recent advent of machine learning methods that allow for the simultaneous unfolding of all observables in a given phase space region, there is a new opportunity to blur these traditional boundaries by performing searches on unfolded data. This could facilitate a research program where data are explored in their natural high dimensionality with as little model bias as possible. We study how the information about physics beyond the Standard Model is preserved by full phase space unfolding using an important physics target at the Large Hadron Collider (LHC): exotic Higgs boson decays involving hadronic final states. We find that if the signal cross section is high enough, information about the new physics is visible in the unfolded data. In some cases, we show quantifiably that all of the information about the new physics is encoded in the unfolded data. Finally, we show that there are still many cases in which the unfolding does not work fully or precisely, such as when the signal cross section is small. This study will serve as an important benchmark for enhancing unfolding methods for the LHC and beyond.
Uncertainty Aware Learning for High Energy Physics
A. Ghosh, B. Nachman, D. Whiteson
Phys. Rev. D 104 (2021) 056026 · e-Print: 2105.08742
@article{2105.08742,
author = "A. Ghosh and B. Nachman and D. Whiteson",
title = "{Uncertainty Aware Learning for High Energy Physics}",
eprint = "2105.08742",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. D",
volume = "104",
pages = "056026",
year = "2021",
}
Machine learning techniques are becoming an integral component of data analysis in High Energy Physics (HEP). These tools provide a significant improvement in sensitivity over traditional analyses by exploiting subtle patterns in high-dimensional feature spaces. These subtle patterns may not be well-modeled by the simulations used for training machine learning methods, resulting in an enhanced sensitivity to systematic uncertainties. Contrary to the traditional wisdom of constructing an analysis strategy that is invariant to systematic uncertainties, we study the use of a classifier that is fully aware of uncertainties and their corresponding nuisance parameters. We show that this dependence can actually enhance the sensitivity to parameters of interest. Studies are performed using a synthetic Gaussian dataset as well as a more realistic HEP dataset based on Higgs boson decays to tau leptons. For both cases, we show that the uncertainty aware approach can achieve a better sensitivity than alternative machine learning strategies.
Identifying the Quantum Properties of Hadronic Resonances using Machine Learning
J. Filipek, S. Hsu, J. Kruper, K. Mohan, and B. Nachman
e-Print: 2105.04582
@article{2105.04582,
author = "J. Filipek and S. Hsu and J. Kruper and K. Mohan and B. Nachman",
title = "{Identifying the Quantum Properties of Hadronic Resonances using Machine Learning}",
eprint = "2105.04582",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2021",
}
With the great promise of deep learning, discoveries of new particles at the Large Hadron Collider (LHC) may be imminent. Following the discovery of a new Beyond the Standard Model particle in an all-hadronic channel, deep learning can also be used to identify its quantum numbers. Convolutional neural networks (CNNs) using jet-images can significantly improve upon existing techniques to identify the quantum chromodynamic (QCD) `color' as well as the spin of a two-prong resonance using its substructure. Additionally, jet-images are useful in determining what information in the jet radiation pattern is useful for classification, which could inspire future taggers. These techniques improve the categorization of new particles and are an important addition to the growing jet substructure toolkit, for searches and measurements at the LHC now and in the future.
Scaffolding Simulations with Deep Learning for High-dimensional Deconvolution
A. Andreassen, P. T. Komiske, E. M. Metodiev, B. Nachman, A. Suresh, and J. Thaler
ICLR simDL workshop (2021) · e-Print: 2105.04448
@article{2105.04448,
author = "A. Andreassen and P. T. Komiske and E. M. Metodiev and B. Nachman and A. Suresh and J. Thaler",
title = "{Scaffolding Simulations with Deep Learning for High-dimensional Deconvolution}",
eprint = "2105.04448",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "ICLR simDL workshop (2021)",
year = "2021",
}
A common setting for scientific inference is the ability to sample from a high-fidelity forward model (simulation) without having an explicit probability density of the data. We propose a simulation-based maximum likelihood deconvolution approach in this setting called OmniFold. Deep learning enables this approach to be naturally unbinned and high-dimensional (even variable-dimensional). In contrast to model parameter estimation, the goal of deconvolution is to remove detector distortions in order to enable a variety of downstream inference tasks. Our approach is the deep learning generalization of the common Richardson-Lucy approach that is also called Iterative Bayesian Unfolding in particle physics. We show how OmniFold can not only remove detector distortions, but it can also account for noise processes and acceptance effects.
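The binned limit of this approach, the Richardson-Lucy / Iterative Bayesian Unfolding iteration mentioned in the abstract, fits in a few lines: fold the current estimate through the response matrix, compare to data, and reweight. The 3-bin response matrix and truth spectrum below are invented for the example (columns sum to one, i.e. unit efficiency).

```python
import numpy as np

# R[j, i] = P(measure bin j | true bin i); columns sum to 1.
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])

truth = np.array([100.0, 300.0, 200.0])
data = R @ truth                     # smeared "detector-level" data

est = np.full(3, data.sum() / 3)     # start from a flat prior
for _ in range(500):
    # Bayes step: fold the estimate, compare to data, reweight.
    folded = R @ est
    est = est * (R.T @ (data / folded))
```

Each iteration preserves the total event count exactly (for unit efficiency), and with noiseless data and an invertible response the iteration converges to the true spectrum; OmniFold replaces the binned reweighting factors with classifier-learned, unbinned ones.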
Measurement of lepton-jet correlations in high Q^2 neutral-current DIS with the H1 detector at HERA
H1 Collaboration
Public note: H1prelim-21-031
Cite Article
@article{H1prelim-21-031,
author="{H1 Collaboration}",
title="Measurement of lepton-jet correlations in high $Q^2$ neutral-current DIS with the H1 detector at HERA}",
journal = "H1prelim-21-031",
url = "https://www-h1.desy.de/h1/www/publications/htmlsplit/H1prelim-21-031.long.html",
year = "2021",
}
×
Measurement of lepton-jet correlations in high Q^2 neutral-current DIS with the H1 detector at HERA
A measurement of jet production in high Q2 neutral-current DIS events close to the Born-level configuration is presented. This cross section is measured differentially as a function of the jet transverse momentum and pseudorapidity, as well as the lepton-jet momentum imbalance and azimuthal angle correlation. The jets are reconstructed in the laboratory frame with the kT algorithm and a distance parameter of 1.0. The data are corrected for detector effects using the OmniFold method, which incorporates a simultaneous and unbinned unfolding in four dimensions using machine learning. The results are compared with leading order Monte Carlo event generators and higher order calculations performed within the context of collinear or transverse-momentum-dependent (TMD) factorization in Quantum Chromodynamics (QCD). The measurement probes a wide range of QCD phenomena, including TMD parton-distribution functions (PDFs) and their evolution with energy. ×
Categorizing Readout Error Correlations on Near Term Quantum Computers
B. Nachman and M. R. Geller
e-Print: 2104.04607
Cite Article
@article{2104.04607,
author="B. Nachman and M. R. Geller",
title="{Categorizing Readout Error Correlations on Near Term Quantum Computers}",
eprint="2104.04607",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
year = "2021",
}
×
Categorizing Readout Error Correlations on Near Term Quantum Computers
Readout errors are a significant source of noise for near term quantum computers. A variety of methods have been proposed to mitigate these errors using classical post processing. For a system with n qubits, the entire readout error profile is specified by a 2^n x 2^n matrix. Recent proposals to use sub-exponential approximations rely on small and/or short-ranged error correlations. In this paper, we introduce and demonstrate a methodology to categorize and quantify multiqubit readout error correlations. Two distinct types of error correlations are considered: sensitivity of the measurement of a given qubit to the state of nearby "spectator" qubits, and measurement operator covariances. We deploy this methodology on IBMQ quantum computers, finding that error correlations are indeed small compared to the single-qubit readout errors on IBMQ Melbourne (15 qubits) and IBMQ Manhattan (65 qubits), but that correlations on IBMQ Melbourne are long-ranged and do not decay with inter-qubit distance. ×
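For context on why sub-exponential approximations of the 2^n x 2^n readout matrix can work: if error correlations vanish entirely, the full matrix factorizes into a Kronecker product of per-qubit 2x2 matrices. A small sketch with illustrative (not device-measured) error rates:

```python
# Fully uncorrelated readout errors: the 2^n x 2^n response matrix is
# the Kronecker product of per-qubit 2x2 matrices, which is what makes
# sub-exponential approximations viable when correlations are small.
# All error rates below are assumed for illustration.

def single_qubit_response(p01, p10):
    """Column-stochastic 2x2 matrix: R[i][j] = P(measure i | true j)."""
    return [[1.0 - p01, p10],
            [p01, 1.0 - p10]]

def kron(a, b):
    """Kronecker product of two matrices stored as nested lists."""
    return [[x * y for x in row_a for y in row_b]
            for row_a in a for row_b in b]

# Two qubits with independent (assumed) error rates:
R_full = kron(single_qubit_response(0.02, 0.08),
              single_qubit_response(0.03, 0.10))

# Both qubits prepared in |11> (bitstring index 3): read off column 3.
p_meas = [row[3] for row in R_full]
print(p_meas)  # P(measure 00, 01, 10, 11 | true 11); sums to 1
```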
Comparing Weak- and Unsupervised Methods for Resonant Anomaly Detection
J. H. Collins, P. Martin-Ramiro, B. Nachman, D. Shih
Eur. Phys. J. C 81 (2021) 617 · e-Print: 2104.02092
Cite Article
@article{2104.02092,
author="J. H. Collins, P. Martin-Ramiro, B. Nachman, D. Shih",
title="{Comparing Weak- and Unsupervised Methods for Resonant Anomaly Detection}",
eprint="2104.02092",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2021",
journal="Eur. Phys. J. C",
volume="81",
pages="617",
doi="10.1140/epjc/s10052-021-09389-x"
}
×
Comparing Weak- and Unsupervised Methods for Resonant Anomaly Detection
Anomaly detection techniques are growing in importance at the Large Hadron Collider (LHC), motivated by the increasing need to search for new physics in a model-agnostic way. In this work, we provide a detailed comparative study between a well-studied unsupervised method called the autoencoder (AE) and a weakly-supervised approach based on the Classification Without Labels (CWoLa) technique. We examine the ability of the two methods to identify a new physics signal at different cross sections in a fully hadronic resonance search. By construction, the AE classification performance is independent of the amount of injected signal. In contrast, the CWoLa performance improves with increasing signal abundance. When integrating these approaches with a complete background estimate, we find that the two methods have complementary sensitivity. In particular, CWoLa is effective at finding diverse and moderately rare signals while the AE can provide sensitivity to very rare signals, but only with certain topologies. We therefore demonstrate that both techniques are complementary and can be used together for anomaly detection at the LHC. ×
Mitigating depolarizing noise on quantum computers with noise-estimation circuits
M. Urbanek, B. Nachman, V. R. Pascuzzi, A. He, C. W. Bauer, W. A. de Jong
e-Print: 2103.08591
Cite Article
@article{2103.08591,
author="M. Urbanek, B. Nachman, V. R. Pascuzzi, A. He, C. W. Bauer, W. A. de Jong",
title="{Mitigating depolarizing noise on quantum computers with noise-estimation circuits}",
eprint="2103.08591",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
year = "2021",
}
×
Mitigating depolarizing noise on quantum computers with noise-estimation circuits
A significant problem for current quantum computers is noise. While there are many distinct noise channels, the depolarizing noise model often appropriately describes average noise for large circuits involving many qubits and gates. We present a method to mitigate the depolarizing noise by first estimating its rate with a noise-estimation circuit and then correcting the output of the target circuit using the estimated rate. The method is experimentally validated on the simulation of the Heisenberg model. We find that our approach in combination with readout-error correction, randomized compiling, and zero-noise extrapolation produces results close to exact results even for circuits containing hundreds of CNOT gates. ×
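The correction step described above can be sketched for the simplest case: a global depolarizing channel with rate p damps the expectation value of a traceless observable by a factor (1 - p), so an estimated rate can be inverted out. The rate estimate and measured value below are illustrative, not from any device:

```python
# Under a global depolarizing model with rate p, a traceless observable
# satisfies <O>_noisy = (1 - p) * <O>_ideal, so the measured value can
# be rescaled once p is estimated from a noise-estimation circuit.

def correct_expectation(noisy_value, p_depol):
    """Invert the depolarizing damping for a traceless observable."""
    return noisy_value / (1.0 - p_depol)

p_est = 0.3   # rate inferred from a noise-estimation circuit (assumed)
noisy = 0.49  # expectation value measured on the target circuit (assumed)
print(correct_expectation(noisy, p_est))  # ≈ 0.7
```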
Quantum Gate Pattern Recognition and Circuit Optimization for Scientific Applications
W. Jang, K. Terashi, M. Saito, C. W. Bauer, B. Nachman, Y. Iiyama, T. Kishimoto, R. Okubo, R. Sawada, J. Tanaka
e-Print: 2102.10008
Cite Article
@article{2102.10008,
author="{W. Jang, K. Terashi, M. Saito, C. W. Bauer, B. Nachman, Y. Iiyama, T. Kishimoto, R. Okubo, R. Sawada, J. Tanaka}",
title="{Quantum Gate Pattern Recognition and Circuit Optimization for Scientific Applications}",
eprint="2102.10008",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
year = "2021",
}
×
Quantum Gate Pattern Recognition and Circuit Optimization for Scientific Applications
There is no unique way to encode a quantum algorithm into a quantum circuit. With limited qubit counts, connectivities, and coherence times, circuit optimization is essential to make the best use of near-term quantum devices. We introduce two separate ideas for circuit optimization and combine them in a multi-tiered quantum circuit optimization protocol called AQCEL. The first ingredient is a technique to recognize repeated patterns of quantum gates, opening up the possibility of future hardware co-optimization. The second ingredient is an approach to reduce circuit complexity by identifying zero- or low-amplitude computational basis states and redundant gates. As a demonstration, AQCEL is deployed on an iterative and efficient quantum algorithm designed to model final state radiation in high energy physics. For this algorithm, our optimization scheme brings a significant reduction in the gate count without losing any accuracy compared to the original circuit. Additionally, we have investigated whether this can be demonstrated on a quantum computer using polynomial resources. Our technique is generic and can be useful for a wide variety of quantum algorithms. ×
Simulating collider physics on quantum computers using effective field theories
C. Bauer, M. Freytsis, B. Nachman
Phys. Rev. Lett. 127 (2021) 212001 · e-Print: 2102.05044
Cite Article
@article{2102.05044,
author="C. Bauer, M. Freytsis, B. Nachman",
title="{Simulating collider physics on quantum computers using effective field theories}",
eprint="2102.05044",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="Phys. Rev. Lett.",
volume="127",
pages="212001",
doi = {10.1103/PhysRevLett.127.212001},
year = "2021",
}
×
Simulating collider physics on quantum computers using effective field theories
Simulating the full dynamics of a quantum field theory over a wide range of energies requires exceptionally large quantum computing resources. Yet for many observables in particle physics, perturbative techniques are sufficient to accurately model all but a constrained range of energies within the validity of the theory. We demonstrate that effective field theories (EFTs) provide an efficient mechanism to separate the high energy dynamics that is easily calculated by traditional perturbation theory from the dynamics at low energy and show how quantum algorithms can be used to simulate the dynamics of the low energy EFT from first principles. As an explicit example we calculate the expectation values of vacuum-to-vacuum and vacuum-to-one-particle transitions in the presence of a time-ordered product of two Wilson lines in scalar field theory, an object closely related to those arising in EFTs of the Standard Model of particle physics. Calculations are performed using simulations of a quantum computer as well as measurements using the IBMQ Manhattan machine. ×
A Living Review of Machine Learning for Particle Physics
M. Feickert and B. Nachman
e-Print: 2102.02770
Cite Article
@article{2102.02770,
author="{M. Feickert and B. Nachman}",
title="{A Living Review of Machine Learning for Particle Physics}",
eprint="2102.02770",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2021",
}
×
A Living Review of Machine Learning for Particle Physics
Modern machine learning techniques, including deep learning, are rapidly being applied, adapted, and developed for high energy physics. Given the fast pace of this research, we have created a living review with the goal of providing a nearly comprehensive list of citations for those developing and applying these approaches to experimental, phenomenological, or theoretical analyses. As a living document, it will be updated as often as possible to incorporate the latest developments. A list of proper (unchanging) reviews can be found within. Papers are grouped into a small set of topics to be as useful as possible. Suggestions and contributions are most welcome, and we provide instructions for participating. ×
The LHC Olympics 2020: A Community Challenge for Anomaly Detection in High Energy Physics
G. Kasieczka, B. Nachman, D. Shih (editors) et al.
Rep. Prog. Phys. 84 (2021) 124201 · e-Print: 2101.08320
Cite Article
@article{2101.08320,
author="{G. Kasieczka, B. Nachman, D. Shih (editors) and others}",
title="{The LHC Olympics 2020: A Community Challenge for Anomaly Detection in High Energy Physics}",
eprint="2101.08320",
journal="Rep. Prog. Phys.",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2021",
pages="124201",
volume="84",
doi="10.1088/1361-6633/ac36b9"
}
×
The LHC Olympics 2020: A Community Challenge for Anomaly Detection in High Energy Physics
A new paradigm for data-driven, model-agnostic new physics searches at colliders is emerging, and aims to leverage recent breakthroughs in anomaly detection and machine learning. In order to develop and benchmark new anomaly detection methods within this framework, it is essential to have standard datasets. To this end, we have created the LHC Olympics 2020, a community challenge accompanied by a set of simulated collider events. Participants in these Olympics have developed their methods using an R&D dataset and then tested them on black boxes: datasets with an unknown anomaly (or not). This paper will review the LHC Olympics 2020 challenge, including an overview of the competition, a description of methods deployed in the competition, lessons learned from the experience, and implications for data analyses with future datasets as well as future colliders. ×
E Pluribus Unum Ex Machina: Learning from Many Collider Events at Once
B. Nachman and J. Thaler
Phys. Rev. D. 103 (2021) 116013 · e-Print: 2101.07263
Cite Article
@article{2101.07263,
author="{B. Nachman and J. Thaler}",
title="{E Pluribus Unum Ex Machina: Learning from Many Collider Events at Once}",
eprint="2101.07263",
journal="Phys. Rev. D",
volume="103",
pages="116013",
archivePrefix = "arXiv",
primaryClass = "physics.data-an",
year = "2021",
}
×
E Pluribus Unum Ex Machina: Learning from Many Collider Events at Once
There have been a number of recent proposals to enhance the performance of machine learning strategies for collider physics by combining many distinct events into a single ensemble feature. To evaluate the efficacy of these proposals, we study the connection between single-event classifiers and multi-event classifiers under the assumption that collider events are independent and identically distributed (IID). We show how one can build optimal multi-event classifiers from single-event classifiers, and we also show how to construct multi-event classifiers such that they produce optimal single-event classifiers. This is illustrated for a Gaussian example as well as for classification tasks relevant for searches and measurements at the Large Hadron Collider. We extend our discussion to regression tasks by showing how they can be phrased in terms of parametrized classifiers. Empirically, we find that training a single-event (per-instance) classifier is more effective than training a multi-event (per-ensemble) classifier, at least for the cases we studied, and we relate this fact to properties of the loss function gradient in the two cases. While we did not identify a clear benefit from using multi-event classifiers in the collider context, we speculate on the potential value of these methods in cases involving only approximate independence, as relevant for jet substructure studies. ×
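For intuition on the IID construction above: a single-event classifier trained with cross-entropy approximates p_sig(x) / (p_sig(x) + p_bkg(x)), so h/(1 - h) is a per-event likelihood ratio, and the optimal multi-event score follows from the product of those ratios. A sketch with hypothetical per-event scores (not outputs of a trained model):

```python
import math

# Combine single-event classifier scores into an optimal multi-event
# score under the IID assumption, via summed log likelihood ratios.

def ensemble_score(per_event_scores):
    """Multi-event score from the product of per-event ratios h/(1-h)."""
    log_lr = sum(math.log(h / (1.0 - h)) for h in per_event_scores)
    # Map the combined likelihood ratio back to a [0, 1] score.
    return 1.0 / (1.0 + math.exp(-log_lr))

# Hypothetical per-event outputs; the ensemble is more signal-like
# than any single event because the evidence accumulates.
print(ensemble_score([0.6, 0.7, 0.55]))
```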
Total recall: episodic memory retrieval, choice, and memory confidence in the rat
H. Joo et al.
Current Biology 31 (2021) 4571 · bioRxiv: 2020.12.14.420174
Cite Article
@article{2012.420174,
author="H. Joo and others",
title="{Total recall: episodic memory retrieval, choice, and memory confidence in the rat}",
eprint="2020.12.14.420174",
archivePrefix = "bioRxiv",
year = "2021",
journal="Current Biology",
doi="10.1016/j.cub.2021.08.013",
}
×
Total recall: episodic memory retrieval, choice, and memory confidence in the rat
Episodic memory enables recollection of past experiences to guide future behavior. Humans know which memories to trust (high confidence) and which to doubt (low confidence). How memory retrieval, memory confidence, and memory-guided decisions are related, however, is not understood. Additionally, whether animals can assess confidence in episodic memories to guide behavior is unknown. We developed a spatial episodic memory task in which rats were incentivized to gamble their time: betting more following a correct choice yielded greater reward. Rat behavior reflected memory confidence, with higher temporal bets following correct choices. We applied modern machine learning to identify a memory decision variable, and built a generative model of memories evolving over time that accurately predicted both choices and confidence reports. Our results reveal in rats an ability thought to exist exclusively in primates, and introduce a unified model of memory dynamics, retrieval, choice, and confidence. ×
Beyond 4D Tracking: Using Cluster Shapes for Track Seeding
P. J. Fox, S. Huang, J. Isaacson, X. Ju, and B. Nachman
JINST 16 (2021) P05001 · e-Print: 2012.04533
Cite Article
@article{2012.04533,
author="{P. J. Fox, S. Huang, J. Isaacson, X. Ju, and B. Nachman}",
title="{Beyond 4D Tracking: Using Cluster Shapes for Track Seeding}",
eprint="2012.04533",
journal="JINST",
volume="16",
pages="P05001",
archivePrefix = "arXiv",
doi="10.1088/1748-0221/16/05/p05001",
primaryClass = "physics.ins-det",
year = "2021",
}
×
Beyond 4D Tracking: Using Cluster Shapes for Track Seeding
Tracking is one of the most time consuming aspects of event reconstruction at the Large Hadron Collider (LHC) and its high-luminosity upgrade (HL-LHC). Innovative detector technologies extend tracking to four-dimensions by including timing in the pattern recognition and parameter estimation. However, present and future hardware already have additional information that is largely unused by existing track seeding algorithms. The shape of clusters provides an additional dimension for track seeding that can significantly reduce the combinatorial challenge of track finding. We use neural networks to show that cluster shapes can reduce significantly the rate of fake combinatorical backgrounds while preserving a high efficiency. We demonstrate this using the information in cluster singlets, doublets and triplets. Numerical results are presented with simulations from the TrackML challenge. ×
Anomaly Detection for Physics Analysis and Less than Supervised Learning
B. Nachman
e-Print: 2010.14554
Cite Article
@article{2010.14554,
author="B. Nachman",
title="{Anomaly Detection for Physics Analysis and Less than Supervised Learning}",
eprint="2010.14554",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2020",
}
×
Anomaly Detection for Physics Analysis and Less than Supervised Learning
Modern machine learning tools offer exciting possibilities to qualitatively change the paradigm for new particle searches. In particular, new methods can broaden the search program by gaining sensitivity to unforeseen scenarios by learning directly from data. There has been a significant growth in new ideas and they are just starting to be applied to experimental data. This chapter introduces these new anomaly detection methods, which range from fully supervised algorithms to unsupervised, and include weakly supervised methods. ×
Enhancing searches for resonances with machine learning and moment decomposition
O. Kitouni, B. Nachman, C. Weisser, M. Williams
JHEP 04 (2021) 70 · e-Print: 2010.09745
Cite Article
@article{2010.09745,
author="O. Kitouni, B. Nachman, C. Weisser, M. Williams",
title="{Enhancing searches for resonances with machine learning and moment decomposition}",
eprint="2010.09745",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "04",
pages = "70",
doi = "10.1007/JHEP04(2021)070",
year = "2021",
}
×
Enhancing searches for resonances with machine learning and moment decomposition
A key challenge in searches for resonant new physics is that classifiers trained to enhance potential signals must not induce localized structures. Such structures could result in a false signal when the background is estimated from data using sideband methods. A variety of techniques have been developed to construct classifiers which are independent from the resonant feature (often a mass). Such strategies are sufficient to avoid localized structures, but are not necessary. We develop a new set of tools using a novel moment loss function (Moment Decomposition or MoDe) which relax the assumption of independence without creating structures in the background. By allowing classifiers to be more flexible, we enhance the sensitivity to new physics without compromising the fidelity of the background estimation. ×
Readout Rebalancing for Near Term Quantum Computers
R. Hicks, C. Bauer, and B. Nachman
Phys. Rev. A 103 (2021) 022407 · e-Print: 2010.07496
Cite Article
@article{2010.07496,
author="R. Hicks, C. Bauer, and B. Nachman",
title="{Readout Rebalancing for Near Term Quantum Computers}",
eprint="2010.07496",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
journal="Phys. Rev. A",
volume="103",
pages="022407",
year = "2021",
}
×
Readout Rebalancing for Near Term Quantum Computers
Readout errors are a significant source of noise for near term intermediate scale quantum computers. Mismeasuring a qubit as a 1 when it should be 0 occurs much less often than mismeasuring a qubit as a 0 when it should have been 1. We make the simple observation that one can improve the readout fidelity of quantum computers by applying targeted X gates prior to performing a measurement. These X gates are placed so that the expected number of qubits in the 1 state is minimized. Classical post processing can undo the effect of the X gates so that the expectation value of any observable is unchanged. We show that the statistical uncertainty following readout error corrections is smaller when using readout rebalancing. The statistical advantage is circuit- and computer-dependent, and is demonstrated for the W state, a Grover search, and for a Gaussian state. The benefit in statistical precision is most pronounced (and nearly a factor of two in some cases) when states with many qubits in the excited state have high probability. ×
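The rebalancing idea above can be illustrated with a toy asymmetric readout-noise simulator; the error rates are assumed for illustration, not taken from any device:

```python
import random

# Flipping a qubit expected to be in |1> with an X gate before
# measurement trades the large 1 -> 0 error rate for the small
# 0 -> 1 rate; a classical bit flip afterwards restores the value.

P_1TO0, P_0TO1 = 0.08, 0.02  # assumed asymmetric readout-error rates

def measure(true_bit, rng):
    """Simulate one noisy single-qubit readout."""
    if true_bit == 1:
        return 0 if rng.random() < P_1TO0 else 1
    return 1 if rng.random() < P_0TO1 else 0

def measure_rebalanced(true_bit, rng):
    """X gate before readout, undone by classical post-processing."""
    return 1 - measure(1 - true_bit, rng)

rng = random.Random(0)
n = 100_000
plain = sum(measure(1, rng) != 1 for _ in range(n)) / n
rebal = sum(measure_rebalanced(1, rng) != 1 for _ in range(n)) / n
print(plain, rebal)  # rebalanced error rate drops to roughly P_0TO1
```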
Parameter Estimation using Neural Networks in the Presence of Detector Effects
A. Andreassen, S. Hsu, B. Nachman, N. Suaysom, A. Suresh
Phys. Rev. D 103 (2021) 036001 · e-Print: 2010.03569
Cite Article
@article{2010.03569,
author="{A. Andreassen, S. Hsu, B. Nachman, N. Suaysom, A. Suresh}",
title="{Parameter Estimation using Neural Networks in the Presence of Detector Effects}",
eprint="2010.03569",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="Phys. Rev. D",
volume="103",
pages="036001",
year = "2021",
}
×
Parameter Estimation using Neural Networks in the Presence of Detector Effects
Histogram-based template fits are the main technique used for estimating parameters of high energy physics Monte Carlo generators. Parameterized neural network reweighting can be used to extend this fitting procedure to many dimensions and does not require binning. If the fit is to be performed using reconstructed data, then expensive detector simulations must be used for training the neural networks. We introduce a new two-level fitting approach that only requires one dataset with detector simulation and then a set of additional generation-level datasets without detector effects included. This Simulation-level fit based on Reweighting Generator-level events with Neural networks (SRGN) is demonstrated using simulated datasets for a variety of examples including a simple Gaussian random variable, parton shower tuning, and the top quark mass extraction. ×
Disentangling Boosted Higgs Boson Production Modes with Machine Learning
Y. Chung, S. Hsu, and B. Nachman
JINST 16 (2021) P07002 · e-Print: 2009.05930
Cite Article
@article{2009.05930,
author="Y. Chung, S. Hsu, and B. Nachman",
title="{Disentangling Boosted Higgs Boson Production Modes with Machine Learning}",
eprint="2009.05930",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JINST",
volume = "16",
pages = "P07002",
year = "2021",
}
×
Disentangling Boosted Higgs Boson Production Modes with Machine Learning
Higgs bosons produced via gluon-gluon fusion (ggF) with large transverse momentum (pT) are sensitive probes of physics beyond the Standard Model. However, high-pT Higgs boson production is contaminated by a diversity of production modes other than ggF: vector boson fusion, production of a Higgs boson in association with a vector boson, and production of a Higgs boson with a top-quark pair. Combining jet substructure and event information with modern machine learning, we demonstrate the ability to focus on particular production modes. These tools hold great discovery potential for boosted Higgs bosons produced via ggF and may also provide additional information about the Higgs sector of the Standard Model in extreme phase space regions for other production modes as well. ×
DCTRGAN: Improving the Precision of Generative Models with Reweighting
S. Diefenbacher, E. Eren, G. Kasieczka, A. Korol, B. Nachman, and D. Shih
JINST 15 (2020) P11004 · e-Print: 2009.03796
Cite Article
@article{2009.03796,
author="S. Diefenbacher, E. Eren, G. Kasieczka, A. Korol, B. Nachman, and D. Shih",
title="{DCTRGAN: Improving the Precision of Generative Models with Reweighting}",
eprint="2009.03796",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Journal of Instrumentation",
volume = "15",
pages = "P11004",
doi = "10.1088/1748-0221/15/11/p11004",
year = "2020",
}
×
DCTRGAN: Improving the Precision of Generative Models with Reweighting
Significant advances in deep learning have led to more widely used and precise neural network-based generative models such as Generative Adversarial Networks (GANs). We introduce a post-hoc correction to deep generative models to further improve their fidelity, based on the Deep neural networks using Classification for Tuning and Reweighting (DCTR) protocol. The correction takes the form of a reweighting function that can be applied to generated examples when making predictions from the simulation. We illustrate this approach using GANs trained on standard multimodal probability densities as well as calorimeter simulations from high energy physics. We show that the weighted GAN examples significantly improve the accuracy of the generated samples without a large loss in statistical power. This approach could be applied to any generative model and is a promising refinement method for high energy physics applications and beyond. ×
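In the DCTR protocol referenced above, the reweighting function comes from a classifier trained to distinguish generated events from target events: a calibrated score h maps to the likelihood-ratio weight h/(1 - h). A sketch with stand-in scores rather than a trained network:

```python
# The classifier scores here are stand-ins, not outputs of a trained
# network. A calibrated classifier score h in (0, 1) yields the
# likelihood-ratio weight w = h / (1 - h) applied to generated events.

def dctr_weight(h):
    """Likelihood-ratio weight from a calibrated classifier score."""
    return h / (1.0 - h)

scores = [0.5, 0.4, 0.6]  # hypothetical per-event classifier outputs
weights = [dctr_weight(h) for h in scores]
print(weights)  # ≈ [1.0, 0.67, 1.5]; w > 1 upweights, w < 1 downweights
```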
Simulation-Assisted Decorrelation for Resonant Anomaly Detection
K. Benkendorfer, L. Le Pottier, and B. Nachman
Phys. Rev. D 104 (2021) 035003 · e-Print: 2009.02205
Cite Article
@article{2009.02205,
author="K. Benkendorfer, L. Le Pottier, and B. Nachman",
title="{Simulation-Assisted Decorrelation for Resonant Anomaly Detection}",
eprint="2009.02205",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
doi="10.1103/PhysRevD.104.035003",
volume = "104",
pages = "035003",
year = "2021",
}
×
Simulation-Assisted Decorrelation for Resonant Anomaly Detection
A growing number of weak- and unsupervised machine learning approaches to anomaly detection are being proposed to significantly extend the search program at the Large Hadron Collider and elsewhere. One of the prototypical examples for these methods is the search for resonant new physics, where a bump hunt can be performed in an invariant mass spectrum. A significant challenge to methods that rely entirely on data is that they are susceptible to sculpting artificial bumps from the dependence of the machine learning classifier on the invariant mass. We explore two solutions to this challenge by minimally incorporating simulation into the learning. In particular, we study the robustness of Simulation Assisted Likelihood-free Anomaly Detection (SALAD) to correlations between the classifier and the invariant mass. Next, we propose a new approach that only uses the simulation for decorrelation but the Classification without Labels (CWoLa) approach for achieving signal sensitivity. Both methods are compared using a full background fit analysis on simulated data from the LHC Olympics and are robust to correlations in the data. ×
New Method for Silicon Sensor Charge Calibration Using Compton Scattering
P. McCormack, M. Garcia-Sciveres, T. Heim, B. Nachman, M. Lauritzen
e-Print: 2008.11860
Cite Article
@article{2008.11860,
author="P. McCormack, M. Garcia-Sciveres, T. Heim, B. Nachman, M. Lauritzen",
title="{New Method for Silicon Sensor Charge Calibration Using Compton Scattering}",
eprint="2008.11860",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
year = "2020",
}
×
New Method for Silicon Sensor Charge Calibration Using Compton Scattering
In order to cope with increasing lifetime radiation damage expected at collider experiments, silicon sensors are becoming increasingly thin. To achieve adequate detection efficiency, the next generation of detectors may have to operate with thresholds below 1000 electron-hole pairs. The readout chips attached to these sensors should be calibrated to some known external charge, but there is a lack of traditional sources in this charge regime. We present a new method for absolute charge calibration based on Compton scattering. In the past, this method has been used for calibration of scintillators, but to our knowledge never for silicon detectors. Here it has been studied using a 150 micron thick planar silicon sensor on an RD53A readout integrated circuit. ×
GANplifying Event Samples
A. Butter, S. Diefenbacher, G. Kasieczka, B. Nachman, and T. Plehn
SciPost Phys. 10 (2021) 139 · e-Print: 2008.06545
Cite Article
@article{2008.06545,
author="A. Butter, S. Diefenbacher, G. Kasieczka, B. Nachman, and T. Plehn",
title="{GANplifying Event Samples}",
eprint="2008.06545",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="SciPost Physics",
volume="10",
pages="139",
year = "2021",
}
×
GANplifying Event Samples
A critical question concerning generative networks applied to event generation in particle physics is if the generated events add statistical precision beyond the training sample. We show for a simple example with increasing dimensionality how generative networks indeed amplify the training statistics. We quantify their impact through an amplification factor or equivalent numbers of sampled events. ×
Supervised Jet Clustering with Graph Neural Networks for Lorentz Boosted Bosons
X. Ju and B. Nachman
Phys. Rev. D 102 (2020) 075014 · e-Print: 2008.06064
Cite Article
@article{2008.06064,
author="X. Ju and B. Nachman",
title="{Supervised Jet Clustering with Graph Neural Networks for Lorentz Boosted Bosons}",
eprint="2008.06064",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "102",
pages = "075014",
year = "2020",
}
×
Supervised Jet Clustering with Graph Neural Networks for Lorentz Boosted Bosons
Jet clustering is traditionally an unsupervised learning task because there is no unique way to associate hadronic final states with the quark and gluon degrees of freedom that generated them. However, for uncolored particles like W, Z, and Higgs bosons, it is possible to precisely (though not exactly) associate final state hadrons to their ancestor. By labeling simulated final state hadrons as descending from an uncolored particle, it is possible to train a supervised learning method to create boson jets. Such a method must operate on individual particles and identify connections between particles originating from the same uncolored particle. Graph neural networks are well-suited for this purpose as they can act on unordered sets and naturally create strong connections between particles with the same label. These networks are used to train a supervised jet clustering algorithm. The kinematic properties of these graph jets better match the properties of simulated Lorentz-boosted W bosons. Furthermore, the graph jets contain more information for discriminating W jets from generic quark jets. This work marks the beginning of a new exploration in jet physics to use machine learning to optimize the construction of jets and not only the observables computed from jet constituents. ×
Measurement of the ATLAS Detector Jet Mass Response using Forward Folding with 80/fb of sqrt(s)=13 TeV pp data
ATLAS Collaboration
Public note: ATLAS-CONF-2020-022
Cite Article
@article{ATLAS-CONF-2020-022,
author="{ATLAS Collaboration}",
title="{Measurement of the ATLAS Detector Jet Mass Response using Forward Folding with 80 fb$^{-1}$ of $\sqrt{s}=13$ TeV $pp$ data}",
journal = "ATLAS-CONF-2020-022",
url = "http://cdsweb.cern.ch/record/2724442",
year = "2020",
}
×
Measurement of the ATLAS Detector Jet Mass Response using Forward Folding with 80/fb of sqrt(s)=13 TeV pp data
This note reports a measurement of the jet mass response of large-radius jets reconstructed by the ATLAS experiment using 80/fb of sqrt(s) = 13 TeV pp data. The response is defined as the distribution of the measured mass given the particle-level jet mass and is characterised by its central value (jet mass scale) and spread (jet mass resolution). In order to account for non-Gaussian behaviour of the response as well as non-trivial contributions from the intrinsic particle-level jet mass probability density, the forward-folding method is chosen for the measurement. This procedure is applied to both a top-quark pair final state (200 GeV < p_T < 600 GeV for W boson jets and 350 GeV < p_T < 1000 GeV for top-quark jets) as well as inclusive W/Z+jets events (500 GeV < p_T < 1200 GeV). Results are presented for trimmed anti-kt R = 1.0 jets built using only calorimeter information as well as for the track-assisted jet mass that combines calorimeter and tracker information, and for reclustered small-radius jets also using R=1.0. This note extends previous results by including more data, incorporating the W/Z+jets final state, and by comparing various jet mass definitions. In addition, the jet mass response is studied for different numbers of subjets within reclustered jets and found to be universal. For both the jet mass scale and jet mass resolution, good agreement is observed between the data and simulated samples. Uncertainties are evaluated to be 1-5% for the scale and 10-20% for the resolution and they are driven by the parton shower and hadronisation modelling. ×
Deep Learning for Pion Identification and Energy Calibration with the ATLAS Detector
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2020-018
@article{ATL-PHYS-PUB-2020-018,
author="{ATLAS Collaboration}",
title="{Deep Learning for Pion Identification and Energy Calibration with the ATLAS Detector}",
journal = "ATL-PHYS-PUB-2020-018",
url = "http://cdsweb.cern.ch/record/2724632",
year = "2020",
}
Separating charged and neutral pions as well as calibrating the pion energy response is a core component of reconstruction in the ATLAS calorimeter. This note presents an investigation of deep learning techniques for these tasks, representing the signal in the ATLAS calorimeter layers as pixelated images. Deep learning approaches outperform the classification applied in the baseline local hadronic calibration and are able to improve the energy resolution for a wide range in particle momenta, especially for low energy pions. This work demonstrates the potential of deep-learning-based low-level hadronic calibrations to significantly improve the quality of particle reconstruction in the ATLAS calorimeter. ×
ABCDisCo: Automating the ABCD Method with Machine Learning
G. Kasieczka, B. Nachman, M. Schwartz, D. Shih
Phys. Rev. D 103 (2021) 035021 · e-Print: 2007.14400
@article{2007.14400,
author="G. Kasieczka and B. Nachman and M. Schwartz and D. Shih",
title="{ABCDisCo: Automating the ABCD Method with Machine Learning}",
eprint="2007.14400",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="Phys. Rev. D",
volume="103",
doi="10.1103/PhysRevD.103.035021",
pages="035021",
year = "2021",
}
The ABCD method is one of the most widely used data-driven background estimation techniques in high energy physics. Cuts on two statistically-independent classifiers separate signal and background into four regions, so that background in the signal region can be estimated simply using the other three control regions. Typically, the independent classifiers are chosen "by hand" to be intuitive and physically motivated variables. Here, we explore the possibility of automating the design of one or both of these classifiers using machine learning. We show how to use state-of-the-art decorrelation methods to construct powerful yet independent discriminators. Along the way, we uncover a previously unappreciated aspect of the ABCD method: its accuracy hinges on having low signal contamination in control regions not just overall, but relative to the signal fraction in the signal region. We demonstrate the method with three examples: a simple model consisting of three-dimensional Gaussians; boosted hadronic top jet tagging; and a recasted search for paired dijet resonances. In all cases, automating the ABCD method with machine learning significantly improves performance in terms of ABCD closure, background rejection and signal contamination. ×
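The counting at the heart of the method can be sketched numerically. The snippet below is a toy with invented numbers, not the paper's setup: for a background sample in which the two discriminants are statistically independent, the signal-region yield is predicted by N_A ≈ N_B N_C / N_D.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy background only: two discriminants that are statistically independent.
x = rng.random(100_000)
y = rng.random(100_000)

# Cuts defining the four regions: A is the signal region (both cuts pass),
# B, C, D are the control regions.
cx, cy = 0.8, 0.8
A = np.sum((x > cx) & (y > cy))
B = np.sum((x > cx) & (y <= cy))
C = np.sum((x <= cx) & (y > cy))
D = np.sum((x <= cx) & (y <= cy))

# ABCD prediction: independence implies N_A = N_B * N_C / N_D for background.
A_pred = B * C / D
print(A, round(A_pred))
```

With correlated discriminants, or with signal leaking into B, C, or D, this closure degrades, which is exactly the failure mode the decorrelation methods in the paper are designed to control.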
A Neural Resampler for Monte Carlo Reweighting with Preserved Uncertainties
B. Nachman and J. Thaler
Phys. Rev. D 102 (2020) 076004 · e-Print: 2007.11586
@article{2007.11586,
author="B. Nachman and J. Thaler",
title="{A Neural Resampler for Monte Carlo Reweighting with Preserved Uncertainties}",
eprint="2007.11586",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal="Phys. Rev. D",
volume="102",
pages="076004",
doi="10.1103/PhysRevD.102.076004",
year = "2020",
}
Monte Carlo event generators are an essential tool for data analysis in collider physics. To include subleading quantum corrections, these generators often need to produce negative weight events, which leads to statistical dilution of the datasets and downstream computational costs for detector simulation. Recently, the authors of 2005.09375 proposed a positive resampler method to rebalance weights within histogram bins to remove negative weight events. Building on this work, we introduce neural resampling: an unbinned approach to Monte Carlo reweighting based on neural networks that scales well to high-dimensional and variable-dimensional phase space. We pay particular attention to preserving the statistical properties of the event sample, such that neural resampling not only maintains the mean value of any observable but also its Monte Carlo uncertainty. To illustrate our neural resampling approach, we present a case study from the Large Hadron Collider of top quark pair production at next-to-leading order matched to a parton shower. ×
Dijet resonance search with weak supervision using sqrt(s) = 13 TeV pp collisions in the ATLAS detector
ATLAS Collaboration
Phys. Rev. Lett. 125 (2020) 131801 · e-Print: 2005.02983
@article{2005.02983,
author="{ATLAS Collaboration}",
title="{Dijet resonance search with weak supervision using $\sqrt{s} = 13$ TeV $pp$ collisions in the ATLAS detector}",
eprint="2005.02983",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal="Phys. Rev. Lett.",
volume="125",
pages="131801",
doi="10.1103/PhysRevLett.125.131801",
year = "2020"
}
This Letter describes a search for resonant new physics using a machine-learning anomaly detection procedure that does not rely on a signal model hypothesis. Weakly supervised learning is used to train classifiers directly on data to enhance potential signals. The targeted topology is dijet events and the features used for machine learning are the masses of the two jets. The resulting analysis is essentially a three-dimensional search A goes to B+C, for m_A ~ O(TeV), m_B,m_C ~ O(100 GeV) and B,C are reconstructed as large-radius jets, without paying a penalty associated with a large trials factor in the scan of the masses of the two jets. The full Run 2 sqrt(s) = 13 TeV pp collision data set of 139/fb recorded by the ATLAS detector at the Large Hadron Collider is used for the search. There is no significant evidence of a localized excess in the dijet invariant mass spectrum between 1.8 and 8.2 TeV. Cross-section limits for narrow-width A, B, and C particles vary with m_A, m_B, and m_C. For example, when m_A = 3 TeV and m_B > 200 GeV, a production cross section between 1 and 5 fb is excluded at 95% confidence level, depending on m_C. For certain masses, these limits are up to 10 times more sensitive than those obtained by the inclusive dijet search. ×
Measurement of the Lund jet plane using charged particles in 13 TeV proton-proton collisions with the ATLAS detector
ATLAS Collaboration
Phys. Rev. Lett. 124 (2020) 222002 · e-Print: 2004.03540
@article{2004.03540,
author="{ATLAS Collaboration}",
title="{Measurement of the Lund jet plane using charged particles in 13 TeV proton-proton collisions with the ATLAS detector}",
eprint="2004.03540",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. Lett.",
volume = "124",
pages = "222002",
doi = "10.1103/PhysRevLett.124.222002",
year = "2020",
}
The prevalence of hadronic jets at the LHC requires that a deep understanding of jet formation and structure is achieved in order to reach the highest levels of experimental and theoretical precision. There have been many measurements of jet substructure at the LHC and previous colliders, but the targeted observables mix physical effects from various origins. Based on a recent proposal to factorize physical effects, this Letter presents a double-differential cross-section measurement of the Lund jet plane using 139/fb of sqrt(s) = 13 TeV proton-proton collision data collected with the ATLAS detector using jets with transverse momentum above 675 GeV. The measurement uses charged particles to achieve a fine angular resolution and is corrected for acceptance and detector effects. Several parton shower Monte Carlo models are compared with the data. No single model is found to be in agreement with the measured data across the entire plane. ×
Resource Efficient Zero Noise Extrapolation with Identity Insertions
A. He, B. Nachman, W. A. de Jong, and C. W. Bauer
Phys. Rev. A 102 (2020) · e-Print: 2003.04941
@article{2003.04941,
author="A. He and B. Nachman and W. A. de Jong and C. W. Bauer",
title="{Resource Efficient Zero Noise Extrapolation with Identity Insertions}",
eprint="2003.04941",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
journal = "Phys. Rev. A",
volume = "102",
pages = "012426",
doi = "10.1103/PhysRevA.102.012426",
year = "2020",
}
In addition to readout errors, two-qubit gate noise is the main challenge for complex quantum algorithms on noisy intermediate-scale quantum (NISQ) computers. These errors are a significant challenge for making accurate calculations for quantum chemistry, nuclear physics, high energy physics, and other emerging scientific and industrial applications. There are two proposals for mitigating two-qubit gate errors: error-correcting codes and zero-noise extrapolation. This paper focuses on the latter, studying it in detail and proposing modifications to existing approaches. In particular, we propose a random identity insertion method (RIIM) that can achieve competitive asymptotic accuracy with far fewer gates than the traditional fixed identity insertion method (FIIM). For example, correcting the leading order depolarizing gate noise requires n_{CNOT}+2 gates for RIIM instead of 3n_{CNOT} gates for FIIM. This significant resource saving may enable more accurate results for state-of-the-art calculations on near term quantum hardware. ×
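The extrapolation step shared by FIIM and RIIM can be illustrated without any quantum hardware: replacing a CNOT by CNOT^3, CNOT^5, ... leaves the circuit logically unchanged but amplifies the gate noise, and the expectation values measured at several amplification factors are extrapolated to the zero-noise limit. The sketch below assumes a simple depolarizing model in which the expectation value decays geometrically with the number of effective gate applications; the error rate and circuit depth are invented for illustration, and the sketch does not distinguish the random (RIIM) from the fixed (FIIM) placement of insertions.

```python
import numpy as np

# Assumed toy noise model: each of the n_cnot CNOT gates applies depolarizing
# noise of strength eps, damping the ideal expectation value <O> = 1.0.
true_value = 1.0
eps = 0.02      # invented per-gate error rate
n_cnot = 10     # invented circuit depth

# Identity insertions replace CNOT by CNOT^3, CNOT^5, ... : same logic,
# amplified noise. scales counts the effective applications of each gate.
scales = np.array([1, 3, 5])
measured = true_value * (1 - eps) ** (n_cnot * scales)

# Extrapolate the measurements back to the zero-noise limit with a
# polynomial fit in the scale factor (Richardson-style extrapolation).
coeffs = np.polyfit(scales, measured, deg=2)
zero_noise = np.polyval(coeffs, 0.0)

print(measured[0], zero_noise)  # the mitigated value is much closer to 1.0
```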
Jet Studies: Four decades of gluons
S. Marzani, B. Nachman, et al.
Les Houches 2019: Physics at TeV Colliders Standard Model Working Group Report · e-Print: 2003.01700
@article{2003.01700,
author="S. Marzani and B. Nachman and others",
title="{Jet Studies: Four decades of gluons}",
eprint="2003.01700",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Les Houches 2019: Physics at TeV Colliders Standard Model Working Group Report",
year = "2020",
}
This Report summarizes the proceedings of the 2019 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments for high precision Standard Model calculations, (II) the sensitivity of parton distribution functions to the experimental inputs, (III) new developments in jet substructure techniques and a detailed examination of gluon fragmentation at the LHC, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, and (V) Monte Carlo event generator studies relating to PDF evolution and comparisons of important processes at the LHC. ×
Simultaneous Jet Energy and Mass Calibrations with Neural Networks
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2020-001
@article{ATL-PHYS-PUB-2020-001,
author="{ATLAS Collaboration}",
title="{Simultaneous Jet Energy and Mass Calibrations with Neural Networks}",
journal = "ATL-PHYS-PUB-2020-001",
url = "http://cdsweb.cern.ch/record/2706189",
year = "2020",
}
The jet mass is one of the most important observables for identifying boosted, hadronically decaying, massive particles. ATLAS has historically calibrated the jet mass after calibrating the jet energy independently of jet mass. As the jet energy response depends on the jet mass, this sequential approach can lead to a non-closure in the jet energy calibration. This note illustrates how to simultaneously calibrate the jet energy and jet mass within the generalized numerical inversion framework. As the jet mass response often has long asymmetric tails, traditional regression techniques can be biased away from the mode. In addition to the simultaneous energy and mass calibration, this note also uses a tailored loss function to directly learn the mode of the response. ×
Simulation Assisted Likelihood-free Anomaly Detection
Given the lack of evidence for new particle discoveries at the Large Hadron Collider (LHC), it is critical to broaden the search program. A variety of model-independent searches have been proposed, adding sensitivity to unexpected signals. There are generally two types of such searches: those that rely heavily on simulations and those that are entirely based on (unlabeled) data. This paper introduces a hybrid method that makes the best of both approaches. For potential signals that are resonant in one known feature, this new method first learns a parameterized reweighting function to morph a given simulation to match the data in sidebands. This function is then interpolated into the signal region and then the reweighted background-only simulation can be used for supervised learning as well as for background estimation. The background estimation from the reweighted simulation allows for non-trivial correlations between features used for classification and the resonant feature. A dijet search with jet substructure is used to illustrate the new method. Future applications of Simulation Assisted Likelihood-free Anomaly Detection (SALAD) include a variety of final states and potential combinations with other model-independent approaches. ×
Anomaly Detection with Density Estimation
B. Nachman and D. Shih
Phys. Rev. D 101 (2020) 075042 · e-Print: 2001.04990
@article{2001.04990,
author="B. Nachman and D. Shih",
title="{Anomaly Detection with Density Estimation}",
eprint="2001.04990",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "101",
pages = "075042",
doi = "10.1103/PhysRevD.101.075042",
year = "2020",
}
We leverage recent breakthroughs in neural density estimation to propose a new unsupervised anomaly detection technique (ANODE). By estimating the probability density of the data in a signal region and in sidebands, and interpolating the latter into the signal region, a likelihood ratio of data vs. background can be constructed. This likelihood ratio is broadly sensitive to overdensities in the data that could be due to localized anomalies. In addition, a unique potential benefit of the ANODE method is that the background can be directly estimated using the learned densities. Finally, ANODE is robust against systematic differences between signal region and sidebands, giving it broader applicability than other methods. We demonstrate the power of this new approach using the LHC Olympics 2020 R&D Dataset. We show how ANODE can enhance the significance of a dijet bump hunt by up to a factor of 7 with a 10% accuracy on the background prediction. While the LHC is used as the recurring example, the methods developed here have a much broader applicability to anomaly detection in physics and beyond.
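ANODE itself uses neural density estimators (normalizing flows). As a minimal stand-in, the sketch below builds the same likelihood-ratio anomaly score from histogram density estimates on a one-dimensional toy; the known background-only sample plays the role of the sideband-interpolated density, which keeps the sketch short but skips the interpolation step that makes the real method data-driven.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-dimensional data: smooth background plus a small localized anomaly.
background = rng.normal(0.0, 1.0, 200_000)
signal = rng.normal(2.0, 0.1, 2_000)
data = np.concatenate([background, signal])

bins = np.linspace(-3, 3, 61)
centers = 0.5 * (bins[:-1] + bins[1:])

# Histogram stand-ins for the two learned densities. In ANODE, p_bkg would be
# interpolated into the signal region from the sidebands; here we use the
# known background-only sample instead.
p_data, _ = np.histogram(data, bins=bins, density=True)
p_bkg, _ = np.histogram(background, bins=bins, density=True)

# Anomaly score: the likelihood ratio of the two density estimates.
ratio = np.divide(p_data, p_bkg, out=np.ones_like(p_data), where=p_bkg > 0)

print(centers[np.argmax(ratio)])  # the score peaks at the overdensity near x = 2
```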
A measurement of soft-drop jet observables in pp collisions with the ATLAS detector at sqrt(s) = 13 TeV
ATLAS Collaboration
Phys. Rev. D 101 (2020) 052007 · e-Print: 1912.09837
@article{1912.09837,
author="{ATLAS Collaboration}",
title="{A measurement of soft-drop jet observables in $pp$ collisions with the ATLAS detector at $\sqrt{s}=13$ TeV}",
eprint="1912.09837",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. D",
volume = "101",
pages = "052007",
doi = "10.1103/PhysRevD.101.052007",
year = "2020",
}
Jet substructure quantities are measured using jets groomed with the soft-drop grooming procedure in dijet events from 32.9/fb of pp collisions collected with the ATLAS detector at sqrt(s) = 13 TeV. These observables are sensitive to a wide range of QCD phenomena. Some observables, such as the jet mass and opening angle between the two subjets which pass the soft-drop condition, can be described by a high-order (resummed) series in the strong coupling constant \alpha_s. Other observables, such as the momentum sharing between the two subjets, are nearly independent of \alpha_s. These observables can be constructed using all interacting particles or using only charged particles reconstructed in the inner tracking detectors. Track-based versions of these observables are not collinear safe, but are measured more precisely, and universal non-perturbative functions can absorb the collinear singularities. The unfolded data are directly compared with QCD calculations and hadron-level Monte Carlo simulations. The measurements are performed in different pseudorapidity regions, which are then used to extract quark and gluon jet shapes using the predicted quark and gluon fractions in each region. All of the parton shower and analytical calculations provide an excellent description of the data in most regions of phase space.
OmniFold: A Method to Simultaneously Unfold All Observables
A. Andreassen, E. Metodiev, P. Komiske, B. Nachman, J. Thaler
Phys. Rev. Lett. 124 (2020) 182001 · e-Print: 1911.09107
@article{1911.09107,
author="A. Andreassen and E. Metodiev and P. Komiske and B. Nachman and J. Thaler",
title="{OmniFold: A Method to Simultaneously Unfold All Observables}",
eprint="1911.09107",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. Lett.",
volume = "124",
pages = "182001",
doi = "10.1103/PhysRevLett.124.182001",
year = "2020",
}
Collider data must be corrected for detector effects ("unfolded") to be compared with many theoretical calculations and measurements from other experiments. Unfolding is traditionally done for individual, binned observables without including all information relevant for characterizing the detector response. We introduce OmniFold, an unfolding method that iteratively reweights a simulated dataset, using machine learning to capitalize on all available information. Our approach is unbinned, works for arbitrarily high-dimensional data, and naturally incorporates information from the full phase space. We illustrate this technique on a realistic jet substructure example from the Large Hadron Collider and compare it to standard binned unfolding methods. This new paradigm enables the simultaneous measurement of all observables, including those not yet invented at the time of the analysis. ×
Expression of Interest for the CODEX-b Detector
CODEX-b Collaboration
EPJC (accepted Nov. 2020) · e-Print: 1911.00481
@article{1911.00481,
author="{CODEX-b Collaboration}",
title="{Expression of Interest for the CODEX-b Detector}",
eprint="1911.00481",
journal="EPJC (accepted Nov. 2020)",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
year = "2020",
}
This document presents the physics case and ancillary studies for the proposed CODEX-b long-lived particle (LLP) detector, as well as for a smaller proof-of-concept demonstrator detector, CODEX-\beta, to be operated during Run 3 of the LHC. Our development of the CODEX-b physics case synthesizes "top-down" and "bottom-up" theoretical approaches, providing a detailed survey of both minimal and complete models featuring LLPs. Several of these models have not been studied previously, and for some others we amend studies from previous literature: In particular, for gluon and fermion-coupled axion-like particles. We moreover present updated simulations of expected backgrounds in CODEX-b's actively shielded environment, including the effects of post-propagation uncertainties, high-energy tails and variation in the shielding design. Initial results are also included from a background measurement and calibration campaign. A design overview is presented for the CODEX-\beta demonstrator detector, which will enable background calibration and detector design studies. Finally, we lay out brief studies of various design drivers of the CODEX-b experiment and potential extensions of the baseline design, including the physics case for a calorimeter element, precision timing, event tagging within LHCb, and precision low-momentum tracking. ×
AI Safety for High Energy Physics
B. Nachman and C. Shimmin
e-Print: 1910.08606
@article{1910.08606,
author="B. Nachman and C. Shimmin",
title="{AI Safety for High Energy Physics}",
eprint="1910.08606",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
year = "2019",
}
The field of high-energy physics (HEP), along with many scientific disciplines, is currently experiencing a dramatic influx of new methodologies powered by modern machine learning techniques. Over the last few years, a growing body of HEP literature has focused on identifying promising applications of deep learning in particular, and more recently these techniques are starting to be realized in an increasing number of experimental measurements. The overall conclusion from this impressive and extensive set of studies is that rarer and more complex physics signatures can be identified with the new set of powerful tools from deep learning. However, there is an unstudied systematic risk associated with combining the traditional HEP workflow and deep learning with high-dimensional data. In particular, calibrating and validating the response of deep neural networks is in general not experimentally feasible, and therefore current methods may be biased in ways that are not covered by current uncertainty estimates. By borrowing ideas from AI safety, we illustrate these potential issues and propose a method to bound the size of unaccounted for uncertainty. In addition to providing a pragmatic diagnostic, this work will hopefully begin a dialogue within the community about the robust application of deep learning to experimental analyses. ×
Identifying Merged Tracks in Dense Environments with Machine Learning
P. McCormack, M. Ganai, B. Nachman, M. Garcia-Sciveres
CTD/WIT 2019 Proceedings · e-Print: 1910.06286
@article{1910.06286,
author="P. McCormack and M. Ganai and B. Nachman and M. Garcia-Sciveres",
title="{Identifying Merged Tracks in Dense Environments with Machine Learning}",
eprint="1910.06286",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "CTD/WIT 2019 Proceedings",
url = "https://inspirehep.net/conferences/1693765",
year = "2019",
}
Tracking in high density environments plays an important role in many physics analyses at the LHC. In such environments, it is possible that two nearly collinear particles contribute to the same hits as they travel through the ATLAS pixel detector and semiconductor tracker. If the two particles are sufficiently collinear, it is possible that only a single track candidate will be created, denominated a "merged track", leading to a decrease in tracking efficiency. These proceedings show a possible new technique that uses a boosted decision tree to classify reconstructed tracks as merged. An application of this new method is the recovery of the number of reconstructed tracks in high transverse momentum three-pronged \tau decays, leading to an increased \tau reconstruction efficiency. The observed mistag rate is small. ×
Parametrizing the Detector Response with Neural Networks
S. Cheong, A. Cukierman, B. Nachman, M. Safdari, A. Schwartzman
JINST 15 (2020) P01030 · e-Print: 1910.03773
@article{1910.03773,
author="S. Cheong and A. Cukierman and B. Nachman and M. Safdari and A. Schwartzman",
title="{Parametrizing the Detector Response with Neural Networks}",
eprint="1910.03773",
archivePrefix = "arXiv",
primaryClass = "physics.data-an",
journal = "JINST",
volume = "15",
pages = "P01030",
doi = "10.1088/1748-0221/15/01/P01030",
year = "2020",
}
In high energy physics, characterizing the response of a detector to radiation is one of the most important and basic experimental tasks. In many cases, this task is accomplished by parameterizing summary statistics of the full detector response probability density. The parameterized detector response can then be used for calibration as well as for directly improving physics analysis sensitivity. This paper discusses how to parameterize summary statistics of the detector response using neural networks. In particular, neural networks are powerful tools for incorporating multidimensional data and the loss function used during training determines which summary statistic is learned. One common summary statistic that has not been combined with deep learning (as far as the authors are aware) is the mode. A neural network-based approach to mode learning is proposed and empirically demonstrated in the context of high energy jet calibrations. Altogether, the neural network-based toolkit for detector response parameterization can enhance the utility of data collected at high energy physics experiments and beyond. ×
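The link between the loss function and the learned summary statistic can be illustrated for a constant predictor: squared error is minimized by the mean, absolute error by the median, and a narrow Gaussian-kernel loss, one simple stand-in for a mode-seeking loss (the bandwidth h below is an invented choice, not the paper's construction), by an estimate of the mode of an asymmetric response distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Skewed toy response distribution: lognormal, so mode < median < mean.
samples = rng.lognormal(mean=0.0, sigma=0.7, size=50_000)

# Scan a constant predictor theta and evaluate three loss landscapes.
theta = np.linspace(0.1, 3.0, 300)
mse = np.array([np.mean((samples - t) ** 2) for t in theta])   # minimized by the mean
mae = np.array([np.mean(np.abs(samples - t)) for t in theta])  # minimized by the median
h = 0.05  # kernel bandwidth (invented); smaller h hugs the mode more tightly
kern = np.array([-np.mean(np.exp(-0.5 * ((samples - t) / h) ** 2)) for t in theta])

print(theta[np.argmin(mse)], theta[np.argmin(mae)], theta[np.argmin(kern)])
```

The three minimizers come out in the order mode < median < mean, which is why the choice of loss matters for long-tailed jet response distributions.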
Unfolding Quantum Computer Readout Noise
B. Nachman, M. Urbanek, W. de Jong, C. Bauer
npj Quantum Information 6 (2020) · e-Print: 1910.01969
@article{1910.01969,
author="B. Nachman and M. Urbanek and W. de Jong and C. Bauer",
title="{Unfolding Quantum Computer Readout Noise}",
eprint="1910.01969",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "npj Quantum Information",
volume="6",
doi="10.1038/s41534-020-00309-7",
year = "2020",
}
In the current era of noisy intermediate-scale quantum (NISQ) computers, noisy qubits can result in biased results for early quantum algorithm applications. This is a significant challenge for interpreting results from quantum computer simulations for quantum chemistry, nuclear physics, high energy physics, and other emerging scientific applications. An important class of qubit errors are readout errors. The most basic method to correct readout errors is matrix inversion, using a response matrix built from simple operations to probe the rate of transitions from known initial quantum states to readout outcomes. One challenge with inverting matrices with large off-diagonal components is that the results are sensitive to statistical fluctuations. This challenge is familiar to high energy physics, where prior-independent regularized matrix inversion techniques ('unfolding') have been developed for years to correct for acceptance and detector effects when performing differential cross section measurements. We study various unfolding methods in the context of universal gate-based quantum computers with the goal of connecting the fields of quantum information science and high energy physics and providing a reference for future work. The method known as iterative Bayesian unfolding is shown to avoid pathologies from commonly used matrix inversion and least squares methods. ×
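The iterative Bayesian unfolding update can be written in a few lines of numpy. The two-qubit response matrix below, with independent 5% bit flips, is an invented example rather than measured hardware data.

```python
import numpy as np

# Invented response matrix for two-qubit readout: each bit flips independently
# with probability 5%. R[m, t] = P(measure state m | prepare state t).
p = 0.05
flip = np.array([[1 - p, p], [p, 1 - p]])
R = np.kron(flip, flip)

true = np.array([0.7, 0.1, 0.15, 0.05])   # true state populations (invented)
measured = R @ true                        # expected observed populations

# Iterative Bayesian unfolding: start from a flat prior and repeatedly apply
# the Bayes update t_j <- t_j * sum_m R[m, j] * measured[m] / (R t)[m].
t = np.full(4, 0.25)
for _ in range(50):
    t = t * (R.T @ (measured / (R @ t)))

print(np.round(t, 3))  # approaches the true populations
```

Unlike direct matrix inversion, each update keeps the estimate non-negative and preserves its normalization, which is the regularizing behavior highlighted in the abstract.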
Quantum error detection improves accuracy of chemical calculations on a quantum computer
M. Urbanek, B. Nachman, W. de Jong
Phys. Rev. A 102 (2020) 022427 · e-Print: 1910.00129
@article{1910.00129,
author="M. Urbanek and B. Nachman and W. de Jong",
title="{Quantum error detection improves accuracy of chemical calculations on a quantum computer}",
eprint="1910.00129",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
journal="Phys. Rev. A",
volume="102",
pages="022427",
year = "2020",
}
The ultimate goal of quantum error correction is to achieve the fault-tolerance threshold beyond which quantum computers can be made arbitrarily accurate. This requires extraordinary resources and engineering efforts. We show that even without achieving full fault-tolerance, quantum error detection is already useful on the current generation of quantum hardware. We demonstrate this by executing an end-to-end chemical calculation for the hydrogen molecule encoded in the [[4, 2, 2]] quantum error-detecting code. The encoded calculation with logical qubits significantly improves the accuracy of the molecular ground-state energy. ×
A guide for deploying Deep Learning in LHC searches: How to achieve optimality and account for uncertainty
B. Nachman
SciPost Phys. 8 (2020) 090 · e-Print: 1909.03081
@article{1909.03081,
author="B. Nachman",
title="{A guide for deploying Deep Learning in LHC searches: How to achieve optimality and account for uncertainty}",
eprint="1909.03081",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "SciPost Phys.",
volume = "8",
pages = "090",
doi = "10.21468/SciPostPhys.8.6.090",
year = "2020",
}
Deep learning tools can incorporate all of the available information into a search for new particles, thus making the best use of the available data. This paper reviews how to optimally integrate information with deep learning and explicitly describes the corresponding sources of uncertainty. Simple illustrative examples show how these concepts can be applied in practice. ×
The Measurement of Position Resolution of RD53A Pixel Modules
G. Zang, B. Nachman, Shih-Chieh Hsu, Xin Chen
SLAC eConf C1907293 · e-Print: 1908.10973
@article{1908.10973,
author="G. Zang and B. Nachman and Shih-Chieh Hsu and Xin Chen",
title="{The Measurement of Position Resolution of RD53A Pixel Modules}",
eprint="1908.10973",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "SLAC eConf C1907293",
url = "https://www.slac.stanford.edu/econf/C1907293/",
year = "2019",
}
Position resolution is a key property of the innermost layer of the upgraded ATLAS and CMS pixel detectors for determining track reconstruction and flavor tagging performance. The 11 GeV electron beam at the SLAC End Station A was used to measure the position resolution of RD53A modules with a 50 x 50 and a 25 x 100 \mu m^2 pitch. Tracks are reconstructed from hits on telescope planes using the EUTelescope package. The position resolution is extracted by comparing the extrapolated track and the hit position on the RD53A modules, correcting for the tracking resolution. 10.9 and 6.8 \mu m resolution can be achieved for the 50 and 25 \mu m directions, respectively, with a 13 degree tilt. ×
Convolutional Neural Networks with Event Images for Pileup Mitigation with the ATLAS Detector
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2019-028
@article{ATL-PHYS-PUB-2019-028,
author="{ATLAS Collaboration}",
title="{Convolutional Neural Networks with Event Images for Pileup Mitigation with the ATLAS Detector}",
journal = "ATL-PHYS-PUB-2019-028",
url = "http://cdsweb.cern.ch/record/2684070",
year = "2019",
}
The addition of multiple, nearly simultaneous collisions to hard-scatter collisions (pileup) is a significant challenge for most physics analyses at the LHC. Many techniques have been proposed to mitigate the impact of pileup on jets and other reconstructed objects. This study investigates the application of convolutional neural networks to pileup mitigation by treating events as images. By using as much of the available information about the event properties as possible, the neural networks are able to provide a local pileup energy correction. The impact of this correction is studied in the context of a global event observable: the missing transverse momentum (MET). The MET is particularly sensitive to pileup and the potential benefits of a neural-network approach is analyzed alongside other constituent pileup mitigation techniques and the ATLAS default MET reconstruction algorithm. ×
Measurement of the Lund Jet Plane using charged particles with the ATLAS detector from 13 TeV proton-proton collisions
ATLAS Collaboration
Public note: ATLAS-CONF-2019-035
Cite Article
@article{ATLAS-CONF-2019-035,
author="{ATLAS Collaboration}",
title="{Measurement of the Lund Jet Plane using charged particles with the ATLAS detector from 13 TeV proton--proton collisions}",
journal = "ATLAS-CONF-2019-035",
url = "https://cds.cern.ch/record/2683993",
year = "2019",
}
×
Measurement of the Lund Jet Plane using charged particles with the ATLAS detector from 13 TeV proton--proton collisions
The prevalence of hadronic jets at the Large Hadron Collider (LHC) requires that a deep understanding of jet formation and structure be achieved in order to reach the highest levels of experimental and theoretical precision. There have been many measurements of jet substructure at the LHC and previous colliders, but the targeted observables mix physical effects from various origins. Based on a new proposal to factorize physical effects, this note presents a double-differential cross section measurement of the Lund jet plane using 139/fb of sqrt(s) = 13 TeV pp data collected with the ATLAS detector. The measurement uses charged particles to achieve a fine angular resolution and is corrected for acceptance and detector effects. Multiple parton shower Monte Carlo simulations are compared with the data to study the modeling of various physical effects across the plane. ×
Neural Networks for Full Phase-space Reweighting and Parameter Tuning
A. Andreassen and B. Nachman
Phys. Rev. D 101 (2020) 091901(R) · e-Print: 1907.08209
Cite Article
@article{1907.08209,
author="A. Andreassen and B. Nachman",
title="{Neural Networks for Full Phase-space Reweighting and Parameter Tuning}",
eprint="1907.08209",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "101",
pages = "091901(R)",
doi = "10.1103/PhysRevD.101.091901",
year = "2020",
}
×
Neural Networks for Full Phase-space Reweighting and Parameter Tuning
Precise scientific analysis in collider-based particle physics is possible because of complex simulations that connect fundamental theories to observable quantities. The significant computational cost of these programs limits the scope, precision, and accuracy of Standard Model measurements and searches for new phenomena. We therefore introduce Deep neural networks using Classification for Tuning and Reweighting (DCTR), a neural network-based approach to reweight and fit simulations using all kinematic and flavor information -- the full phase space. DCTR can perform tasks that are currently not possible with existing methods, such as estimating non-perturbative fragmentation uncertainties. The core idea behind the new approach is to exploit powerful high-dimensional classifiers to reweight phase space as well as to identify the best parameters for describing data. Numerical examples from e^+ e^- to jets demonstrate the fidelity of these methods for simulation parameters that have a big and broad impact on phase space as well as those that have a minimal and/or localized impact. The high fidelity of the full phase-space reweighting enables a new paradigm for simulations, parameter tuning, and model systematic uncertainties across particle physics and possibly beyond. ×
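The core reweighting trick behind DCTR — a classifier f(x) trained to separate two samples yields per-event weights w(x) = f(x)/(1 − f(x)) — can be illustrated in one dimension. The sketch below is not the authors' code: a histogram-based "classifier" stands in for the deep network, and the Gaussian toy samples are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "simulations" of a single observable with different parameters.
nominal = rng.normal(0.0, 1.0, 10000)   # sample to be reweighted
target = rng.normal(0.5, 1.0, 10000)    # sample whose shape we want to match

# A classifier f(x) trained to separate the samples approximates
# p_target(x) / (p_target(x) + p_nominal(x)), so the per-event weight is
# w(x) = f(x) / (1 - f(x)).  A histogram "classifier" suffices in 1D.
bins = np.linspace(-5.0, 5.0, 51)
h_nom, _ = np.histogram(nominal, bins=bins)
h_tar, _ = np.histogram(target, bins=bins)
f = (h_tar + 1e-9) / (h_tar + h_nom + 2e-9)
idx = np.clip(np.digitize(nominal, bins) - 1, 0, len(f) - 1)
w = np.clip(f[idx] / (1.0 - f[idx]), 0.0, 100.0)  # guard empty-bin artifacts

# The weighted nominal sample should reproduce the target mean (0.5).
reweighted_mean = np.average(nominal, weights=w)
```

In the paper the histogram is replaced by a neural network over the full phase space, which is what makes the method viable in high dimensions.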
The motivation and status of two-body resonance decays after the LHC Run 2 and beyond
J. Kim, K. Kong, B. Nachman, D. Whiteson
JHEP 04 (2020) 30 · e-Print: 1907.06659
Cite Article
@article{1907.06659,
author="J. Kim, K. Kong, B. Nachman, D. Whiteson",
title="{The motivation and status of two-body resonance decays after the LHC Run 2 and beyond}",
eprint="1907.06659",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "04",
pages = "30",
doi = "10.1007/JHEP04(2020)030",
year = "2020",
}
×
The motivation and status of two-body resonance decays after the LHC Run 2 and beyond
Searching for two-body resonance decays is a central component of the energy frontier research program in high energy physics. While many of the possibilities are covered when the two bodies are Standard Model (SM) particles, there are still significant gaps. If one or both of the bodies are themselves non-SM particles, there is very little coverage from existing searches. We review the status of two-body searches and motivate the need to search for the missing combinations. It is likely that the search program of the future will be able to cover all possibilities with a combination of dedicated and model-agnostic search approaches. ×
Properties of jet fragmentation using charged particles measured with the ATLAS detector in pp collisions at sqrt(s) = 13 TeV
ATLAS Collaboration
Phys. Rev. D 100 (2019) 052011 · e-Print: 1906.09254
Cite Article
@article{1906.09254,
author="{ATLAS Collaboration}",
title="{Properties of jet fragmentation using charged particles measured with the ATLAS detector in $pp$ collisions at $\sqrt{s}=13$ TeV}",
eprint="1906.09254",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "100",
pages = "052011",
doi = "10.1103/PhysRevD.100.052011",
year = "2019",
}
×
Properties of jet fragmentation using charged particles measured with the ATLAS detector in pp collisions at sqrt(s) = 13 TeV
This paper presents a measurement of quantities related to the formation of jets from high-energy quarks and gluons (fragmentation). Jets with transverse momentum 100 GeV < p_T < 2.5 TeV and pseudorapidity |\eta| < 2.1 from an integrated luminosity of 33/fb of sqrt(s)=13 TeV proton-proton collisions are reconstructed with the ATLAS detector at the Large Hadron Collider. Charged-particle tracks with p_T > 500 MeV and |\eta| < 2.5 are used to probe the detailed structure of the jet. The fragmentation properties of the more forward and the more central of the two leading jets from each event are studied. The data are unfolded to correct for detector resolution and acceptance effects. Comparisons with parton shower Monte Carlo generators indicate that existing models provide a reasonable description of the data across a wide range of phase space, but there are also significant differences. Furthermore, the data are interpreted in the context of quark- and gluon-initiated jets by exploiting the rapidity dependence of the jet flavor fraction. A first measurement of the charged-particle multiplicity using model-independent jet labels (topic modeling) provides a promising alternative to traditional quark and gluon extractions using input from simulation. The simulations provide a reasonable description of the quark-like data across the jet p_T range presented in this measurement, but the gluon-like data have systematically fewer charged particles than the simulations. ×
Modelling radiation damage to pixel sensors in the ATLAS detector
ATLAS Collaboration
JINST 14 (2019) P06012 · e-Print: 1905.03739
Cite Article
@article{1905.03739,
author="{ATLAS Collaboration}",
title="{Modelling radiation damage to pixel sensors in the ATLAS detector}",
eprint="1905.03739",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "JINST",
volume = "14",
pages = "P06012",
doi = "10.1088/1748-0221/14/06/P06012",
year = "2019",
}
×
Modelling radiation damage to pixel sensors in the ATLAS detector
Silicon pixel detectors are at the core of the current and planned upgrade of the ATLAS experiment at the LHC. Given their close proximity to the interaction point, these detectors will be exposed to an unprecedented amount of radiation over their lifetime. The current pixel detector will receive damage from non-ionizing radiation in excess of 10^{15} 1 MeV n_{eq}/cm^2, while the pixel detector designed for the high-luminosity LHC must cope with an order of magnitude larger fluence. This paper presents a digitization model incorporating effects of radiation damage to the pixel sensors. The model is described in detail and predictions for the charge collection efficiency and Lorentz angle are compared with collision data collected between 2015 and 2017 (< 10^{15} 1 MeV n_{eq}/cm^2). ×
A quantum algorithm for high energy physics simulations
C. Bauer, W. de Jong, B. Nachman, D. Provasoli
Phys. Rev. Lett. 126 (2021) 062001 · e-Print: 1904.03196
Cite Article
@article{1904.03196,
author="C. Bauer, W. de Jong, B. Nachman, D. Provasoli",
title="{A quantum algorithm for high energy physics simulations}",
eprint="1904.03196",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
journal = "Phys. Rev. Lett.",
volume="126",
pages="062001",
year = "2021",
}
×
A quantum algorithm for high energy physics simulations
Particles produced in high energy collisions that are charged under one of the fundamental forces will radiate proportionally to their charge, such as photon radiation from electrons in quantum electrodynamics. At sufficiently high energies, this radiation pattern is enhanced collinear to the initiating particle, resulting in a complex, many-body quantum system. Classical Markov Chain Monte Carlo simulation approaches work well to capture many of the salient features of the shower of radiation, but cannot capture all quantum effects. We show how quantum algorithms are well-suited for describing the quantum properties of final state radiation. In particular, we develop a polynomial time quantum final state shower that accurately models the effects of intermediate spin states similar to those present in high energy electroweak showers. The algorithm is explicitly demonstrated for a simplified quantum field theory on a quantum computer. ×
Extracting the Top-Quark Width from Non-Resonant Production
C. Herwig, T. Jezo, B. Nachman
Phys. Rev. Lett. 122 (2019) 231803 · e-Print: 1903.10519
Cite Article
@article{1903.10519,
author="C. Herwig, T. Jezo, B. Nachman",
title="{Extracting the Top-Quark Width from Non-Resonant Production}",
eprint="1903.10519",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. Lett.",
volume = "122",
pages = "231803",
doi = "10.1103/PhysRevLett.122.231803",
year = "2019",
}
×
Extracting the Top-Quark Width from Non-Resonant Production
In the context of the Standard Model (SM) of particle physics, the relationship between the top-quark mass and width (\Gamma_t) has been precisely calculated. However, the uncertainty from current direct measurements of the width is nearly 50%. A new approach for directly measuring the top-quark width using events away from the resonance peak is presented. By using an orthogonal dataset to traditional top-quark width extractions, this new method may enable significant improvements in the experimental sensitivity in a method combination. Recasting a recent ATLAS differential cross section measurement, we find \Gamma_t = 1.28 +/- 0.30 GeV (1.33 +/- 0.29 GeV expected), providing the most precise direct measurement of the width. ×
Machine Learning Templates for QCD Factorization in the Search for Physics Beyond the Standard Model
J. Lin, W. Bhimji, B. Nachman
JHEP 05 (2019) 181 · e-Print: 1903.02556
Cite Article
@article{1903.02556,
author="J. Lin, W. Bhimji, B. Nachman",
title="{Machine Learning Templates for QCD Factorization in the Search for Physics Beyond the Standard Model}",
eprint="1903.02556",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "05",
pages = "181",
doi = "10.1007/JHEP05(2019)181",
year = "2019",
}
×
Machine Learning Templates for QCD Factorization in the Search for Physics Beyond the Standard Model
High-multiplicity all-hadronic final states are an important but difficult setting for searches for physics beyond the Standard Model. A powerful search method is to look for large jets with accidental substructure due to multiple hard partons falling within a single jet. One way to estimate the background in this search is to exploit an approximate factorization in quantum chromodynamics whereby the jet mass distribution is determined only by its kinematic properties. Traditionally, this approach has been executed using histograms constructed in a background-rich region. We propose a new approach based on Generative Adversarial Networks (GANs). These neural network approaches are naturally unbinned and can be readily conditioned on multiple jet properties. In addition to using vanilla GANs for this purpose, a modification to the traditional WGAN approach is investigated in which weight clipping is replaced with a naturally compact set (in this case, the circle). Both the vanilla and modified WGAN approaches significantly outperform the histogram method, especially when modeling the dependence on features not used in the histogram construction. These results can be useful for enhancing the sensitivity of LHC searches to high-multiplicity final states involving many quarks and gluons and serve as a useful benchmark where GANs may have immediate benefit to the HEP community. ×
Nonlocal Thresholds for Improving the Spatial Resolution of Pixel Detectors
B. Nachman and A. Spies
JINST 14 (2019) P09028 · e-Print: 1903.01624
Cite Article
@article{1903.01624,
author="B. Nachman and A. Spies",
title="{Nonlocal Thresholds for Improving the Spatial Resolution of Pixel Detectors}",
eprint="1903.01624",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "JINST",
volume = "14",
pages = "P09028",
doi = "10.1088/1748-0221/14/09/P09028",
year = "2019",
}
×
Nonlocal Thresholds for Improving the Spatial Resolution of Pixel Detectors
Pixel detectors only record signals above a tuned threshold in order to suppress noise. As sensors become thinner, pitches decrease, and radiation damage reduces the collected charge, it is increasingly desirable to lower thresholds. By making the simple, but powerful observation that hit pixels tend to be spatially close to each other, we introduce a scheme for dynamic thresholds. This dynamic scheme can enhance the signal efficiency without significantly increasing the occupancy. In addition to presenting a selection of empirical results, we also discuss some potential methods for implementing dynamic thresholds in a realistic readout chip for the Large Hadron Collider or other future colliders. ×
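The central observation — that genuine hits cluster spatially — suggests a simple two-threshold rule: accept a pixel above the usual (high) threshold, or above a lower threshold if a neighbour passes the high one. The toy below is a hypothetical illustration of that idea, not the readout-chip scheme studied in the paper; the charge values and thresholds are invented.

```python
import numpy as np

def dynamic_hits(charge, t_high, t_low):
    """Boolean hit map: pass the high threshold outright, or pass the low
    threshold while at least one 4-neighbour passes the high one."""
    high = charge > t_high
    nbr = np.zeros_like(high)
    nbr[1:, :] |= high[:-1, :]   # neighbour above
    nbr[:-1, :] |= high[1:, :]   # neighbour below
    nbr[:, 1:] |= high[:, :-1]   # neighbour to the left
    nbr[:, :-1] |= high[:, 1:]   # neighbour to the right
    return high | ((charge > t_low) & nbr)

# Invented example: a two-pixel cluster and an isolated noise pixel with the
# same low charge.  The cluster's low pixel is recovered; the noise is not.
charge = np.array([
    [0, 0, 0, 0],
    [0, 9, 4, 0],   # 9 passes the high threshold; 4 only the low one
    [0, 0, 0, 0],
    [0, 0, 0, 4],   # isolated 4: stays suppressed
])
hits = dynamic_hits(charge, t_high=6, t_low=3)
```

The low-charge pixel next to a confirmed hit is kept, improving signal efficiency, while the isolated low-charge pixel is rejected, keeping the occupancy low.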
Automating the Construction of Jet Observables with Machine Learning
K. Datta, A. Larkoski, B. Nachman
Phys. Rev. D 100 (2019) 095016 · e-Print: 1902.07180
Cite Article
@article{1902.07180,
author="K. Datta, A. Larkoski, B. Nachman",
title="{Automating the Construction of Jet Observables with Machine Learning}",
eprint="1902.07180",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "100",
pages = "095016",
doi = "10.1103/PhysRevD.100.095016",
year = "2019",
}
×
Automating the Construction of Jet Observables with Machine Learning
Machine-learning assisted jet substructure tagging techniques have the potential to significantly improve searches for new particles and Standard Model measurements in hadronic final states. Techniques with simple analytic forms are particularly useful for establishing robustness and gaining physical insight. We introduce a procedure to automate the construction of a large class of observables that are chosen to completely specify M-body phase space. The procedure is validated on the task of distinguishing H to bb from g to bb, where M=3 and previous brute-force approaches to construct an optimal product observable for the M-body phase space have established the baseline performance. We then use the new method to design tailored observables for the boosted Z' search, where M=4 and brute-force methods are intractable. The new classifiers outperform standard 2-prong tagging observables, illustrating the power of the new optimization method for improving searches and measurements at the LHC and beyond. ×
Extending the Bump Hunt with Machine Learning
J. Collins, K. Howe, B. Nachman
Phys. Rev. D 99 (2019) 014038 · e-Print: 1902.02634
Cite Article
@article{1902.02634,
author="J. Collins, K. Howe, B. Nachman",
title="{Extending the Bump Hunt with Machine Learning}",
eprint="1902.02634",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "99",
pages = "014038",
doi = "10.1103/PhysRevD.99.014038",
year = "2019",
}
×
Extending the Bump Hunt with Machine Learning
The oldest and most robust technique to search for new particles is to look for `bumps' in invariant mass spectra over smoothly falling backgrounds. We present a new extension of the bump hunt that naturally benefits from modern machine learning algorithms while remaining model-agnostic. This approach is based on the Classification Without Labels (CWoLa) method where the invariant mass is used to create two potentially mixed samples, one with little or no signal and one with a potential resonance. Additional features that are uncorrelated with the invariant mass can be used for training the classifier. Given the lack of new physics signals at the Large Hadron Collider (LHC), such model-agnostic approaches are critical for ensuring full coverage to fully exploit the rich datasets from the LHC experiments. In addition to illustrating how the new method works in simple test cases, we demonstrate the power of the extended bump hunt on a realistic all-hadronic resonance search in a channel that would not be covered with existing techniques. ×
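The CWoLa construction can be sketched in a toy setting: a classifier trained to separate the signal-region sample from the sideband sample is, up to a monotonic transformation, the optimal signal-versus-background classifier. Below is a one-dimensional numpy illustration with a binned likelihood ratio standing in for the neural network; all distributions, yields, and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy feature x, chosen to be uncorrelated with the resonance mass:
# background ~ N(0, 1) everywhere, injected signal ~ N(2, 1).
bkg_sr = rng.normal(0.0, 1.0, 5000)    # background in the signal region
sig_sr = rng.normal(2.0, 1.0, 500)     # signal in the signal region
sideband = rng.normal(0.0, 1.0, 5000)  # sidebands: background only

signal_region = np.concatenate([bkg_sr, sig_sr])

# A classifier between the two mixed samples is monotonically related to the
# optimal signal-vs-background classifier; a histogram ratio stands in here.
bins = np.linspace(-4.0, 6.0, 51)
h_sr, _ = np.histogram(signal_region, bins=bins, density=True)
h_sb, _ = np.histogram(sideband, bins=bins, density=True)
score_per_bin = np.log((h_sr + 1e-6) / (h_sb + 1e-6))

def classify(x):
    """Score events with the binned CWoLa classifier."""
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(score_per_bin) - 1)
    return score_per_bin[idx]

# Cutting on the CWoLa score keeps signal far more efficiently than background.
cut = 0.5
sig_eff = float(np.mean(classify(sig_sr) > cut))
bkg_eff = float(np.mean(classify(bkg_sr) > cut))
```

In the extended bump hunt this selection sculpts nothing in the mass itself (the feature is mass-uncorrelated), so the bump hunt can then be run on the surviving events.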
A Quantum Algorithm to Efficiently Sample from Interfering Binary Trees
D. Provasoli, B. Nachman, W. de Jong, C. Bauer
Quantum Science and Technology 5 (2020) 035004 · e-Print: 1901.08148
Cite Article
@article{1901.08148,
author="D. Provasoli, B. Nachman, W. de Jong, C. Bauer",
title="{A Quantum Algorithm to Efficiently Sample from Interfering Binary Trees}",
eprint="1901.08148",
archivePrefix = "arXiv",
primaryClass = "quant-ph",
journal = "Quantum Science and Technology",
volume = "5",
pages = "035004",
doi = "10.1088/2058-9565/ab8359",
year = "2019",
}
×
A Quantum Algorithm to Efficiently Sample from Interfering Binary Trees
Quantum computers provide an opportunity to efficiently sample from probability distributions that include non-trivial interference effects between amplitudes. Using a simple process wherein all possible state histories can be specified by a binary tree, we construct an explicit quantum algorithm that runs in polynomial time to sample from the process once. The corresponding naive Markov Chain algorithm does not produce the correct probability distribution and an explicit classical calculation of the full distribution requires exponentially many operations. However, the problem can be reduced to a system of two qubits with repeated measurements, shedding light on a quantum-inspired efficient classical algorithm. ×
Properties of g->bb at small opening angles in pp collisions with the ATLAS detector at sqrt(s) = 13 TeV
ATLAS Collaboration
Phys. Rev. D 99 (2019) 052004 · e-Print: 1812.09283
Cite Article
@article{1812.09283,
author="{ATLAS Collaboration}",
title="{Properties of $g\rightarrow b\bar{b}$ at small opening angles in $pp$ collisions with the ATLAS detector at $\sqrt{s} = 13$ TeV}",
eprint="1812.09283",
journal="Phys. Rev. D",
volume = "99",
pages = "052004",
archivePrefix = "arXiv",
doi = "10.1103/PhysRevD.99.052004",
primaryClass = "hep-ex",
year = "2019",
}
×
Properties of g->bb at small opening angles in pp collisions with the ATLAS detector at sqrt(s) = 13 TeV
The fragmentation of high-energy gluons at small opening angles is largely unconstrained by present measurements. Gluon splitting to b-quark pairs is a unique probe into the properties of gluon fragmentation because identified b-tagged jets provide a proxy for the quark daughters of the initial gluon. In this study, key differential distributions related to the g->bb process are measured using 33/fb of sqrt(s) = 13 TeV pp collision data recorded by the ATLAS experiment at the LHC in 2016. Jets constructed from charged-particle tracks, clustered with the anti-kt jet algorithm with radius parameter R = 0.2, are used to probe angular scales below the R = 0.4 jet radius. The observables are unfolded to particle level in order to facilitate direct comparisons with predictions from present and future simulations. Multiple significant differences are observed between the data and parton shower Monte Carlo predictions, providing input to improve these predictions of the main source of background events in analyses involving boosted Higgs bosons decaying into b-quarks. ×
Charm-quark Yukawa Coupling in Higgs to ccy at the LHC
T. Han, B. Nachman, X. Wang
Phys. Lett. B 793 (2019) 90 · e-Print: 1812.06992
Cite Article
@article{1812.06992,
author="T. Han, B. Nachman, X. Wang",
title="{Charm-quark Yukawa Coupling in Higgs to $c\bar{c}\gamma$ at the LHC}",
eprint="1812.06992",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Lett. B",
volume = "793",
pages = "90",
doi = "10.1016/j.physletb.2019.04.031",
year = "2019",
}
×
Charm-quark Yukawa Coupling in Higgs to ccy at the LHC
It is extremely challenging to probe the charm-quark Yukawa coupling at hadron colliders, primarily due to the large Standard Model (SM) background (including Higgs to bb) and the lack of an effective trigger for the signal h to cc. We examine the feasibility of probing this coupling at the LHC via the Higgs radiative decay h to ccy. The additional photon in the final state may help with signal identification and background suppression. Adopting a refined triggering strategy and utilizing basic machine learning, we find that a coupling limit of about 8 times the SM value may be reached with 2σ sensitivity after the High Luminosity LHC (HL-LHC). Our result is comparable and complementary to other projections for direct and indirect probes of h to cc at the HL-LHC. Without a significant change in detector capabilities, there would be no significant improvement for this search from higher energy hadron colliders. ×
Prospects for Dark Matter searches in mono-photon and VBF+MET final states in ATLAS
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2018-038
Cite Article
@article{ATL-PHYS-PUB-2018-038,
author="{ATLAS Collaboration}",
title="{Prospects for Dark Matter searches in mono-photon and VBF+MET final states in ATLAS}",
journal = "ATL-PHYS-PUB-2018-038",
url = "http://cdsweb.cern.ch/record/2649443",
year = "2018",
}
×
Prospects for Dark Matter searches in mono-photon and VBF+MET final states in ATLAS
This document presents a prospect study for dark matter searches with the ATLAS detector at luminosities expected at the HL-LHC. A scenario is considered in which the Standard Model is extended by an electroweak fermionic triplet with zero hypercharge. The lightest mass state of the triplet constitutes a weakly interacting massive particle dark matter candidate. This model is inspired by Supersymmetry and by the Minimal Dark Matter setup, and provides a benchmark in the spirit of simplified models. Projections for an integrated luminosity of 3000/fb are presented for the dark matter searches in the mono-photon and VBF+MET final states, based on the Run 2 analysis strategies. To illustrate the experimental challenges associated with the high pile-up environment at high luminosity, the VBF+MET topology is considered and the effect of the increased pile-up on VBF production of an invisibly decaying Higgs boson is studied as a benchmark process. ×
Investigating the Topology Dependence of Quark and Gluon Jets
S. Bright-Thonney and B. Nachman
JHEP 03 (2019) 098 · e-Print: 1810.05653
Cite Article
@article{1810.05653,
author="S. Bright-Thonney and B. Nachman",
title="{Investigating the Topology Dependence of Quark and Gluon Jets}",
eprint="1810.05653",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "03",
pages = "098",
doi = "10.1007/JHEP03(2019)098",
year = "2019",
}
×
Investigating the Topology Dependence of Quark and Gluon Jets
As most target final states for searches and measurements at the Large Hadron Collider have a particular quark/gluon composition, tools for distinguishing quark- from gluon-initiated jets can be very powerful. In addition to the difficulty of the classification task, quark-versus-gluon jet tagging is challenging to calibrate. The difficulty arises from the topology dependence of quark-versus-gluon jet tagging: since quarks and gluons have net quantum chromodynamic color charge while only colorless hadrons are measured, the radiation pattern inside a jet of a particular type depends on the rest of its environment. Given a definition of a quark or gluon jet, this paper studies the topology dependence of such jets in simulation. A set of phase space regions and jet substructure observables are identified for further comparative studies between generators and eventually in data. ×
Leveraging the ALICE/L3 cavern for long-lived exotics
V. Gligorov, S. Knapen, B. Nachman, M. Papucci, D. Robinson
Phys. Rev. D 99 (2019) 015023 · e-Print: 1810.03636
Cite Article
@article{1810.03636,
author="V. Gligorov, S. Knapen, B. Nachman, M. Papucci, D. Robinson",
title="{Leveraging the ALICE/L3 cavern for long-lived exotics}",
eprint="1810.03636",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "99",
pages = "015023",
doi = "10.1103/PhysRevD.99.015023",
year = "2019",
}
×
Leveraging the ALICE/L3 cavern for long-lived exotics
Run 5 of the HL-LHC era (and beyond) may provide new opportunities to search for physics beyond the standard model (BSM) at interaction point 2 (IP2). In particular, taking advantage of the existing ALICE detector and infrastructure provides an opportunity to search for displaced decays of beyond standard model long-lived particles (LLPs). While this proposal may well be preempted by ongoing ALICE physics goals, examination of its potential new physics reach provides a compelling comparison with respect to other LLP proposals. In particular, full event reconstruction and particle identification could be possible by making use of the existing L3 magnet and ALICE time projection chamber. For several well-motivated portals, the reach competes with or exceeds the sensitivity of MATHUSLA and SHiP, provided that a total integrated luminosity of approximately 100/fb could be delivered to IP2. ×
Generalized Numerical Inversion: A Neural Network Approach to Jet Calibration
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2018-013
Cite Article
@article{ATL-PHYS-PUB-2018-013,
author="{ATLAS Collaboration}",
title="{Generalized Numerical Inversion: A Neural Network Approach to Jet Calibration}",
journal = "ATL-PHYS-PUB-2018-013",
url = "http://cdsweb.cern.ch/record/2630972",
year = "2018",
}
×
Generalized Numerical Inversion: A Neural Network Approach to Jet Calibration
Jets that are reconstructed by the ATLAS detector are corrected to ensure that the reported energy is an unbiased measurement of the particle-level jet energy. This jet energy scale correction consists of multiple steps where features of the reconstructed jet are used sequentially, in order to improve the resolution and reduce the differences between quark and gluon jets (flavor dependence). This study reports on a new method based on multivariate regression, demonstrated with neural networks, that generalizes the current (iterated) one-dimensional technique for performing jet energy scale corrections (numerical inversion); the new method is called generalized numerical inversion. The new method remains an unbiased measurement of the particle-level energy, but allows for simultaneously using multiple features, such as the number of tracks inside jets and the average track radius, in order to account for correlations in the dependencies between features and with the jet energy. This new procedure can further improve the jet energy resolution and flavor dependence beyond a sequential approach and can be systematically improved by exploiting more variables and their interdependence with the jet energy response. ×
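The baseline technique being generalized — numerical inversion — can be sketched in one dimension: profile the average response R = E_reco/E_true as a function of true energy, then correct each jet by 1/R evaluated at the iteratively inverted energy. The toy below uses an invented response function and is not ATLAS calibration code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy truth spectrum and an invented energy-dependent response below one.
e_true = rng.uniform(100.0, 1000.0, 20000)
response = 0.7 + 0.05 * np.log(e_true / 100.0)
e_reco = response * e_true * rng.normal(1.0, 0.05, e_true.size)

# Step 1: profile the average response in bins of true energy.
bins = np.linspace(100.0, 1000.0, 19)
idx = np.clip(np.digitize(e_true, bins) - 1, 0, len(bins) - 2)
r_avg = np.array([(e_reco[idx == i] / e_true[idx == i]).mean()
                  for i in range(len(bins) - 1)])
centers = 0.5 * (bins[:-1] + bins[1:])

# Step 2: numerically invert -- solve R(E) * E = E_reco for E by iteration,
# so the correction can be applied knowing only the reconstructed energy.
def calibrate(e, n_iter=5):
    e_cal = np.copy(e)
    for _ in range(n_iter):
        e_cal = e / np.interp(e_cal, centers, r_avg)
    return e_cal

closure = float(np.mean(calibrate(e_reco) / e_true))  # ~1 if unbiased
```

The generalization described in the note replaces the one-dimensional profile with a neural-network regression over several jet features at once.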
Impact of Pile-up on Jet Constituent Multiplicity in ATLAS
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2018-011
Cite Article
@article{ATL-PHYS-PUB-2018-011,
author="{ATLAS Collaboration}",
title="{Impact of Pile-up on Jet Constituent Multiplicity in ATLAS}",
journal = "ATL-PHYS-PUB-2018-011",
url = "http://cdsweb.cern.ch/record/2630603",
year = "2018",
}
×
Impact of Pile-up on Jet Constituent Multiplicity in ATLAS
One of the biggest challenges facing jet constituent multiplicity measurements is contamination from extraneous sources of soft radiation, such as pile-up. This note studies the impact of pile-up on various definitions of multiplicity. In particular, the impact of selected pile-up suppression techniques is evaluated: soft-drop declustering and constituent subtraction. Additionally, a first study of iterative soft-drop (ISD) multiplicity in ATLAS is presented. It is found that the application of pile-up suppression counteracts the increase in the measured multiplicity values due to pile-up, as expected. As constituent multiplicity is the single most important observable for distinguishing quark-initiated from gluon-initiated jets, these studies are an important input for developing a powerful and robust quark-versus-gluon jet tagger in the future. ×
Boosting H to bb with Machine Learning
J. Lin, M. Freytsis, I. Moult, and B. Nachman
JHEP 10 (2018) 101 · e-Print: 1807.10768
Cite Article
@article{1807.10768,
author="J. Lin, M. Freytsis, I. Moult, and B. Nachman",
title="{Boosting H to bb with Machine Learning}",
eprint="1807.10768",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "10",
pages = "101",
doi = "10.1007/JHEP10(2018)101",
year = "2018",
}
×
Boosting H to bb with Machine Learning
High p_T Higgs production at hadron colliders provides a direct probe of the internal structure of the gg to H loop, with the H to bb decay offering the most statistics due to the large branching ratio. Despite the overwhelming QCD background, recent advances in jet substructure have put the observation of the gg to H to bb channel at the LHC within the realm of possibility. In order to enhance the sensitivity to this process, we develop a two-stream convolutional neural network, with one stream acting on jet information and one using global event properties. The neural network significantly increases the discovery potential of a Higgs signal, both for high p_T Standard Model production as well as for possible beyond the Standard Model contributions. Unlike most studies for boosted hadronically decaying massive particles, the boosted Higgs search is unique because double b-tagging rejects nearly all background processes that do not have two hard prongs. In this context, which goes beyond state-of-the-art two-prong tagging, the network is studied to identify the origin of the additional information leading to the increased significance. The procedures described here are also applicable to related final states where they can be used to identify additional sources of discrimination power that are not being exploited by current techniques. ×
Modeling the Mobility and Lorentz angle for the ATLAS Pixel Detector
ATLAS Collaboration
Public note: ATL-INDET-PUB-2018-001
Cite Article
@article{ATL-INDET-PUB-2018-001,
author="{ATLAS Collaboration}",
title="{Modeling the Mobility and Lorentz angle for the ATLAS Pixel Detector}",
journal = "ATL-INDET-PUB-2018-001",
url = "http://cds.cern.ch/record/2629889",
year = "2018",
}
×
Modeling the Mobility and Lorentz angle for the ATLAS Pixel Detector
The electron and hole mobility plays a key role in determining the cluster shape in the ATLAS silicon pixel detectors. A proper model of the mobility is therefore important for properly simulating track reconstruction. Various mobility models are studied in the ATLAS simulation framework to determine their predictions for Run 2 conditions. One way to probe the mobility directly in data is to study the Lorentz angle. Measurements of the Lorentz angle are used to assess the mobility model quality. As a result of these studies, the default mobility model is updated from the Run 1 default. ×
Limits on new coloured fermions using precision jet data from the Large Hadron Collider
J. Llorente and B. Nachman
Nucl. Phys. B 936 (2018) 106 · e-Print: 1807.00894
Cite Article
@article{1807.00894,
author="J. Llorente and B. Nachman",
title="{Limits on new coloured fermions using precision jet data from the Large Hadron Collider}",
eprint="1807.00894",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Nucl. Phys. B",
volume = "936",
pages = "106",
doi = "10.1016/j.nuclphysb.2018.09.008",
year = "2018",
}
Limits on new coloured fermions using precision jet data from the Large Hadron Collider
This work presents an interpretation of high precision jet data from the ATLAS experiment in terms of exclusion limits for new coloured matter. To this end, the effect of a new coloured fermion with a mass m_X on the solution of the renormalization group equation of QCD is studied. Theoretical predictions for the transverse energy-energy correlation function and its asymmetry are obtained with such a modified solution and, from the comparison to data, 95% CL exclusion limits are set on such models.
Identifying merged clusters in the ATLAS strip detector
ATLAS Collaboration
Public note: ATL-INDET-PROC-2018-006
Cite Article
@article{ATL-INDET-PROC-2018-006,
author="{ATLAS Collaboration}",
title="{Identifying merged clusters in the ATLAS strip detector}",
journal = "ATL-INDET-PROC-2018-006",
url = "https://cds.cern.ch/record/2630070",
year = "2018",
}
Identifying merged clusters in the ATLAS strip detector
Tracking in high density environments, particularly in high energy jets, plays an important role in many physics analyses at the LHC. In such environments, there is degradation of track reconstruction performance in ATLAS due to hit-merging in the pixel and strip detectors. We present a new algorithm for determining which strip clusters come from multiple particles. The performance of this technique is found to be competitive with the existing algorithm for identifying merged pixel clusters. We also show the gain in reconstruction efficiency achieved by allowing these clusters to be shared by multiple tracks.
Electromagnetic Showers Beyond Shower Shapes
L. de Oliveira, B. Nachman, M. Paganini
Nucl. Instrum. Meth. A 951 (2020) 162879 · e-Print: 1806.05667
Cite Article
@article{1806.05667,
author="L. de Oliveira and B. Nachman and M. Paganini",
title="{Electromagnetic Showers Beyond Shower Shapes}",
eprint="1806.05667",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Nucl. Instrum. Meth. A",
volume = "951",
pages = "162879",
doi = "10.1016/j.nima.2019.162879",
year = "2020",
}
Electromagnetic Showers Beyond Shower Shapes
Correctly identifying the nature and properties of outgoing particles from high energy collisions at the Large Hadron Collider is a crucial task for all aspects of data analysis. Classical calorimeter-based classification techniques rely on shower shapes -- observables that summarize the structure of the particle cascade that forms as the original particle propagates through the layers of material. This work compares shower shape-based methods with computer vision techniques that take advantage of lower level detector information. In a simplified calorimeter geometry, our DenseNet-based architecture matches or outperforms other methods on e^+-\gamma and e^+-\pi^+ classification tasks. In addition, we demonstrate that key kinematic properties can be inferred directly from the shower representation in image format.
Probing the quantum interference between singly and doubly resonant top-quark production in pp collisions at sqrt(s) = 13 TeV with the ATLAS detector
ATLAS Collaboration
Phys. Rev. Lett. 121 (2018) 152002 · e-Print: 1806.04667
Cite Article
@article{1806.04667,
author="{ATLAS Collaboration}",
title="{Probing the quantum interference between singly and doubly resonant top-quark production in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector}",
eprint="1806.04667",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. Lett.",
volume = "121",
pages = "152002",
doi = "10.1103/PhysRevLett.121.152002",
year = "2018",
}
Probing the quantum interference between singly and doubly resonant top-quark production in pp collisions at sqrt(s) = 13 TeV with the ATLAS detector
This Letter presents a normalized differential cross-section measurement in a fiducial phase-space region where interference effects between top-quark pair production and associated production of a single top quark with a W boson and a b-quark are significant. Events with exactly two leptons (ee, \mu\mu, or e\mu) and two b-tagged jets that satisfy a multi-particle invariant mass requirement are selected from 36.1/fb of proton-proton collision data taken at sqrt(s) = 13 TeV with the ATLAS detector at the LHC in 2015 and 2016. The results are compared with predictions from simulations using various strategies for the interference. The standard prescriptions for interference modeling are significantly different from each other but are within 2 sigma of the data. State-of-the-art predictions that naturally incorporate interference effects provide the best description of the data in the measured region of phase space most sensitive to these effects. These results provide an important constraint on interference models and will guide future model development and tuning.
CWoLa Hunting: Extending the Bump Hunt with Machine Learning
J. Collins, K. Howe, B. Nachman
Phys. Rev. Lett. 121 (2018) 241803 · e-Print: 1805.02664
Cite Article
@article{1805.02664,
author="J. Collins and K. Howe and B. Nachman",
title="{CWoLa Hunting: Extending the Bump Hunt with Machine Learning}",
eprint="1805.02664",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. Lett.",
volume = "121",
pages = "241803",
doi = "10.1103/PhysRevLett.121.241803",
year = "2018",
}
CWoLa Hunting: Extending the Bump Hunt with Machine Learning
Despite extensive theoretical motivation for physics beyond the Standard Model (BSM) of particle physics, searches at the Large Hadron Collider (LHC) have found no significant evidence for BSM physics. Therefore, it is essential to broaden the sensitivity of the search program to include unexpected scenarios. We present a new model-agnostic anomaly detection technique that naturally benefits from modern machine learning algorithms. The only requirement on the signal for this new procedure is that it is localized in at least one known direction in phase space. Any other directions of phase space that are uncorrelated with the localized one can be used to search for unexpected features. This new method is applied to the dijet resonance search to show that it can turn a modest 2 sigma excess into a 7 sigma excess for a model with an intermediate BSM particle that is not currently targeted by a dedicated search.
The Optimal Use of Silicon Pixel Charge Information for Particle Identification
H. Patton and B. Nachman
Nucl. Instrum. Meth. A 913 (2018) 91 · e-Print: 1803.08974
Cite Article
@article{1803.08974,
author="H. Patton and B. Nachman",
title="{The Optimal Use of Silicon Pixel Charge Information for Particle Identification}",
eprint="1803.08974",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "Nucl. Instrum. Meth. A",
volume = "913",
pages = "91",
doi = "10.1016/j.nima.2018.10.120",
year = "2018",
}
The Optimal Use of Silicon Pixel Charge Information for Particle Identification
Particle identification using the energy loss in silicon detectors is a powerful technique for probing the Standard Model (SM) as well as searching for new particles beyond the SM. Traditionally, such techniques use the truncated mean of the energy loss on multiple layers, in order to mitigate heavy tails in the charge fluctuation distribution. We show that the optimal scheme using the charge in multiple layers significantly outperforms the truncated mean. Truncation itself does not significantly degrade performance and the optimal classifier is well-approximated by a linear combination of the truncated mean and truncated variance.
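The truncated-mean idea summarized above can be illustrated with a toy model. The following is a minimal sketch, not the paper's actual simulation: per-layer charge is drawn from an assumed Gaussian core plus a Pareto tail as a stand-in for Landau-like straggling, and the spread of the plain mean is compared with that of the truncated mean.

```python
import random
import statistics

random.seed(7)

def truncated_mean(charges, keep=0.7):
    """Average the lowest `keep` fraction of per-layer charges,
    discarding the heavy upper tail of the straggling distribution."""
    kept = sorted(charges)[: max(1, int(len(charges) * keep))]
    return statistics.fmean(kept)

# Toy heavy-tailed energy loss per layer (arbitrary charge units):
# a narrow Gaussian core plus a Pareto-distributed upper tail.
def deposit():
    return random.gauss(1.0, 0.1) + 0.05 * random.paretovariate(1.5)

# Compare the spread of the two estimators over simulated 8-layer tracks.
tracks = [[deposit() for _ in range(8)] for _ in range(5000)]
plain = [statistics.fmean(t) for t in tracks]
trunc = [truncated_mean(t) for t in tracks]

# Truncation narrows the per-track estimator distribution.
print(statistics.stdev(plain) > statistics.stdev(trunc))
```

The 70% retention fraction and the tail model are illustrative choices; the point is only that dropping the largest deposits suppresses the heavy-tail fluctuations that dominate the plain mean.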
Towards extracting the strong coupling constant from jet substructure at the LHC
I. Moult, B. Nachman, G. Soyez, J. Thaler et al.
Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report · e-Print: 1803.07977
Cite Article
@article{1803.07977,
author="I. Moult and B. Nachman and G. Soyez and J. Thaler and others",
title="{Towards extracting the strong coupling constant from jet substructure at the LHC}",
eprint="1803.07977",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report",
year = "2018",
}
Towards extracting the strong coupling constant from jet substructure at the LHC
This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.
Jet Substructure at the Large Hadron Collider: Experimental Review
R. Kogler, B. Nachman, A. Schmidt (editors), et al.
Rev. Mod. Phys. 91 (2019) 045003 · e-Print: 1803.06991
Cite Article
@article{1803.06991,
author="R. Kogler and B. Nachman and A. Schmidt and others",
title="{Jet Substructure at the Large Hadron Collider: Experimental Review}",
eprint="1803.06991",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Rev. Mod. Phys.",
volume = "91",
pages = "045003",
doi = "10.1103/RevModPhys.91.045003",
year = "2019",
}
Jet Substructure at the Large Hadron Collider: Experimental Review
Jet substructure has emerged to play a central role at the Large Hadron Collider, where it has provided numerous innovative ways to search for new physics and to probe the Standard Model, particularly in extreme regions of phase space. In this article we focus on a review of the development and use of state-of-the-art jet substructure techniques by the ATLAS and CMS experiments.
Learning to Classify from Impure Samples
P. Komiske, E. Metodiev, B. Nachman, and M. Schwartz
Phys. Rev. D 98 (2018) 011502(R) · e-Print: 1801.10158
Cite Article
@article{1801.10158,
author="P. Komiske and E. Metodiev and B. Nachman and M. Schwartz",
title="{Learning to Classify from Impure Samples}",
eprint="1801.10158",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "98",
pages = "011502(R)",
doi = "10.1103/PhysRevD.98.011502",
year = "2018",
}
Learning to Classify from Impure Samples
A persistent challenge in practical classification tasks is that labeled training sets are not always available. In particle physics, this challenge is surmounted by the use of simulations. These simulations accurately reproduce most features of data, but cannot be trusted to capture all of the complex correlations exploitable by modern machine learning methods. Recent work in weakly supervised learning has shown that simple, low-dimensional classifiers can be trained using only the impure mixtures present in data. Here, we demonstrate that complex, high-dimensional classifiers can also be trained on impure mixtures using weak supervision techniques, with performance comparable to what could be achieved with pure samples. Using weak supervision will therefore allow us to avoid relying exclusively on simulations for high-dimensional classification. This work opens the door to a new regime whereby complex models are trained directly on data, providing direct access to probe the underlying physics.
CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks
M. Paganini, L. de Oliveira, and B. Nachman
Phys. Rev. D 97 (2018) 014021 · e-Print: 1712.10321
Cite Article
@article{1712.10321,
author="M. Paganini and L. de Oliveira and B. Nachman",
title="{CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks}",
eprint="1712.10321",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. D",
volume = "97",
pages = "014021",
doi = "10.1103/PhysRevD.97.014021",
year = "2018",
}
CaloGAN: Simulating 3D high energy particle showers in multilayer electromagnetic calorimeters with generative adversarial networks
The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter, and achieve speedup factors comparable to or better than existing full simulation techniques on CPU (100x-1000x) and even faster on GPU (up to about 10^5x). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.
A measurement of the soft-drop jet mass in pp collisions at sqrt(s) = 13 TeV with the ATLAS detector
ATLAS Collaboration
Phys. Rev. Lett. 121 (2018) 092001 · e-Print: 1711.08341
Cite Article
@article{1711.08341,
author="{ATLAS Collaboration}",
title="{A measurement of the soft-drop jet mass in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector}",
eprint="1711.08341",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. Lett.",
volume = "121",
pages = "092001",
doi = "10.1103/PhysRevLett.121.092001",
year = "2018",
}
A measurement of the soft-drop jet mass in pp collisions at sqrt(s) = 13 TeV with the ATLAS detector
Jet substructure observables have significantly extended the search program for physics beyond the Standard Model at the Large Hadron Collider. The state-of-the-art tools have been motivated by theoretical calculations, but there has never been a direct comparison between data and calculations of jet substructure observables that are accurate beyond leading-logarithm approximation. Such observables are significant not only for probing the collinear regime of QCD that is largely unexplored at a hadron collider, but also for improving the understanding of jet substructure properties that are used in many studies at the Large Hadron Collider. This Letter documents a measurement of the first jet substructure quantity at a hadron collider to be calculated at next-to-next-to-leading-logarithm accuracy. The normalized, differential cross-section is measured as a function of log_10 \rho^2, where \rho is the ratio of the soft-drop mass to the ungroomed jet transverse momentum. This quantity is measured in dijet events from 32.9/fb of sqrt(s) = 13 TeV proton-proton collisions recorded by the ATLAS detector. The data are unfolded to correct for detector effects and compared to precise QCD calculations and leading-logarithm particle-level Monte Carlo simulations.
The Impact of Incorporating Shell-corrections to Energy Loss in Silicon
F. Wang, S. Dong, B. Nachman, M. Garcia-Sciveres, Q. Zeng
Nucl. Instrum. Meth. A 899 (2018) 1 · e-Print: 1711.05465
Cite Article
@article{1711.05465,
author="F. Wang and S. Dong and B. Nachman and M. Garcia-Sciveres and Q. Zeng",
title="{The Impact of Incorporating Shell-corrections to Energy Loss in Silicon}",
eprint="1711.05465",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "Nucl. Instrum. Meth. A",
volume = "899",
pages = "1",
doi = "10.1016/j.nima.2018.04.061",
year = "2018",
}
The Impact of Incorporating Shell-corrections to Energy Loss in Silicon
Modern silicon tracking detectors based on hybrid or fully integrated CMOS technology are continuing to push to thinner sensors. The ionization energy loss fluctuation in very thin silicon sensors significantly deviates from the Landau distribution. Therefore, we have developed a charge deposition setup that implements the Bichsel straggling function, which accounts for shell effects. This enhanced simulation is important for comparisons with testbeam or collision data from thin sensors, as demonstrated by its more realistic reproduction of the degraded position resolution relative to naïve ionization models based on simple Landau-like fluctuations. Our implementation of the Bichsel model and the multipurpose photo absorption ionization (PAI) model in Geant4 produce similar results above a few microns thickness. Below a few microns, the PAI model does not fully capture the complete shell effects that are in the Bichsel model. The code is made publicly available as part of the Allpix software package in order to facilitate predictions for new detector designs and comparisons with testbeam data.
Ultimate position resolution of pixel clusters with binary readout for particle tracking
F. Wang, B. Nachman, M. Garcia-Sciveres
Nucl. Instrum. Meth. A 899 (2018) 10 · e-Print: 1711.00590
Cite Article
@article{1711.00590,
author="F. Wang and B. Nachman and M. Garcia-Sciveres",
title="{Ultimate position resolution of pixel clusters with binary readout for particle tracking}",
eprint="1711.00590",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "Nucl. Instrum. Meth. A",
volume = "899",
pages = "10",
doi = "10.1016/j.nima.2018.04.053",
year = "2018",
}
Ultimate position resolution of pixel clusters with binary readout for particle tracking
Silicon tracking detectors can record the charge in each channel (analog or digitized) or have only binary readout (hit or no hit). While there is significant literature on the position resolution obtained from interpolation of charge measurements, a comprehensive study of the resolution obtainable with binary readout is lacking. It is commonly assumed that the binary resolution is pitch/sqrt(12), but this is generally a worst case upper limit. In this paper we study, using simulation, the best achievable resolution for minimum ionizing particles in binary readout pixels. A wide range of incident angles and pixel sizes are simulated with a standalone code, using the Bichsel model for charge deposition. The results show how the resolution depends on angles and sensor geometry. Until the pixel pitch becomes so small as to be comparable to the distance between energy deposits in silicon, the resolution is always better, and in some cases much better, than pitch/sqrt(12).
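The pitch/sqrt(12) baseline quoted in this abstract follows from treating a single-pixel binary hit as a uniform position measurement: the best estimate is the pixel centre, so the residual is uniform over the pitch. A minimal sketch of that limit (illustrative pitch value; this is not the paper's Bichsel-based simulation):

```python
import random
import statistics

random.seed(0)

pitch = 50.0  # pixel pitch in microns (illustrative)

# Binary readout, single hit pixel: reconstruct at the pixel centre,
# so the residual is uniform on [-pitch/2, +pitch/2].
residuals = []
for _ in range(200_000):
    true_x = random.uniform(0.0, pitch)  # impact point within the pixel
    reco_x = pitch / 2.0                 # pixel-centre estimate
    residuals.append(reco_x - true_x)

resolution = statistics.pstdev(residuals)
print(resolution, pitch / 12 ** 0.5)  # sample RMS vs analytic pitch/sqrt(12)
```

Multi-pixel clusters (e.g. from inclined tracks) carry extra edge information, which is how the resolution can beat this worst-case limit, as the paper quantifies.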
Convolved Substructure: Analytically Decorrelating Jet Substructure Observables
I. Moult and B. Nachman
e-Print: 1710.06859
A number of recent applications of jet substructure, in particular searches for light new particles, require substructure observables that are decorrelated with the jet mass. In this paper we introduce the Convolved SubStructure (CSS) approach, which uses a theoretical understanding of the observable to decorrelate the complete shape of its distribution. This decorrelation is performed by convolution with a shape function whose parameters and mass dependence are derived analytically. We consider in detail the case of the D_2 observable and perform an illustrative case study using a search for a light hadronically decaying Z'. We find that the CSS approach completely decorrelates the D_2 observable over a wide range of masses. Our approach highlights the importance of improving the theoretical understanding of jet substructure observables to exploit increasingly subtle features for performance.
Optimal use of Charge Information for the HL-LHC Pixel Detector Readout
Y. Chen, E. Frangipane, M. Garcia-Sciveres, L. Jeanty, B. Nachman, S. Pagan Griso, F. Wang
Nucl. Instrum. Meth. A 902 (2018) 197 · e-Print: 1710.02582
Cite Article
@article{1710.02582,
author="Y. Chen and E. Frangipane and M. Garcia-Sciveres and L. Jeanty and B. Nachman and S. Pagan Griso and F. Wang",
title="{Optimal use of Charge Information for the HL-LHC Pixel Detector Readout}",
eprint="1710.02582",
archivePrefix = "arXiv",
primaryClass = "physics.ins-det",
journal = "Nucl. Instrum. Meth. A",
volume = "902",
pages = "197",
doi = "10.1016/j.nima.2018.01.091",
year = "2018",
}
Optimal use of Charge Information for the HL-LHC Pixel Detector Readout
The pixel detectors for the High Luminosity upgrades of the ATLAS and CMS detectors will preserve digitized charge information in spite of extremely high hit rates. Both circuit physical size and output bandwidth will limit the number of bits to which charge can be digitized and stored. We therefore study the effect of the number of bits used for digitization and storage on single and multi-particle cluster resolution, efficiency, classification, and particle identification. We show how performance degrades as fewer bits are used to digitize and to store charge. We find that with limited charge information (4 bits), one can achieve near optimal performance on a variety of tasks.
Technical Design Report for the ATLAS Inner Tracker Pixel Detector
ATLAS Collaboration
Public note: ATLAS-TDR-030
Cite Article
@article{ATLAS-TDR-030,
author="{ATLAS Collaboration}",
title="{Technical Design Report for the ATLAS Inner Tracker Pixel Detector}",
journal = "ATLAS-TDR-030",
url = "http://cdsweb.cern.ch/record/2285585",
year = "2017",
}
Technical Design Report for the ATLAS Inner Tracker Pixel Detector
This is the second of two Technical Design Report documents that describe the upgrade of the central tracking system for the ATLAS experiment for the operation at the High Luminosity LHC (HL-LHC) starting in the middle of 2026. At that time the LHC will have been upgraded to reach a peak instantaneous luminosity of 7.5 x 10^{34} cm^{-2}s^{-1}, which corresponds to an average of about 200 inelastic proton-proton collisions per beam-crossing. The new Inner Tracker (ITk) will be operational for more than ten years, during which time ATLAS aims to accumulate a total data set of 4000/fb. Many of the features of the tracker have already been presented in the first Technical Design Report, published in April 2017, which detailed the construction of the ITk Strip Tracker. This document focuses on the ITk Pixel Detector. A baseline design is described in detail, and the motivations for the chosen technologies are illustrated. In some cases, alternative solutions are also illustrated; for these, we indicate the advantages of pursuing the other designs and describe the timeline for a decision. The design, construction and expected performance are set out in detail. When considering performance we pay particular attention to those parameters that are determined by the performance of the Pixel Detector. We describe in detail the design and construction of the Pixel Detector, including the results of measurements of prototype modules and associated support structures, and we explain the status of the plans for their mass production. We present details of the decommissioning of the existing tracking detector and the replacement of the inner layers of the ITk Pixel Detector part way through the lifetime of the High Luminosity LHC. Finally, we describe the costing and schedule, including major milestones, to construct the detector.
Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning
A. J. Larkoski, I. Moult, and B. Nachman
Physics Reports 841 (2020) 1 · e-Print: 1709.04464
Cite Article
@article{1709.04464,
author="A. J. Larkoski and I. Moult and B. Nachman",
title="{Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning}",
eprint="1709.04464",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Physics Reports",
volume = "841",
pages = "1",
doi = "10.1016/j.physrep.2019.11.001",
year = "2020",
}
Jet Substructure at the Large Hadron Collider: A Review of Recent Advances in Theory and Machine Learning
Jet substructure has emerged to play a central role at the Large Hadron Collider (LHC), where it has provided numerous innovative new ways to search for new physics and to probe the Standard Model in extreme regions of phase space. In this article we provide a comprehensive review of state of the art theoretical and machine learning developments in jet substructure. This article is meant both as a pedagogical introduction, covering the key physical principles underlying the calculation of jet substructure observables, the development of new observables, and cutting edge machine learning techniques for jet substructure, as well as a comprehensive reference for experts. We hope that it will prove a useful introduction to the exciting and rapidly developing field of jet substructure at the LHC.
Observables for possible QGP signatures in central pp collisions
M. Mangano and B. Nachman
Eur. Phys. J. C 78 (2018) 343 · e-Print: 1708.08369
Cite Article
@article{1708.08369,
author="M. Mangano and B. Nachman",
title="{Observables for possible QGP signatures in central $pp$ collisions}",
eprint="1708.08369",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Eur. Phys. J. C",
volume = "78",
pages = "343",
doi = "10.1140/epjc/s10052-018-5826-9",
year = "2018",
}
Observables for possible QGP signatures in central pp collisions
Proton-proton (pp) data show collective effects, such as long-range azimuthal correlations and strangeness enhancement, which are similar to phenomenology observed in heavy ion collisions. Using simulations with and without explicit existing models of collective effects, we explore new ways to probe pp collisions at high multiplicity, in order to suggest measurements that could help identify the similarities and differences between large- and small-scale collective effects. In particular, we focus on the properties of jets produced in ultra-central pp collisions in association with a Z boson. We consider observables such as jet energy loss and jet shapes, which could point to the possible existence of an underlying quark-gluon plasma, or other new dynamical effects related to the presence of large hadronic densities.
Classification without labels: Learning from mixed samples in high energy physics
E. Metodiev, B. Nachman, J. Thaler
JHEP 10 (2017) 174 · e-Print: 1708.02949
Cite Article
@article{1708.02949,
author="E. Metodiev and B. Nachman and J. Thaler",
title="{Classification without labels: Learning from mixed samples in high energy physics}",
eprint="1708.02949",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "10",
pages = "174",
doi = "10.1007/JHEP10(2017)174",
year = "2017",
}
Classification without labels: Learning from mixed samples in high energy physics
Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available. ×
Modelling of Track Reconstruction Inside Jets with the 2016 ATLAS sqrt(s) = 13 TeV pp Dataset
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2017-016
Cite Article
@article{ATL-PHYS-PUB-2017-016,
author="{ATLAS Collaboration}",
title="{Modelling of Track Reconstruction Inside Jets with the 2016 ATLAS $\sqrt{s}=13$ TeV $pp$ Dataset}",
journal = "ATL-PHYS-PUB-2017-016",
url = "http://cdsweb.cern.ch/record/2275639",
year = "2017",
}
Modelling of Track Reconstruction Inside Jets with the 2016 ATLAS sqrt(s) = 13 TeV pp Dataset
Inside the core of high transverse momentum jets, the particle density is so high that the tracks of charged particles begin to overlap, and pixel clusters in the ATLAS inner detector produced by different charged particles begin to merge. This high density environment results in a degradation of track reconstruction. Recent innovations in the ambiguity solving of the charged particle pattern recognition partially mitigate the loss in performance. However, it is critical for all physics results using tracks inside jets that the algorithms be well modeled by simulation. This note presents new measurements of the charged particle reconstruction inefficiency and fake rate inside jets with the sqrt(s) = 13 TeV pp dataset collected by the ATLAS experiment at the LHC in 2016.
Quark and gluon tagging with Jet Images in ATLAS
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2017-017
Cite Article
@article{ATL-PHYS-PUB-2017-017,
author="{ATLAS Collaboration}",
title="{Quark and gluon tagging with Jet Images in ATLAS}",
journal = "ATL-PHYS-PUB-2017-017",
url = "http://cdsweb.cern.ch/record/2275641",
year = "2017",
}
Quark and gluon tagging with Jet Images in ATLAS
Distinguishing quark-initiated from gluon-initiated jets is useful for many measurements and searches at the LHC. This note presents a jet tagger for distinguishing quark-initiated from gluon-initiated jets, which uses the full radiation pattern inside a jet processed as an image in a deep neural network classifier. The study is conducted using simulated dijet events in sqrt(s) = 13 TeV pp collisions with the ATLAS detector. Across a wide range of quark jet identification efficiencies, the neural network tagger achieves a gluon jet rejection that is comparable to or better than the performance of the jet width and track multiplicity observables conventionally used for quark-versus-gluon jet tagging.
Jet reclustering and close-by effects in ATLAS Run 2
ATLAS Collaboration
Public note: ATLAS-CONF-2017-062
Cite Article
@article{ATLAS-CONF-2017-062,
author="{ATLAS Collaboration}",
title="{Jet reclustering and close-by effects in ATLAS Run 2}",
journal = "ATLAS-CONF-2017-062",
url = "http://cdsweb.cern.ch/record/2275655",
year = "2017",
}
Jet reclustering and close-by effects in ATLAS Run 2
The reconstruction of hadronically-decaying, high-p_T W, Z, and Higgs bosons and top quarks is instrumental to exploit the physics potential of the ATLAS detector in pp collisions at a centre-of-mass energy of 13 TeV at the Large Hadron Collider (LHC). The jet reclustering procedure reconstructs such objects by using calibrated anti-kt jets as inputs to the anti-kt algorithm with a larger distance parameter. The performance of these reclustered large-radius jets during LHC Run 2 is studied, and compared with that of trimmed anti-kt large-radius jets directly constructed from locally calibrated topological clusters. The propagation of calibrations to reclustered jets is found to be sufficient to restore their average energy and mass scales to particle level. The modelling of small-radius anti-kt jets in each other's vicinity is studied using methods which combine tracking and calorimeter information. Systematic uncertainties resulting from the propagation of the uncertainties on the input jets to the reclustering procedure are studied. Comparisons between 33.2/fb of data collected during 2016 operations and simulation are shown, and the relative jet mass scale and resolution for reclustered and conventional jets are extracted using the forward-folding technique.
In-situ measurements of large-radius jet reconstruction performance
ATLAS Collaboration
Public note: ATLAS-CONF-2017-063
Cite Article
@article{ATLAS-CONF-2017-063,
author="{ATLAS Collaboration}",
title="{In-situ measurements of large-radius jet reconstruction performance}",
journal = "ATLAS-CONF-2017-063",
url = "http://cdsweb.cern.ch/record/2275655",
year = "2017",
}
×
In-situ measurements of large-radius jet reconstruction performance
The response of the ATLAS experiment to groomed large-radius (R = 1.0) jets is measured in-situ with 33/fb of sqrt(s) = 13 TeV LHC proton-proton collisions collected in 2016. Results from several methods are combined. The jet transverse momentum scale and resolution are measured in events where the jet recoils against a reference object, either a calibrated photon, another jet, or a recoiling system of jets. The jet mass is constrained using mass peaks formed by boosted W-bosons and top quarks and by comparison to the jet mass calculated with track jets. Generally, the Monte Carlo description is found to be adequate. Small discrepancies are incorporated as in-situ corrections. The constraint on the transverse momentum scale is 1-2% for p_T < 2 TeV, and that on the mass scale is 2-4%. The p_T (mass) resolution is constrained to 10% (20%). ×
Pileup Mitigation with Machine Learning (PUMML)
P. Komiske, E. Metodiev, B. Nachman, and M. Schwartz
JHEP 12 (2017) 51. · e-Print: 1707.08600
Cite Article
@article{1707.08600,
author="P. Komiske, E. Metodiev, B. Nachman, and M. Schwartz",
title="{Pileup Mitigation with Machine Learning (PUMML)}",
eprint="1707.08600",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "12",
pages = "51",
doi = "10.1007/JHEP12(2017)051",
year = "2017",
}
×
Pileup Mitigation with Machine Learning (PUMML)
Pileup involves the contamination of the energy distribution arising from the primary collision of interest (leading vertex) by radiation from soft collisions (pileup). We develop a new technique for removing this contamination using machine learning and convolutional neural networks. The network takes as input the energy distribution of charged leading vertex particles, charged pileup particles, and all neutral particles and outputs the energy distribution of particles coming from leading vertex alone. The PUMML algorithm performs remarkably well at eliminating pileup distortion on a wide range of simple and complex jet observables. We test the robustness of the algorithm in a number of ways and discuss how the network can be trained directly on data. ×
Accelerating science with generative adversarial networks: An application to 3D particle showers in multilayer calorimeters
M. Paganini, L. de Oliveira, and B. Nachman
Phys. Rev. Lett. 120 (2018) 042003 · e-Print: 1705.02355
Cite Article
@article{1705.02355,
author="M. Paganini, L. de Oliveira, and B. Nachman",
title="{Accelerating science with generative adversarial networks: An application to 3D particle showers in multilayer calorimeters}",
eprint="1705.02355",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. Lett.",
volume = "120",
pages = "042003",
doi = "10.1103/PhysRevLett.120.042003",
year = "2018",
}
×
Accelerating science with generative adversarial networks: An application to 3D particle showers in multilayer calorimeters
Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theory modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speed-up factors of up to 100,000x. This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond. ×
Quark versus Gluon Jet Tagging Using Charged Particle Multiplicity with the ATLAS Detector
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2017-009
Cite Article
@article{ATL-PHYS-PUB-2017-009,
author="{ATLAS Collaboration}",
title="{Quark versus Gluon Jet Tagging Using Charged Particle Multiplicity with the ATLAS Detector}",
journal = "ATL-PHYS-PUB-2017-009",
url = "http://cdsweb.cern.ch/record/2263679",
year = "2017",
}
×
Quark versus Gluon Jet Tagging Using Charged Particle Multiplicity with the ATLAS Detector
Distinguishing quark-initiated from gluon-initiated jets is useful for many measurements and searches at the LHC. This note presents a quark-initiated versus gluon-initiated jet tagger using the number of charged particle tracks inside the jet. For a 60% quark jet efficiency working point, a gluon jet efficiency between 10 and 20% is achieved across a wide range in jet p_T with systematic uncertainties that are about 5%. ×
Weakly Supervised Classification in High Energy Physics
L. M. Dery, B. Nachman, F. Rubbo, A. Schwartzman
JHEP 05 (2017) 145. · e-Print: 1702.00414
Cite Article
@article{1702.00414,
author="L. M. Dery, B. Nachman, F. Rubbo, and A. Schwartzman",
title="{Weakly Supervised Classification in High Energy Physics}",
eprint="1702.00414",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "05",
pages = "145",
doi = "10.1007/JHEP05(2017)145",
year = "2017",
}
×
Weakly Supervised Classification in High Energy Physics
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available. ×
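As a toy illustration of this kind of weak supervision (this sketch is my own, not the paper's algorithm): fit a logistic model so that its mean output on each mixed sample matches that sample's known class proportion, which is the only label information used. The Gaussian feature and the fractions below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two mixed samples of one discriminating feature, with known signal fractions.
def make_sample(n, f_signal):
    is_sig = rng.random(n) < f_signal
    return np.where(is_sig, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

samples = [(make_sample(5000, 0.8), 0.8), (make_sample(5000, 0.3), 0.3)]

# Logistic model sigma(w*x + b); the only supervision is each sample's fraction.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, f in samples:
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        diff = p.mean() - f                 # loss is diff**2 per sample
        g = p * (1.0 - p) / len(x)          # d mean(p) / d logit, per event
        w -= lr * 2.0 * diff * float((g * x).sum())
        b -= lr * 2.0 * diff * float(g.sum())

# signal sits at x = +1, so a successfully trained model has positive weight
print(w > 0.0)
```

Matching proportions only constrains the mean output, so the solution is not unique in general; the point of the sketch is that no per-event labels enter the loss.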
Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis
L. de Oliveira, M. Paganini, and B. Nachman
Computing and Software for Big Science 1 (2017) 4 · e-Print: 1701.05927
Cite Article
@article{1701.05927,
author="L. de Oliveira, M. Paganini, and B. Nachman",
title="{Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis}",
eprint="1701.05927",
archivePrefix = "arXiv",
primaryClass = "stat.ML",
journal = "Computing and Software for Big Science",
volume = "1",
pages = "4",
doi = "10.1007/s41781-017-0004-6",
year = "2017",
}
×
Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis
We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in High Energy Particle Physics by applying a novel Generative Adversarial Network (GAN) architecture to the production of jet images -- 2D representations of energy depositions from particles interacting with a calorimeter. We propose a simple architecture, the Location-Aware Generative Adversarial Network, that learns to produce realistic radiation patterns from simulated high energy particle collisions. The pixel intensities of GAN-generated images faithfully span over many orders of magnitude and exhibit the desired low-dimensional physical properties (i.e., jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a novel empirical validation of image quality and validity of GAN-produced simulations of the natural world. This work provides a base for further explorations of GANs for use in faster simulation in High Energy Particle Physics. ×
Mathematical Properties of Numerical Inversion for Jet Calibrations
A. Cukierman and B. Nachman
Nucl. Instrum. Meth. A 858 (2017) 1. · e-Print: 1609.05195
Cite Article
@article{1609.05195,
author="A. Cukierman and B. Nachman",
title="{Mathematical Properties of Numerical Inversion for Jet Calibrations}",
eprint="1609.05195",
archivePrefix = "arXiv",
primaryClass = "physics.data-an",
journal = "Nucl. Instrum. Meth. A",
volume = "858",
pages = "1",
doi = "10.1016/j.nima.2017.03.038",
year = "2017",
}
×
Mathematical Properties of Numerical Inversion for Jet Calibrations
Numerical inversion is a general detector calibration technique that is independent of the underlying spectrum. This procedure is formalized and important statistical properties are presented, using high energy jets at the Large Hadron Collider as an example setting. In particular, numerical inversion is inherently biased and common approximations to the calibrated jet energy tend to over-estimate the resolution. Analytic approximations to the closure and calibrated resolutions are demonstrated to effectively predict the full forms under realistic conditions. Finally, extensions of numerical inversion are presented which can reduce the inherent biases. These methods will be increasingly important to consider with degraded resolution at low jet energies due to a much higher instantaneous luminosity in the near future. ×
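Schematically, numerical inversion determines the simulated mean reconstructed energy as a function of the truth energy and then calibrates a measurement by inverting that map at the measured value. A minimal sketch, with an invented quadratic response (the paper treats general spectra and responses):

```python
def mean_reco(e_true):
    # hypothetical average detector response <E_reco>(E_true); monotone in E_true
    return e_true * (0.9 + 0.05 * (e_true / 100.0))

def calibrate(e_reco, lo=1.0, hi=1000.0, tol=1e-9):
    """Numerical inversion by bisection: solve mean_reco(e) = e_reco for e,
    which serves as the calibrated energy estimate."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_reco(mid) < e_reco:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# under this toy response a jet measured at 90 GeV calibrates to about 95 GeV
print(round(calibrate(90.0), 2))
```

The biases discussed in the paper arise because inverting the mean response does not commute with event-by-event fluctuations; this sketch only shows the inversion step itself.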
Search strategy using LHC pileup interactions as a zero bias sample
B. Nachman and F. Rubbo
Phys. Rev. D 97 (2018) 092002 · e-Print: 1608.06299
Cite Article
@article{1608.06299,
author="B. Nachman and F. Rubbo",
title="{Search strategy using LHC pileup interactions as a zero bias sample}",
eprint="1608.06299",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "97",
pages = "092002",
doi = "10.1103/PhysRevD.97.092002",
year = "2018",
}
×
Search strategy using LHC pileup interactions as a zero bias sample
Due to a limited bandwidth and a large proton-proton interaction cross-section relative to the rate of interesting physics processes, most events produced at the Large Hadron Collider (LHC) are discarded in real time. A sophisticated trigger system must quickly decide which events should be kept and is very efficient for a broad range of processes. However, there are many processes that cannot be accommodated by this trigger system. Furthermore, there may be models of physics beyond the Standard Model (BSM) constructed after data taking that could have been triggered, but no trigger was implemented at run time. Both of these cases can be covered by exploiting pileup interactions as an effective zero bias sample. At the end of High-Luminosity LHC operations, this zero bias dataset will have accumulated about 1/fb of data from which a bottom line cross-section limit of O(1) fb can be set for BSM models already in the literature and those yet to come. ×
Jet mass reconstruction with the ATLAS Detector in early Run 2 data
ATLAS Collaboration
Public note: ATLAS-CONF-2016-035
Cite Article
@article{ATLAS-CONF-2016-035,
author="{ATLAS Collaboration}",
title="{Jet mass reconstruction with the ATLAS Detector in early Run 2 data}",
journal = "ATLAS-CONF-2016-035",
url = "http://cdsweb.cern.ch/record/2200211",
year = "2016",
}
×
Jet mass reconstruction with the ATLAS Detector in early Run 2 data
This note presents the details of the ATLAS jet mass reconstruction for groomed large-radius jets. The jet mass scale calibrations are determined from Monte Carlo simulation. An alternative jet mass definition that incorporates tracking information called the track-assisted jet mass is introduced and its performance is compared to the traditional calorimeter-based jet mass definition. Events enriched in boosted W, Z boson and top quark jets are used to directly compare the jet mass scale and jet mass resolution between data and simulation. This in-situ technique is also extended to constrain the jet energy scale and resolution. ×
Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in sqrt(s) = 13 TeV pp collisions with the ATLAS detector
ATLAS Collaboration
Phys. Rev. D 94 (2016) 052009 · e-Print: 1606.03903
Cite Article
@article{1606.03903,
author="{ATLAS Collaboration}",
title="{Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector}",
eprint="1606.03903",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. D",
volume = "94",
pages = "052009",
doi = "10.1103/PhysRevD.94.052009",
year = "2016",
}
×
Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in sqrt(s) = 13 TeV pp collisions with the ATLAS detector
The results of a search for the stop, the supersymmetric partner of the top quark, in final states with one isolated electron or muon, jets, and missing transverse momentum are reported. The search uses the 2015 LHC pp collision data at a center-of-mass energy of sqrt(s) = 13 TeV recorded by the ATLAS detector and corresponding to an integrated luminosity of 3.2/fb. The analysis targets two types of signal models: gluino-mediated pair production of stops with a nearly mass-degenerate stop and neutralino; and direct pair production of stops, decaying to the top quark and the lightest neutralino. The experimental signature in both signal scenarios is similar to that of a top quark pair produced in association with large missing transverse momentum. No significant excess over the Standard Model background prediction is observed, and exclusion limits on gluino and stop masses are set at 95% confidence level. The results extend the LHC Run-1 exclusion limit on the gluino mass up to 1460 GeV in the gluino-mediated scenario in the high gluino and low stop mass region, and add an excluded stop mass region from 745 to 780 GeV for the direct stop model with a massless lightest neutralino. The results are also reinterpreted to set exclusion limits in a model of vector-like top quarks. ×
Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in sqrt(s) = 13 TeV pp collisions of ATLAS data
ATLAS Collaboration
Public note: ATLAS-CONF-2016-007
Cite Article
@article{ATLAS-CONF-2016-007,
author="{ATLAS Collaboration}",
title="{Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in $\sqrt{s} = 13$ TeV $pp$ collisions of ATLAS data}",
journal = "ATLAS-CONF-2016-007",
url = "http://cdsweb.cern.ch/record/2139641",
year = "2016",
}
×
Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in sqrt(s) = 13 TeV pp collisions of ATLAS data
A search for the stop, the supersymmetric partner of the top quark, is conducted in final states with one isolated electron or muon, jets, and missing transverse momentum using the 2015 LHC pp collision data at a centre-of-mass energy of sqrt(s) = 13TeV recorded by the ATLAS detector and corresponding to an integrated luminosity of 3.2/fb. The analysis targets two types of signal models: gluino-mediated pair production of stops with a nearly mass degenerate stop and neutralino; and direct pair production of stops, decaying to the top quark and the lightest neutralino. The experimental signature in both signal scenarios is similar to that of a top quark pair produced in association with large missing transverse momentum. No significant excess over the Standard Model prediction is observed, and exclusion limits on gluino and stop masses are set at 95% CL. The results extend the LHC Run 1 exclusion limit on the gluino mass up to 1460 GeV in the gluino-mediated scenario in the high gluino and low stop mass region, and for the direct stop model add an excluded stop mass region from 745 to 780 GeV for a massless lightest neutralino. ×
Measurement of the jet mass scale and resolution uncertainty for large radius jets at sqrt(s) = 8 TeV using the ATLAS detector
ATLAS Collaboration
Public note: ATLAS-CONF-2016-008
Cite Article
@article{ATLAS-CONF-2016-008,
author="{ATLAS Collaboration}",
title="{Measurement of the jet mass scale and resolution uncertainty for large radius jets at $\sqrt{s}=8$ TeV using the ATLAS detector}",
journal = "ATLAS-CONF-2016-008",
url = "http://cdsweb.cern.ch/record/2139642",
year = "2016",
}
×
Measurement of the jet mass scale and resolution uncertainty for large radius jets at sqrt(s) = 8 TeV using the ATLAS detector
This note presents a measurement of the jet mass scale and jet mass resolution uncertainty for large radius jets using the full sqrt(s) = 8 TeV dataset from the ATLAS experiment. Large radius jets are calibrated so that on average the reconstructed jet transverse momentum is the same as the corresponding particle-level jet transverse momentum in simulation. The ratio of the reconstructed jet mass to the particle-level jet mass is the jet mass response. The mean response is the jet mass scale and the standard deviation of the jet mass response distribution is the jet mass resolution. The uncertainty on these quantities is measured by fitting the W boson resonant peak in the large radius jet mass spectrum from lepton plus jets ttbar events in both data and Monte Carlo. Large radius jets with p_T > 200 GeV and |\eta|<2.0 are used in this study. Two fitting procedures are used and give comparable results. For the more precise method, the ratio between the data and the Monte Carlo simulation is 1.001 +/- 0.004 (stat) +/- 0.024 (syst) for the jet mass scale and 0.96 +/- 0.05 (stat) +/- 0.18 (syst) for the jet mass resolution. ×
Measurement of the charged particle multiplicity inside jets from sqrt(s) = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration
Eur. Phys. J. C 76 (2016) 1 · e-Print: 1602.00988
Cite Article
@article{1602.00988,
author="{ATLAS Collaboration}",
title="{Measurement of the charged particle multiplicity inside jets from $\sqrt{s}=8$ TeV $pp$ collisions with the ATLAS detector}",
eprint="1602.00988",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Eur. Phys. J. C",
volume = "76",
pages = "1",
doi = "10.1140/epjc/s10052-016-4126-5",
year = "2016",
}
×
Measurement of the charged particle multiplicity inside jets from sqrt(s) = 8 TeV pp collisions with the ATLAS detector
The number of charged particles inside jets is a widely used discriminant for identifying the quark or gluon nature of the initiating parton and is sensitive to both the perturbative and non-perturbative components of fragmentation. This paper presents a measurement of the average number of charged particles with p_T > 500 MeV inside high-momentum jets in dijet events using 20.3/fb of data recorded with the ATLAS detector in pp collisions at sqrt(s) = 8 TeV at the LHC. The jets considered have transverse momenta from 50 GeV up to and beyond 1.5 TeV. The reconstructed charged-particle track multiplicity distribution is unfolded to remove distortions from detector effects and the resulting charged-particle multiplicity is compared to several models. Furthermore, quark and gluon jet fractions are used to extract the average charged-particle multiplicity for quark and gluon jets separately. ×
Simulation of top quark production for the ATLAS experiment at sqrt(s) = 13 TeV
ATLAS Collaboration
Public note: ATL-PHYS-PUB-2016-004
Cite Article
@article{ATL-PHYS-PUB-2016-004,
author="{ATLAS Collaboration}",
title="{Simulation of top quark production for the ATLAS experiment at $\sqrt{s} = 13$ TeV}",
journal = "ATL-PHYS-PUB-2016-004",
url = "http://cdsweb.cern.ch/record/2120417",
year = "2016",
}
×
Simulation of top quark production for the ATLAS experiment at sqrt(s) = 13 TeV
This note summarises the Monte Carlo simulation setup for the pair and single production of top quarks for the ATLAS experiment at the LHC for sqrt(s)=13 TeV. In addition to the settings available and recommended for analyses using the 2015 dataset, the anticipated setup for 2016 analysis is also discussed. ×
Jet-Images -- Deep Learning Edition
L. de Oliveira, M. Kagan, L. Mackey, B. Nachman, and A. Schwartzman
JHEP 07 (2016) 069. · e-Print: 1511.05190
Cite Article
@article{1511.05190,
author="L. de Oliveira, M. Kagan, L. Mackey, B. Nachman, and A. Schwartzman",
title="{Jet-Images -- Deep Learning Edition}",
eprint="1511.05190",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "07",
pages = "069",
doi = "10.1007/JHEP07(2016)069",
year = "2016",
}
×
Jet-Images -- Deep Learning Edition
Building on the notion of a particle physics detector as a camera and the collimated streams of high energy particles, or jets, it measures as an image, we investigate the potential of machine learning techniques based on deep learning architectures to identify highly boosted W bosons. Modern deep learning algorithms trained on jet images can out-perform standard physically-motivated feature driven approaches to jet tagging. We develop techniques for visualizing how these features are learned by the network and what additional information is used to improve performance. This interplay between physically-motivated feature driven tools and supervised learning algorithms is general and can be used to significantly increase the sensitivity to discover new particles and new forces, and gain a deeper understanding of the physics within jets. ×
Performance of jet substructure techniques in early sqrt(s)=13 TeV pp collisions with the ATLAS detector
ATLAS Collaboration
Public note: ATLAS-CONF-2015-035
Cite Article
@article{ATLAS-CONF-2015-035,
author="{ATLAS Collaboration}",
title="{Performance of jet substructure techniques in early $\sqrt{s}=13$ TeV $pp$ collisions with the ATLAS detector}",
journal = "ATLAS-CONF-2015-035",
url = "http://cdsweb.cern.ch/record/2041462",
year = "2015",
}
×
Performance of jet substructure techniques in early sqrt(s)=13 TeV pp collisions with the ATLAS detector
This note provides first studies of large-radius jet properties with the ATLAS detector in pp collisions delivered by the LHC at sqrt(s) = 13 TeV. These properties include the jet mass, N-subjettiness, splitting scales, and energy correlation functions in addition to other jet substructure related quantities. Multiple large-radius jet types are investigated in 50/pb of recorded data and compared to expected results obtained with simulated multijet events. ×
Measurement of jet charge in dijet events from sqrt(s) = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration
Phys. Rev. D 93 (2016) 052003 · e-Print: 1509.05190
Cite Article
@article{1509.05190,
author="{ATLAS Collaboration}",
title="{Measurement of jet charge in dijet events from $\sqrt{s}=8$ TeV $pp$ collisions with the ATLAS detector}",
eprint="1509.05190",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Rev. D",
volume = "93",
pages = "052003",
doi = "10.1103/PhysRevD.93.052003",
year = "2016",
}
×
Measurement of jet charge in dijet events from sqrt(s) = 8 TeV pp collisions with the ATLAS detector
The momentum-weighted sum of the charges of tracks associated to a jet is sensitive to the charge of the initiating quark or gluon. This paper presents a measurement of the distribution of momentum-weighted sums, called jet charge, in dijet events using 20.3/fb of data recorded with the ATLAS detector at sqrt(s) = 8 TeV in pp collisions at the LHC. The jet charge distribution is unfolded to remove distortions from detector effects and the resulting particle-level distribution is compared with several models. The p_T-dependence of the jet charge distribution average and standard deviation are compared to predictions obtained with several LO and NLO parton distribution functions. The data are also compared to different Monte Carlo simulations of QCD dijet production using various settings of the free parameters within these models. The chosen value of the strong coupling constant used to calculate gluon radiation is found to have a significant impact on the predicted jet charge. There is evidence for a p_T-dependence of the jet charge distribution for a given jet flavor. In agreement with perturbative QCD predictions, the data show that the average jet charge of quark-initiated jets decreases in magnitude as the energy of the jet increases. ×
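For reference, the observable is Q = sum_i q_i (p_T,i)^kappa / (p_T,jet)^kappa over the tracks i associated to the jet. A minimal sketch, where the exponent value and the toy tracks are illustrative and the scalar sum of track p_T stands in for the jet p_T:

```python
def jet_charge(tracks, kappa=0.5):
    """Momentum-weighted jet charge: sum_i q_i * pT_i**kappa / pT_jet**kappa.
    tracks is a list of (charge, pT) pairs; the scalar sum of track pT is
    used here as a stand-in for the jet pT."""
    pt_jet = sum(pt for _, pt in tracks)
    if pt_jet == 0.0:
        return 0.0
    return sum(q * pt ** kappa for q, pt in tracks) / pt_jet ** kappa

# toy jet: three tracks given as (charge, pT in GeV)
print(round(jet_charge([(+1, 50.0), (-1, 30.0), (+1, 20.0)]), 3))
```

Smaller kappa emphasizes soft tracks while larger kappa weights the leading track more heavily; the measurement studies such weighting choices.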
A new method to distinguish hadronically decaying boosted Z bosons from W bosons using the ATLAS detector
ATLAS Collaboration
Eur. Phys. J. C 76 (2016) 238 · e-Print: 1509.04939
Cite Article
@article{1509.04939,
author="{ATLAS Collaboration}",
title="{A new method to distinguish hadronically decaying boosted Z bosons from W bosons using the ATLAS detector}",
eprint="1509.04939",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Eur. Phys. J. C",
volume = "76",
pages = "238",
doi = "10.1140/epjc/s10052-016-4065-1",
year = "2016",
}
×
A new method to distinguish hadronically decaying boosted Z bosons from W bosons using the ATLAS detector
The distribution of particles inside hadronic jets produced in the decay of boosted W and Z bosons can be used to discriminate such jets from the continuum background. Given that a jet has been identified as likely resulting from the hadronic decay of a boosted W or Z boson, this paper presents a technique for further differentiating Z bosons from W bosons. The variables used are jet mass, jet charge, and a b-tagging discriminant. A likelihood tagger is constructed from these variables and tested in the simulation of W' goes to WZ for bosons in the transverse momentum range 200 GeV < p_T < 400 GeV in sqrt(s) = 8 TeV pp collisions with the ATLAS detector at the LHC. For Z-boson tagging efficiencies of 90%, 50%, and 10%, one can achieve W^+-boson tagging rejection factors (1/W^+ efficiency) of 1.7, 8.3 and 1000, respectively. It is not possible to measure these efficiencies in the data due to the lack of a pure sample of high pT, hadronically decaying Z bosons. However, the modelling of the tagger inputs for boosted W bosons is studied in data using a ttbar-enriched sample of events in 20.3/fb of data at sqrt(s) = 8 TeV. The inputs are well modelled within uncertainties, which builds confidence in the expected tagger performance. ×
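A likelihood tagger of this kind multiplies per-variable likelihood ratios under the two boson hypotheses. A minimal two-variable sketch; the Gaussian templates and every number below are invented for illustration (the note builds its templates from simulation and also uses a b-tagging discriminant):

```python
import math

def gauss(x, mu, sigma):
    # normalized 1D Gaussian density
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def z_vs_w_likelihood(mass, charge):
    """Combined likelihood ratio L(Z)/L(W+), treating the variables as
    independent so the per-variable ratios multiply. Template parameters
    are made up for this sketch."""
    l_z = gauss(mass, 91.2, 8.0) * gauss(charge, 0.0, 0.5)   # Z hypothesis
    l_w = gauss(mass, 80.4, 8.0) * gauss(charge, 0.2, 0.5)   # W+ hypothesis
    return l_z / l_w

# a jet with mass near the Z peak and charge near zero is tagged Z-like (ratio > 1)
print(z_vs_w_likelihood(90.0, -0.1) > 1.0)
```

Cutting on the ratio at different thresholds traces out the efficiency/rejection working points quoted in the abstract.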
Fuzzy Jets
L. Mackey, B. Nachman, A. Schwartzman, and C. Stansbury
JHEP 06 (2016) 010. · e-Print: 1509.02216
Cite Article
@article{1509.02216,
author="L. Mackey, B. Nachman, A. Schwartzman, and C. Stansbury",
title="{Fuzzy Jets}",
eprint="1509.02216",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "06",
pages = "010",
doi = "10.1007/JHEP06(2016)010",
year = "2016",
}
×
Fuzzy Jets
Collimated streams of particles produced in high energy physics experiments are organized using clustering algorithms to form jets. To construct jets, the experimental collaborations based at the Large Hadron Collider (LHC) primarily use agglomerative hierarchical clustering schemes known as sequential recombination. We propose a new class of algorithms for clustering jets that use infrared and collinear safe mixture models. These new algorithms, known as fuzzy jets, are clustered using maximum likelihood techniques and can dynamically determine various properties of jets like their size. We show that the fuzzy jet size adds additional information to conventional jet tagging variables. Furthermore, we study the impact of pileup and show that with some slight modifications to the algorithm, fuzzy jets can be stable up to high pileup interaction multiplicities. ×
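The mixture-model idea can be sketched with pT-weighted expectation-maximization for isotropic 2D Gaussians in (eta, phi). Everything below (the toy event, the initialization, the weighting scheme) is an invented illustration, not the mGMM implementation from the paper:

```python
import math
import random

random.seed(1)

# toy event: two collimated sprays of particles in (eta, phi), each with a pT weight
def blob(cx, cy, n):
    return [(random.gauss(cx, 0.1), random.gauss(cy, 0.1), random.uniform(1.0, 10.0))
            for _ in range(n)]

particles = blob(0.0, 0.0, 50) + blob(1.5, 2.0, 50)

def fuzzy_cluster(parts, mus, steps=50):
    """pT-weighted EM: soft memberships in the E step, weighted means and
    widths in the M step. The learned width acts as a dynamic 'jet size'."""
    mus = list(mus)
    sig2 = [0.5] * len(mus)
    for _ in range(steps):
        # E step: soft, pT-weighted membership of each particle in each cluster
        resp = []
        for x, y, pt in parts:
            dens = [math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2.0 * s2)) / s2
                    for (mx, my), s2 in zip(mus, sig2)]
            z = sum(dens)
            resp.append([pt * d / z for d in dens])
        # M step: update cluster centers and variances from the memberships
        for j in range(len(mus)):
            w = sum(r[j] for r in resp)
            mus[j] = (sum(r[j] * p[0] for r, p in zip(resp, parts)) / w,
                      sum(r[j] * p[1] for r, p in zip(resp, parts)) / w)
            sig2[j] = sum(r[j] * ((p[0] - mus[j][0]) ** 2 + (p[1] - mus[j][1]) ** 2)
                          for r, p in zip(resp, parts)) / (2.0 * w)
    return mus, sig2

mus, sig2 = fuzzy_cluster(particles, [(-0.5, -0.5), (2.0, 2.5)])
```

After a few iterations the two cluster centers settle on the two sprays and each sig2 shrinks to the spray width, illustrating how the jet size is learned rather than fixed in advance.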
Superposition Coding is Almost Always Optimal for the Poisson Broadcast Channel
H. Kim, B. Nachman, and A. El Gamal
IEEE Transactions on Information Theory 62 (2016) 1782 · e-Print: 1508.04228
Cite Article
@article{1508.04228,
author="H. Kim, B. Nachman, and A. El Gamal",
title="{Superposition Coding is Almost Always Optimal for the Poisson Broadcast Channel}",
eprint="1508.04228",
archivePrefix = "arXiv",
primaryClass = "cs.IT",
journal = "IEEE Transactions on Information Theory",
volume = "62",
pages = "1782",
doi = "10.1109/TIT.2016.2527790",
year = "2016",
}
×
Superposition Coding is Almost Always Optimal for the Poisson Broadcast Channel
This paper shows that the capacity region of the continuous-time Poisson broadcast channel is achieved via superposition coding for most channel parameter values. Interestingly, the channel in some subset of these parameter values does not belong to any of the existing classes of broadcast channels for which superposition coding is optimal (e.g., degraded, less noisy, more capable). In particular, we introduce the notion of effectively less noisy broadcast channel and show that it implies less noisy but is not in general implied by more capable. For the rest of the channel parameter values, we show that there is a gap between Marton's inner bound and the UV outer bound. ×
Measurement of jet charge in dijet events from sqrt(s) = 8 TeV pp collisions with the ATLAS detector
ATLAS Collaboration
Public note: ATLAS-CONF-2015-025
Cite Article
@article{ATLAS-CONF-2015-025,
author="{ATLAS Collaboration}",
title="{Measurement of jet charge in dijet events from $\sqrt{s} = 8$ TeV $pp$ collisions with the ATLAS detector}",
journal = "ATLAS-CONF-2015-025",
url = "https://cds.cern.ch/record/2037618",
year = "2015",
}
×
Measurement of jet charge in dijet events from sqrt(s) = 8 TeV pp collisions with the ATLAS detector
The momentum-weighted sum of the charges of tracks associated to a jet is sensitive to the charge of the initiating quark or gluon. This paper presents a measurement of the distribution of one class of momentum-weighted sums, called the jet charge, in dijet events using 20.3/fb of data recorded with the ATLAS detector at sqrt(s) = 8 TeV pp collisions at the LHC. The jet charge distribution is unfolded to remove distortions from detector effects and the resulting particle level distribution is compared with several models. The p_T-dependence of the jet charge distribution average and standard deviation are compared to predictions obtained with several LO and NLO parton density functions and the best description of the data is found with CTEQ6L1. The data are also compared to different Monte Carlo predictions of QCD using various settings of the free parameters within these models. The choice of the strong coupling constant \alpha_s used to calculate QCD radiation is found to have a significant impact on the predicted jet charge. There is evidence for a p_T-dependence of the jet charge distribution for a given jet flavor. In agreement with perturbative QCD predictions, the data show that the average jet charge of quark-initiated jets decreases in magnitude as the energy of the jet increases. ×
Measurement of colour flow with the jet pull angle in ttbar events using the ATLAS detector at sqrt(s) = 8 TeV
ATLAS Collaboration
Phys. Lett. B 750 (2015) 475 · e-Print: 1506.05629
Cite Article
@article{1506.05629,
author="{ATLAS Collaboration}",
title="{Measurement of colour flow with the jet pull angle in $t\bar{t}$ events using the ATLAS detector at $\sqrt{s}=8$ TeV}",
eprint="1506.05629",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Phys. Lett. B",
volume = "750",
pages = "475",
doi = "10.1016/j.physletb.2015.09.051",
year = "2015",
}
×
Measurement of colour flow with the jet pull angle in ttbar events using the ATLAS detector at sqrt(s) = 8 TeV
The distribution and orientation of energy inside jets is predicted to be an experimental handle on colour connections between the hard-scatter quarks and gluons initiating the jets. This Letter presents a measurement of the distribution of one such variable, the jet pull angle. The pull angle is measured for jets produced in ttbar events with one W boson decaying leptonically and the other decaying to jets using 20.3/fb of data recorded with the ATLAS detector at a centre-of-mass energy of sqrt(s) = 8 TeV at the LHC. The jet pull angle distribution is corrected for detector resolution and acceptance effects and is compared to various models. ×
Less is More when Gluinos Mediate
B. Nachman
Mod. Phys. Lett. A 31 (2016) 1650052 · e-Print: 1505.00994
Cite Article
@article{1505.00994,
author="B. Nachman",
title="{Less is More when Gluinos Mediate}",
eprint="1505.00994",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Mod. Phys. Lett. A",
volume = "31",
pages = "1650052",
doi = "10.1142/S0217732316500528",
year = "2016",
}
×
Less is More when Gluinos Mediate
Compressed mass spectra are generally more difficult to identify than spectra with large splittings. In particular, gluino pair production with four high energy top or bottom quarks leaves a striking signature in a detector. However, if any of the mass splittings are compressed, the power of traditional techniques may deteriorate. Searches for direct stop/sbottom pair production can fill in the gaps. As a demonstration, we show that when the gluino decays to a top quark and a stop, and the stop mass is about the same as the neutralino mass, limits on the stop mass at 8 TeV can be extended by at least 300 GeV for a 1.1 TeV gluino using a di-stop search. At 13 TeV, the effective cross section for the gluino-mediated process is twice the direct stop/sbottom pair production cross section, suggesting that direct stop/sbottom searches could be sensitive to new physics earlier than expected. ×
A fast, simple, and naturally machine-precision algorithm for calculating both symmetric and asymmetric MT2, for any physical inputs
C.G. Lester and B. Nachman
JHEP 03 (2015) 100 · e-Print: 1411.4312
Cite Article
@article{1411.4312,
author="C.G. Lester and B. Nachman",
title="{A fast, simple, and naturally machine-precision algorithm for calculating both symmetric and asymmetric MT2, for any physical inputs}",
eprint="1411.4312",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "03",
pages = "100",
doi = "10.1007/JHEP03(2015)100",
year = "2015",
}
×
A fast, simple, and naturally machine-precision algorithm for calculating both symmetric and asymmetric MT2, for any physical inputs
An MT2 calculation algorithm is described. It is shown to achieve better precision than the fastest and most popular existing bisection-based methods. Most importantly, it is also the first algorithm to be able to reliably calculate asymmetric MT2 to machine-precision, at speeds comparable to the fastest commonly used symmetric calculators. ×
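For orientation, MT2 is the minimum over all splittings of the missing transverse momentum of the larger of the two transverse masses. The brute-force grid scan below only illustrates that definition; it is not the paper's algorithm, and is far slower and less precise:

```python
import math

def m_T(m_vis, p_vis, m_inv, p_inv):
    # Transverse mass of a visible system plus a hypothesized invisible
    # particle, with 2D transverse momenta p_vis and p_inv.
    et_vis = math.sqrt(m_vis ** 2 + p_vis[0] ** 2 + p_vis[1] ** 2)
    et_inv = math.sqrt(m_inv ** 2 + p_inv[0] ** 2 + p_inv[1] ** 2)
    msq = (m_vis ** 2 + m_inv ** 2
           + 2.0 * (et_vis * et_inv - p_vis[0] * p_inv[0] - p_vis[1] * p_inv[1]))
    return math.sqrt(max(msq, 0.0))

def mt2_grid_scan(m1, p1, m2, p2, met, chi1, chi2, n=60, half_range=200.0):
    # Minimise max(mT1, mT2) over grid splittings met = q1 + q2,
    # with chi1, chi2 the assumed invisible-particle masses.
    best = float("inf")
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            q1 = (i * half_range / n, j * half_range / n)
            q2 = (met[0] - q1[0], met[1] - q1[1])
            best = min(best, max(m_T(m1, p1, chi1, q1), m_T(m2, p2, chi2, q2)))
    return best
```

For massless, back-to-back visible momenta with no missing momentum and massless invisibles, MT2 vanishes; it grows monotonically with the assumed invisible mass.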
Sneaky Light Stop
T. Eifert and B. Nachman
Phys. Lett. B 743 (2015) 218. · e-Print: 1410.7025
Cite Article
@article{1410.7025,
author="T. Eifert and B. Nachman",
title="{Sneaky Light Stop}",
eprint="1410.7025",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Lett. B",
volume = "743",
pages = "218",
doi = "10.1016/j.physletb.2015.02.039",
year = "2015",
}
×
Sneaky Light Stop
A light supersymmetric top quark partner (stop) with a mass nearly degenerate with that of the Standard Model (SM) top quark can evade direct searches. The precise measurement of SM top properties such as the cross-section has been suggested to give a handle for this `stealth stop' scenario. We present an estimate of the potential impact a light stop may have on top quark mass measurements. The results indicate that certain light stop models may induce a bias of up to a few GeV, and that this effect can hide the shift in, and hence sensitivity from, cross-section measurements. The studies make some simplifying assumptions for the top quark measurement technique, and are based on truth-level samples. ×
A Meta-analysis of the 8 TeV ATLAS and CMS SUSY Searches
B. Nachman and T. Rudelius
JHEP 02 (2015) 004. · e-Print: 1410.2270
Cite Article
@article{1410.2270,
author="B. Nachman and T. Rudelius",
title="{A Meta-analysis of the 8 TeV ATLAS and CMS SUSY Searches}",
eprint="1410.2270",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "02",
pages = "004",
doi = "10.1007/JHEP02(2015)004",
year = "2015",
}
×
A Meta-analysis of the 8 TeV ATLAS and CMS SUSY Searches
Between the ATLAS and CMS collaborations at the LHC, hundreds of individual event selections have been measured in the data to look for evidence of supersymmetry at a center of mass energy of 8 TeV. While there is currently no significant evidence for any particular model of supersymmetry, the large number of searches should have produced some large statistical fluctuations. By analyzing the distribution of p-values from the various searches, we determine that the number of excesses is consistent with the Standard Model only hypothesis. However, we do find a shortage of signal regions with far fewer observed events than expected in both the ATLAS and CMS datasets (at 1.65 sigma and 2.77 sigma, respectively). While not as compelling as a surplus of excesses, the lack of deficits could be a hint of new physics already in the 8 TeV datasets. ×
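The counting argument can be illustrated with a toy calculation, assuming independent searches: under the background-only hypothesis the p-values are uniform on [0, 1], so the number below a threshold alpha is binomial. The function and the Gaussian approximation here are illustrative, not the paper's exact procedure:

```python
import math

def excess_significance(p_values, alpha=0.05):
    # Under the background-only hypothesis, the count of searches with
    # p < alpha is Binomial(N, alpha); return a Gaussian-approximate z-score.
    n = len(p_values)
    observed = sum(p < alpha for p in p_values)
    mean = n * alpha
    sigma = math.sqrt(n * alpha * (1.0 - alpha))
    return (observed - mean) / sigma

# 100 searches with exactly the expected five mild excesses: z = 0.
z_null = excess_significance([0.01] * 5 + [0.5] * 95)
```

A surplus of small p-values gives z > 0 (excesses), while a surplus of large p-values, as in the deficits discussed above, gives z < 0 when alpha is placed near 1.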
Jets from Jets: Re-clustering as a tool for large radius jet reconstruction and grooming at the LHC
B. Nachman, P. Nef, A. Schwartzman, M. Swiatlowski, and C. Wanotayaroj
JHEP 02 (2015) 075. · e-Print: 1407.2922
Cite Article
@article{1407.2922,
author="B. Nachman, P. Nef, A. Schwartzman, M. Swiatlowski, and C. Wanotayaroj",
title="{Jets from Jets: Re-clustering as a tool for large radius jet reconstruction and grooming at the LHC}",
eprint="1407.2922",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "02",
pages = "075",
doi = "10.1007/JHEP02(2015)075",
year = "2015",
}
×
Jets from Jets: Re-clustering as a tool for large radius jet reconstruction and grooming at the LHC
Jets with a large radius R > 1 and grooming algorithms are widely used to fully capture the decay products of boosted heavy particles at the Large Hadron Collider (LHC). Unlike most discriminating variables used in such studies, the jet radius is usually not optimized for specific physics scenarios. This is because every jet configuration must be calibrated, in situ, to account for detector response and other experimental effects. One solution to enhance the availability of large-R jet configurations used by the LHC experiments is jet re-clustering. Jet re-clustering introduces an intermediate scale r < R at which jets are calibrated and used as the inputs to reconstruct large radius jets. In this paper we systematically study and propose new jet re-clustering configurations and show that re-clustered large radius jets have essentially the same jet mass performance as large radius groomed jets. Jet re-clustering has the benefit that no additional large-R calibration is necessary, allowing the re-clustered large radius parameter to be optimized in the context of specific precision measurements or searches for new physics. ×
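As a toy sketch of the re-clustering idea (a simple seeded-cone grouping for illustration, not the sequential-recombination algorithms actually used, and ignoring phi wrap-around): small-r jets, already calibrated, are the inputs, and the large-R parameter can then be varied freely.

```python
import math

def recluster(small_jets, R=1.0):
    # small_jets: (pT, y, phi) triples of calibrated small-radius jets.
    # Seed on the hardest remaining jet and absorb neighbours within R.
    jets = sorted(small_jets, key=lambda j: -j[0])
    large = []
    while jets:
        seed = jets.pop(0)
        members, rest = [seed], []
        for j in jets:
            dy, dphi = j[1] - seed[1], j[2] - seed[2]
            (members if math.hypot(dy, dphi) < R else rest).append(j)
        jets = rest
        pt = sum(j[0] for j in members)
        y = sum(j[0] * j[1] for j in members) / pt     # pT-weighted axis
        phi = sum(j[0] * j[2] for j in members) / pt
        large.append((pt, y, phi))
    return large
```

Changing R here requires no new calibration of the inputs, which is the central practical benefit described above.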
Reconstruction and Modelling of Jet Pull with the ATLAS Detector
ATLAS Collaboration
Public note: ATLAS-CONF-2014-048
Cite Article
@article{ATLAS-CONF-2014-048,
author="{ATLAS Collaboration}",
title="{Reconstruction and Modelling of Jet Pull with the ATLAS Detector}",
journal = "ATLAS-CONF-2014-048",
url = "http://cdsweb.cern.ch/record/1741708",
year = "2014",
}
×
Reconstruction and Modelling of Jet Pull with the ATLAS Detector
Weighted radial moments over the constituents of a jet have previously been shown to be an experimental handle on colour connections between the initiating partons. This note presents a study of the detector performance in reconstructing one such moment, the jet pull angle for jets produced in ttbar events with one leptonically decaying W boson using 20.3/fb of data recorded with the ATLAS detector at sqrt(s) = 8 TeV. ×
Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in sqrt(s)= 8 TeV
ATLAS Collaboration
JHEP 11 (2014) 118 · e-Print: 1407.0583
Cite Article
@article{1407.0583,
author="{ATLAS Collaboration}",
title="{Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in $\sqrt{s}= 8$ TeV}",
eprint="1407.0583",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "JHEP",
volume = "11",
pages = "118",
doi = "10.1007/JHEP11(2014)118",
year = "2014",
}
×
Search for top squark pair production in final states with one isolated lepton, jets, and missing transverse momentum in sqrt(s)= 8 TeV
The results of a search for top squark (stop) pair production in final states with one isolated lepton, jets, and missing transverse momentum are reported. The analysis is performed with proton--proton collision data at sqrt(s) = 8 TeV collected with the ATLAS detector at the LHC in 2012 corresponding to an integrated luminosity of 20/fb. The lightest supersymmetric particle (LSP) is taken to be the lightest neutralino which only interacts weakly and is assumed to be stable. The stop decay modes considered are those to a top quark and the LSP as well as to a bottom quark and the lightest chargino, where the chargino decays to the LSP by emitting a W boson. A wide range of scenarios with different mass splittings between the stop, the lightest neutralino and the lightest chargino are considered, including cases where the W bosons or the top quarks are off-shell. Decay modes involving the heavier charginos and neutralinos are addressed using a set of phenomenological models of supersymmetry. No significant excess over the Standard Model prediction is observed. A stop with a mass between 210 and 640 GeV decaying directly to a top quark and a massless LSP is excluded at 95 % confidence level, and in models where the mass of the lightest chargino is twice that of the LSP, stops are excluded at 95 % confidence level up to a mass of 500 GeV for an LSP mass in the range of 100 to 150 GeV. Stringent exclusion limits are also derived for all other stop decay modes considered, and model-independent upper limits are set on the visible cross-section for processes beyond the Standard Model. ×
Investigating Multiple Solutions in the Constrained Minimal Supersymmetric Standard Model
B.C. Allanach, Damien P. George, and Benjamin Nachman
JHEP 02 (2014) 031 · e-Print: 1311.3960
Cite Article
@article{1311.3960,
author="B.C. Allanach, Damien P. George, and Benjamin Nachman",
title="{Investigating Multiple Solutions in the Constrained Minimal Supersymmetric Standard Model}",
eprint="1311.3960",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "JHEP",
volume = "02",
pages = "031",
doi = "10.1007/JHEP02(2014)031",
year = "2013",
}
×
Investigating Multiple Solutions in the Constrained Minimal Supersymmetric Standard Model
Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion. ×
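The algorithmic point can be illustrated with a one-dimensional toy (not the CMSSM boundary-value problem itself): fixed point iteration misses any solution of x = g(x) where |g'| > 1, while a bisection-style search on x - g(x), a one-dimensional analogue of shooting, still finds it.

```python
def fixed_point(g, x0, steps=50):
    # Fixed point iteration x -> g(x); returns None if it runs away.
    x = x0
    for _ in range(steps):
        x = g(x)
        if abs(x) > 1e6:
            return None
    return x

def bisect(h, lo, hi, tol=1e-12):
    # Sign-change bisection for h(x) = 0; assumes h(lo), h(hi) differ in sign.
    flo = h(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = h(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if (fmid > 0.0) == (flo > 0.0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def g(x):
    return x * x  # fixed points: 0 (stable) and 1 (unstable, |g'(1)| = 2)

root = bisect(lambda x: x - g(x), 0.5, 1.5)
```

Iteration started near the unstable fixed point either diverges or falls back to the stable one at 0, exactly the failure mode that hides the extra CMSSM solutions from iterative spectrum generators.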
Jet Charge Studies with the ATLAS Detector Using sqrt(s) = 8 TeV Proton-Proton Collision Data
ATLAS Collaboration
Public note: ATLAS-CONF-2013-086
Cite Article
@article{ATLAS-CONF-2013-086,
author="{ATLAS Collaboration}",
title="{Jet Charge Studies with the ATLAS Detector Using $\sqrt{s} = 8$ TeV Proton-Proton Collision Data}",
journal = "ATLAS-CONF-2013-086",
url = "http://cdsweb.cern.ch/record/1572980",
year = "2013",
}
×
Jet Charge Studies with the ATLAS Detector Using sqrt(s) = 8 TeV Proton-Proton Collision Data
The momentum-weighted sum of the charges of tracks associated to a jet provides an experimental handle on the electric charge of fundamental strongly-interacting particles. Presented here is a study of this jet charge observable for jets produced in dijet, W+jets, and semileptonic ttbar events using 5.8-15.2/fb of data with the ATLAS detector at sqrt(s) = 8 TeV. In addition to providing a constraint on hadronization models, jet charge has many possible applications in measurements and searches. This note documents the study of the modelling of jet charge and its performance as a charge-tagger, in order to establish this observable as a tool for future physics analyses. ×
Measurement of masses in the ttbar system by kinematic endpoints in pp collisions at sqrt(s) = 7 TeV
CMS Collaboration
Eur. Phys. J. C 73 (2013) 2494 · e-Print: 1304.5783
Cite Article
@article{1304.5783,
author="{CMS Collaboration}",
title="{Measurement of masses in the $t\bar{t}$ system by kinematic endpoints in $pp$ collisions at $\sqrt{s} = 7$ TeV}",
eprint="1304.5783",
archivePrefix = "arXiv",
primaryClass = "hep-ex",
journal = "Eur. Phys. J. C",
volume = "73",
pages = "2494",
doi = "10.1140/epjc/s10052-013-2494-7",
year = "2013",
}
×
Measurement of masses in the ttbar system by kinematic endpoints in pp collisions at sqrt(s) = 7 TeV
A simultaneous measurement of the top-quark, W-boson, and neutrino masses is reported for t t-bar events selected in the dilepton final state from a data sample corresponding to an integrated luminosity of 5.0 inverse femtobarns collected by the CMS experiment in pp collisions at sqrt(s) = 7 TeV. The analysis is based on endpoint determinations in kinematic distributions. When the neutrino and W-boson masses are constrained to their world-average values, a top-quark mass value of M[t] = 173.9 +/- 0.9 (stat.) +1.7/-2.1 (syst.) GeV is obtained. When such constraints are not used, the three particle masses are obtained in a simultaneous fit. In this unconstrained mode the study serves as a test of mass determination methods that may be used in beyond standard model physics scenarios where several masses in a decay chain may be unknown and undetected particles lead to underconstrained kinematics. ×
Significance Variables
B. Nachman and C. G. Lester
Phys. Rev. D88 (2013) 075013 · e-Print: 1303.7009
Cite Article
@article{1303.7009,
author="B. Nachman and C. G. Lester",
title="{Significance Variables}",
eprint="1303.7009",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
journal = "Phys. Rev. D",
volume = "88",
pages = "075013",
doi = "10.1103/PhysRevD.88.075013",
year = "2013",
}
×
Significance Variables
Many particle physics analyses which need to discriminate some background process from a signal ignore event-by-event resolutions of kinematic variables. Adding this information, as is done for missing momentum significance, can only improve the power of existing techniques. We therefore propose the use of significance variables which combine kinematic information with event-by-event resolutions. We begin by giving some explicit examples of constructing optimal significance variables. Then, we consider three applications: new heavy gauge bosons, Higgs to \tau\tau, and direct stop squark pair production. We find that significance variables can provide additional discriminating power over the original kinematic variables: about 20% improvement over mT in the case of H to \tau\tau, and about 30% improvement over mT2 in the case of the direct stop search. ×
Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in sqrt(s) = 8 TeV pp Collisions using 21.0/fb of ATLAS Data
ATLAS Collaboration
Public note: ATLAS-CONF-2013-037
Cite Article
@article{ATLAS-CONF-2013-037,
author="{ATLAS Collaboration}",
title="{Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in $\sqrt{s}=8$ TeV $pp$ Collisions using 21.0/fb of ATLAS Data}",
journal = "ATLAS-CONF-2013-037",
url = "http://cdsweb.cern.ch/record/1532431",
year = "2013",
}
×
Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in sqrt(s) = 8 TeV pp Collisions using 21.0/fb of ATLAS Data
A search is presented for direct top squark pair production in final states with one isolated electron or muon, jets, and missing transverse momentum in proton-proton collisions at a centre-of-mass energy of 8 TeV. The analysis is based on 20.7/fb of data collected with the ATLAS detector at the LHC. The top squarks are assumed to decay to a top quark and the lightest supersymmetric particle (LSP) or to a bottom quark and a chargino, where the chargino decays to an on- or off-shell W boson and to the LSP. The data are found to be consistent with Standard Model expectations. Assuming both top squarks decay to a top quark and the LSP, top squark masses between 200 and 610 GeV are excluded at 95% confidence level for massless LSPs, and top squark masses around 500 GeV are excluded for LSP masses up to 250 GeV. Assuming both top squarks decay to a bottom quark and the lightest chargino, top squark masses up to 410 GeV are excluded for massless LSPs and an assumed chargino mass of 150 GeV. ×
Droplet Breakup of the Nematic Liquid Crystal MBBA
B. Nachman and I. Cohen
e-Print: 1212.5976
Cite Article
@article{1212.5976,
author="B. Nachman and I. Cohen",
title="{Droplet Breakup of the Nematic Liquid Crystal MBBA}",
eprint="1212.5976",
year = "2012",
}
×
Droplet Breakup of the Nematic Liquid Crystal MBBA
Droplet breakup is a well-studied phenomenon in Newtonian fluids. One property of this behavior is that, independent of initial conditions, the minimum radius exhibits power law scaling with the time left to breakup tau. Because they have additional structure and shear dependent viscosity, liquid crystals pose an interesting complication to such studies. Here, we investigate the breakup of a synthetic nematic liquid crystal known as MBBA. We determine the phase of the solution by using a cross polarizer setup in situ with the liquid bridge breakup apparatus. Consistent with previous studies of scaling behavior in viscous-inertial fluid breakup, when MBBA is in the isotropic phase, the minimum radius decreases as tau^{1.03 \pm 0.04}. In the nematic phase however, we observe very different thinning behavior. Our measurements of the thinning profile are consistent with two interpretations. In the first interpretation, the breakup is universal and consists of two different regimes. The first regime is characterized by a symmetric profile with a single minimum whose radius decreases as tau^{1.51 \pm 0.06}. The second and final regime is characterized by two minima whose radii decrease as tau^{0.52 \pm 0.11}. These results are in excellent agreement with previous measurements of breakup in the nematic phase of liquid crystal 8CB and 5CB. Interestingly, we find that the entire thinning behavior can also be fit with an exponential decay such that R_{min} \sim exp((1.2\times 10^2 Hz) tau). This dependence is more reminiscent of breakup in polymers where entropic stretching slows the thinning process. An analogous mechanism for slowing in liquid crystals could arise from the role played by topological constraints governing defect dynamics. Consistent with this interpretation, crossed polarizer images indicate that significant alignment of the liquid crystal domains occurs during breakup. ×
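The quoted exponents come from power-law fits of the minimum radius against the time to breakup. A minimal sketch of such a fit (a least-squares slope in log-log space, applied to synthetic data, not the measurement):

```python
import math

def power_law_exponent(taus, radii):
    # Least-squares slope of log(R_min) versus log(tau): the exponent n
    # in R_min ~ tau^n.
    xs = [math.log(t) for t in taus]
    ys = [math.log(r) for r in radii]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic data mimicking the first nematic regime, R_min ~ tau^1.5:
taus = [10.0 ** (-k / 4.0) for k in range(1, 12)]
radii = [t ** 1.5 for t in taus]
```

An exponential thinning law would instead appear as curvature in the log-log plot but a straight line in log R versus tau, which is how the two interpretations above are distinguished.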
Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in sqrt(s) = 8 TeV pp Collisions using 13.0/fb of ATLAS Data
ATLAS Collaboration
Public note: ATLAS-CONF-2012-166
Cite Article
@article{ATLAS-CONF-2012-166,
author="{ATLAS Collaboration}",
title="{Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in $\sqrt{s}=8$ TeV $pp$ Collisions using 13.0/fb of ATLAS Data}",
journal = "ATLAS-CONF-2012-166",
url = "http://cdsweb.cern.ch/record/1497732",
year = "2012",
}
×
Search for Direct Top Squark Pair Production in Final States with One Isolated Lepton, Jets, and Missing Transverse Momentum in sqrt(s) = 8 TeV pp Collisions using 13.0/fb of ATLAS Data
A search is presented for direct top squark pair production in final states with one isolated electron or muon, jets, and missing transverse momentum in proton-proton collisions at a center-of-mass energy of 8 TeV. The analysis is based on 13.0/fb of data collected with the ATLAS detector at the LHC. The top squarks are each assumed to decay either to a top quark and the lightest supersymmetric particle (LSP) or to a bottom quark and a chargino, where the chargino decays to an on- or off-shell W boson and to the LSP. The data are found to be consistent with Standard Model expectations. Assuming both top squarks decay to a top quark and the LSP, top squark masses between 225 and 560 GeV are excluded at 95% confidence level for massless LSPs, and top squark masses around 500 GeV are excluded for LSP masses up to 175 GeV. Assuming both top squarks decay to a bottom quark and a chargino, top squark masses up to 350 GeV are excluded for massless LSPs and a chargino mass of 150 GeV. ×
Generating Sequences of PSL(2,p)
B. Nachman
J. Group Theory 17 (2014) 925 · e-Print: 1210.2073
Cite Article
@article{1210.2073,
author="B. Nachman",
title="{Generating Sequences of PSL(2,p)}",
eprint="1210.2073",
archivePrefix = "arXiv",
primaryClass = "math.GR",
journal = "J. Group Theory",
volume = "17",
pages = "925",
doi = "10.1515/jgt-2014-0013",
year = "2012",
}
×
Generating Sequences of PSL(2,p)
Julius Whiston and Jan Saxl showed that the size of an irredundant generating set of the group G=PSL(2,p) is at most four and computed the size m(G) of a maximal set for many primes. We will extend this result to a larger class of primes, with a surprising result that when p\not\equiv\pm 1\mod 10, m(G)=3 except for the special case p=7. In addition, we will determine which orders of elements in irredundant generating sets of PSL(2,p) with lengths less than or equal to four are possible in most cases. We also give some remarks about the behavior of PSL(2,p) with respect to the replacement property for groups. ×
Evidence for Conservatism in SUSY Searches
B. Nachman and T. Rudelius
Eur. Phys. J. Plus 127 (2012) 157 · e-Print: 1209.3522
Cite Article
@article{1209.3522,
author="B. Nachman and T. Rudelius",
title="{Evidence for Conservatism in SUSY Searches}",
eprint="1209.3522",
archivePrefix = "arXiv",
primaryClass = "stat.AP",
journal = "Eur. Phys. J. Plus",
volume = "127",
pages = "157",
doi = "10.1140/epjp/i2012-12157-0",
year = "2012",
}
×
Evidence for Conservatism in SUSY Searches
The standard in the high energy physics community for claiming discovery of new physics is a 5 sigma excess in the observed signal over the estimated background. While a 3 sigma excess is not enough to claim discovery, it is certainly enough to pique the interest of both experimentalists and theorists. However, with a large number of searches performed by both the ATLAS and CMS collaborations at the LHC, one expects a nonzero number of multi-sigma results simply due to statistical fluctuations in the no-signal scenario. Our analysis examines the distribution of p-values for CMS and ATLAS supersymmetry (SUSY) searches using the full 2011 data set to determine if the collaborations are being overly conservative in their analyses. We find that there is a statistically significant excess of `medium' sigma values at the level of p=0.005, indicating over-conservativism in the estimation of uncertainties. ×