1 code implementation • 24 Jan 2024 • Lukas Heinrich, Tobias Golling, Michael Kagan, Samuel Klein, Matthew Leigh, Margarita Osadchy, John Andrew Raine
We propose masked particle modeling (MPM) as a self-supervised method for learning generic, transferable, and reusable representations of unordered sets of inputs for use with high energy physics (HEP) scientific data.
no code implementations • 15 Dec 2023 • Debajyoti Sengupta, Matthew Leigh, John Andrew Raine, Samuel Klein, Tobias Golling
We introduce a new technique called Drapes to enhance the sensitivity in searches for new physics at the LHC.
no code implementations • 12 Sep 2023 • Tobias Golling, Samuel Klein, Radha Mastandrea, Benjamin Nachman, John Andrew Raine
We propose a protocol called flows for flows for training normalizing flows to morph one dataset into another even if the underlying probability density of neither dataset is known explicitly.
no code implementations • 3 Aug 2023 • Sarah Schwettmann, Neil Chowdhury, Samuel Klein, David Bau, Antonio Torralba
Language models demonstrate remarkable capacity to generalize representations learned in one modality to downstream tasks in other modalities.
no code implementations • 8 May 2023 • Debajyoti Sengupta, Samuel Klein, John Andrew Raine, Tobias Golling
Model independent techniques for constructing background data templates using generative models have shown great promise for use in searches for new physics processes at the LHC.
1 code implementation • 4 Nov 2022 • Samuel Klein, Tobias Golling
The sensitivity of many physics analyses can be enhanced by constructing discriminants that preferentially select signal events.
1 code implementation • 4 Nov 2022 • Samuel Klein, John Andrew Raine, Tobias Golling
Normalizing flows are constructed from a base distribution with a known density and a diffeomorphism with a tractable Jacobian.
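The construction above can be illustrated with a minimal sketch (a hypothetical example, not the paper's model): a one-dimensional affine flow mapping data to a standard-normal base distribution, with the density evaluated through the change-of-variables formula log p_x(x) = log p_z(f(x)) + log |det J_f(x)|.

```python
import math

def log_standard_normal(z):
    """Log density of the standard-normal base distribution."""
    return -0.5 * (z * z + math.log(2.0 * math.pi))

def flow_log_density(x, mu=0.0, sigma=2.0):
    """Log p_x(x) for the affine flow z = (x - mu) / sigma.

    The flow is a diffeomorphism with a tractable Jacobian:
    dz/dx = 1/sigma, so log |det J| = -log(sigma).
    """
    z = (x - mu) / sigma             # forward pass of the diffeomorphism
    log_det_jac = -math.log(sigma)   # change-of-variables correction
    return log_standard_normal(z) + log_det_jac
```

Because the affine map is invertible with a constant Jacobian, `flow_log_density` reproduces the Normal(mu, sigma) log density exactly; richer flows stack many such invertible layers while keeping the Jacobian determinant tractable.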
1 code implementation • 30 May 2022 • Bálint Máté, Samuel Klein, Tobias Golling, François Fleuret
Neural networks, on the other hand, only perform a forward pass on the input: there is no notion of an inverse of a neural network, nor of its likelihood contribution.
1 code implementation • 15 Dec 2021 • Samuel Klein, John A. Raine, Sebastian Pina-Otey, Slava Voloshynovskiy, Tobias Golling
Normalizing flows are diffeomorphic, typically dimension-preserving, models trained by maximizing the likelihood of the data under the model.
1 code implementation • ICCV 2021 • Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob Andreas, Antonio Torralba
A large body of recent work has identified transformations in the latent spaces of generative adversarial networks (GANs) that consistently and interpretably transform generated images.
1 code implementation • Processes 2021 • Oliver Mey, André Schneider, Olaf Enge-Rosenblatt, Dirk Mayer, Christian Schmidt, Samuel Klein, Hans-Georg Herrmann
Early damage detection and classification by condition monitoring systems is crucial to enable predictive maintenance of manufacturing systems and industrial facilities.