Search Results for author: Peter Sadowski

Found 17 papers, 7 papers with code

Searching for Exotic Particles in High-Energy Physics with Deep Learning

2 code implementations · 19 Feb 2014 · Pierre Baldi, Peter Sadowski, Daniel Whiteson

Standard approaches have relied on 'shallow' machine learning models that have a limited capacity to learn complex non-linear functions of the inputs, and rely on a painstaking search through manually constructed non-linear features.

High Energy Physics - Phenomenology · High Energy Physics - Experiment
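The deep-learning alternative replaces that manual feature search with a multi-layer network trained directly on the inputs. A minimal PyTorch sketch of such a deep classifier; the depth, layer sizes, and feature count are illustrative assumptions, not the paper's exact benchmark architecture.

    import torch
    import torch.nn as nn

    # Illustrative sizes only; not the exact architecture from the paper.
    n_features = 28  # assumed number of kinematic input features

    model = nn.Sequential(
        nn.Linear(n_features, 300), nn.Tanh(),
        nn.Linear(300, 300), nn.Tanh(),
        nn.Linear(300, 300), nn.Tanh(),
        nn.Linear(300, 1),  # logit for signal vs. background
    )

    def loss_fn(logits, labels):
        # Binary cross-entropy on signal/background labels in {0, 1}.
        return nn.functional.binary_cross_entropy_with_logits(logits, labels)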

Enhanced Higgs to $\tau^+\tau^-$ Searches with Deep Learning

no code implementations · 13 Oct 2014 · Pierre Baldi, Peter Sadowski, Daniel Whiteson

The Higgs boson is thought to provide the interaction that imparts mass to the fundamental fermions, but while measurements at the Large Hadron Collider (LHC) are consistent with this hypothesis, current analysis techniques lack the statistical power to cross the traditional 5$\sigma$ significance barrier without more data.

Bayesian Optimization
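For context on the 5$\sigma$ barrier: a standard approximation for the median discovery significance given expected signal and background counts is the Asimov formula $Z = \sqrt{2\,[(s+b)\ln(1+s/b) - s]}$. A quick sketch with made-up counts (not numbers from the paper):

    import math

    def asimov_significance(s, b):
        # Median discovery significance for s expected signal events
        # over b expected background events (Asimov approximation).
        return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

    # Hypothetical counts, for illustration only:
    print(asimov_significance(100.0, 5000.0))  # ~1.4 sigma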

A Theory of Local Learning, the Learning Channel, and the Optimality of Backpropagation

no code implementations · 22 Jun 2015 · Pierre Baldi, Peter Sadowski

The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms.

Parameterized Machine Learning for High-Energy Physics

2 code implementations · 28 Jan 2016 · Pierre Baldi, Kyle Cranmer, Taylor Faucett, Peter Sadowski, Daniel Whiteson

We investigate a new structure for machine learning classifiers applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters.

BIG-bench Machine Learning · Vocal Bursts Intensity Prediction
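In this parameterized setup, a single network receives the hypothesized physics parameter (e.g., a resonance mass) as an extra input alongside the measured features, so one classifier interpolates smoothly across parameter values instead of requiring a separate model per mass point. A hedged PyTorch sketch; the class name, layer sizes, and the single-parameter assumption are illustrative:

    import torch
    import torch.nn as nn

    class ParameterizedClassifier(nn.Module):
        # Input is the concatenation [measured features ; physics parameter].
        def __init__(self, n_features):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features + 1, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, x, theta):
            # theta: hypothesized parameter (e.g., mass), shape (batch, 1).
            return self.net(torch.cat([x, theta], dim=1))

At test time the same trained network is simply evaluated at whatever parameter value is of interest.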

Theano: A Python framework for fast computation of mathematical expressions

1 code implementation · 9 May 2016 · The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang

Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements.

BIG-bench Machine Learning · Clustering +2
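Theano's core workflow is to build a symbolic expression graph and compile it into an optimized CPU or GPU function, with gradients derived symbolically. A minimal example of that workflow using the historical API:

    import theano
    import theano.tensor as T

    # Declare symbolic variables and build an expression graph.
    x = T.dmatrix('x')
    w = T.dmatrix('w')
    y = T.nnet.sigmoid(T.dot(x, w))

    # Compile the graph into a callable; Theano applies graph
    # optimizations and can target the GPU transparently.
    f = theano.function([x, w], y)

    # Symbolic differentiation of a scalar cost with respect to w.
    cost = y.sum()
    grad_w = theano.function([x, w], T.grad(cost, w))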

Learning in the Machine: Random Backpropagation and the Deep Learning Channel

no code implementations · 8 Dec 2016 · Pierre Baldi, Peter Sadowski, Zhiqin Lu

Random backpropagation is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system.
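In random backpropagation (and the closely related feedback alignment), the backward pass sends the output error through a fixed random matrix B rather than the transpose of the forward weights. A numpy sketch for a single hidden layer; sizes, nonlinearity, and learning rate are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out, lr = 10, 64, 1, 0.01
    W1 = rng.normal(0, 0.1, (n_in, n_hid))
    W2 = rng.normal(0, 0.1, (n_hid, n_out))
    B = rng.normal(0, 0.1, (n_out, n_hid))  # fixed random feedback matrix

    def train_step(x, y):
        global W1, W2
        h = np.tanh(x @ W1)
        y_hat = h @ W2
        e = y_hat - y                       # output error
        # Standard backprop would use W2.T here; random backprop uses B,
        # so no weight symmetry needs to be maintained.
        delta_h = (e @ B) * (1.0 - h ** 2)  # tanh derivative
        W2 -= lr * np.outer(h, e)
        W1 -= lr * np.outer(x, delta_h)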

Decorrelated Jet Substructure Tagging using Adversarial Neural Networks

no code implementations · 10 Mar 2017 · Chase Shimmin, Peter Sadowski, Pierre Baldi, Edison Weik, Daniel Whiteson, Edward Goul, Andreas Søgaard

We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass.

Jet Tagging
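The decorrelation idea is a minimax game: an adversary tries to recover the jet mass from the tagger's output, and the tagger is penalized for whatever mass information it leaks. A schematic PyTorch sketch of the combined objective; the module sizes, the mass-regression adversary, and the value of lam are illustrative assumptions:

    import torch
    import torch.nn as nn

    tagger = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
    adversary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
    bce = nn.BCEWithLogitsLoss()
    mse = nn.MSELoss()
    lam = 10.0  # decorrelation strength, illustrative value

    def tagger_loss(x, label, jet_mass):
        score = tagger(x)
        clf_loss = bce(score, label)
        adv_loss = mse(adversary(score), jet_mass)
        # The tagger minimizes this; the adversary is updated separately
        # to minimize adv_loss, yielding the adversarial game.
        return clf_loss - lam * adv_loss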

Efficient Antihydrogen Detection in Antimatter Physics by Deep Learning

1 code implementation · 6 Jun 2017 · Peter Sadowski, Balint Radics, Ananya, Yasunori Yamazaki, Pierre Baldi

Antihydrogen is at the forefront of antimatter research at the CERN Antiproton Decelerator.

Learning in the Machine: the Symmetries of the Deep Learning Channel

no code implementations · 22 Dec 2017 · Pierre Baldi, Peter Sadowski, Zhiqin Lu

Specifically, random backpropagation and its variations can be performed with the same non-linear neurons used in the main input-output forward channel, and the connections in the learning channel can be adapted using the same algorithm used in the forward channel, removing the need for any specialized hardware in the learning channel.

Neural Network Regression with Beta, Dirichlet, and Dirichlet-Multinomial Outputs

no code implementations · ICLR 2019 · Peter Sadowski, Pierre Baldi

We show that each target can be modeled as a sample from a Dirichlet distribution, where the parameters of the Dirichlet are provided by the output of a neural network, and that the combined model can be trained using the gradient of the data likelihood.

Decision Making · regression
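Concretely, the network emits the Dirichlet concentration parameters (mapped through a softplus to keep them positive), and training minimizes the negative log-likelihood of the observed compositional targets. A sketch using torch.distributions; the layer sizes and softplus offset are illustrative:

    import torch
    import torch.nn as nn
    from torch.distributions import Dirichlet

    class DirichletRegressor(nn.Module):
        def __init__(self, n_features, n_components):
            super().__init__()
            self.net = nn.Linear(n_features, n_components)

        def forward(self, x):
            # Softplus keeps the concentration parameters strictly positive.
            return nn.functional.softplus(self.net(x)) + 1e-6

    def nll(alpha, target):
        # target lies on the probability simplex (non-negative, sums to 1).
        return -Dirichlet(alpha).log_prob(target).mean()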

Tourbillon: a Physically Plausible Neural Architecture

no code implementations · 13 Jul 2021 · Mohammadamin Tavakoli, Peter Sadowski, Pierre Baldi

The circular autoencoders are trained in self-supervised mode by recirculation algorithms and the top layer in supervised mode by stochastic gradient descent, with the option of propagating error information through the entire stack using non-symmetric connections.
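Recirculation trains an autoencoder with purely local updates: the reconstruction is fed back through the encoder a second time, and weights are adjusted using activity differences between the two passes rather than transported gradients. A simplified numpy sketch of one such variant (an assumption-laden illustration, not Tourbillon's exact update rule):

    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid, lr = 20, 8, 0.05
    W1 = rng.normal(0, 0.1, (n_vis, n_hid))  # encoder weights
    W2 = rng.normal(0, 0.1, (n_hid, n_vis))  # decoder weights

    def recirculation_step(x):
        global W1, W2
        h1 = np.tanh(x @ W1)    # first pass: encode the input
        x1 = np.tanh(h1 @ W2)   # first pass: reconstruct
        h2 = np.tanh(x1 @ W1)   # second pass: re-encode the reconstruction
        # Local updates from activity differences across the two passes;
        # no symmetric weights or backpropagated gradients are required.
        W2 += lr * np.outer(h1, x - x1)
        W1 += lr * np.outer(x1, h1 - h2)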

Quantitative Imaging Principles Improves Medical Image Learning

1 code implementation · 14 Jun 2022 · Lambert T. Leong, Michael C. Wong, Yannik Glaser, Thomas Wolfgruber, Steven B. Heymsfield, Peter Sadowski, John A. Shepherd

Differences between image types are primarily due to the imaging modality: medical images utilize a wide range of physics-based techniques, while natural images are captured using only visible light.

Self-Supervised Learning · Transfer Learning

FishNet: Deep Neural Networks for Low-Cost Fish Stock Estimation

no code implementations · 16 Mar 2024 · Moseli Mots'oehli, Anton Nikolaev, Wawan B. IGede, John Lynham, Peter J. Mous, Peter Sadowski

These models are trained on a dataset of 50,000 hand-annotated images containing 163 different fish species, ranging in length from 10 cm to 250 cm.

Classification · object-detection +1
