Search Results for author: Nhan Tran

Found 32 papers, 15 papers with code

Architectural Implications of Neural Network Inference for High Data-Rate, Low-Latency Scientific Applications

no code implementations · 13 Mar 2024 · Olivia Weng, Alexander Redding, Nhan Tran, Javier Mauricio Duarte, Ryan Kastner

With more scientific fields relying on neural networks (NNs) to process data arriving at extreme throughputs and under tight latency constraints, it is crucial to develop NNs with all their parameters stored on-chip.

Neural Architecture Codesign for Fast Bragg Peak Analysis

no code implementations · 10 Dec 2023 · Luke McDermott, Jason Weitz, Dmitri Demler, Daniel Cummings, Nhan Tran, Javier Duarte

We develop an automated pipeline to streamline neural architecture codesign for fast, real-time Bragg peak analysis in high-energy diffraction microscopy.

Model Compression · Network Pruning · +2

Low latency optical-based mode tracking with machine learning deployed on FPGAs on a tokamak

no code implementations · 30 Nov 2023 · Yumou Wei, Ryan F. Forelli, Chris Hansen, Jeffrey P. Levesque, Nhan Tran, Joshua C. Agar, Giuseppe Di Guglielmo, Michael E. Mauel, Gerald A. Navratil

Active feedback control in magnetic confinement fusion devices is desirable to mitigate plasma instabilities and enable robust operation.

On-Sensor Data Filtering using Neuromorphic Computing for High Energy Physics Experiments

no code implementations · 20 Jul 2023 · Shruti R. Kulkarni, Aaron Young, Prasanna Date, Narasinga Rao Miniskar, Jeffrey S. Vetter, Farah Fahim, Benjamin Parpillon, Jennet Dickinson, Nhan Tran, Jieun Yoo, Corrinne Mills, Morris Swartz, Petar Maksimovic, Catherine D. Schuman, Alice Bean

We present our insights on the various system design choices - from data encoding to optimal hyperparameters of the training algorithm - for an accurate and compact SNN optimized for hardware deployment.
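
One concrete design choice the paper highlights is how to encode sensor data as spikes. Below is a minimal, hedged sketch of rate (Bernoulli/Poisson-style) encoding in Python; the function name and parameters are illustrative, not the paper's actual scheme.

```python
# Illustrative rate encoding: each normalized input value becomes a binary
# spike train whose firing rate is proportional to the value's magnitude.
import numpy as np

def rate_encode(values, n_steps=64, rng=None):
    """Encode values in [0, 1] as (n_inputs, n_steps) Bernoulli spike trains."""
    rng = rng or np.random.default_rng(0)
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    # A spike fires at a time step when a uniform draw falls below the value.
    return (rng.random((values.size, n_steps)) < values[:, None]).astype(np.uint8)

pixels = [0.05, 0.5, 0.95]      # e.g. normalized pixel charges
spikes = rate_encode(pixels)
print(spikes.mean(axis=1))      # empirical firing rates track the inputs
```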

Differentiable Earth Mover's Distance for Data Compression at the High-Luminosity LHC

no code implementations · 7 Jun 2023 · Rohan Shenoy, Javier Duarte, Christian Herwig, James Hirschauer, Daniel Noonan, Maurizio Pierini, Nhan Tran, Cristina Mantilla Suarez

In this paper, we train a convolutional neural network (CNN) to learn a differentiable, fast approximation of the EMD and demonstrate that it can be used as a substitute for computing-intensive EMD implementations.

Data Compression
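
For readers unfamiliar with the setup, here is a hedged PyTorch sketch of the idea: a small CNN takes a pair of calorimeter images and regresses the exact EMD value computed offline, giving a fast, differentiable surrogate. The architecture and sizes are placeholders, not the paper's model.

```python
# Supervised EMD surrogate: the network learns to predict the (expensive)
# exact EMD between two images from offline-computed targets.
import torch
import torch.nn as nn

class EMDApprox(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # two images as channels
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1)).squeeze(-1)

model = EMDApprox()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

a, b = torch.rand(8, 1, 16, 16), torch.rand(8, 1, 16, 16)  # dummy image pairs
emd_target = torch.rand(8)                                  # exact EMD labels

loss = nn.functional.mse_loss(model(a, b), emd_target)
opt.zero_grad(); loss.backward(); opt.step()
```

Once trained, the network itself is differentiable, so it can sit inside a larger loss function where the exact EMD could not.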

Structural Re-weighting Improves Graph Domain Adaptation

1 code implementation · 5 Jun 2023 · Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiang Qiu, Pan Li

This work examines the different impacts of distribution shifts caused by either graph structure or node attributes, and identifies a new type of shift, named conditional structure shift (CSS), which current GDA approaches are provably sub-optimal in dealing with.

Attribute · Domain Adaptation

End-to-end codesign of Hessian-aware quantized neural networks for FPGAs and ASICs

no code implementations · 13 Apr 2023 · Javier Campos, Zhen Dong, Javier Duarte, Amir Gholami, Michael W. Mahoney, Jovan Mitrevski, Nhan Tran

We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs) for efficient field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) hardware.

Quantization
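
As an illustration of the "Hessian-aware" ingredient, the sketch below estimates the Hessian trace with Hutchinson's method in PyTorch; parameters with a larger trace are more sensitive and would be assigned more bits. This is a generic implementation under those assumptions, not the authors' workflow.

```python
# Hutchinson estimator: tr(H) ~= E[v^T H v] with Rademacher vectors v,
# using Hessian-vector products so H is never formed explicitly.
import torch

def hessian_trace(loss, params, n_samples=8):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    est = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(g, 2) * 2.0 - 1.0 for g in grads]  # +/-1 entries
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        hvs = torch.autograd.grad(gv, params, retain_graph=True)    # H @ v
        est += sum((hv * v).sum() for hv, v in zip(hvs, vs)).item()
    return est / n_samples

# Toy usage: a more "sensitive" layer shows a larger trace.
model = torch.nn.Linear(4, 2)
x, y = torch.randn(16, 4), torch.randn(16, 2)
loss = torch.nn.functional.mse_loss(model(x), y)
print(hessian_trace(loss, list(model.parameters())))
```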

Neural network accelerator for quantum control

1 code implementation · 4 Aug 2022 · David Xu, A. Barış Özgüler, Giuseppe Di Guglielmo, Nhan Tran, Gabriel N. Perdue, Luca Carloni, Farah Fahim

Efficient quantum control is necessary for practical quantum computing implementations with current technologies.

BIG-bench Machine Learning

QONNX: Representing Arbitrary-Precision Quantized Neural Networks

1 code implementation · 15 Jun 2022 · Alessandro Pappalardo, Yaman Umuroglu, Michaela Blott, Jovan Mitrevski, Ben Hawks, Nhan Tran, Vladimir Loncar, Sioni Summers, Hendrik Borras, Jules Muhizi, Matthew Trahms, Shih-Chieh Hsu, Scott Hauck, Javier Duarte

We present extensions to the Open Neural Network Exchange (ONNX) intermediate representation format to represent arbitrary-precision quantized neural networks.

Quantization
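
The core semantics being represented can be summarized in a few lines: a quantize/dequantize step parameterized by an arbitrary bit width, scale, and zero point. The NumPy sketch below mirrors that idea; argument names are illustrative rather than the QONNX operator's exact attribute names.

```python
# Quantize to an arbitrary integer bit width, then map back to float.
# This is the arithmetic an arbitrary-precision Quant node expresses.
import numpy as np

def quant_dequant(x, scale, zero_point=0.0, bits=3, signed=True):
    if signed:
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bits - 1
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)  # integer grid
    return (q - zero_point) * scale                            # back to float

x = np.linspace(-1, 1, 9)
print(quant_dequant(x, scale=0.25, bits=3))   # values snapped to a 3-bit grid
```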

Applications and Techniques for Fast Machine Learning in Science

no code implementations · 25 Oct 2021 · Allison McCarn Deiana, Nhan Tran, Joshua Agar, Michaela Blott, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Scott Hauck, Mia Liu, Mark S. Neubauer, Jennifer Ngadiuba, Seda Ogrenci-Memik, Maurizio Pierini, Thea Aarrestad, Steffen Bahr, Jurgen Becker, Anne-Sophie Berthold, Richard J. Bonventre, Tomas E. Muller Bravo, Markus Diefenthaler, Zhen Dong, Nick Fritzsche, Amir Gholami, Ekaterina Govorkova, Kyle J Hazelwood, Christian Herwig, Babar Khan, Sehoon Kim, Thomas Klijnsma, Yaling Liu, Kin Ho Lo, Tri Nguyen, Gianantonio Pezzullo, Seyedramin Rasoulinezhad, Ryan A. Rivera, Kate Scholberg, Justin Selig, Sougata Sen, Dmitri Strukov, William Tang, Savannah Thais, Kai Lukas Unger, Ricardo Vilalta, Belina von Krosigk, Thomas K. Warburton, Maria Acosta Flechas, Anthony Aportela, Thomas Calvet, Leonardo Cristella, Daniel Diaz, Caterina Doglioni, Maria Domenica Galati, Elham E Khoda, Farah Fahim, Davide Giri, Benjamin Hawks, Duc Hoang, Burt Holzman, Shih-Chieh Hsu, Sergo Jindariani, Iris Johnson, Raghav Kansal, Ryan Kastner, Erik Katsavounidis, Jeffrey Krupa, Pan Li, Sandeep Madireddy, Ethan Marx, Patrick McCormack, Andres Meza, Jovan Mitrevski, Mohammed Attia Mohammed, Farouk Mokhtar, Eric Moreno, Srishti Nagu, Rohin Narayan, Noah Palladino, Zhiqiang Que, Sang Eon Park, Subramanian Ramamoorthy, Dylan Rankin, Simon Rothman, Ashish Sharma, Sioni Summers, Pietro Vischia, Jean-Roch Vlimant, Olivia Weng

In this community review report, we discuss applications and techniques for fast machine learning (ML) in science -- the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery.

BIG-bench Machine Learning

Semi-supervised Graph Neural Network for Particle-level Noise Removal

no code implementations · NeurIPS Workshop AI4Science 2021 · Tianchun Li, Shikun Liu, Yongbin Feng, Nhan Tran, Miaoyuan Liu, Pan Li

The graph neural network is trained on charged particles with well-known labels, which can be obtained from simulation truth information or from measurements on data, and is then applied at inference time to neutral particles, for which such labels are missing.
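
Schematically, this is semi-supervised node classification: train on the labeled (charged) nodes, infer on the unlabeled (neutral) ones. A hedged PyTorch Geometric sketch with placeholder features and a generic GCN follows; it is not the paper's architecture.

```python
# Semi-supervised node classification: supervise only the labeled subset,
# then predict on the nodes without labels.
import torch
from torch_geometric.nn import GCNConv

x = torch.randn(100, 8)                        # per-particle features (dummy)
edge_index = torch.randint(0, 100, (2, 400))   # particle-graph edges (dummy)
y = torch.randint(0, 2, (100,))                # noise vs. signal labels
charged = torch.rand(100) < 0.6                # mask of labeled (charged) nodes

class NoiseGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(8, 32)
        self.conv2 = GCNConv(32, 2)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = NoiseGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    out = model(x, edge_index)
    loss = torch.nn.functional.cross_entropy(out[charged], y[charged])
    loss.backward(); opt.step()

# Inference on the unlabeled (neutral) particles.
pred_neutral = model(x, edge_index)[~charged].argmax(dim=1)
```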

A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC

no code implementations · 4 May 2021 · Giuseppe Di Guglielmo, Farah Fahim, Christian Herwig, Manuel Blanco Valentin, Javier Duarte, Cristian Gingu, Philip Harris, James Hirschauer, Martin Kwok, Vladimir Loncar, Yingyi Luo, Llovizna Miranda, Jennifer Ngadiuba, Daniel Noonan, Seda Ogrenci-Memik, Maurizio Pierini, Sioni Summers, Nhan Tran

We demonstrate that a neural network autoencoder model can be implemented in a radiation-tolerant ASIC to perform lossy data compression, alleviating the data transmission problem while preserving critical information about the detector energy profile.

Data Compression · Quantization
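
A minimal sketch of the autoencoder pattern, assuming made-up sensor dimensions: the encoder (the part that would live on the ASIC) compresses a readout image into a few latent values, and an off-detector decoder reconstructs the energy profile.

```python
# Lossy compression autoencoder: send z instead of the full readout.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # on-detector: compress
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 4 * 4, 16),
)
decoder = nn.Sequential(            # off-detector: reconstruct
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 8 * 8), nn.Unflatten(1, (1, 8, 8)),
)

x = torch.rand(32, 1, 8, 8)         # dummy sensor "images"
z = encoder(x)                      # 16 floats replace 64 readout values
x_hat = decoder(z)
loss = nn.functional.mse_loss(x_hat, x)   # train to preserve the energy profile
```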

Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference

1 code implementation · 22 Feb 2021 · Benjamin Hawks, Javier Duarte, Nicholas J. Fraser, Alessandro Pappalardo, Nhan Tran, Yaman Umuroglu

We study various configurations of pruning during quantization-aware training, which we term quantization-aware pruning, and the effect of techniques like regularization, batch normalization, and different pruning schemes on performance, computational complexity, and information content metrics.

Bayesian Optimization · Computational Efficiency · +2
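
A hedged sketch of the combined recipe in PyTorch: weights pass through fake quantization with a straight-through gradient while magnitude pruning masks small weights during training. Bit width, pruning amount, and the layer are arbitrary choices for illustration, not the paper's configuration.

```python
# Quantization-aware pruning: prune by magnitude, train through fake quant.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def fake_quant(w, bits=6):
    """Uniform symmetric fake quantization with a straight-through estimator."""
    scale = w.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
    w_q = torch.clamp(torch.round(w / scale),
                      -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale
    return w + (w_q - w).detach()   # forward: quantized; backward: identity

layer = nn.Linear(16, 16)
prune.l1_unstructured(layer, name="weight", amount=0.5)   # mask small weights
opt = torch.optim.SGD(layer.parameters(), lr=1e-2)

x, y = torch.randn(8, 16), torch.randn(8, 16)
out = nn.functional.linear(x, fake_quant(layer.weight), layer.bias)
loss = nn.functional.mse_loss(out, y)
opt.zero_grad(); loss.backward(); opt.step()
```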

Real-time Artificial Intelligence for Accelerator Control: A Study at the Fermilab Booster

1 code implementation · 14 Nov 2020 · Jason St. John, Christian Herwig, Diana Kafkes, William A. Pellico, Gabriel N. Perdue, Andres Quintero-Parra, Brian A. Schupbach, Kiyomi Seiya, Nhan Tran, Javier M. Duarte, Yunzhi Huang, Malachi Schram, Rachael Keller

We describe a method for precisely regulating the gradient magnet power supply at the Fermilab Booster accelerator complex using a neural network trained via reinforcement learning.

Accelerator Physics
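
To make the setup concrete, here is a toy PyTorch sketch of the reinforcement-learning loop: an agent picks corrective actions to hold a drifting signal at a reference value and is rewarded for small regulation error. The surrogate dynamics are invented for illustration and are not a model of the Booster.

```python
# One-step Q-learning on a toy regulation task (no replay buffer, for brevity).
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
actions = [-0.1, 0.0, 0.1]                     # corrective nudges (made up)

state = torch.tensor([0.5, 0.0])               # (measured value, reference)
for step in range(200):
    q = q_net(state)
    # Epsilon-greedy action selection.
    a = int(q.argmax()) if torch.rand(1).item() > 0.1 else int(torch.randint(3, (1,)))
    # Surrogate dynamics: the value drifts with noise; the action nudges it.
    value = float(state[0]) + actions[a] + 0.02 * torch.randn(1).item()
    next_state = torch.tensor([value, 0.0])
    reward = -abs(value - float(next_state[1]))  # penalize regulation error
    with torch.no_grad():                        # one-step Q-learning target
        target = reward + 0.99 * q_net(next_state).max()
    loss = (q[a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```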

FPGAs-as-a-Service Toolkit (FaaST)

2 code implementations · 16 Oct 2020 · Dylan Sheldon Rankin, Jeffrey Krupa, Philip Harris, Maria Acosta Flechas, Burt Holzman, Thomas Klijnsma, Kevin Pedro, Nhan Tran, Scott Hauck, Shih-Chieh Hsu, Matthew Trahms, Kelvin Lin, Yu Lou, Ta-Wei Ho, Javier Duarte, Mia Liu

Computing needs for high energy physics are already intensive and are expected to increase drastically in the coming years.

Computational Physics · Distributed, Parallel, and Cluster Computing · High Energy Physics - Experiment · Data Analysis, Statistics and Probability · Instrumentation and Detectors

Graph Neural Networks for Particle Reconstruction in High Energy Physics detectors

no code implementations · 25 Mar 2020 · Xiangyang Ju, Steven Farrell, Paolo Calafiura, Daniel Murnane, Prabhat, Lindsey Gray, Thomas Klijnsma, Kevin Pedro, Giuseppe Cerati, Jim Kowalkowski, Gabriel Perdue, Panagiotis Spentzouris, Nhan Tran, Jean-Roch Vlimant, Alexander Zlokapa, Joosep Pata, Maria Spiropulu, Sitong An, Adam Aurisano, Jeremy Hewes, Aristeidis Tsaris, Kazuhiro Terao, Tracy Usher

Pattern recognition problems in high energy physics are notably different from traditional machine learning applications in computer vision.

Instrumentation and Detectors · High Energy Physics - Experiment · Computational Physics · Data Analysis, Statistics and Probability

Fast inference of Boosted Decision Trees in FPGAs for particle physics

3 code implementations · 5 Feb 2020 · Sioni Summers, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Duc Hoang, Sergo Jindariani, Edward Kreinar, Vladimir Loncar, Jennifer Ngadiuba, Maurizio Pierini, Dylan Rankin, Nhan Tran, Zhenbin Wu

We describe the implementation of Boosted Decision Trees in the hls4ml library, which allows the translation of a trained model into FPGA firmware through an automated conversion process.

Translation
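
A hedged sketch of the workflow the paper describes: train a BDT offline with scikit-learn, then hand it to an automated converter that emits FPGA firmware. The conversion calls below use the conifer package (which grew out of this hls4ml BDT work); the exact function and option names are assumptions and should be checked against the package's documentation.

```python
# Offline BDT training plus an assumed automated FPGA conversion step.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
import conifer   # assumed available; successor to hls4ml's BDT support

# Train an ordinary BDT offline.
X, y = make_classification(n_samples=1000, n_features=16, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Assumed conversion API: translate the trained model into an HLS project.
cfg = conifer.backends.xilinxhls.auto_config()                  # assumption
cfg["OutputDir"] = "bdt-fpga-prj"                               # placeholder
fpga_model = conifer.converters.convert_from_sklearn(bdt, cfg)  # assumption
fpga_model.compile()                  # builds a bit-accurate C emulation
print(fpga_model.decision_function(X[:5]))
```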

FPGA-accelerated machine learning inference as a service for particle physics computing

1 code implementation · 18 Apr 2019 · Javier Duarte, Philip Harris, Scott Hauck, Burt Holzman, Shih-Chieh Hsu, Sergo Jindariani, Suffian Khan, Benjamin Kreis, Brian Lee, Mia Liu, Vladimir Lončar, Jennifer Ngadiuba, Kevin Pedro, Brandon Perez, Maurizio Pierini, Dylan Rankin, Nhan Tran, Matthew Trahms, Aristeidis Tsaris, Colin Versteeg, Ted W. Way, Dustin Werran, Zhenbin Wu

New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains.

Data Analysis, Statistics and Probability · High Energy Physics - Experiment · Computational Physics · Instrumentation and Detectors

Fast inference of deep neural networks in FPGAs for particle physics

2 code implementations · 16 Apr 2018 · Javier Duarte, Song Han, Philip Harris, Sergo Jindariani, Edward Kreinar, Benjamin Kreis, Jennifer Ngadiuba, Maurizio Pierini, Ryan Rivera, Nhan Tran, Zhenbin Wu

For our example jet substructure model, we fit well within the available resources of modern FPGAs with a latency on the scale of 100 ns.

BIG-bench Machine Learning
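
For context, this is the paper behind the hls4ml library. A minimal conversion sketch follows, using hls4ml's documented Keras flow; the model, FPGA part number, and output directory are placeholders, and option names can vary across hls4ml versions.

```python
# Translate a trained Keras model into an HLS project with hls4ml.
import hls4ml
from tensorflow import keras

# A stand-in model; the paper's benchmark is a jet-substructure classifier.
model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    keras.layers.Dense(5, activation="softmax"),
])

config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls4ml_prj",            # placeholder
    part="xcku115-flvb2104-2-i",        # placeholder FPGA part
)
hls_model.compile()      # C simulation for validating the converted model
# hls_model.build()      # would launch full HLS synthesis (long-running)
```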
