We identify the primary ways in which self-supervision can be added to adversarial training, and observe that using a self-supervised loss both to optimize network parameters and to find adversarial examples yields the strongest improvement in model robustness, as this can be viewed as a form of ensemble adversarial training.
This work tests whether deep neural networks can clean laser-induced breakdown spectroscopy (LIBS) signals using only uncleaned raw measurements.
Constructing probability densities for inference in high-dimensional spectral data is often intractable.
Because every step involved in computing this modified GDD is differentiable, we demonstrate that a small neural network model can learn edge weights that minimize the loss.
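The key point above is that a fully differentiable pipeline lets edge weights be learned by gradient descent. The following is a minimal, hypothetical sketch of that idea with a stand-in quadratic loss, not the paper's actual modified GDD:

```python
import numpy as np

# Hypothetical sketch: learn edge weights w by gradient descent on a
# differentiable loss. The quadratic objective here is a stand-in for the
# paper's modified GDD; only the optimization pattern is illustrated.
rng = np.random.default_rng(0)

n_edges = 5
target = rng.normal(size=n_edges)      # stand-in target the weights should match
w = np.zeros(n_edges)                  # edge weights to learn

lr = 0.1
for _ in range(200):
    grad = w - target                  # d/dw of loss = 0.5 * ||w - target||^2
    w -= lr * grad                     # plain gradient-descent update

loss = 0.5 * np.sum((w - target) ** 2)
```

Because the loss is differentiable end to end, the same update loop applies unchanged when the stand-in objective is replaced by any differentiable graph distance.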
This work proposes a spectral convolutional neural network (CNN) operating on laser-induced breakdown spectroscopy (LIBS) signals to learn to (1) disentangle spectral signals from sources of sensor uncertainty (i.e., pre-process) and (2) produce qualitative and quantitative measures of the chemical content of a sample given a spectral signal (i.e., calibrate).
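The basic building block such a spectral CNN stacks is a 1-D convolution over the wavelength axis. The sketch below is purely illustrative (synthetic spectrum, hand-picked box kernel), not the authors' architecture:

```python
import numpy as np

# Illustrative only: one 1-D convolution pass over a synthetic "spectrum",
# the elementary operation a spectral CNN would stack and learn in order to
# pre-process and calibrate LIBS signals. Kernel and sizes are hypothetical.
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation over a spectrum."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

spectrum = np.zeros(16)
spectrum[8] = 1.0                      # a single synthetic emission peak
smoothing = np.ones(3) / 3.0           # box kernel: a crude denoising filter

smoothed = conv1d(spectrum, smoothing)
```

In a trained network the kernel values would be learned rather than fixed, so the same operation can implement denoising, baseline removal, or feature extraction depending on the loss.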
Catastrophic failure in brittle materials is often due to the rapid growth and coalescence of cracks aided by high internal stresses.
We then use transfer learning with our small set of manually labeled patent diagram images to adapt image search from sketches of natural images to diagrams.
Binary-image-based classification and retrieval of intellectual property documents is a very challenging problem.
Line segment detection is an essential task in computer vision and image analysis, as it is the critical foundation for advanced tasks such as shape modeling and road lane line detection for autonomous driving.
no code implementations • 4 Jan 2019 • Carleton Coffrin, James Arnold, Stephan Eidenbenz, Derek Aberle, John Ambrosiano, Zachary Baker, Sara Brambilla, Michael Brown, K. Nolan Carter, Pinghan Chu, Patrick Conry, Keeley Costigan, Ariane Eberhardt, David M. Fobes, Adam Gausmann, Sean Harris, Donovan Heimer, Marlin Holmes, Bill Junor, Csaba Kiss, Steve Linger, Rodman Linn, Li-Ta Lo, Jonathan MacCarthy, Omar Marcillo, Clay McGinnis, Alexander McQuarters, Eric Michalak, Arvind Mohan, Matt Nelson, Diane Oyen, Nidhi Parikh, Donatella Pasqualini, Aaron S. Pope, Reid Porter, Chris Rawlings, Hannah Reinbolt, Reid Rivenburgh, Phil Romero, Kevin Schoonover, Alexei Skurikhin, Daniel Tauritz, Dima Tretiak, Zhehui Wang, James Wernicke, Brad Wolfe, Phillip Wolfram, Jonathan Woodring
This report describes eighteen projects that explored how commercial cloud computing services can be utilized for scientific computation at national laboratories.
5 code implementations • Patrick J. Coles, Stephan Eidenbenz, Scott Pakin, Adetokunbo Adedoyin, John Ambrosiano, Petr Anisimov, William Casper, Gopinath Chennupati, Carleton Coffrin, Hristo Djidjev, David Gunter, Satish Karra, Nathan Lemons, Shizeng Lin, Andrey Lokhov, Alexander Malyzhenkov, David Mascarenas, Susan Mniszewski, Balu Nadiga, Dan O'Malley, Diane Oyen, Lakshman Prasad, Randy Roberts, Phil Romero, Nandakishore Santhi, Nikolai Sinitsyn, Pieter Swart, Marc Vuffray, Jim Wendelberger, Boram Yoon, Richard Zamora, Wei Zhu
As quantum computers have become available to the general public, the need has arisen to train a cohort of quantum programmers, many of whom have been developing classical computer programs for most of their careers.
Emerging Technologies • Quantum Physics
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering.
Bayesian network structure learning algorithms with limited data are being used in domains such as systems biology and neuroscience to gain insight into the underlying processes that produce observed data.
We then show that by imposing a bias toward learning similar dependency networks for each condition, the false discovery rate can be reduced to acceptable levels, at the cost of detecting fewer differences.
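The trade-off described above can be illustrated with a toy version of the similarity bias: fit two per-condition coefficient vectors while penalizing their disagreement. This is a hypothetical sketch, not the paper's estimator:

```python
import numpy as np

# Hypothetical illustration of biasing two conditions toward similar
# dependency networks: minimize per-condition fit plus a coupling penalty
#   0.5*||b1 - t1||^2 + 0.5*||b2 - t2||^2 + 0.5*lam*||b1 - b2||^2.
# Larger lam pulls the learned networks together, trading detected
# differences for a lower false-discovery rate.
def fit_pair(t1, t2, lam, lr=0.1, steps=500):
    b1 = np.zeros_like(t1)
    b2 = np.zeros_like(t2)
    for _ in range(steps):
        g1 = (b1 - t1) + lam * (b1 - b2)   # gradient w.r.t. b1
        g2 = (b2 - t2) + lam * (b2 - b1)   # gradient w.r.t. b2
        b1 -= lr * g1
        b2 -= lr * g2
    return b1, b2

t1 = np.array([1.0, 0.0, 0.5])             # condition 1 "true" dependencies
t2 = np.array([0.0, 0.0, 0.5])             # condition 2 differs only at index 0

loose1, loose2 = fit_pair(t1, t2, lam=0.0)  # independent fits
tied1, tied2 = fit_pair(t1, t2, lam=5.0)    # biased toward agreement
```

At the optimum the learned difference shrinks to (t1 - t2)/(1 + 2*lam), so the coupled fit reports a much smaller difference at index 0 than the independent fit, which is exactly the fewer-but-more-reliable-differences behavior described above.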