Search Results for author: Neale Ratzlaff

Found 9 papers, 1 paper with code

Contrastive Identification of Covariate Shift in Image Data

no code implementations18 Aug 2021 Matthew L. Olson, Thuy-Vy Nguyen, Gaurav Dixit, Neale Ratzlaff, Weng-Keen Wong, Minsuk Kahng

Identifying covariate shift is crucial for making machine learning systems robust in the real world and for detecting training data biases that are not reflected in test data.

Attribute
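As context for the task above, here is a minimal sketch of one generic way to flag covariate shift between a training set and a test set: train a domain classifier to tell the two apart. This is a common baseline check, not the contrastive visual-analytics interface the paper proposes, and the feature arrays are assumed inputs.

```python
# Hypothetical sketch: flagging covariate shift with a domain classifier.
# An AUC near 0.5 means train and test features are indistinguishable
# (no detectable shift); an AUC near 1.0 means strong covariate shift.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def covariate_shift_score(train_feats: np.ndarray, test_feats: np.ndarray) -> float:
    """Cross-validated AUC of a classifier separating train from test features."""
    X = np.vstack([train_feats, test_feats])
    y = np.concatenate([np.zeros(len(train_feats)), np.ones(len(test_feats))])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```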

Generative Particle Variational Inference via Estimation of Functional Gradients

no code implementations1 Mar 2021 Neale Ratzlaff, Qinxun Bai, Li Fuxin, Wei Xu

Recently, particle-based variational inference (ParVI) methods have gained interest because they can avoid arbitrary parametric assumptions that are common in variational inference.

Variational Inference
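For reference, the sketch below implements one step of Stein variational gradient descent (SVGD), the canonical ParVI update that moves a set of particles along a kernelized functional gradient. The paper's method (GPVI) instead estimates functional gradients through a generator, so this is only a representative ParVI baseline, assuming an RBF kernel with the median bandwidth heuristic.

```python
# One SVGD step (a standard ParVI update), sketched in NumPy.
import numpy as np

def svgd_step(particles, grad_log_p, step=0.1):
    """particles: (n, d) array; grad_log_p(x) -> (n, d) scores of the target."""
    n, _ = particles.shape
    diff = particles[:, None, :] - particles[None, :, :]    # diff[i, j] = x_i - x_j
    sq = (diff ** 2).sum(axis=-1)
    h = np.median(sq) / np.log(n + 1) + 1e-8                # median heuristic bandwidth
    k = np.exp(-sq / h)                                     # RBF kernel matrix
    attract = k @ grad_log_p(particles)                     # pull toward high density
    repel = (2.0 / h) * (diff * k[..., None]).sum(axis=1)   # keep particles spread out
    return particles + step * (attract + repel) / n

# Usage: particles targeting a standard normal, whose score is -x.
# x = svgd_step(np.random.randn(50, 2), lambda x: -x)
```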

Avoiding Side Effects in Complex Environments

2 code implementations NeurIPS 2020 Alexander Matt Turner, Neale Ratzlaff, Prasad Tadepalli

By preserving optimal value for a single randomly generated reward function, AUP incurs modest overhead while leading the agent to complete the specified task and avoid many side effects.
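A hedged sketch of the reward shaping this describes: the agent's task reward is penalized in proportion to how much an action changes its attainable value for the auxiliary (randomly generated) reward function, relative to doing nothing. The names `q_aux_action` and `q_aux_noop` and the normalization are illustrative, not the paper's exact formulation.

```python
# Illustrative AUP-style shaped reward (not the paper's exact scaling).
def aup_reward(task_reward: float,
               q_aux_action: float,   # auxiliary Q-value of the chosen action
               q_aux_noop: float,     # auxiliary Q-value of doing nothing
               lam: float = 0.1) -> float:
    """Task reward minus a penalty for shifting auxiliary attainable utility."""
    scale = max(abs(q_aux_noop), 1e-8)              # keep the penalty unitless
    penalty = abs(q_aux_action - q_aux_noop) / scale
    return task_reward - lam * penalty
```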

Implicit Generative Modeling for Efficient Exploration

no code implementations ICML 2020 Neale Ratzlaff, Qinxun Bai, Li Fuxin, Wei Xu

Each random draw from our generative model is a neural network that instantiates the dynamics function; multiple draws thus approximate the posterior, and the variance of future predictions under this posterior serves as an intrinsic reward for exploration.

Efficient Exploration · Future prediction
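A hypothetical sketch of the exploration bonus described above: draw several dynamics networks from the generative model and reward the agent where their next-state predictions disagree. The `generator` interface (noise in, callable dynamics model out) and its `noise_dim` attribute are assumptions for illustration.

```python
# Disagreement among sampled dynamics functions as an intrinsic reward.
import torch

def intrinsic_reward(generator, state, action, n_draws=8):
    """Posterior-variance exploration bonus from sampled dynamics networks."""
    preds = []
    for _ in range(n_draws):
        z = torch.randn(generator.noise_dim)   # one posterior sample
        dynamics = generator(z)                # a network f(s, a) -> s'
        preds.append(dynamics(state, action))
    preds = torch.stack(preds)                 # (n_draws, state_dim)
    return preds.var(dim=0).mean()             # average predictive variance
```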

HyperGAN: A Generative Model for Diverse, Performant Neural Networks

no code implementations30 Jan 2019 Neale Ratzlaff, Li Fuxin

We introduce HyperGAN, a new generative model for learning a distribution of neural network parameters.

General Classification
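A minimal hypernetwork-style sketch of the idea: a generator maps a latent code to the full weight vector of a small MLP classifier, so every draw instantiates a different network. The layer sizes and the single flat generator are illustrative; the paper's actual architecture (e.g., its mixer and per-layer generators) is not reproduced here.

```python
# Sampling classifier weights from a generator (illustrative sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

IN, HID, OUT = 784, 128, 10
N_PARAMS = IN * HID + HID + HID * OUT + OUT   # weights + biases of a 2-layer MLP

generator = nn.Sequential(
    nn.Linear(64, 512), nn.ReLU(),
    nn.Linear(512, N_PARAMS),
)

def sampled_forward(x, z):
    """Classify x with weights generated from latent code z."""
    theta = generator(z)
    w1, b1, w2, b2 = torch.split(theta, [IN * HID, HID, HID * OUT, OUT])
    h = F.relu(x @ w1.view(IN, HID) + b1)
    return h @ w2.view(HID, OUT) + b2

# Usage: each latent code yields a distinct network, so an ensemble is cheap.
# logits = sampled_forward(torch.randn(32, 784), torch.randn(64))
```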

HyperGAN: Exploring the Manifold of Neural Networks

no code implementations27 Sep 2018 Neale Ratzlaff, Li Fuxin

We introduce HyperGAN, a generative network that learns to generate all the weight parameters of deep neural networks.

Unifying Bilateral Filtering and Adversarial Training for Robust Neural Networks

no code implementations5 Apr 2018 Neale Ratzlaff, Li Fuxin

To evaluate against an adversary with complete knowledge of our defense, we adapt the bilateral filter as a trainable layer in a neural network and show that adding this layer makes classification of ImageNet images significantly more robust to attack.
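A sketch of what a bilateral filter as a trainable layer could look like in PyTorch, with learnable spatial and range bandwidths so the filter can be trained end to end; the paper's exact parameterization and placement in the network may differ.

```python
# Differentiable bilateral filter layer (illustrative parameterization).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralLayer(nn.Module):
    def __init__(self, ksize=5):
        super().__init__()
        self.ksize = ksize
        self.log_sigma_space = nn.Parameter(torch.zeros(1))  # learnable bandwidths
        self.log_sigma_range = nn.Parameter(torch.zeros(1))
        ax = torch.arange(ksize) - ksize // 2
        yy, xx = torch.meshgrid(ax, ax, indexing="ij")
        self.register_buffer(
            "sq_dist", (xx ** 2 + yy ** 2).float().view(1, 1, -1, 1, 1)
        )

    def forward(self, x):                        # x: (B, C, H, W)
        k = self.ksize
        B, C, H, W = x.shape
        patches = F.unfold(x, k, padding=k // 2) # (B, C*k*k, H*W)
        patches = patches.view(B, C, k * k, H, W)
        # Spatial weights from pixel distance, range weights from intensity gaps.
        w_space = torch.exp(-self.sq_dist / (2 * self.log_sigma_space.exp() ** 2))
        w_range = torch.exp(-(patches - x.unsqueeze(2)) ** 2
                            / (2 * self.log_sigma_range.exp() ** 2))
        w = w_space * w_range                    # (B, C, k*k, H, W)
        return (w * patches).sum(2) / w.sum(2).clamp_min(1e-8)
```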
