Search Results for author: James Lucas

Found 15 papers, 7 papers with code

Causal Scene BERT: Improving object detection by searching for challenging groups of data

no code implementations • 8 Feb 2022 • Cinjon Resnick, Or Litany, Amlan Kar, Karsten Kreis, James Lucas, Kyunghyun Cho, Sanja Fidler

Our main contribution is a pseudo-automatic method to discover such groups in foresight by performing causal interventions on simulated scenes.

Autonomous Vehicles • Object Detection

Causal Scene BERT: Improving object detection by searching for challenging groups

no code implementations • 29 Sep 2021 • Cinjon Resnick, Or Litany, Amlan Kar, Karsten Kreis, James Lucas, Kyunghyun Cho, Sanja Fidler

We verify that the prioritized groups found via intervention are challenging for the object detector and show that retraining with data collected from these groups helps inordinately compared to adding more IID data.

Autonomous Vehicles • Object Detection

Analyzing Monotonic Linear Interpolation in Neural Network Loss Landscapes

1 code implementation • 22 Apr 2021 • James Lucas, Juhan Bae, Michael R. Zhang, Stanislav Fort, Richard Zemel, Roger Grosse

Linear interpolation between initial neural network parameters and converged parameters after training with stochastic gradient descent (SGD) typically leads to a monotonic decrease in the training objective.
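The property is straightforward to probe. Below is a minimal sketch (not the authors' released code; the model, loss function, and data batch are placeholder assumptions) that evaluates the training loss along the straight line between initial and converged parameters:

    import copy
    import torch

    def loss_on_interpolation_path(model_init, model_final, loss_fn, data, targets, steps=20):
        # Evaluate the training loss at theta(a) = (1 - a) * theta_init + a * theta_final.
        probe = copy.deepcopy(model_init)
        init_params = list(model_init.parameters())
        final_params = list(model_final.parameters())
        losses = []
        for a in torch.linspace(0.0, 1.0, steps):
            with torch.no_grad():
                for p, p0, p1 in zip(probe.parameters(), init_params, final_params):
                    p.copy_((1.0 - a) * p0 + a * p1)
                losses.append(loss_fn(probe(data), targets).item())
        return losses  # MLI holds when this sequence decreases (near-)monotonically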

Exploring representation learning for flexible few-shot tasks

no code implementations • 1 Jan 2021 • Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel

In this work, we consider a realistic setting where the relationship between examples can change from episode to episode depending on the task context, which is not given to the learner.

Few-Shot Learning • Representation Learning

Few-Shot Attribute Learning

no code implementations • 10 Dec 2020 • Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel

Compared to standard few-shot learning of semantic classes, in which novel classes may be defined by attributes that were relevant at training time, learning new attributes imposes a stiffer challenge.

Few-Shot Learning • Zero-Shot Learning

Theoretical bounds on estimation error for meta-learning

no code implementations • ICLR 2021 • James Lucas, Mengye Ren, Irene Kameni, Toniann Pitassi, Richard Zemel

Machine learning models have traditionally been developed under the assumption that the training and test distributions match exactly.

Few-Shot Learning

Regularized linear autoencoders recover the principal components, eventually

1 code implementation • NeurIPS 2020 • Xuchan Bao, James Lucas, Sushant Sachdeva, Roger Grosse

Our understanding of learning input-output relationships with neural nets has improved rapidly in recent years, but little is known about the convergence of the underlying representations, even in the simple case of linear autoencoders (LAEs).
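A toy experiment in the spirit of the title (all shapes, hyperparameters, and the choice of uniform L2 regularization below are illustrative assumptions, not the paper's exact setup):

    import torch

    torch.manual_seed(0)
    n, d, k = 1000, 20, 5
    X = torch.randn(n, d) @ torch.randn(d, d)  # synthetic correlated data
    X = X - X.mean(0)

    W_enc = torch.nn.Parameter(0.1 * torch.randn(k, d))  # encoder
    W_dec = torch.nn.Parameter(0.1 * torch.randn(d, k))  # decoder
    opt = torch.optim.Adam([W_enc, W_dec], lr=1e-2, weight_decay=1e-3)  # the regularizer

    for _ in range(5000):
        opt.zero_grad()
        loss = ((X @ W_enc.T @ W_dec.T - X) ** 2).mean()
        loss.backward()
        opt.step()

    # Compare normalized decoder columns against the top-k principal directions.
    _, _, Vt = torch.linalg.svd(X, full_matrices=False)
    D = torch.nn.functional.normalize(W_dec.detach(), dim=0)
    print((D.T @ Vt[:k].T).abs())  # approaches a permutation matrix as directions align

The "eventually" in the title is the caveat: alignment with the individual principal directions (rather than merely their span) can emerge very slowly under gradient-based training.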

Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse

no code implementations • NeurIPS 2019 • James Lucas, George Tucker, Roger Grosse, Mohammad Norouzi

Posterior collapse in Variational Autoencoders (VAEs) arises when the variational posterior distribution closely matches the prior for a subset of latent variables.

Variational Inference
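For a Gaussian VAE this condition can be checked dimension by dimension. A minimal diagnostic sketch (not from the paper; the shapes, the synthetic posterior statistics, and the 0.01 threshold are illustrative):

    import torch

    def kl_per_dimension(mu, logvar):
        # KL( N(mu, exp(logvar)) || N(0, 1) ), averaged over the batch, per latent dim.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
        return kl.mean(dim=0)

    # Fake posterior statistics: the last 8 latent dimensions barely depend on x.
    mu = torch.randn(128, 16) * torch.tensor([1.0] * 8 + [0.01] * 8)
    logvar = torch.zeros(128, 16)
    collapsed = kl_per_dimension(mu, logvar) < 0.01
    print(collapsed)  # True exactly where q(z|x) is (nearly) indistinguishable from p(z)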

Understanding Posterior Collapse in Generative Latent Variable Models

no code implementations • ICLR Workshop DeepGenStruct 2019 • James Lucas, George Tucker, Roger Grosse, Mohammad Norouzi

Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables.

Variational Inference

Sorting out Lipschitz function approximation

1 code implementation • 13 Nov 2018 • Cem Anil, James Lucas, Roger Grosse

We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation.

Adversarial Robustness • Generalization Bounds
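The activation the paper proposes with this property is GroupSort: activations are split into groups and sorted within each group, which merely permutes coordinates and therefore preserves the gradient norm. A minimal sketch (the group size and shapes are illustrative; the paper pairs this activation with norm-constrained weight matrices):

    import torch

    def group_sort(x, group_size=2):
        # Sort activations within contiguous groups along the last dimension.
        b, d = x.shape
        assert d % group_size == 0
        return x.view(b, d // group_size, group_size).sort(dim=-1).values.view(b, d)

    x = torch.randn(4, 8, requires_grad=True)
    group_sort(x).sum().backward()
    # Sorting only permutes entries, so each row of x.grad is a permutation of the
    # upstream gradient and the per-example gradient norm is unchanged.
    print(x.grad.norm(dim=1))  # sqrt(8) for every row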

Distilling the Posterior in Bayesian Neural Networks

no code implementations • ICML 2018 • Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, Richard Zemel

We propose a framework, Adversarial Posterior Distillation, to distill stochastic gradient Langevin dynamics (SGLD) samples using a Generative Adversarial Network (GAN).

Active Learning • Anomaly Detection

Adversarial Distillation of Bayesian Neural Network Posteriors

1 code implementation • 27 Jun 2018 • Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, Richard Zemel

We propose a framework, Adversarial Posterior Distillation, to distill the SGLD samples using a Generative Adversarial Network (GAN).

Active Learning • Anomaly Detection
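A simplified sketch of the sampling side (the log-posterior gradient callback, step size, and thinning interval are placeholder assumptions). SGLD produces the weight samples that the GAN is then trained to reproduce, replacing storage of the full chain:

    import torch

    def sgld_step(params, log_posterior_grad_fn, step_size):
        # theta <- theta + (eps / 2) * grad log p(theta | data) + N(0, eps * I)
        grads = log_posterior_grad_fn(params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                noise = torch.randn_like(p) * step_size ** 0.5
                p.add_(0.5 * step_size * g + noise)

    def collect_sgld_samples(params, log_posterior_grad_fn, n_samples, step_size=1e-4, thin=10):
        samples = []
        for t in range(n_samples * thin):
            sgld_step(params, log_posterior_grad_fn, step_size)
            if (t + 1) % thin == 0:
                samples.append(torch.cat([p.detach().flatten() for p in params]))
        return torch.stack(samples)  # the "real" data for the distilling GAN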

Aggregated Momentum: Stability Through Passive Damping

2 code implementations • ICLR 2019 • James Lucas, Shengyang Sun, Richard Zemel, Roger Grosse

Momentum is a simple and widely used trick which allows gradient-based optimizers to pick up speed along low curvature directions.
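The paper's proposal, aggregated momentum (AggMo), maintains several velocity vectors with different damping coefficients and applies their average, so that heavily damped velocities passively quell the oscillations of lightly damped ones. A minimal sketch of one update (the beta values and learning rate are illustrative defaults):

    import torch

    def aggmo_step(params, grads, velocities, lr=0.1, betas=(0.0, 0.9, 0.99)):
        # v_i <- beta_i * v_i - grad;  theta <- theta + (lr / K) * sum_i v_i
        k = len(betas)
        with torch.no_grad():
            for p, g, vs in zip(params, grads, velocities):
                for i, beta in enumerate(betas):
                    vs[i].mul_(beta).sub_(g)
                p.add_(sum(vs), alpha=lr / k)

Here velocities is a list (one entry per parameter) of lists of K zero-initialized tensors shaped like that parameter.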
