Search Results for author: Ian Fischer

Found 30 papers, 13 papers with code

A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts

no code implementations15 Feb 2024 Kuang-Huei Lee, Xinyun Chen, Hiroki Furuta, John Canny, Ian Fischer

Current Large Language Models (LLMs) are not only limited to a maximum context length, but are also unable to robustly consume long inputs.

Reading Comprehension · Retrieval
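The title suggests a gist-memory workflow: paginate the long input, compress each page into a short gist, and look pages back up on demand. The sketch below is only an illustration of that idea under stated assumptions; `llm` and `page_chars` are hypothetical placeholders, not an interface from the paper.

```python
# Minimal sketch of a gist-memory reading loop, assuming a generic `llm(prompt)`
# text-completion callable (hypothetical placeholder, not the paper's API).
def read_with_gists(llm, long_text, question, page_chars=2000):
    # 1. Paginate: split the long input into pages the model can consume.
    pages = [long_text[i:i + page_chars] for i in range(0, len(long_text), page_chars)]

    # 2. Gist: compress each page into a short summary kept in memory.
    gists = [llm(f"Summarize this passage in two sentences:\n{p}") for p in pages]

    # 3. Look up: let the model pick which pages to re-read in full for the task.
    menu = "\n".join(f"[{i}] {g}" for i, g in enumerate(gists))
    picks = llm(f"Question: {question}\nGists:\n{menu}\n"
                "List the page numbers worth re-reading, comma-separated:")
    selected = [int(s) for s in picks.split(",") if s.strip().isdigit()]

    # 4. Answer from the gists plus the retrieved original pages.
    context = menu + "\n" + "\n".join(pages[i] for i in selected if i < len(pages))
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```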

Weighted Ensemble Self-Supervised Learning

no code implementations18 Nov 2022 Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon

Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.

Self-Supervised Learning

PI-QT-Opt: Predictive Information Improves Multi-Task Robotic Reinforcement Learning at Scale

no code implementations15 Oct 2022 Kuang-Huei Lee, Ted Xiao, Adrian Li, Paul Wohlhart, Ian Fischer, Yao Lu

The predictive information, the mutual information between the past and future, has been shown to be a useful representation learning auxiliary loss for training reinforcement learning agents, as the ability to model what will happen next is critical to success on many control tasks.

reinforcement-learning · Reinforcement Learning (RL) +2
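As a concrete illustration of a predictive-information auxiliary loss, the sketch below uses an InfoNCE-style contrastive bound between embeddings of the past and the future. This is a generic stand-in (the paper itself builds on a CEB-style objective), and the encoder outputs are assumed to be given.

```python
import numpy as np

def infonce_predictive_info_loss(past_emb, future_emb, temperature=0.1):
    """InfoNCE lower bound on I(past; future) over a batch of paired embeddings.

    past_emb, future_emb: arrays of shape [batch, dim] produced by (assumed)
    encoders of the past trajectory and the future trajectory.
    """
    # Normalize so the similarity is a cosine similarity.
    p = past_emb / np.linalg.norm(past_emb, axis=1, keepdims=True)
    f = future_emb / np.linalg.norm(future_emb, axis=1, keepdims=True)

    logits = (p @ f.T) / temperature                   # [batch, batch] similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Matching (past_i, future_i) pairs are the positives on the diagonal.
    return -np.mean(np.diag(log_softmax))
```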

Deep Hierarchical Planning from Pixels

1 code implementation8 Jun 2022 Danijar Hafner, Kuang-Huei Lee, Ian Fischer, Pieter Abbeel

Although the agent operates in latent space, its decisions are interpretable because the world model can decode goals into images for visualization.

Atari Games · Hierarchical Reinforcement Learning

Multi-Game Decision Transformers

1 code implementation30 May 2022 Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, Igor Mordatch

Specifically, we show that a single transformer-based model - with a single set of weights - trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance.

Atari Games · Offline RL
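For context, decision-transformer-style models treat control as sequence modeling: trajectories are flattened into interleaved return-to-go, observation, and action tokens, and the model learns to predict each action from the sequence so far. The snippet below is a minimal, hypothetical illustration of that token layout, not the paper's implementation.

```python
# Minimal illustration of decision-transformer-style sequence construction
# (hypothetical layout, not the paper's tokenizer).
def build_sequence(returns_to_go, observations, actions):
    """Interleave per-timestep (return-to-go, observation, action) tokens so a
    transformer can predict each action from everything that precedes it."""
    tokens = []
    for rtg, obs, act in zip(returns_to_go, observations, actions):
        tokens.extend([("rtg", rtg), ("obs", obs), ("act", act)])
    return tokens

# In the basic decision-transformer recipe, conditioning on a high target
# return at evaluation time steers the shared model toward expert-like
# behavior in whichever game it is playing.
```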

An Empirical Investigation of Representation Learning for Imitation

2 code implementations16 May 2022 Xin Chen, Sam Toyer, Cody Wild, Scott Emmons, Ian Fischer, Kuang-Huei Lee, Neel Alex, Steven H Wang, Ping Luo, Stuart Russell, Pieter Abbeel, Rohin Shah

We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation across several environment suites.

Image Classification · Imitation Learning +1

Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck

no code implementations4 Mar 2022 Anirban Samaddar, Sandeep Madireddy, Prasanna Balaprakash, Tapabrata Maiti, Gustavo de los Campos, Ian Fischer

In addition, the proposed prior provides a mechanism for learning a joint distribution of the latent variable and its sparsity, and hence can account for the complete uncertainty in the latent space.

Compressive Visual Representations

1 code implementation NeurIPS 2021 Kuang-Huei Lee, Anurag Arnab, Sergio Guadarrama, John Canny, Ian Fischer

We verify this by developing SimCLR and BYOL formulations compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representation, and observe their impact on downstream tasks.

Contrastive Learning · Self-Supervised Image Classification
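A rough sketch of the general recipe described here: pair a contrastive (InfoNCE-style) term, which lower-bounds the retained information, with a CEB-style residual-information penalty that controls compression between the two augmented views. The code below is a toy version with unit-variance Gaussian encoders and made-up parameter names; the paper's actual distributional choices and losses differ.

```python
import numpy as np

def ceb_compressed_contrastive_loss(mu1, mu2, kappa=1.0, beta=0.1, temperature=0.1):
    """Toy sketch: an InfoNCE term plus a CEB-style residual-information penalty
    between the two augmented views of each image.

    mu1, mu2: [batch, dim] means of stochastic encoders for the two views.
    This simplification assumes isotropic Gaussian encoders with precision kappa.
    """
    # Sample z from the forward encoder e(z|x1) (reparameterized Gaussian).
    z = mu1 + np.random.randn(*mu1.shape) / np.sqrt(kappa)

    # Residual-information term: log e(z|x1) - log b(z|x2), which in expectation
    # upper-bounds the information about view 1 not shared with view 2.
    residual = 0.5 * kappa * (((z - mu2) ** 2).sum(1) - ((z - mu1) ** 2).sum(1))

    # InfoNCE term over the batch, a lower bound on the shared information.
    logits = (z @ mu2.T) / temperature
    logits = logits - logits.max(axis=1, keepdims=True)
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nce = -np.diag(log_softmax)

    return np.mean(nce + beta * residual)
```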

VIB is Half Bayes

no code implementations AABI Symposium 2021 Alexander A Alemi, Warren R Morningstar, Ben Poole, Ian Fischer, Joshua V Dillon

In discriminative settings such as regression and classification there are two random variables at play, the inputs X and the targets Y.

regression

Cycles in Causal Learning

no code implementations24 Jul 2020 Katie Everett, Ian Fischer

In the causal learning setting, we wish to learn cause-and-effect relationships between variables such that we can correctly infer the effect of an intervention.

An Unsupervised Information-Theoretic Perceptual Quality Metric

no code implementations NeurIPS 2020 Sangnie Bhardwaj, Ian Fischer, Johannes Ballé, Troy Chinen

We show that PIM is competitive with supervised metrics on the recent and challenging BAPPS image quality assessment dataset and outperforms them in predicting the ranking of image compression methods in CLIC 2020.

Image Compression · Image Quality Assessment +2

CEB Improves Model Robustness

1 code implementation13 Feb 2020 Ian Fischer, Alexander A. Alemi

We demonstrate that the Conditional Entropy Bottleneck (CEB) can improve model robustness.

Adversarial Robustness · Data Augmentation

The Conditional Entropy Bottleneck

no code implementations ICLR 2019 Ian Fischer

We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges.

Out of Distribution (OOD) Detection
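For reference, the CEB objective is usually written as minimizing the residual information I(X;Z|Y) while retaining the predictive information I(Y;Z). Assuming the usual notation of a forward encoder e(z|x), a backward encoder b(z|y), and a classifier c(y|z), a variational surrogate (up to an additive constant) is:

```latex
\mathrm{CEB} \;\equiv\; \min_{Z}\; \beta\, I(X;Z \mid Y) - I(Y;Z)
\;\le\; \min\; \mathbb{E}_{x,y \sim p(x,y),\; z \sim e(z \mid x)}
\Big[\, \beta \big(\log e(z \mid x) - \log b(z \mid y)\big) - \log c(y \mid z) \,\Big] + \mathrm{const.}
```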

Phase Transitions for the Information Bottleneck in Representation Learning

no code implementations ICLR 2020 Tailin Wu, Ian Fischer

In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation?

Representation Learning
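For reference, the two terms in question are the compression and prediction terms of the Information Bottleneck objective, which in one common convention (some works place the multiplier on the compression term instead) reads:

```latex
\min_{p(z \mid x)}\;\; I(X;Z) \;-\; \beta\, I(Y;Z),
```

so sweeping the trade-off parameter trades compression of X against prediction of Y, and the phase transitions studied here are qualitative changes in the learned representation as that parameter crosses critical values.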

Information-Bottleneck Approach to Salient Region Discovery

no code implementations22 Jul 2019 Andrey Zhmoginov, Ian Fischer, Mark Sandler

We propose a new method for learning image attention masks in a semi-supervised setting based on the Information Bottleneck principle.

Learnability for the Information Bottleneck

no code implementations ICLR Workshop LLD 2019 Tailin Wu, Ian Fischer, Isaac L. Chuang, Max Tegmark

However, in practice, not only is $\beta$ chosen empirically without theoretical guidance, but there is also a lack of theoretical understanding of the relationship between $\beta$, learnability, the intrinsic nature of the dataset, and model capacity.

Representation Learning

Dueling Decoders: Regularizing Variational Autoencoder Latent Spaces

no code implementations17 May 2019 Bryan Seybold, Emily Fertig, Alex Alemi, Ian Fischer

Variational autoencoders learn unsupervised data representations, but these models frequently converge to minima that fail to preserve meaningful semantic information.

TherML: The Thermodynamics of Machine Learning

no code implementations27 Sep 2018 Alexander A. Alemi, Ian Fischer

In this work we offer an information-theoretic framework for representation learning that connects with a wide class of existing objectives in machine learning.

BIG-bench Machine Learning · Representation Learning

TherML: Thermodynamics of Machine Learning

no code implementations11 Jul 2018 Alexander A. Alemi, Ian Fischer

In this work we offer a framework for reasoning about a wide class of existing objectives in machine learning.

BIG-bench Machine Learning

Uncertainty in the Variational Information Bottleneck

no code implementations2 Jul 2018 Alexander A. Alemi, Ian Fischer, Joshua V. Dillon

We present a simple case study, demonstrating that Variational Information Bottleneck (VIB) can improve a network's classification calibration as well as its ability to detect out-of-distribution data.

General Classification

GILBO: One Metric to Measure Them All

1 code implementation NeurIPS 2018 Alexander A. Alemi, Ian Fischer

We propose a simple, tractable lower bound on the mutual information contained in the joint generative density of any latent variable generative model: the GILBO (Generative Information Lower BOund).
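Concretely, the GILBO is a Barber-Agakov-style variational lower bound on the mutual information of the generative joint p(z)p(x|z), computed with an auxiliary learned encoder e(z|x) (notation assumed here):

```latex
\mathrm{GILBO} \;=\; \max_{e}\; \mathbb{E}_{z \sim p(z),\; x \sim p(x \mid z)}
\left[ \log \frac{e(z \mid x)}{p(z)} \right] \;\le\; I(X;Z).
```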

An information-theoretic analysis of deep latent-variable models

no code implementations ICLR 2018 Alex Alemi, Ben Poole, Ian Fischer, Josh Dillon, Rif A. Saurous, Kevin Murphy

We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variable models using variational inference.

Variational Inference

Fixing a Broken ELBO

1 code implementation ICML 2018 Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy

Recent work in unsupervised representation learning has focused on learning deep directed latent-variable models.

Representation Learning
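The analysis in this line of work rests on a rate-distortion reading of the ELBO; with encoder q(z|x), decoder d(x|z), and (approximate) marginal m(z), the usual definitions are:

```latex
R \;=\; \mathbb{E}_{x \sim p(x)}\, \mathrm{KL}\!\big(q(z \mid x)\,\|\, m(z)\big),
\qquad
D \;=\; -\,\mathbb{E}_{x \sim p(x)}\, \mathbb{E}_{q(z \mid x)}\big[\log d(x \mid z)\big],
\qquad
\mathrm{ELBO} \;=\; -(D + R).
```

Maximizing the ELBO alone cannot distinguish models along a line of constant D + R (including ones that ignore the latent entirely), which is why a weighted objective of the form D + beta*R is used to trace out the rate-distortion frontier.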

Generative Models of Visually Grounded Imagination

no code implementations ICLR 2018 Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, Kevin Murphy

It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before.

Attribute

Adversarial Transformation Networks: Learning to Generate Adversarial Examples

2 code implementations28 Mar 2017 Shumeet Baluja, Ian Fischer

We efficiently train feed-forward neural networks in a self-supervised manner to generate adversarial examples against a target network or set of networks.
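A toy sketch of the kind of training objective involved: the generator is rewarded for keeping its output close to the original input while pushing the victim network's output toward a "reranked" version of its clean prediction that promotes the chosen target class. Function names, the reranking rule, and the loss weights below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rerank(clean_probs, target_class, alpha=1.5):
    """Reranking target: boost the target class in the victim's clean output and
    renormalize, so the remaining class ranking is largely preserved (one
    illustrative choice of reranking function)."""
    t = clean_probs.copy()
    t[:, target_class] = alpha * t.max(axis=1)
    return t / t.sum(axis=1, keepdims=True)

def atn_loss(x, x_adv, clean_probs, adv_probs, target_class, beta=0.01):
    """ATN-style training loss sketch: input-space similarity plus a penalty
    pulling the victim's output on x_adv toward the reranked target."""
    l_input = np.mean((x_adv - x) ** 2)
    l_output = np.mean((adv_probs - rerank(clean_probs, target_class)) ** 2)
    return beta * l_input + l_output
```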

Adversarial examples for generative models

1 code implementation22 Feb 2017 Jernej Kos, Ian Fischer, Dawn Song

We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN.

Classification · General Classification

Deep Variational Information Bottleneck

9 code implementations1 Dec 2016 Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy

We present a variational approximation to the information bottleneck of Tishby et al. (1999).

Adversarial Attack
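For reference, the variational bound at the heart of VIB, with stochastic encoder p(z|x), variational classifier q(y|z), and variational marginal r(z), is (for discrete labels, up to the constant H(Y)):

```latex
I(Z;Y) - \beta\, I(Z;X)
\;\ge\;
\mathbb{E}_{x,y \sim p(x,y)}\, \mathbb{E}_{p(z \mid x)}\big[\log q(y \mid z)\big]
\;-\; \beta\, \mathbb{E}_{x \sim p(x)}\, \mathrm{KL}\!\big(p(z \mid x)\,\|\, r(z)\big).
```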

Speed/accuracy trade-offs for modern convolutional object detectors

14 code implementations CVPR 2017 Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang song, Sergio Guadarrama, Kevin Murphy

At the opposite end, where accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.

Ranked #220 on Object Detection on COCO test-dev (using extra training data)

Object · object-detection +1
