Search Results for author: Philip Bachman

Found 24 papers, 14 papers with code

Decomposing Mutual Information for Representation Learning

no code implementations 1 Jan 2021 Alessandro Sordoni, Nouha Dziri, Hannes Schulz, Geoff Gordon, Remi Tachet des Combes, Philip Bachman

In this paper, we transform each view into a set of subviews and then decompose the original MI bound into a sum of bounds involving conditional MI between the subviews.

Dialogue Generation · Representation Learning
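The decomposition described here rests on the chain rule of mutual information: splitting a view Y into subviews (Y1, Y2) gives I(X; Y1, Y2) = I(X; Y1) + I(X; Y2 | Y1). This identity can be checked numerically on a small discrete distribution; the sketch below is illustrative and is not the paper's estimator (function names are made up):

```python
import numpy as np

def mutual_info(pxy):
    """I(X; Y) in nats from a joint probability table of shape (|X|, |Y|)."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

def conditional_mi(pxyz):
    """I(X; Y | Z) from a joint table of shape (|X|, |Y|, |Z|)."""
    total = 0.0
    for z in range(pxyz.shape[2]):
        pz = pxyz[:, :, z].sum()
        if pz > 0:
            total += pz * mutual_info(pxyz[:, :, z] / pz)
    return total

# Random joint distribution over (X, Y1, Y2); the full view is Y = (Y1, Y2).
rng = np.random.default_rng(0)
p = rng.random((3, 4, 5))
p /= p.sum()

full = mutual_info(p.reshape(3, 20))  # I(X; Y1, Y2), treating (Y1, Y2) jointly
# Chain rule: I(X; Y1) + I(X; Y2 | Y1); transpose puts Y1 on the conditioning axis.
chain = mutual_info(p.sum(axis=2)) + conditional_mi(np.transpose(p, (0, 2, 1)))
```

The two quantities agree to numerical precision, which is what licenses decomposing one MI bound into a sum of conditional-MI bounds over subviews.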

Representation Learning with Video Deep InfoMax

no code implementations 27 Jul 2020 R. Devon Hjelm, Philip Bachman

Deep InfoMax (DIM) is a self-supervised method which leverages the internal structure of deep networks to construct such views, forming prediction tasks between local features, which depend on small patches in an image, and global features, which depend on the whole image.

Action Recognition · Data Augmentation · +2
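The local-to-global pairing described above can be sketched with synthetic features: patch features from the same image should score higher against that image's global feature than against another image's. This is a toy illustration of the scoring structure only, not the paper's architecture (all names and shapes are made up):

```python
import numpy as np

def dim_scores(local_feats, global_feat):
    """Dot-product scores between each local (patch) feature and a global
    (whole-image) feature; DIM-style training pushes scores for features
    from the same image above scores for mismatched pairs."""
    return local_feats @ global_feat  # (H, W) score map

rng = np.random.default_rng(0)
signal = rng.normal(size=32)                          # shared image-level content
local = signal + 0.5 * rng.normal(size=(7, 7, 32))    # 7x7 grid of patch features
matched = dim_scores(local, signal)                   # global feature, same image
mismatched = dim_scores(local, rng.normal(size=32))   # global feature, other image
```

Because every patch feature carries the shared image-level signal, the matched score map is systematically higher than the mismatched one, which is the signal a contrastive objective exploits.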

Data-Efficient Reinforcement Learning with Self-Predictive Representations

1 code implementation ICLR 2021 Max Schwarzer, Ankesh Anand, Rishab Goel, R. Devon Hjelm, Aaron Courville, Philip Bachman

We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.

Atari Games 100k · Data Augmentation · +4
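The consistency objective described above is, in spirit, a negative cosine similarity between predicted future latents and the representation of an augmented view of the observed future state. A toy sketch under that reading (the augmentation and all names are placeholders, not the paper's pipeline):

```python
import numpy as np

def spr_loss(predicted, target):
    """Negative mean cosine similarity between predicted and target
    representations (an SPR-style self-prediction objective)."""
    p = predicted / np.linalg.norm(predicted, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    return -float(np.mean(np.sum(p * t, axis=1)))

def augment(z, rng, scale=0.1):
    """Stand-in for image augmentation: a small random perturbation."""
    return z + scale * rng.normal(size=z.shape)

rng = np.random.default_rng(0)
z_pred = rng.normal(size=(16, 64))        # latents predicted by a dynamics model
z_target = augment(z_pred, rng)           # representation of an augmented view
consistent = spr_loss(z_pred, z_target)   # near -1: views agree
unrelated = spr_loss(z_pred, rng.normal(size=(16, 64)))
```

Minimizing this loss under augmentation forces representations to agree across views of the same observation, which is the consistency property the excerpt describes.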

Deep Reinforcement and InfoMax Learning

1 code implementation NeurIPS 2020 Bogdan Mazoure, Remi Tachet des Combes, Thang Doan, Philip Bachman, R. Devon Hjelm

We begin with the hypothesis that a model-free agent whose representations are predictive of properties of future states (beyond expected rewards) will be more capable of solving and adapting to new RL problems.

Continual Learning

Learning Representations by Maximizing Mutual Information Across Views

4 code implementations NeurIPS 2019 Philip Bachman, R. Devon Hjelm, William Buchwalter

Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider.

Data Augmentation · Representation Learning · +2
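Objectives in this line of work are commonly InfoNCE-style lower bounds on the mutual information between features of two augmented views of the same input. The following is a generic NCE bound as a self-contained sketch, not the paper's exact multi-scale objective (the temperature and names are assumptions):

```python
import numpy as np

def infonce_bound(f1, f2, temperature=0.1):
    """InfoNCE-style contrastive score between paired features from two
    views; row i of f1 is paired with row i of f2, other rows are negatives."""
    f1 = f1 / np.linalg.norm(f1, axis=1, keepdims=True)
    f2 = f2 / np.linalg.norm(f2, axis=1, keepdims=True)
    logits = f1 @ f2.T / temperature                     # (batch, batch) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(np.diag(log_probs).mean())              # log-prob of true pairs

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
view_a = x + 0.01 * rng.normal(size=(8, 16))   # a lightly augmented second view
paired = infonce_bound(view_a, x)              # matched views: bound near 0
shuffled = infonce_bound(rng.normal(size=(8, 16)), x)  # unrelated features
```

Matched views drive the bound toward its maximum of 0 (each true pair dominates its softmax row), while unrelated features stay near -log(batch size); maximizing the bound therefore maximizes agreement across views.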

Learning Invariances for Policy Generalization

1 code implementation 7 Sep 2018 Remi Tachet, Philip Bachman, Harm van Seijen

While recent progress has spawned very powerful machine learning systems, those systems remain extremely specialized and fail to transfer the knowledge they gain to similar yet unseen tasks.

Data Augmentation · Meta-Learning · +1

Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data

3 code implementations ICML 2018 Amjad Almahairi, Sai Rajeswar, Alessandro Sordoni, Philip Bachman, Aaron Courville

Learning inter-domain mappings from unpaired data can improve performance in structured prediction tasks, such as image segmentation, by reducing the need for paired data.

Semantic Segmentation · Structured Prediction

Deep Reinforcement Learning that Matters

5 code implementations 19 Sep 2017 Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, David Meger

In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL).

reinforcement-learning

Variational Generative Stochastic Networks with Collaborative Shaping

1 code implementation 2 Aug 2017 Philip Bachman, Doina Precup

We develop an approach to training generative models based on unrolling a variational auto-encoder into a Markov chain, and shaping the chain's trajectories using a technique inspired by recent work in Approximate Bayesian computation.

reinforcement-learning
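The chain structure this abstract describes, a variational auto-encoder unrolled into a Markov chain, can be sketched as alternating encode/decode sampling steps. The toy encoder and decoder below are linear-Gaussian stand-ins for the trained models, shown for structure only:

```python
import numpy as np

def vae_markov_chain(x0, encode, decode, steps, rng):
    """Unroll an encoder/decoder pair into a Markov chain over data space:
    x -> z ~ q(z|x) -> x' ~ p(x|z) -> ... (structure only)."""
    xs = [x0]
    for _ in range(steps):
        z = encode(xs[-1], rng)       # stochastic encoding of the current state
        xs.append(decode(z, rng))     # stochastic decoding back to data space
    return xs

rng = np.random.default_rng(0)
enc = lambda x, r: 0.9 * x + 0.1 * r.normal(size=x.shape)  # toy stochastic encoder
dec = lambda z, r: z + 0.1 * r.normal(size=z.shape)        # toy stochastic decoder
chain = vae_markov_chain(rng.normal(size=(4,)), enc, dec, steps=10, rng=rng)
```

Shaping the chain's trajectories then means steering where these iterates land, rather than judging only a single encode/decode pass.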

Calibrating Energy-based Generative Adversarial Networks

1 code implementation 6 Feb 2017 Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, Aaron Courville

In this paper, we propose to equip Generative Adversarial Networks with the ability to produce direct energy estimates for samples. Specifically, we propose a flexible adversarial training framework, and prove that this framework not only ensures the generator converges to the true data distribution, but also enables the discriminator to retain density information at the global optimum.

Image Generation

An Architecture for Deep, Hierarchical Generative Models

no code implementations NeurIPS 2016 Philip Bachman

We present an architecture which lets us train deep, directed generative models with many layers of latent variables.

Towards Information-Seeking Agents

no code implementations 8 Dec 2016 Philip Bachman, Alessandro Sordoni, Adam Trischler

We develop a general problem setting for training and testing the ability of agents to gather information efficiently.

reinforcement-learning

Iterative Alternating Neural Attention for Machine Reading

1 code implementation 7 Jun 2016 Alessandro Sordoni, Philip Bachman, Adam Trischler, Yoshua Bengio

We propose a novel neural attention architecture to tackle machine comprehension tasks, such as answering Cloze-style queries with respect to a document.

Ranked #4 on Question Answering on Children's Book Test (Accuracy-NE metric)

Question Answering · Reading Comprehension
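Reduced to a single read step, the attention referenced above works as follows: a query vector scores every document token, the scores are normalized with a softmax, and a weighted sum of token vectors is read out. A minimal sketch of that step, not the paper's iterative, alternating architecture:

```python
import numpy as np

def attend(query, doc_tokens):
    """One attention read: softmax over query-token scores, then a
    weighted sum of the document token vectors."""
    scores = doc_tokens @ query                       # (doc_len,) raw scores
    scores -= scores.max()                            # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # attention distribution
    return weights @ doc_tokens, weights              # attended summary, weights

rng = np.random.default_rng(0)
doc = rng.normal(size=(20, 32))       # 20 token vectors of dimension 32
summary, w = attend(doc[3], doc)      # a query matching token 3
```

The attention weights form a proper distribution over the document and concentrate on the token most similar to the query; the iterative variant alternates such reads between the query and the document.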

Data Generation as Sequential Decision Making

1 code implementation NeurIPS 2015 Philip Bachman, Doina Precup

We connect a broad class of generative models through their shared reliance on sequential decision making.

Decision Making · Imputation

Learning with Pseudo-Ensembles

no code implementations NeurIPS 2014 Philip Bachman, Ouais Alsharif, Doina Precup

We formalize the notion of a pseudo-ensemble, a (possibly infinite) collection of child models spawned from a parent model by perturbing it according to some noise process.

Sentiment Analysis
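Dropout is the canonical instance of the pseudo-ensemble idea: each child model is the parent with a random mask applied to its parameters, and averaging many children recovers the parent's prediction in expectation. A toy linear-model sketch (function names are illustrative):

```python
import numpy as np

def child_predict(x, w, rng, drop_prob=0.5):
    """One child model: the parent linear model with weights randomly
    dropped, rescaled so each child is unbiased (inverted dropout)."""
    mask = rng.random(w.shape) >= drop_prob
    return x @ (w * mask) / (1.0 - drop_prob)

rng = np.random.default_rng(0)
w = rng.normal(size=(10,))            # parent model weights
x = rng.normal(size=(5, 10))          # a small batch of inputs
parent = x @ w                        # parent prediction
children = np.stack([child_predict(x, w, rng) for _ in range(5000)])
ensemble = children.mean(axis=0)      # pseudo-ensemble average ≈ parent
```

Training against the noise process then amounts to asking all children of one parent to agree, which is how a single set of parameters stands in for a (possibly infinite) ensemble.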

Representation as a Service

no code implementations 24 Feb 2014 Ouais Alsharif, Philip Bachman, Joelle Pineau

Consider a Machine Learning Service Provider (MLSP) designed to rapidly create highly accurate learners for a never-ending stream of new tasks.
