Search Results for author: R. Devon Hjelm

Found 27 papers, 14 papers with code

Representation Learning with Video Deep InfoMax

no code implementations 27 Jul 2020 R. Devon Hjelm, Philip Bachman

Deep InfoMax (DIM) is a self-supervised method that leverages the internal structure of deep networks to construct such views, forming prediction tasks between local features, which depend on small patches in an image, and global features, which depend on the whole image.

Action Recognition · Data Augmentation · +2
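The local–global prediction task described in the snippet above can be illustrated with a minimal sketch: an InfoNCE-style contrastive loss (one common instantiation of the DIM objective, not the authors' released code) that scores a global image feature against local patch features from the same image versus patches drawn from other images. All names, shapes, and the toy data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def infonce_local_global(local_feats, global_feat, negatives):
    """DIM-style local-global contrastive loss.
    local_feats: (P, D) patch features from one image (positives)
    global_feat: (D,)   feature summarizing the whole image
    negatives:   (N, D) local features drawn from other images
    Returns an InfoNCE-style loss (lower = local/global agree more)."""
    pos = local_feats @ global_feat                     # (P,) positive scores
    neg = negatives @ global_feat                       # (N,) negative scores
    # For each positive patch, contrast against all negatives.
    logits = np.concatenate(
        [pos[:, None], np.broadcast_to(neg, (len(pos), len(neg)))], axis=1)
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_softmax[:, 0].mean()

D = 16
g = rng.normal(size=D)                                  # global feature
local_same = g + 0.1 * rng.normal(size=(8, D))          # patches aligned with g
local_other = rng.normal(size=(32, D))                  # patches from other images
loss_matched = infonce_local_global(local_same, g, local_other)
loss_random = infonce_local_global(rng.normal(size=(8, D)), g, local_other)
print(loss_matched, loss_random)
```

When the local features genuinely share content with the global feature, the loss is much lower than for unrelated patches, which is what makes the task a useful self-supervised signal.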

Data-Efficient Reinforcement Learning with Self-Predictive Representations

1 code implementation ICLR 2021 Max Schwarzer, Ankesh Anand, Rishab Goel, R. Devon Hjelm, Aaron Courville, Philip Bachman

We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.

Atari Games 100k · Data Augmentation · +5
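The consistency idea in the snippet above can be sketched in a few lines (an illustration, not the SPR implementation): a negative-cosine-similarity loss between a predicted future representation and the representation of the observed future, where augmented views of the same observation should stay consistent. The `augment` stand-in and all toy data are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_prediction_loss(predicted, target):
    """SPR-style consistency loss: negative cosine similarity between the
    agent's predicted future representation and the representation of the
    actually observed future state (a stop-gradient target in the paper)."""
    p = predicted / np.linalg.norm(predicted, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return -(p * t).sum(axis=-1).mean()

def augment(obs, rng):
    # Stand-in for image augmentation (e.g. random shifts); here just noise.
    return obs + 0.05 * rng.normal(size=obs.shape)

future_repr = rng.normal(size=(4, 32))
# Two augmented views of the same future observation should stay consistent.
view_a, view_b = augment(future_repr, rng), augment(future_repr, rng)
consistent = cosine_prediction_loss(view_a, view_b)
inconsistent = cosine_prediction_loss(view_a, rng.normal(size=(4, 32)))
print(consistent, inconsistent)
```

Applying the loss across augmented views, as the abstract describes, forces the representation to ignore nuisance variation introduced by the augmentation.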

Deep Reinforcement and InfoMax Learning

1 code implementation NeurIPS 2020 Bogdan Mazoure, Remi Tachet des Combes, Thang Doan, Philip Bachman, R. Devon Hjelm

We begin with the hypothesis that a model-free agent whose representations are predictive of properties of future states (beyond expected rewards) will be more capable of solving and adapting to new RL problems.

Continual Learning

Object-Centric Image Generation from Layouts

no code implementations 16 Mar 2020 Tristan Sylvain, Pengchuan Zhang, Yoshua Bengio, R. Devon Hjelm, Shikhar Sharma

In this paper, we start with the idea that a model must be able to understand individual objects and relationships between objects in order to generate complex scenes well.

Generative Adversarial Network · Layout-to-Image Generation · +1

An end-to-end approach for the verification problem: learning the right distance

1 code implementation ICML 2020 Joao Monteiro, Isabela Albuquerque, Jahangir Alam, R. Devon Hjelm, Tiago Falk

In this contribution, we augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.

Metric Learning

Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning

no code implementations 17 Sep 2019 Thang Doan, Bogdan Mazoure, Moloud Abdar, Audrey Durand, Joelle Pineau, R. Devon Hjelm

Continuous control tasks in reinforcement learning are important because they provide a framework for learning in high-dimensional state spaces with deceptive rewards, where the agent can easily become trapped in suboptimal solutions.

Continuous Control · reinforcement-learning · +1

Unsupervised State Representation Learning in Atari

6 code implementations NeurIPS 2019 Ankesh Anand, Evan Racah, Sherjil Ozair, Yoshua Bengio, Marc-Alexandre Côté, R. Devon Hjelm

State representation learning, or the ability to capture latent generative factors of an environment, is crucial for building intelligent agents that can perform a wide variety of tasks.

Atari Games · Representation Learning

Learning Representations by Maximizing Mutual Information Across Views

3 code implementations NeurIPS 2019 Philip Bachman, R. Devon Hjelm, William Buchwalter

Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider.

Data Augmentation · Representation Learning · +2

Batch weight for domain adaptation with mass shift

no code implementations 29 May 2019 Mikołaj Bińkowski, R. Devon Hjelm, Aaron Courville

We also provide a rigorous probabilistic setting for domain transfer and a new, simplified objective for training transfer networks, as an alternative to the complex, multi-component loss functions used in current state-of-the-art image-to-image translation models.

Domain Adaptation · Image-to-Image Translation · +1

Leveraging exploration in off-policy algorithms via normalizing flows

1 code implementation 16 May 2019 Bogdan Mazoure, Thang Doan, Audrey Durand, R. Devon Hjelm, Joelle Pineau

The ability to discover approximately optimal policies in domains with sparse rewards is crucial to applying reinforcement learning (RL) in many real-world scenarios.

Continuous Control · Reinforcement Learning (RL)
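The entry above pairs normalizing flows with off-policy exploration. A single planar-flow layer, one of the simplest flows, shows the mechanism: it transforms samples while its log-determinant Jacobian keeps the transformed density tractable. This is an illustrative NumPy sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar normalizing-flow layer, f(z) = z + u * tanh(w.z + b).
    Returns the transformed samples and log|det Jacobian|, which is what
    keeps the transformed policy's log-density tractable. In practice u is
    constrained so that the map stays invertible."""
    h = np.tanh(z @ w + b)                    # (N,) nonlinearity
    psi = (1.0 - h ** 2)[:, None] * w         # (N, D) derivative term
    log_det = np.log(np.abs(1.0 + psi @ u))   # (N,) log|det Jacobian|
    return z + h[:, None] * u, log_det

rng = np.random.default_rng(0)
z = rng.normal(size=(6, 4))                   # e.g. base policy noise samples
z_new, log_det = planar_flow(z, 0.5 * rng.normal(size=4), rng.normal(size=4), 0.1)
print(z_new.shape, log_det.shape)
```

Stacking several such layers yields a richer, multimodal sampling distribution for the policy, which is the exploration mechanism the abstract alludes to.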

Unsupervised one-to-many image translation

no code implementations ICLR 2019 Samuel Lavoie-Marchildon, Sebastien Lachapelle, Mikołaj Bińkowski, Aaron Courville, Yoshua Bengio, R. Devon Hjelm

We perform completely unsupervised one-sided image-to-image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc.).

Translation · Unsupervised Image-To-Image Translation

Prediction of Progression to Alzheimer's disease with Deep InfoMax

no code implementations 24 Apr 2019 Alex Fedorov, R. Devon Hjelm, Anees Abrol, Zening Fu, Yuhui Du, Sergey Plis, Vince D. Calhoun

Arguably, unsupervised learning plays a crucial role in the majority of algorithms for processing brain imaging data.

General Classification

Spatio-Temporal Deep Graph Infomax

no code implementations 12 Apr 2019 Felix L. Opolka, Aaron Solomon, Cătălina Cangea, Petar Veličković, Pietro Liò, R. Devon Hjelm

Spatio-temporal graphs such as traffic networks or gene regulatory systems present challenges for existing deep learning methods due to the complexity of structural changes over time.

Representation Learning · Traffic Prediction

On Adversarial Mixup Resynthesis

1 code implementation NeurIPS 2019 Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R. Devon Hjelm, Yoshua Bengio, Christopher Pal

In this paper, we explore new approaches to combining information encoded within the learned representations of auto-encoders.

Resynthesis

Deep Graph Infomax

11 code implementations ICLR 2019 Petar Veličković, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, R. Devon Hjelm

We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner.

General Classification · Node Classification
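The DGI objective summarized above can be sketched without any graph library: a mean readout gives a global summary vector, and a bilinear discriminator is trained to score embeddings of real nodes above embeddings from a corrupted graph. This NumPy sketch is an illustration of the idea, not the released DGI code; the toy "corruption" (replacing node features with unrelated ones) and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dgi_scores(node_embs, corrupted_embs, W):
    """DGI-style objective: a bilinear discriminator scores each node
    embedding against a global summary (mean readout of the real graph);
    real nodes should score high, nodes from a corrupted graph low."""
    summary = np.tanh(node_embs.mean(axis=0))          # readout -> summary vector
    real = sigmoid(node_embs @ W @ summary)            # scores for real nodes
    fake = sigmoid(corrupted_embs @ W @ summary)       # scores for corrupted nodes
    # Binary cross-entropy: push real scores up, corrupted scores down.
    loss = -(np.log(real).mean() + np.log(1 - fake).mean()) / 2
    return real, fake, loss

D = 8
mu = rng.normal(size=D)
real_nodes = mu + 0.2 * rng.normal(size=(20, D))       # nodes share structure
corrupted_nodes = rng.normal(size=(20, D))             # corruption: unrelated features
real, fake, loss = dgi_scores(real_nodes, corrupted_nodes, np.eye(D))
print(real.mean(), fake.mean(), loss)
```

Maximizing this contrast is what ties each node's representation to global properties of the graph, which is the "infomax" part of the name.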

On-line Adaptative Curriculum Learning for GANs

3 code implementations 31 Jul 2018 Thang Doan, Joao Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm

We argue that less expressive discriminators are smoother and have a coarser-grained view of the mode map, which forces the generator to cover a wide portion of the data distribution's support.

Multi-Armed Bandits · Stochastic Optimization

MINE: Mutual Information Neural Estimation

20 code implementations 12 Jan 2018 Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, R. Devon Hjelm

We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks.

General Classification
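The sentence above is the core of MINE: the Donsker-Varadhan lower bound on mutual information can be estimated from samples and ascended by gradient descent on a neural critic. As a hedged illustration (not the authors' code), the bound itself is a one-liner; here a fixed toy critic is used where MINE would train a network.

```python
import numpy as np

def dv_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on mutual information:
    I(X; Z) >= E_{p(x,z)}[T] - log E_{p(x)p(z)}[exp(T)],
    estimated from critic values on joint vs. product-of-marginals samples."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
z = x + 0.1 * rng.normal(size=5000)        # z strongly depends on x
z_shuffled = rng.permutation(z)            # shuffling breaks the dependence

# Fixed toy critic T(x, z) = tanh(x * z); MINE parameterizes T with a
# neural network and maximizes the bound by gradient ascent.
mi_estimate = dv_bound(np.tanh(x * z), np.tanh(x * z_shuffled))
print(mi_estimate)
```

A positive value here reflects the dependence between `x` and `z`; with a trained critic the estimate tightens toward the true mutual information.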

Boundary Seeking GANs

no code implementations ICLR 2018 R. Devon Hjelm, Athul Paul Jacob, Adam Trischler, Gerry Che, Kyunghyun Cho, Yoshua Bengio

We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.

Scene Understanding · Text Generation
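The importance-weight idea in the abstract above can be shown in a few lines (a sketch of the mechanism, not the released code): the discriminator output D(x) yields an estimated density ratio D/(1-D), self-normalized over the batch, which weights generated samples in the policy-gradient update for the generator.

```python
import numpy as np

def importance_weights(d_scores):
    """Boundary-seeking weights for generated samples: the discriminator's
    estimated density ratio D/(1-D), self-normalized over the batch.
    Samples the discriminator rates as more realistic get more weight in
    the policy gradient for the (possibly discrete) generator."""
    ratio = d_scores / (1.0 - d_scores)
    return ratio / ratio.sum()

# Discriminator outputs for three generated samples (toy values).
d = np.array([0.1, 0.5, 0.8])
w = importance_weights(d)
print(w)   # weights sum to 1; more realistic samples get larger weight
```

Because the weights, not the samples, carry the gradient signal, this works even when generation is a non-differentiable (discrete) sampling step.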

ACtuAL: Actor-Critic Under Adversarial Learning

no code implementations 13 Nov 2017 Anirudh Goyal, Nan Rosemary Ke, Alex Lamb, R. Devon Hjelm, Chris Pal, Joelle Pineau, Yoshua Bengio

This makes it fundamentally difficult to train GANs with discrete data, as generation in this case typically involves a non-differentiable function.

Language Modelling

Variance Regularizing Adversarial Learning

no code implementations ICLR 2018 Karan Grewal, R. Devon Hjelm, Yoshua Bengio

We hypothesize that this approach ensures a non-zero gradient to the generator, even in the limit of a perfect classifier.

Boundary-Seeking Generative Adversarial Networks

6 code implementations 27 Feb 2017 R. Devon Hjelm, Athul Paul Jacob, Tong Che, Adam Trischler, Kyunghyun Cho, Yoshua Bengio

We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator.

Scene Understanding · Text Generation

Maximum-Likelihood Augmented Discrete Generative Adversarial Networks

no code implementations 26 Feb 2017 Tong Che, Yan-ran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, Yoshua Bengio

Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted.

Spatio-temporal Dynamics of Intrinsic Networks in Functional Magnetic Imaging Data Using Recurrent Neural Networks

no code implementations 3 Nov 2016 R. Devon Hjelm, Eswar Damaraju, Kyunghyun Cho, Helmut Laufs, Sergey M. Plis, Vince Calhoun

We introduce a novel recurrent neural network (RNN) approach to account for temporal dynamics and dependencies in brain networks observed via functional magnetic resonance imaging (fMRI).

blind source separation

Variational Autoencoders for Feature Detection of Magnetic Resonance Imaging Data

no code implementations 21 Mar 2016 R. Devon Hjelm, Sergey M. Plis, Vince C. Calhoun

Independent component analysis (ICA), as an approach to the blind source separation (BSS) problem, has become the de facto standard in many medical imaging settings.

blind source separation · Dimensionality Reduction

Iterative Refinement of the Approximate Posterior for Directed Belief Networks

1 code implementation NeurIPS 2016 R. Devon Hjelm, Kyunghyun Cho, Junyoung Chung, Russ Salakhutdinov, Vince Calhoun, Nebojsa Jojic

Variational methods that rely on a recognition network to approximate the posterior of directed graphical models offer better inference and learning than previous methods.
