Search Results for author: Devon Hjelm

Found 8 papers, 3 papers with code

Test Sample Accuracy Scales with Training Sample Density in Neural Networks

1 code implementation • 15 Jun 2021 • Xu Ji, Razvan Pascanu, Devon Hjelm, Balaji Lakshminarayanan, Andrea Vedaldi

Intuitively, one would expect accuracy of a trained neural network's prediction on test samples to correlate with how densely the samples are surrounded by seen training samples in representation space.
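The abstract does not specify how the paper measures density, so as a hedged illustration only: a common proxy for "how densely a test sample is surrounded by training samples in representation space" is its mean distance to the k nearest training representations (smaller mean distance = denser neighborhood). The array names and the choice of k below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned representations (e.g. penultimate-layer features).
train_reps = rng.standard_normal((500, 16))  # 500 training samples, 16-d
test_reps = rng.standard_normal((10, 16))    # 10 test samples

def knn_density_proxy(test, train, k=10):
    """Mean Euclidean distance from each test point to its k nearest
    training points; lower values indicate a denser neighborhood."""
    d = np.linalg.norm(train[None, :, :] - test[:, None, :], axis=-1)
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

scores = knn_density_proxy(test_reps, train_reps)  # one score per test point
```

Under the paper's intuition, test points with lower scores (denser training neighborhoods) would be expected to show higher prediction accuracy.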

Image Classification

Cross-Modal Information Maximization for Medical Imaging: CMIM

no code implementations • 20 Oct 2020 • Tristan Sylvain, Francis Dutil, Tess Berthier, Lisa Di Jorio, Margaux Luck, Devon Hjelm, Yoshua Bengio

In hospitals, data are siloed in specific information systems that make the same information available under different modalities, such as the different medical imaging exams a patient undergoes (CT scans, MRI, PET, ultrasound, etc.).

Image Classification • Medical Image Classification

Locality and compositionality in zero-shot learning

no code implementations • ICLR 2020 • Tristan Sylvain, Linda Petrini, Devon Hjelm

In this work, we study locality and compositionality in the context of learning representations for Zero-Shot Learning (ZSL).

Representation Learning Zero-Shot Learning

Mutual Information Neural Estimation

no code implementations • ICML 2018 • Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, Devon Hjelm

We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks.
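MINE maximizes the Donsker-Varadhan lower bound $I(X;Y) \ge \mathbb{E}_{p(x,y)}[T] - \log \mathbb{E}_{p(x)p(y)}[e^{T}]$ over a neural statistic network $T$. As a minimal sketch (not the paper's implementation), the network is collapsed here to a single scalar parameter, $T_a(x,y) = a\,xy$, so the gradient ascent can be written out by hand in NumPy on a correlated-Gaussian pair whose true MI is $-\tfrac{1}{2}\log(1-\rho^2) \approx 0.51$ nats.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated Gaussian pair: true MI = -0.5*log(1 - rho^2) ≈ 0.51 nats.
rho, n = 0.8, 20000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y_shuf = rng.permutation(y)  # shuffling gives samples from p(x)p(y)

# Gradient ascent on the Donsker-Varadhan bound
#   J(a) = E_p[T_a] - log E_{p x p}[exp(T_a)],  with T_a(x, y) = a*x*y.
a = 0.0
for _ in range(500):
    e = np.exp(a * x * y_shuf)
    grad = np.mean(x * y) - np.mean(x * y_shuf * e) / np.mean(e)
    a += 0.05 * grad

# The converged objective is a lower bound on the true MI.
mi_lower_bound = a * np.mean(x * y) - np.log(np.mean(np.exp(a * x * y_shuf)))
```

With this deliberately weak one-parameter statistic the bound is loose (roughly half the true MI here); the paper's point is that a richer neural network $T$ tightens it, with the same gradient-ascent recipe.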

General Classification

Learning Generative Models with Locally Disentangled Latent Factors

no code implementations • ICLR 2018 • Brady Neal, Alex Lamb, Sherjil Ozair, Devon Hjelm, Aaron Courville, Yoshua Bengio, Ioannis Mitliagkas

One of the most successful techniques in generative models has been decomposing a complicated generation task into a series of simpler generation tasks.

GibbsNet: Iterative Adversarial Inference for Deep Graphical Models

no code implementations • NeurIPS 2017 • Alex Lamb, Devon Hjelm, Yaroslav Ganin, Joseph Paul Cohen, Aaron Courville, Yoshua Bengio

Directed latent variable models that formulate the joint distribution as $p(x, z) = p(z) p(x \mid z)$ have the advantage of fast and exact sampling.
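The "fast and exact sampling" in such directed models is ancestral sampling: draw $z \sim p(z)$ from the prior, then $x \sim p(x \mid z)$ from the conditional, in a single forward pass. A toy sketch with a linear Gaussian decoder (the decoder and dimensions are illustrative assumptions, not GibbsNet's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

latent_dim, obs_dim = 2, 4
W = rng.standard_normal((obs_dim, latent_dim))  # toy linear "decoder"

def ancestral_sample(n):
    """Exact samples from p(x, z) = p(z) p(x | z) in one pass:
    z from a standard-normal prior, then x from a Gaussian conditional."""
    z = rng.standard_normal((n, latent_dim))               # z ~ p(z)
    x = z @ W.T + 0.1 * rng.standard_normal((n, obs_dim))  # x ~ p(x | z)
    return x, z

x, z = ancestral_sample(1000)
```

The contrast motivating GibbsNet is that undirected or iteratively refined models trade this one-pass exactness for more expressive joint distributions, requiring iterative (e.g. Gibbs-style) sampling instead.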
