Search Results for author: Gabriel Huang

Found 9 papers, 7 papers with code

GEO-Bench: Toward Foundation Models for Earth Monitoring

1 code implementation • NeurIPS 2023 • Alexandre Lacoste, Nils Lehmann, Pau Rodriguez, Evan David Sherwin, Hannah Kerner, Björn Lütjens, Jeremy Andrew Irvin, David Dao, Hamed Alemohammad, Alexandre Drouin, Mehmet Gunturkun, Gabriel Huang, David Vazquez, Dava Newman, Yoshua Bengio, Stefano Ermon, Xiao Xiang Zhu

Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks.

A Survey of Self-Supervised and Few-Shot Object Detection

1 code implementation • 27 Oct 2021 • Gabriel Huang, Issam Laradji, David Vazquez, Simon Lacoste-Julien, Pau Rodriguez

Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image.

Few-Shot Object Detection • Instance Segmentation • +3

Repurposing Pretrained Models for Robust Out-of-domain Few-Shot Learning

1 code implementation • ICLR 2021 • Namyeong Kwon, Hwidong Na, Gabriel Huang, Simon Lacoste-Julien

Model-agnostic meta-learning (MAML) is a popular method for few-shot learning but assumes that we have access to the meta-training set.

Few-Shot Learning
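
The snippet below is a minimal, hedged sketch of the MAML setup the abstract above refers to: a second-order meta-update in PyTorch, where the query loss is backpropagated through a single support-set adaptation step. The function name, task tuples, and hyperparameters are illustrative assumptions, not code from the paper.

```python
import torch

def maml_meta_step(model, loss_fn, tasks, meta_opt, inner_lr=0.01):
    """One MAML meta-update: take a gradient step on each task's support
    set, then backpropagate the query loss through that adaptation."""
    meta_loss = 0.0
    for support_x, support_y, query_x, query_y in tasks:
        params = dict(model.named_parameters())
        # Inner loop: one adaptation step on the support set.
        inner_loss = loss_fn(
            torch.func.functional_call(model, params, (support_x,)), support_y)
        grads = torch.autograd.grad(
            inner_loss, list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer loss: adapted parameters evaluated on the query set.
        meta_loss = meta_loss + loss_fn(
            torch.func.functional_call(model, adapted, (query_x,)), query_y)
    meta_opt.zero_grad()
    meta_loss.backward()  # second-order: flows through the inner step
    meta_opt.step()
```

Note that the inner loop requires episodes drawn from a meta-training set, which is exactly the assumption the paper questions in the out-of-domain setting.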

Multimodal Pretraining for Dense Video Captioning

1 code implementation • Asian Chapter of the Association for Computational Linguistics 2020 • Gabriel Huang, Bo Pang, Zhenhai Zhu, Clara Rivera, Radu Soricut

First, we construct and release a new dense video captioning dataset, Video Timeline Tags (ViTT), featuring a variety of instructional videos together with time-stamped annotations.

Ranked #1 on Dense Video Captioning on YouCook2 (ROUGE-L metric, using extra training data)

Dense Video Captioning

Are Few-shot Learning Benchmarks Too Simple?

no code implementations • 25 Sep 2019 • Gabriel Huang, Hugo Larochelle, Simon Lacoste-Julien

We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods.

Classification • Few-Shot Learning

Are Few-Shot Learning Benchmarks too Simple? Solving them without Task Supervision at Test-Time

1 code implementation • 22 Feb 2019 • Gabriel Huang, Hugo Larochelle, Simon Lacoste-Julien

We show that several popular few-shot learning benchmarks can be solved with varying degrees of success without using support-set Labels at Test-time (LT).

Clustering • Few-Shot Learning • +1
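
To make the claim concrete, here is one hedged illustration of solving an episode without support labels: cluster all of the episode's embeddings into n_way groups, then score the clustering against the true classes under the best cluster-to-class permutation. This is a generic sketch of the idea, not necessarily the paper's exact procedure, and it assumes embeddings from some pretrained feature extractor.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def label_free_episode_accuracy(embeddings, labels, n_way):
    """Cluster an episode's examples without any labels, then evaluate
    the clusters against the true classes via optimal matching."""
    clusters = KMeans(n_clusters=n_way, n_init=10).fit_predict(embeddings)
    # Confusion matrix between predicted clusters and true classes.
    conf = np.zeros((n_way, n_way), dtype=int)
    for c, y in zip(clusters, labels):
        conf[c, y] += 1
    # Hungarian matching finds the best cluster-to-class permutation.
    rows, cols = linear_sum_assignment(-conf)
    return conf[rows, cols].sum() / len(labels)
```

If this label-free accuracy approaches that of a standard few-shot classifier, the benchmark's episodes can largely be solved by unsupervised structure alone, which is the paper's point.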

Scattering Networks for Hybrid Representation Learning

1 code implementation • 17 Sep 2018 • Edouard Oyallon, Sergey Zagoruyko, Gabriel Huang, Nikos Komodakis, Simon Lacoste-Julien, Matthew Blaschko, Eugene Belilovsky

In particular, by working in scattering space, we achieve competitive results both for supervised and unsupervised learning tasks, while making progress towards constructing more interpretable CNNs.

Representation Learning
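
As a rough illustration of "working in scattering space", the sketch below feeds a fixed 2D scattering transform into a small learned classifier. It assumes the kymatio library (whose API may vary across versions); the architecture and sizes are illustrative, not the models used in the paper.

```python
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D  # fixed wavelet front end

class HybridScatteringNet(nn.Module):
    """Fixed (non-learned) scattering features + a learned linear head."""
    def __init__(self, n_classes=10, J=2, shape=(32, 32)):
        super().__init__()
        self.scattering = Scattering2D(J=J, shape=shape)
        # Infer the scattering output size from a dummy input.
        with torch.no_grad():
            dim = self.scattering(torch.zeros(1, 3, *shape)).numel()
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, x):
        s = self.scattering(x)  # (B, 3, K, H/2^J, W/2^J)
        return self.classifier(s.flatten(1))
```

Because the scattering coefficients are computed by fixed wavelet filters, only the head is trained, which is what makes the resulting hybrid more interpretable than an end-to-end CNN.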

Negative Momentum for Improved Game Dynamics

1 code implementation • 12 Jul 2018 • Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Remi Lepriol, Gabriel Huang, Simon Lacoste-Julien, Ioannis Mitliagkas

Games generalize the single-objective optimization paradigm by introducing different objective functions for different players.
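
The snippet below illustrates the core phenomenon on the simplest such game, the bilinear game min_x max_y xy, whose equilibrium is (0, 0): alternating gradient steps with a negative momentum coefficient can converge where plain alternating steps merely cycle. The step size, momentum value, and iteration count are illustrative assumptions, not the paper's experimental settings.

```python
import math

def alternating_gd(beta, lr=0.1, steps=500):
    """Alternating heavy-ball updates on the bilinear game min_x max_y x*y."""
    x, y = 1.0, 1.0
    x_prev, y_prev = x, y
    for _ in range(steps):
        # Player x minimizes x*y: gradient w.r.t. x is y.
        x_new = x - lr * y + beta * (x - x_prev)
        x_prev, x = x, x_new
        # Player y maximizes x*y, using the freshly updated x.
        y_new = y + lr * x + beta * (y - y_prev)
        y_prev, y = y, y_new
    return math.hypot(x, y)  # distance to the equilibrium (0, 0)

print(alternating_gd(beta=0.0))   # no momentum: iterates cycle, stay bounded
print(alternating_gd(beta=-0.5))  # negative momentum: contracts to equilibrium
```

With beta = 0 the alternating dynamics have unit-modulus eigenvalues on this game, so the iterates rotate without converging; the paper's analysis shows that a suitably negative beta pulls them inside the unit circle.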

Parametric Adversarial Divergences are Good Losses for Generative Modeling

no code implementations • ICLR 2018 • Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien

Parametric adversarial divergences, a generalization of the losses used to train generative adversarial networks (GANs), have often been described as approximations of their nonparametric counterparts, such as the Jensen-Shannon divergence, which can be derived under the so-called optimal-discriminator assumption.

Structured Prediction
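
As a hedged sketch of what "parametric" means here, the function below estimates a divergence between two sample sets by maximizing a GAN-style objective over a small, fixed critic family, in contrast to the nonparametric supremum over all functions that defines the Jensen-Shannon divergence. The critic architecture and training details are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

def parametric_divergence(p_samples, q_samples, steps=500):
    """Adversarial divergence estimate: train a small parametric critic
    to separate the two sample sets, report the achieved objective."""
    critic = nn.Sequential(nn.Linear(p_samples.shape[1], 64),
                           nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        # The critic tries to label p-samples as 1 and q-samples as 0.
        loss = bce(critic(p_samples), torch.ones(len(p_samples), 1)) \
             + bce(critic(q_samples), torch.zeros(len(q_samples), 1))
        opt.zero_grad(); loss.backward(); opt.step()
    # Lower final loss means the restricted critic family separates the
    # two distributions better, i.e. a larger estimated divergence.
    return -loss.item()

p = torch.randn(256, 2)        # samples from p
q = torch.randn(256, 2) + 2.0  # samples from q, shifted mean
print(parametric_divergence(p, q))
```

Restricting the maximization to a parametric family is exactly what distinguishes the divergences studied in the paper from their nonparametric counterparts.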
