no code implementations • 1 Jan 2021 • Joao Monteiro, Isabela Albuquerque, Jahangir Alam, Tiago Falk
Recent metric learning approaches parametrize semantic similarity measures through the use of an encoder trained along with a similarity model, which operates over pairs of representations.
no code implementations • 30 Mar 2020 • Isabela Albuquerque, Nikhil Naik, Junnan Li, Nitish Keskar, Richard Socher
Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness.
Ranked #90 on Domain Generalization on PACS
1 code implementation • ICML 2020 • Joao Monteiro, Isabela Albuquerque, Jahangir Alam, R. Devon Hjelm, Tiago Falk
In this contribution, we augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
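As a rough sketch of this setup (module names, layer sizes, and the toy data below are illustrative assumptions, not the paper's implementation), an encoder maps inputs to embeddings and a small similarity head scores pairs of embeddings, with both trained jointly on a same/different objective:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Maps raw inputs to fixed-size embeddings (dimensions are illustrative).
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, x):
        return self.net(x)

class SimilarityHead(nn.Module):
    # Learned pseudo-distance: scores a pair of embeddings instead of using a fixed metric.
    def __init__(self, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, e1, e2):
        return self.net(torch.cat([e1, e2], dim=-1)).squeeze(-1)

encoder, sim = Encoder(), SimilarityHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(sim.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x1, x2 = torch.randn(16, 64), torch.randn(16, 64)  # a batch of input pairs (toy data)
same = torch.randint(0, 2, (16,)).float()           # 1 if a pair shares a label, else 0
loss = loss_fn(sim(encoder(x1), encoder(x2)), same)
opt.zero_grad(); loss.backward(); opt.step()        # encoder and similarity head updated jointly

In this sketch the pseudo-distance is simply a learned scorer, so it need not satisfy metric axioms such as symmetry or the triangle inequality.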
1 code implementation • 13 Nov 2019 • Hubert Banville, Isabela Albuquerque, Aapo Hyvärinen, Graeme Moffat, Denis-Alexander Engemann, Alexandre Gramfort
The supervised learning paradigm is limited by the cost - and sometimes the impracticality - of data collection and labeling in multiple domains.
1 code implementation • 3 Nov 2019 • Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H. Falk, Ioannis Mitliagkas
In this work, we tackle this problem by focusing on domain generalization: a formalization in which the data generating process at test time may yield samples from never-before-seen domains (distributions).
Ranked #37 on Domain Generalization on PACS
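To make the domain generalization setting concrete, here is a minimal sketch of the usual leave-one-domain-out protocol on a benchmark such as PACS (the training and evaluation calls are placeholders, not the paper's distribution-matching method):

# Leave-one-domain-out: train on three source domains, evaluate on the held-out
# fourth, and repeat for every choice of held-out domain.
domains = ["photo", "art_painting", "cartoon", "sketch"]  # the four PACS domains

for held_out in domains:
    sources = [d for d in domains if d != held_out]
    # train(...) and evaluate(...) stand in for any domain generalization method.
    print("train on", sources, "-> evaluate on", held_out)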
no code implementations • 20 Jun 2019 • Isabela Albuquerque, João Monteiro, Olivier Rosanne, Abhishek Tiwari, Jean-François Gagnon, Tiago H. Falk
Besides shedding light on the assumptions that hold for a particular dataset, the estimates of statistical shift obtained with the proposed approach can be used to investigate other aspects of a machine learning pipeline, such as quantitatively assessing the effectiveness of domain adaptation strategies.
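One common way to estimate such shifts, sketched here with toy features (this is a generic domain-classifier proxy, not necessarily the exact estimator used in the paper): train a classifier to tell two domains apart; accuracy near chance suggests matched distributions, while accuracy near 1.0 indicates a large shift.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
dom_a = rng.normal(0.0, 1.0, size=(200, 10))  # features from one domain (toy data)
dom_b = rng.normal(0.5, 1.0, size=(200, 10))  # features from another, shifted domain

X = np.vstack([dom_a, dom_b])
y = np.concatenate([np.zeros(len(dom_a)), np.ones(len(dom_b))])

# Cross-validated accuracy of a domain classifier; the proxy A-distance is 2 * (2*acc - 1).
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print("domain classifier accuracy:", acc, "proxy A-distance:", 2 * (2 * acc - 1))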
1 code implementation • ICLR 2019 • Isabela Albuquerque, João Monteiro, Thang Doan, Breandan Considine, Tiago Falk, Ioannis Mitliagkas
Recent literature has demonstrated promising results for training Generative Adversarial Networks by employing a set of discriminators, in contrast to the traditional game involving one generator against a single adversary.
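A minimal sketch of the multi-discriminator game on toy 2-D data (network sizes are illustrative, and the plain average used for the generator loss is only one of the aggregation strategies considered in this line of work):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))                       # generator
Ds = [nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1)) for _ in range(3)]  # discriminators

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_ds = [torch.optim.Adam(D.parameters(), lr=2e-4) for D in Ds]
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 2.0   # stand-in for a batch of real samples
z = torch.randn(64, 8)
fake = G(z)

# Each discriminator plays its own two-player game against the shared generator.
for D, opt_d in zip(Ds, opt_ds):
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# The generator is updated against an aggregate of all discriminators' losses.
g_loss = torch.stack([bce(D(G(z)), torch.ones(64, 1)) for D in Ds]).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()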
1 code implementation • 23 Jan 2019 • Isabela Albuquerque, João Monteiro, Tiago H. Falk
Afterwards, a recurrent model is trained to provide a sequence of inputs to the previously trained frame generator, thus yielding scenes that look natural.
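A rough sketch of this two-stage idea (module names and sizes are hypothetical, not the paper's architecture): a pretrained frame generator is frozen, and a recurrent model learns to emit one latent code per time step to drive it.

import torch
import torch.nn as nn

# Stand-in for a frame generator trained beforehand (e.g., a GAN generator
# mapping a latent code to an image); it is frozen here.
frame_gen = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 3 * 32 * 32))
for p in frame_gen.parameters():
    p.requires_grad_(False)

class LatentRNN(nn.Module):
    # Produces a trajectory of latent codes, one per time step.
    def __init__(self, latent_dim=16, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, z0, steps=8):
        z, h, outs = z0, None, []
        for _ in range(steps):
            out, h = self.rnn(z.unsqueeze(1), h)
            z = self.to_latent(out.squeeze(1))
            outs.append(z)
        return torch.stack(outs, dim=1)  # (batch, steps, latent_dim)

rnn = LatentRNN()
latents = rnn(torch.randn(4, 16))                   # one latent trajectory per video
frames = frame_gen(latents).view(4, 8, 3, 32, 32)   # one 32x32 RGB frame per step

Only the recurrent model would be optimized in this second stage, for example with an adversarial or reconstruction loss on the resulting frame sequences.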
3 code implementations • 16 Jan 2019 • Yannick Roy, Hubert Banville, Isabela Albuquerque, Alexandre Gramfort, Tiago H. Falk, Jocelyn Faubert
To help the field progress, we provide a list of recommendations for future studies, make our summary table of DL-EEG papers available, and invite the community to contribute.
3 code implementations • 31 Jul 2018 • Thang Doan, Joao Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm
We argue that less expressive discriminators are smoother and have a more general, coarse-grained view of the mode map, which forces the generator to cover a wide portion of the support of the data distribution.
no code implementations • 21 Feb 2018 • João Monteiro, Isabela Albuquerque, Zahid Akhtar, Tiago H. Falk
Non-linear binary classifiers trained on top of our proposed features can achieve a high detection rate (>90%) in a set of white-box attacks and maintain such performance when tested against unseen attacks.
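A toy sketch of such a detector (the random arrays below are placeholders for the paper's actual detection features): fit a non-linear binary classifier on features from clean and attacked inputs, and report the detection rate as recall on the attacked class.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(500, 20))     # features of clean inputs (toy data)
attacked = rng.normal(0.8, 1.2, size=(500, 20))  # features of attacked inputs (toy data)

X = np.vstack([clean, attacked])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(attacked))])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

det = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("detection rate:", recall_score(y_te, det.predict(X_te)))  # recall on the attacked class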