1 code implementation • 22 Aug 2023 • Charles Guille-Escuret, Pierre-André Noël, Ioannis Mitliagkas, David Vazquez, Joao Monteiro
Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts.
no code implementations • 30 Aug 2022 • Joao Monteiro, Pau Rodriguez, Pierre-Andre Noel, Issam Laradji, David Vazquez
In the add-on case, the original neural network's inference head is left completely unaffected (so its accuracy is unchanged), but we now have the option to use TAC's own confidence and prediction when deciding which course of action to take in a hypothetical production workflow.
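The routing decision described above can be sketched as a simple confidence gate. This is an illustrative sketch only: the function name, the `tac_confidence` input, and the 0.9 threshold are assumptions, not details from the paper.

```python
# Hypothetical sketch of confidence-based routing with an add-on head:
# the base classifier always produces a prediction; the add-on head's
# prediction is used only when its confidence clears a threshold.

def route(base_pred, tac_pred, tac_confidence, threshold=0.9):
    """Return (prediction, source): the add-on ("tac") prediction when
    confident, otherwise the unmodified base classifier's prediction."""
    if tac_confidence >= threshold:
        return tac_pred, "tac"
    return base_pred, "base"

# Example: the add-on head overrides the base head only when confident.
pred, source = route(base_pred="cat", tac_pred="dog", tac_confidence=0.95)
```

In a production workflow the low-confidence branch could also defer to a human reviewer instead of falling back to the base prediction.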
no code implementations • 17 May 2022 • Joao Monteiro, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Greg Mori
We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors satisfying different notions of monotonicity.
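As a minimal sketch of the general idea (not the paper's exact objective): for a linear predictor the gradient with respect to the input is just the weight vector, so a penalty encouraging monotonicity in a feature reduces to penalizing a negative weight on that feature.

```python
import numpy as np

# Sketch: for f(x) = w @ x + b, df/dx_j = w[j], so a "monotonically
# increasing in feature j" penalty is relu(-w[j]). For nonlinear models
# the same penalty would be applied to input gradients from autodiff.

def risk_plus_penalty(w, b, X, y, mono_idx, lam=1.0):
    """Squared-error risk plus a penalty pushing the partial derivatives
    on the features listed in `mono_idx` to be non-negative."""
    preds = X @ w + b
    risk = np.mean((preds - y) ** 2)
    penalty = np.sum(np.maximum(0.0, -w[mono_idx]))  # relu(-df/dx_j)
    return risk + lam * penalty

X = np.array([[0.0, 1.0], [1.0, 0.0]])
y = np.array([0.0, 1.0])
w_good = np.array([1.0, -1.0])   # increasing in feature 0: no penalty
w_bad = np.array([-1.0, -1.0])   # decreasing in feature 0: penalized
```

The penalty term is zero whenever the monotonicity constraint is satisfied, so it only steers the optimizer when the predictor violates the requirement.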
no code implementations • 29 Sep 2021 • Joao Monteiro, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Greg Mori
We study the setting where risk minimization is performed over general classes of models and consider two cases where monotonicity is treated as either a requirement to be satisfied everywhere or a useful property.
1 code implementation • 25 Jun 2021 • Joao Monteiro, Xavier Gibert, Jianqiao Feng, Vincent Dumoulin, Dar-Shyang Lee
Domain adaptation approaches thus emerged as a useful framework offering extra flexibility, in that distinct train and test data distributions are supported, provided that other assumptions hold, such as covariate shift, under which the conditional distribution of labels given inputs stays fixed while the input distribution is allowed to change.
no code implementations • 1 Jan 2021 • Joao Monteiro, Isabela Albuquerque, Jahangir Alam, Tiago Falk
Recent metric learning approaches parametrize semantic similarity measures through the use of an encoder trained along with a similarity model, which operates over pairs of representations.
no code implementations • LREC 2020 • Joao Monteiro, Md Jahangir Alam, Tiago Falk
Automatic speech processing applications often have to deal with the problem of aggregating local descriptors (i.e., representations of input speech data corresponding to specific portions along the time dimension) into a single fixed-dimension representation, known as a global descriptor, on top of which downstream classification tasks can be performed.
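As an illustrative sketch (not the paper's proposed method), one common baseline for this aggregation step is statistics pooling: concatenating the per-dimension mean and standard deviation over time, which yields a fixed-size vector regardless of utterance length.

```python
import numpy as np

# Sketch of statistics pooling: a (T, D) sequence of local descriptors
# is mapped to a fixed (2*D,) global descriptor by concatenating the
# per-dimension mean and standard deviation over the time axis.

def statistics_pooling(local_descriptors):
    """local_descriptors: array of shape (T, D) -> vector of shape (2*D,)."""
    mu = local_descriptors.mean(axis=0)
    sigma = local_descriptors.std(axis=0)
    return np.concatenate([mu, sigma])

frames = np.random.randn(100, 40)   # e.g. 100 frames of 40-dim features
global_descriptor = statistics_pooling(frames)
# shape is (80,) no matter how many frames the utterance has
```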
1 code implementation • ICML 2020 • Joao Monteiro, Isabela Albuquerque, Jahangir Alam, R. Devon Hjelm, Tiago Falk
In this contribution, we augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
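The core idea can be sketched as follows; the architecture, sizes, and random weights below are made-up stand-ins, not the paper's model. Instead of a fixed metric such as Euclidean or cosine distance, a small learnable network scores a pair of encoder outputs.

```python
import numpy as np

# Hedged sketch: a parametric pairwise scorer over encoder outputs.
# Because the score is produced by a learned network rather than a true
# metric, it need not be symmetric or satisfy the triangle inequality,
# hence "pseudo-distance".

rng = np.random.default_rng(0)
D, H = 16, 8
W1 = rng.normal(size=(2 * D, H))   # pair of representations -> hidden
W2 = rng.normal(size=(H, 1))       # hidden -> scalar score

def encoder(x):
    """Stand-in encoder: any mapping to a D-dimensional representation."""
    return np.tanh(x[:D])

def pseudo_distance(x1, x2):
    """Learnable pairwise score; in the actual setting W1, W2, and the
    encoder would all be trained jointly on the metric learning task."""
    pair = np.concatenate([encoder(x1), encoder(x2)])
    hidden = np.maximum(0.0, pair @ W1)   # ReLU
    return float(hidden @ W2)

score = pseudo_distance(rng.normal(size=32), rng.normal(size=32))
```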
1 code implementation • 25 Jan 2020 • Mirco Ravanelli, Jianyuan Zhong, Santiago Pascual, Pawel Swietojanski, Joao Monteiro, Jan Trmal, Yoshua Bengio
We then propose a revised encoder that better learns short- and long-term speech dynamics with an efficient combination of recurrent and convolutional networks.
4 code implementations • 9 Nov 2019 • Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh
While significant improvements have been made in recent years in terms of end-to-end automatic speech recognition (ASR) performance, such improvements were obtained through the use of very large neural networks, unfit for embedded use on edge devices.
no code implementations • 7 Nov 2018 • Gautam Bhattacharya, Joao Monteiro, Jahangir Alam, Patrick Kenny
Furthermore, we are able to significantly boost verification performance by averaging our different GAN models at the score level, achieving a relative improvement of 7.2% over the baseline.
3 code implementations • 31 Jul 2018 • Thang Doan, Joao Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm
We argue that less expressive discriminators are smoother and have a coarser-grained view of the mode landscape, which forces the generator to cover a wider portion of the data distribution's support.