no code implementations • 26 Dec 2024 • Mehrnaz Mofakhami, Reza Bayat, Ioannis Mitliagkas, Joao Monteiro, Valentina Zantedeschi
Early Exiting (EE) is a promising technique for speeding up inference by adaptively allocating compute resources to data points based on their difficulty.
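For intuition, here is a minimal early-exit sketch (a generic illustration, not the paper's exact architecture): intermediate classifier heads are attached to a backbone, and inference stops as soon as a head is sufficiently confident. All class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Minimal early-exit sketch: backbone blocks interleaved with
    lightweight exit heads; inference stops once a head is confident."""

    def __init__(self, blocks, exit_heads, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)          # backbone stages
        self.exit_heads = nn.ModuleList(exit_heads)  # one classifier per stage
        self.threshold = threshold                   # confidence needed to exit

    @torch.no_grad()
    def forward(self, x):  # assumes batch size 1 for simplicity
        for block, head in zip(self.blocks, self.exit_heads):
            x = block(x)
            conf, pred = head(x).softmax(dim=-1).max(dim=-1)
            if conf.item() >= self.threshold:  # easy input: exit early
                return pred
        return pred  # hardest inputs use the full network
```

Easy inputs exit at shallow heads, so average compute per example drops while hard inputs still traverse the whole network.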
no code implementations • 5 Dec 2024 • Juan Rodriguez, Xiangru Jian, Siba Smarak Panigrahi, Tianyu Zhang, Aarash Feizi, Abhay Puri, Akshay Kalkunte, François Savard, Ahmed Masry, Shravan Nayak, Rabiul Awal, Mahsa Massoud, Amirhossein Abaskohi, Zichao Li, Suyuchen Wang, Pierre-André Noël, Mats Leon Richter, Saverio Vadacchino, Shubham Agarwal, Sanket Biswas, Sara Shanian, Ying Zhang, Noah Bolger, Kurt MacDonald, Simon Fauvel, Sathwik Tejaswi, Srinivas Sunkara, Joao Monteiro, Krishnamurthy Dj Dvijotham, Torsten Scholak, Nicolas Chapados, Sepideh Kharagani, Sean Hughes, M. Özsu, Siva Reddy, Marco Pedersoli, Yoshua Bengio, Christopher Pal, Issam Laradji, Spandana Gella, Perouz Taslakian, David Vazquez, Sai Rajeswar
We use an efficient data curation process to ensure our data is high-quality and permissively licensed.
1 code implementation • 17 Jun 2024 • Joao Monteiro, Pierre-Andre Noel, Etienne Marcotte, Sai Rajeswar, Valentina Zantedeschi, David Vazquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian
We run a large-scale benchmark comprising several state-of-the-art LLMs to uncover differences in performance across models of various types and sizes in a context-conditional language modeling setting.
1 code implementation • 22 Aug 2023 • Charles Guille-Escuret, Pierre-André Noël, Ioannis Mitliagkas, David Vazquez, Joao Monteiro
Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts.
1 code implementation • NeurIPS 2023 • Charles Guille-Escuret, Pau Rodriguez, David Vazquez, Ioannis Mitliagkas, Joao Monteiro
This work explores the use of self-supervised contrastive learning for the simultaneous detection of two types of OOD samples: unseen classes and adversarial perturbations.
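One simple way to turn a contrastively trained encoder into a detector (a hedged sketch, not necessarily the method proposed here) is to score test points by their similarity to a bank of in-distribution embeddings:

```python
import torch
import torch.nn.functional as F

def ood_score(test_feats: torch.Tensor, bank_feats: torch.Tensor) -> torch.Tensor:
    """Score each test embedding by its maximum cosine similarity to a
    bank of in-distribution embeddings from a contrastively trained
    encoder; low similarity flags unseen classes or adversarial inputs."""
    test = F.normalize(test_feats, dim=-1)
    bank = F.normalize(bank_feats, dim=-1)
    sims = test @ bank.T               # (n_test, n_bank) cosine similarities
    return -sims.max(dim=-1).values    # higher score = more likely OOD
```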
no code implementations • 30 Aug 2022 • Joao Monteiro, Pau Rodriguez, Pierre-Andre Noel, Issam Laradji, David Vazquez
In the add-on case, the original neural network's inference head is completely unaffected (so its accuracy remains the same), but we now have the option to use TAC's own confidence and prediction when deciding which course of action to take in a hypothetical production workflow.
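A hedged sketch of such a workflow (the `tac` interface below is an assumption for illustration, not the paper's API): the base model's prediction is left untouched, and the add-on confidence only decides how the prediction is routed.

```python
def route(x, base_model, tac, accept_threshold=0.8):
    """Hypothetical production routing with an add-on confidence module.
    The base network's head is unchanged; `tac` is assumed to return a
    (confidence, prediction) pair for the same input."""
    base_pred = base_model(x).argmax(dim=-1)  # original head, accuracy intact
    tac_conf, tac_pred = tac(x)               # add-on confidence + prediction
    if tac_conf >= accept_threshold:
        return tac_pred, "auto-accept"
    return base_pred, "defer-to-review"
```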
no code implementations • 17 May 2022 • Joao Monteiro, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Greg Mori
We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors satisfying different notions of monotonicity.
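As a rough illustration of this kind of objective (a sketch assuming differentiable models, not the paper's exact penalty), one can penalize negative partial derivatives along the features that should be monotone:

```python
import torch

def monotonicity_penalty(model, x, monotone_dims):
    """Gradient-penalty sketch: penalize negative slopes of the output
    w.r.t. features expected to be monotonically increasing.
    Assumes x has shape (batch, features); `monotone_dims` lists the
    monotone feature indices (illustrative)."""
    x = x.clone().requires_grad_(True)
    grads = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
    # hinge on the sign: only negative slopes along monotone dims are penalized
    return torch.relu(-grads[:, monotone_dims]).mean()

# total loss = task risk + lambda * monotonicity_penalty(model, x, dims)
```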
no code implementations • 29 Sep 2021 • Joao Monteiro, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Greg Mori
We study the setting where risk minimization is performed over general classes of models and consider two cases where monotonicity is treated as either a requirement to be satisfied everywhere or a useful property.
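For reference, the "everywhere" requirement can be stated as follows (a standard definition, added here for clarity rather than quoted from the paper):

```latex
% f is monotone (non-decreasing) in coordinate i if, for all x, x':
x'_i \ge x_i \;\text{ and }\; x'_j = x_j \;(j \ne i)
\;\Longrightarrow\; f(x') \ge f(x);
% equivalently, for differentiable f:
\frac{\partial f(x)}{\partial x_i} \ge 0 \quad \text{for all } x.
```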
1 code implementation • 25 Jun 2021 • Joao Monteiro, Xavier Gibert, Jianqiao Feng, Vincent Dumoulin, Dar-Shyang Lee
Domain adaptation approaches thus emerged as a useful framework offering extra flexibility, in that distinct train and test data distributions are supported, provided additional assumptions hold, such as covariate shift, under which the conditional distribution of labels given inputs is assumed to be the same across domains.
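In symbols, covariate shift amounts to the following (standard formulation, added here for clarity):

```latex
% Covariate shift: input marginals differ, labeling distribution is shared
p_{\text{train}}(x) \ne p_{\text{test}}(x),
\qquad
p_{\text{train}}(y \mid x) = p_{\text{test}}(y \mid x).
```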
no code implementations • 1 Jan 2021 • Joao Monteiro, Isabela Albuquerque, Jahangir Alam, Tiago Falk
Recent metric learning approaches parametrize semantic similarity measures through an encoder trained jointly with a similarity model that operates over pairs of representations.
no code implementations • LREC 2020 • Joao Monteiro, Md Jahangir Alam, Tiago Falk
Automatic speech processing applications often have to aggregate local descriptors (i.e., representations of input speech corresponding to specific portions of the time dimension) into a single fixed-dimension representation, known as a global descriptor, on top of which downstream classification tasks can be performed.
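Statistics pooling is one common instance of such aggregation (shown here as a generic sketch, not necessarily the strategy studied in the paper):

```python
import torch

def stats_pool(local_desc: torch.Tensor) -> torch.Tensor:
    """Aggregate local descriptors of shape (time, dim) into a single
    fixed-dimension global descriptor by concatenating their mean and
    standard deviation over the time dimension."""
    mean = local_desc.mean(dim=0)
    std = local_desc.std(dim=0)
    return torch.cat([mean, std])  # shape (2 * dim,), independent of length
```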
1 code implementation • ICML 2020 • Joao Monteiro, Isabela Albuquerque, Jahangir Alam, R. Devon Hjelm, Tiago Falk
In this contribution, we augment the metric learning setting by introducing a parametric pseudo-distance, trained jointly with the encoder.
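A minimal sketch of a learned pseudo-distance (layer sizes and names are illustrative assumptions): a small network scores pairs of embeddings in place of a fixed metric such as cosine, and is trained jointly with the encoder.

```python
import torch
import torch.nn as nn

class PairwiseSimilarity(nn.Module):
    """Parametric pseudo-distance sketch: a small MLP scores a pair of
    embeddings; trained jointly with the encoder so that matched pairs
    receive higher scores than mismatched ones."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z1, z2], dim=-1)).squeeze(-1)
```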
1 code implementation • 25 Jan 2020 • Mirco Ravanelli, Jianyuan Zhong, Santiago Pascual, Pawel Swietojanski, Joao Monteiro, Jan Trmal, Yoshua Bengio
We then propose a revised encoder that better learns short- and long-term speech dynamics with an efficient combination of recurrent and convolutional networks.
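To make the combination concrete, here is an illustrative pairing of the two ingredients (all dimensions are placeholders, not the paper's architecture): convolutions for short-term structure, a recurrent layer for long-term dynamics.

```python
import torch.nn as nn

class ConvRecurrentEncoder(nn.Module):
    """Illustrative conv + recurrent encoder: the convolution captures
    short-term local patterns, the GRU models longer-range dynamics."""

    def __init__(self, in_dim=40, conv_dim=128, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, conv_dim, kernel_size=5, padding=2), nn.ReLU()
        )
        self.rnn = nn.GRU(conv_dim, hidden, batch_first=True)

    def forward(self, x):  # x: (batch, time, in_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.rnn(h)  # (batch, time, hidden)
        return out
```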
4 code implementations • 9 Nov 2019 • Alex Bie, Bharat Venkitesh, Joao Monteiro, Md. Akmal Haidar, Mehdi Rezagholizadeh
While significant improvements have been made in recent years in end-to-end automatic speech recognition (ASR) performance, these gains were obtained with very large neural networks that are unfit for embedded use on edge devices.
no code implementations • 7 Nov 2018 • Gautam Bhattacharya, Joao Monteiro, Jahangir Alam, Patrick Kenny
Furthermore, we are able to significantly boost verification performance by averaging our different GAN models at the score level, achieving a relative improvement of 7.2% over the baseline.
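Score-level fusion of this kind is simple to implement (a sketch; variable names are illustrative):

```python
import numpy as np

def fuse_scores(per_model_scores):
    """Average verification scores from several models over the same
    trials; `per_model_scores` is a list of equally sized arrays."""
    return np.mean(np.stack(per_model_scores), axis=0)

# e.g., fused = fuse_scores([scores_gan_a, scores_gan_b, scores_baseline])
```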
3 code implementations • 31 Jul 2018 • Thang Doan, Joao Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, R. Devon Hjelm
We argue that less expressive discriminators are smoother and take a coarse-grained view of the mode landscape, which forces the generator to cover a wide portion of the data distribution's support.