no code implementations • 2 Jul 2024 • Xavier Suau, Pieter Delobelle, Katherine Metcalf, Armand Joulin, Nicholas Apostoloff, Luca Zappella, Pau Rodríguez
We also show that AurA is effective with models of different scales (from 1.5B to 40B parameters), and that its ability to mitigate toxic language while preserving common-sense zero-shot abilities holds across all scales.
no code implementations • 19 Oct 2023 • Jonathan Crabbé, Pau Rodríguez, Vaishaal Shankar, Luca Zappella, Arno Blaas
While it has been shown that, for ImageNet distribution shifts, such differences in robustness can be traced back predominantly to differences in training data, it remains unknown how this translates into what the models actually learn.
1 code implementation • 20 Jul 2023 • Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella
We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens.
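One way to picture an entropy-plus-reconstruction (ER) objective is with categorical projections of two views, where the entropy term is computed on the batch-average assignment and the reconstruction term asks one view to predict the other's assignment. The sketch below is a loose, illustrative instantiation under these assumptions, not the paper's exact bound or estimator.

```python
import torch


def er_objective(logits_1: torch.Tensor, logits_2: torch.Tensor) -> torch.Tensor:
    """Entropy + reconstruction (ER) style objective for two views with categorical projections.

    Illustrative reading of an MI lower bound of the form H(Z1) + E[log q(Z1 | Z2)]:
    the entropy term uses the batch-average assignment of view 1, the reconstruction
    term asks view 2 to predict view 1's (soft) assignment.
    """
    p1 = logits_1.softmax(dim=1)
    # Entropy of the marginal assignment distribution (encourages using all clusters).
    marginal = p1.mean(dim=0)
    entropy = -(marginal * marginal.clamp_min(1e-8).log()).sum()
    # Reconstruction: cross-view prediction of view 1's assignment from view 2.
    reconstruction = (p1 * logits_2.log_softmax(dim=1)).sum(dim=1).mean()
    return entropy + reconstruction  # maximize this (or minimize its negative)
```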
no code implementations • 10 Jul 2022 • Timothée Lesort, Oleksiy Ostapenko, Diganta Misra, Md Rifat Arefin, Pau Rodríguez, Laurent Charlin, Irina Rish
In this paper, we study the progressive knowledge accumulation (KA) in DNNs trained with gradient-based algorithms in long sequences of tasks with data re-occurrence.
1 code implementation • 30 Apr 2022 • Oleksiy Ostapenko, Timothee Lesort, Pau Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin
Motivated by this, we study the efficacy of pre-trained vision models as a foundation for downstream continual learning (CL) scenarios.
1 code implementation • 21 Aug 2021 • Issam Laradji, Pau Rodríguez, David Vazquez, Derek Nowrouzezahrai
In order to obtain the viewpoints for these unlabeled images, we propose to use a Siamese network that takes two images as input and outputs whether they correspond to the same viewpoint.
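A minimal sketch of what such a pairwise viewpoint classifier could look like; the encoder, layer sizes, and loss below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class SiameseViewpointNet(nn.Module):
    """Takes two images and predicts whether they share the same viewpoint."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Shared convolutional encoder applied to both images (weights are tied).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # Classification head on the concatenated pair embedding.
        self.head = nn.Linear(2 * embed_dim, 1)

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        za, zb = self.encoder(img_a), self.encoder(img_b)
        # Logit > 0 means "same viewpoint".
        return self.head(torch.cat([za, zb], dim=1)).squeeze(1)


# Usage: binary cross-entropy on (same / different viewpoint) image pairs.
model = SiameseViewpointNet()
a, b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
loss = nn.BCEWithLogitsLoss()(model(a, b), labels)
```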
3 code implementations • 14 Apr 2021 • Frédéric Branchaud-Charron, Parmida Atighehchian, Pau Rodríguez, Grace Abuhamad, Alexandre Lacoste
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
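Gradient reversal (GRAD) is typically implemented as an identity map in the forward pass that flips and scales gradients in the backward pass, so a domain head can be trained adversarially against the feature extractor. The sketch below is a generic gradient-reversal layer, not this paper's specific setup.

```python
import torch


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None


def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradientReversal.apply(x, lambd)


# Typical use: features = encoder(x); domain_logits = domain_head(grad_reverse(features, 1.0))
```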
4 code implementations • NeurIPS 2020 • Alexandre Lacoste, Pau Rodríguez, Frédéric Branchaud-Charron, Parmida Atighehchian, Massimo Caccia, Issam Laradji, Alexandre Drouin, Matt Craddock, Laurent Charlin, David Vázquez
Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing algorithms.
1 code implementation • ECCV 2020 • Pau Rodríguez, Issam Laradji, Alexandre Drouin, Alexandre Lacoste
Furthermore, we show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
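Embedding propagation can be read as smoothing a batch of embeddings over their similarity graph with a label-propagation-style operator. The sketch below is one possible instantiation; the affinity function, normalization, and hyperparameters are assumptions rather than the paper's exact recipe.

```python
import torch


def embedding_propagation(z: torch.Tensor, alpha: float = 0.5, scale: float = 1.0) -> torch.Tensor:
    """Smooth a batch of embeddings over their similarity graph (label-propagation style)."""
    # Pairwise squared distances -> RBF affinities, with self-similarities removed.
    d2 = torch.cdist(z, z) ** 2
    a = torch.exp(-d2 / scale)
    a.fill_diagonal_(0)
    # Symmetrically normalized adjacency.
    d_inv_sqrt = torch.diag(a.sum(dim=1).clamp(min=1e-8).rsqrt())
    s = d_inv_sqrt @ a @ d_inv_sqrt
    # Closed-form propagation: (I - alpha * S)^{-1} z.
    n = z.shape[0]
    eye = torch.eye(n, device=z.device, dtype=z.dtype)
    return torch.linalg.solve(eye - alpha * s, z)


# z_smooth = embedding_propagation(encoder(images))
```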
1 code implementation • ECCV 2018 • Pau Rodríguez, Josep M. Gonfaus, Guillem Cucurull, F. Xavier Roca, Jordi Gonzàlez
We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition.
Ranked #107 on Image Classification on CIFAR-100 (using extra training data)
no code implementations • 28 Jun 2018 • Pau Rodríguez, Miguel A. Bautista, Jordi Gonzàlez, Sergio Escalera
Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy.
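One simple way to realize this idea is to replace C-dimensional one-hot targets with a fixed low-dimensional code per class and regress onto those codes. The random codebook and losses below are illustrative assumptions, not the paper's specific embedding.

```python
import torch
import torch.nn.functional as F

num_classes, embed_dim = 1000, 64

# Illustrative choice of low-dimensional targets: a fixed random codebook
# (one d-dimensional code per class) instead of 1000-dimensional one-hot vectors.
codebook = F.normalize(torch.randn(num_classes, embed_dim), dim=1)


def target_embedding_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Regress network outputs onto the low-dimensional class codes."""
    preds = F.normalize(features, dim=1)
    return F.mse_loss(preds, codebook[labels])


def predict(features: torch.Tensor) -> torch.Tensor:
    # Classify by nearest code in the embedding space.
    preds = F.normalize(features, dim=1)
    return (preds @ codebook.t()).argmax(dim=1)
```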
no code implementations • 6 Feb 2018 • Guillem Cucurull, Pau Rodríguez, V. Oguz Yazici, Josep M. Gonfaus, F. Xavier Roca, Jordi Gonzàlez
Following this trend in vision-based social analysis, we present a novel Deep Learning methodology to build a combined image-and-text personality trait model, trained on images posted together with words found to be highly correlated with specific personality traits.
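A hedged sketch of what a late-fusion image-and-text trait predictor could look like; the encoders, feature dimensions, and fusion strategy are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn


class ImageTextTraitModel(nn.Module):
    """Late fusion of image features and text features to score personality traits."""

    def __init__(self, image_dim: int = 2048, text_dim: int = 300, num_traits: int = 5):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, 256)
        self.text_proj = nn.Linear(text_dim, 256)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(512, num_traits))

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # Concatenate the two modalities and predict one score per trait.
        fused = torch.cat([self.image_proj(image_feats), self.text_proj(text_feats)], dim=1)
        return self.head(fused)


# scores = ImageTextTraitModel()(cnn_image_features, averaged_word_embeddings)
```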
no code implementations • ICLR 2018 • Pau Rodríguez, Guillem Cucurull, Jordi Gonzàlez, Josep M. Gonfaus, Xavier Roca
We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition.
1 code implementation • 7 Nov 2016 • Pau Rodríguez, Jordi Gonzàlez, Guillem Cucurull, Josep M. Gonfaus, Xavier Roca
In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality.
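One simple way to implement this idea is to penalize only positive cosine similarities between a layer's filters, leaving negatively correlated ones untouched. The squashing function and coefficient below are assumptions for illustration, not OrthoReg's exact formula.

```python
import torch
import torch.nn.functional as F


def ortho_reg(weight: torch.Tensor, coeff: float = 1e-4) -> torch.Tensor:
    """Penalize only positively correlated filters; negative correlations are left alone."""
    # weight: (out_channels, ...) -> one row per filter.
    w = F.normalize(weight.flatten(1), dim=1)
    cos = w @ w.t()
    cos.fill_diagonal_(0)
    # Keep only positive correlations, since pushing apart already anti-correlated
    # features works against effective decorrelation.
    return coeff * torch.clamp(cos, min=0).pow(2).sum()


# total_loss = task_loss + sum(ortho_reg(m.weight) for m in model.modules()
#                              if isinstance(m, torch.nn.Conv2d))
```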