1 code implementation • 14 Mar 2024 • Adam Tupper, Christian Gagné
Data augmentation is one of the most effective techniques to improve the generalization performance of deep neural networks.
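As a minimal, generic illustration of a label-preserving augmentation (a random horizontal flip in plain Python, not the paper's specific method):

```python
import random

def horizontal_flip(image):
    """Return a horizontally flipped copy of a 2D image (list of rows)."""
    return [list(reversed(row)) for row in image]

def augment(image, flip_prob=0.5, rng=None):
    """Randomly apply a horizontal flip, a common label-preserving augmentation."""
    rng = rng or random.Random()
    return horizontal_flip(image) if rng.random() < flip_prob else image

img = [[1, 2, 3],
       [4, 5, 6]]
print(horizontal_flip(img))  # [[3, 2, 1], [6, 5, 4]]
```

Because the flip preserves the label, the network sees more varied inputs without any extra annotation cost.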
no code implementations • 22 Jan 2024 • Juan Luis Jiménez Laredo, Carlos Fernandes, Juan Julián Merelo, Christian Gagné
Despite the intuition that the same population size is not needed throughout the run of an Evolutionary Algorithm (EA), most EAs use a fixed population size.
no code implementations • 8 Dec 2023 • Catherine Bouchard, Andréanne Deschênes, Vincent Boulanger, Jean-Michel Bellavance, Flavie Lavoie-Cardinal, Christian Gagné
The development of robust signal unmixing algorithms is essential for leveraging multimodal datasets acquired through a wide array of scientific imaging technologies, including hyperspectral or time-resolved acquisitions.
1 code implementation • 26 Nov 2023 • Jiaqi Li, Rui Wang, Yuanhao Lai, Changjian Shui, Sabyasachi Sahoo, Charles X. Ling, Shichun Yang, Boyu Wang, Christian Gagné, Fan Zhou
We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against recent state-of-the-art methods to demonstrate its effectiveness and scalability.
no code implementations • 8 May 2023 • Mohamed Abid, Arman Afrasiyabi, Ihsen Hedhli, Jean-François Lalonde, Christian Gagné
Conditioned on a target image, such methods extract the target style and combine it with the source image content, keeping coherence between the domains.
1 code implementation • 4 Mar 2023 • Thomas Philippon, Christian Gagné
Neural network ensembles have been studied extensively in the context of adversarial robustness, yet most ensemble-based approaches remain vulnerable to adaptive attacks.
1 code implementation • 19 Oct 2022 • Changjian Shui, Gezheng Xu, Qi Chen, Jiaqi Li, Charles Ling, Tal Arbel, Boyu Wang, Christian Gagné
In the upper level, the fair predictor is updated to be close to all subgroup-specific predictors.
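A minimal sketch of such an upper-level update (hypothetical names, linear predictors represented as plain weight lists): a gradient step on the summed squared distances to the subgroup predictors pulls the fair predictor toward all of them.

```python
def upper_level_step(fair_w, subgroup_ws, lr=0.1):
    """One gradient step on 0.5 * sum_k ||fair_w - w_k||^2, which pulls
    the fair predictor toward all subgroup-specific predictors."""
    new_w = []
    for i, fw in enumerate(fair_w):
        grad_i = sum(fw - w[i] for w in subgroup_ws)
        new_w.append(fw - lr * grad_i)
    return new_w

# Repeated steps converge to the average of the subgroup predictors.
w = [0.0]
for _ in range(100):
    w = upper_level_step(w, [[1.0], [3.0]])
print(w)  # ≈ [2.0]
```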
no code implementations • 31 May 2022 • William Wei Wang, Gezheng Xu, Ruizhi Pu, Jiaqi Li, Fan Zhou, Changjian Shui, Charles Ling, Christian Gagné, Boyu Wang
Domain generalization aims to learn a predictive model from multiple different but related source tasks that generalizes well to a target task without accessing any target data.
no code implementations • 26 May 2022 • Changjian Shui, Qi Chen, Jiaqi Li, Boyu Wang, Christian Gagné
We consider a fair representation learning perspective, where optimal predictors, on top of the data representation, are ensured to be invariant with respect to different sub-groups.
no code implementations • CVPR 2022 • Arman Afrasiyabi, Hugo Larochelle, Jean-François Lalonde, Christian Gagné
In image classification, it is common practice to train deep networks to extract a single feature vector per input image.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
no code implementations • 30 May 2021 • Changjian Shui, Boyu Wang, Christian Gagné
Our regularization is orthogonal to and can be straightforwardly adopted in existing domain generalization algorithms for invariant representation learning.
1 code implementation • 9 May 2021 • Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang
Multi-source domain adaptation aims to leverage knowledge from multiple source tasks to make predictions in a related target domain.
no code implementations • 5 Mar 2021 • Hugo Siqueira Gomes, Benjamin Léger, Christian Gagné
From that framework's formulation, we propose to parameterize the algorithm using deep recurrent neural networks and use a meta-loss function based on stochastic algorithms' performance to train efficient data-driven optimizers over several related optimization tasks.
no code implementations • 12 Feb 2021 • Mohamed Abderrahmen Abid, Ihsen Hedhli, Christian Gagné
Traditionally, the main focus of image super-resolution techniques is on recovering the most likely high-quality images from low-quality images, using a one-to-one low- to high-resolution mapping.
no code implementations • 1 Jan 2021 • Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang
We study the label shift problem in multi-source transfer learning and derive new generic principles to control the target generalization risk.
1 code implementation • ICCV 2021 • Arman Afrasiyabi, Jean-François Lalonde, Christian Gagné
In contrast, we propose to model base classes with mixture models by simultaneously training the feature extractor and learning the mixture model parameters in an online manner.
no code implementations • 30 Jul 2020 • Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, Boyu Wang
We reveal the incoherence between the widely-adopted empirical domain adversarial training and its generally-assumed theoretical counterpart based on $\mathcal{H}$-divergence.
1 code implementation • 7 Feb 2020 • Sébastien de Blois, Mathieu Garon, Christian Gagné, Jean-François Lalonde
Computer vision datasets containing multiple modalities such as color, depth, and thermal properties are now commonly accessible and useful for solving a wide array of challenging tasks.
2 code implementations • ECCV 2020 • Arman Afrasiyabi, Jean-François Lalonde, Christian Gagné
Few-shot image classification aims at training a model from only a few examples for each of the "novel" classes.
1 code implementation • 20 Nov 2019 • Changjian Shui, Fan Zhou, Christian Gagné, Boyu Wang
In this paper, we propose a unified and principled method for both the querying and training processes in deep batch active learning.
no code implementations • 19 Oct 2019 • Marc-André Gardner, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Christian Gagné, Jean-François Lalonde
We present a method to estimate lighting from a single image of an indoor scene.
no code implementations • 1 May 2019 • Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Wilson Leão, Christian Gagné
The strong performance of deep learning is undeniable, with impressive results across a wide range of tasks.
no code implementations • ICLR 2019 • Mahdieh Abbasi, Arezoo Rajabi, Azadeh Sadat Mozafari, Rakesh B. Bobba, Christian Gagné
As an appropriate training set for the extra class, we introduce two resources that are computationally efficient to obtain: a representative natural out-distribution set and interpolated in-distribution samples.
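The interpolated in-distribution samples can be realized as a simple convex combination of two samples (a generic mixup-style sketch; the paper's exact recipe may differ):

```python
import random

def interpolate(x1, x2, alpha=None, rng=None):
    """Convex combination of two in-distribution samples; such points can
    serve as computationally cheap training data for an extra class."""
    rng = rng or random.Random()
    a = rng.random() if alpha is None else alpha
    return [a * u + (1.0 - a) * v for u, v in zip(x1, x2)]

print(interpolate([0.0, 0.0], [2.0, 4.0], alpha=0.5))  # [1.0, 2.0]
```

Such points lie between training examples, near but not on the in-distribution manifold, which is what makes them useful for the extra class.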
1 code implementation • 21 Mar 2019 • Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, Christian Gagné
Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks.
1 code implementation • 4 Mar 2019 • Sébastien de Blois, Ihsen Hedhli, Christian Gagné
To evaluate their performance, existing dehazing approaches generally rely on distance measures between the generated image and its corresponding ground truth.
no code implementations • 27 Oct 2018 • Azadeh Sadat Mozafari, Hugo Siqueira Gomes, Wilson Leão, Steeven Janny, Christian Gagné
Temperature Scaling (TS) is a state-of-the-art measure-based calibration method that is effective while having low time and memory complexity.
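Applying a fitted temperature is a one-line transform of the logits (the sketch below omits fitting T, which TS does by minimizing negative log-likelihood on a validation set):

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def temperature_scale(logits, T):
    """Divide logits by T > 0 before the softmax: T > 1 softens the
    distribution (lower confidence), T < 1 sharpens it; the argmax,
    and hence the accuracy, is unchanged."""
    return softmax([z / T for z in logits])

print(temperature_scale([2.0, 1.0, 0.1], T=2.0))
```

Because a single scalar is optimized, TS cannot change which class is predicted, only how confident the prediction is.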
no code implementations • 26 Oct 2018 • Changjian Shui, Ihsen Hedhli, Christian Gagné
We provide a theoretical analysis of this algorithm, including a cumulative error upper bound for each task.
no code implementations • 18 Jun 2018 • Alejandro Cervantes, Christian Gagné, Pedro Isasi, Marc Parizeau
Incremental learning from non-stationary data poses special challenges to the field of machine learning.
no code implementations • 24 Apr 2018 • Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba
Detection and rejection of adversarial examples in security-sensitive and safety-critical systems using deep CNNs is essential.
no code implementations • 28 Mar 2018 • Louis-Émile Robitaille, Audrey Durand, Marc-André Gardner, Christian Gagné, Paul De Koninck, Flavie Lavoie-Cardinal
More specifically, we propose a system based on a deep neural network that can provide a quantitative quality measure of a STED image of neuronal structures given as input.
no code implementations • 9 Mar 2018 • Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J. Bentley, Samuel Bernard, Guillaume Beslon, David M. Bryson, Patryk Chrabaszcz, Nick Cheney, Antoine Cully, Stephane Doncieux, Fred C. Dyer, Kai Olav Ellefsen, Robert Feldt, Stephan Fischer, Stephanie Forrest, Antoine Frénoy, Christian Gagné, Leni Le Goff, Laura M. Grabowski, Babak Hodjat, Frank Hutter, Laurent Keller, Carole Knibbe, Peter Krcah, Richard E. Lenski, Hod Lipson, Robert MacCurdy, Carlos Maestre, Risto Miikkulainen, Sara Mitri, David E. Moriarty, Jean-Baptiste Mouret, Anh Nguyen, Charles Ofria, Marc Parizeau, David Parsons, Robert T. Pennock, William F. Punch, Thomas S. Ray, Marc Schoenauer, Eric Shulte, Karl Sims, Kenneth O. Stanley, François Taddei, Danesh Tarapore, Simon Thibault, Westley Weimer, Richard Watson, Jason Yosinski
Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them.
no code implementations • 22 Feb 2018 • Changjian Shui, Azadeh Sadat Mozafari, Jonathan Marek, Ihsen Hedhli, Christian Gagné
Calibrating the confidence of supervised learning models is important for a variety of contexts where the certainty over predictions should be reliable.
1 code implementation • 20 Feb 2018 • Mahdieh Abbasi, Christian Gagné
The ease with which adversarial instances can be generated for deep neural networks raises fundamental questions about their functioning and concerns about their use in critical systems.
no code implementations • 1 Apr 2017 • Marc-André Gardner, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagné, Jean-François Lalonde
We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene.
no code implementations • 22 Feb 2017 • Mahdieh Abbasi, Christian Gagné
We propose to use an ensemble of diverse specialists, where speciality is defined according to the confusion matrix.
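A minimal sketch of how a confusion matrix can define specialities (hypothetical helper; symmetrized off-diagonal counts): the most-confused class pairs are the natural candidates for dedicated specialist classifiers.

```python
def confused_pairs(conf_matrix, top_k=2):
    """Rank class pairs by symmetrized off-diagonal confusion counts;
    the most-confused pairs are candidates for dedicated specialists."""
    n = len(conf_matrix)
    scored = []
    for i in range(n):
        for j in range(i + 1, n):
            scored.append((conf_matrix[i][j] + conf_matrix[j][i], (i, j)))
    scored.sort(reverse=True)
    return [pair for _, pair in scored[:top_k]]

cm = [[90, 8, 2],
      [7, 85, 8],
      [1, 9, 90]]
print(confused_pairs(cm))  # [(1, 2), (0, 1)]
```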
no code implementations • 4 Jan 2017 • Audrey Durand, Christian Gagné
The question is: how good do estimations of these objectives have to be in order for the solution maximizing the preference function to remain unchanged?
no code implementations • 5 Nov 2016 • Farkhondeh Kiaee, Christian Gagné, Mahdieh Abbasi
This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions.
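With one weight per coordinate, this kind of alternation reduces to an ISTA-like loop (a toy sketch with a quadratic "recognition" objective pulling weights toward `target`, not the paper's network formulation):

```python
def soft_threshold(w, lam):
    """Sparsity-promoting step: shrink each weight toward zero by lam."""
    return [max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0) for x in w]

def sparse_fit(target, lam=0.5, lr=0.5, steps=200):
    """Alternate a performance step (gradient toward target) with a
    sparsity step (soft-thresholding), mirroring the alternation between
    sparsification and recognition-performance optimization."""
    w = [0.0] * len(target)
    for _ in range(steps):
        w = [x - lr * (x - t) for x, t in zip(w, target)]  # performance step
        w = soft_threshold(w, lr * lam)                    # sparsity step
    return w

print(sparse_fit([3.0, 0.2, -2.0]))  # small weights are zeroed out
```

The loop converges to the soft-thresholded target: weights smaller than `lam` in magnitude are driven exactly to zero, while large weights survive slightly shrunk.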
no code implementations • 20 May 2016 • Julien-Charles Lévesque, Christian Gagné, Robert Sabourin
Our method consists of building a fixed-size ensemble, optimizing the configuration of one classifier of the ensemble at each iteration of the hyperparameter optimization algorithm, taking into account the interaction with the other models when evaluating potential performance.
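A toy sketch of this greedy loop (hypothetical names; `ensemble_score` stands in for the validation performance of the full ensemble):

```python
import random

def optimize_ensemble(candidates, ensemble_score, size=3, rounds=30, seed=0):
    """Round-robin re-optimization of one member at a time: each proposed
    configuration is scored in the context of the other, frozen members,
    and kept only if the whole ensemble does not get worse."""
    rng = random.Random(seed)
    ensemble = [rng.choice(candidates) for _ in range(size)]
    for t in range(rounds):
        i = t % size                       # member updated this round
        trial = list(ensemble)
        trial[i] = rng.choice(candidates)  # candidate configuration
        if ensemble_score(trial) >= ensemble_score(ensemble):
            ensemble = trial
    return ensemble

# Toy score that rewards diverse members:
result = optimize_ensemble([1, 2, 3, 4], lambda e: len(set(e)))
```

Scoring each candidate through the ensemble's joint performance, rather than its individual accuracy, is what lets the procedure account for interactions between members.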
no code implementations • 10 Apr 2013 • François-Michel De Rainville, Michèle Sebag, Christian Gagné, Marc Schoenauer, Denis Laurendeau
At each iteration, the dynamic multi-armed bandit makes a decision on which species to evolve for a generation, using the history of progress made by the different species to guide the decisions.
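A UCB1-style sketch of such a decision rule (the authors' exact bandit may differ; here `history` maps each species to the progress rewards it has produced so far):

```python
import math

def ucb_select(history):
    """Pick the species with the best upper confidence bound on its mean
    progress; species never tried before are selected first."""
    total = sum(len(rewards) for rewards in history.values())
    best, best_score = None, float('-inf')
    for species, rewards in history.items():
        if not rewards:
            return species  # explore untried species first
        mean = sum(rewards) / len(rewards)
        bonus = math.sqrt(2.0 * math.log(total) / len(rewards))
        if mean + bonus > best_score:
            best, best_score = species, mean + bonus
    return best

print(ucb_select({"a": [1.0, 1.0, 1.0], "b": [0.0, 0.0, 0.0]}))  # a
```

The exploration bonus shrinks as a species accumulates evaluations, so rarely-tried species still get occasional generations even when their past progress is poor.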