no code implementations • ICLR 2019 • Xiang Jiang, Mohammad Havaei, Gabriel Chartrand, Hassan Chouaib, Thomas Vincent, Andrew Jesson, Nicolas Chapados, Stan Matwin
Current deep learning based text classification methods are limited in their ability to learn quickly and generalize when data is scarce.
no code implementations • 15 Aug 2024 • Ahmed Imtiaz Humayun, Ibtihel Amara, Candice Schumann, Golnoosh Farnadi, Negar Rostamzadeh, Mohammad Havaei
Deep generative models learn continuous representations of complex data manifolds using a finite number of samples during training.
no code implementations • 3 Jun 2024 • Golnoosh Farnadi, Mohammad Havaei, Negar Rostamzadeh
The rise of foundation models holds immense promise for advancing AI, but this progress may amplify existing risks and inequalities, leaving marginalized communities behind.
1 code implementation • 15 May 2024 • Nima Fathi, Amar Kumar, Brennan Nichyporuk, Mohammad Havaei, Tal Arbel
Deep learning classifiers are prone to latching onto dominant confounders present in a dataset rather than on the causal markers associated with the target class, leading to poor generalization and biased predictions.
no code implementations • 6 Apr 2023 • Laya Rafiee Sevyeri, Ivaxi Sheth, Farhood Farahnak, Alexandre See, Samira Ebrahimi Kahou, Thomas Fevens, Mohammad Havaei
In addition, PD is augmented with a weighted MI maximization objective for label distribution shift.
no code implementations • 28 Nov 2022 • Ivaxi Sheth, Aamer Abdul Rahman, Mohammad Havaei, Samira Ebrahimi Kahou
Despite the boost in performance observed when using CBN layers, our work reveals that the visual features learned by introducing auxiliary data via CBN deteriorate.
no code implementations • 31 Oct 2022 • Sharut Gupta, Kartik Ahuja, Mohammad Havaei, Niladri Chatterjee, Yoshua Bengio
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
no code implementations • 31 May 2022 • Fereshteh Shakeri, Malik Boudiaf, Sina Mohammadi, Ivaxi Sheth, Mohammad Havaei, Ismail Ben Ayed, Samira Ebrahimi Kahou
We build few-shot tasks and base-training data with various tissue types, different levels of domain shifts stemming from various cancer sites, and different class-granularity levels, thereby reflecting realistic scenarios.
1 code implementation • 27 Apr 2022 • Farshid Varno, Marzie Saghayi, Laya Rafiee Sevyeri, Sharut Gupta, Stan Matwin, Mohammad Havaei
In Federated Learning (FL), a number of clients or devices collaborate to train a model without sharing their data.
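The setting described above builds on federated averaging. As a rough illustration only (the quadratic local objective, client data, and hyperparameters below are toy assumptions, not taken from this paper), one communication round can be sketched in NumPy:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    """Run a few local gradient-descent steps on a least-squares objective."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One round: each client trains locally on its own data, then the server
    averages the returned models weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(w_global, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy smoke test: three clients drawn from the same noiseless linear model.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
```

With identical client distributions the averaged model recovers the underlying weights; the heterogeneous-client case is precisely where the papers in this line of work depart from plain averaging.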
no code implementations • CVPR 2022 • Moslem Yazdanpanah, Aamer Abdul Rahman, Muawiz Chaudhary, Christian Desrosiers, Mohammad Havaei, Eugene Belilovsky, Samira Ebrahimi Kahou
Batch Normalization is a staple of computer vision models, including those employed in few-shot learning.
no code implementations • 14 Oct 2021 • Ahmad Pesaranghader, Yiping Wang, Mohammad Havaei
Diversity in data is critical for the successful training of deep learning models.
no code implementations • 15 Dec 2020 • Qicheng Lao, Xiang Jiang, Mohammad Havaei
We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer -- as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA) -- where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner.
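The two ingredients named in the title can be sketched as follows; this is a minimal NumPy illustration of the standard InfoMax surrogate and a KL-based disparity term between hypothesis heads, with the exact weighting and training loop left to the paper:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability distribution (or batch thereof)."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def mutual_info_objective(probs):
    """InfoMax surrogate common in unsupervised adaptation: entropy of the
    marginal label distribution (predictions diverse overall) minus the mean
    per-sample entropy (each prediction individually confident)."""
    marginal = probs.mean(axis=0)
    return entropy(marginal) - entropy(probs, axis=1).mean()

def hypothesis_disparity(head_probs):
    """Mean KL divergence of each hypothesis head's predictions from the
    ensemble mean -- a disparity measure of the kind HDMI regularizes."""
    mean_p = np.mean(head_probs, axis=0)
    eps = 1e-12
    kls = [np.sum(p * np.log((p + eps) / (mean_p + eps)), axis=-1).mean()
           for p in head_probs]
    return float(np.mean(kls))
```

Maximizing the first quantity while penalizing the second pushes multiple hypotheses toward confident, mutually consistent target predictions.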
no code implementations • 8 Dec 2020 • Mohammad Havaei, Ximeng Mao, Yiping Wang, Qicheng Lao
Current practices in using cGANs for medical image generation use only a single variable (i.e., content) and therefore provide little flexibility or control over the generated image.
1 code implementation • ICML 2020 • Xiang Jiang, Qicheng Lao, Stan Matwin, Mohammad Havaei
We present an approach for unsupervised domain adaptation---with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift---from a class-conditioned domain alignment perspective.
Ranked #1 on Unsupervised Domain Adaptation on Office-Home (Avg accuracy metric)
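The paper's implicit alignment mechanism is more involved, but the basic idea of class-conditioned alignment can be illustrated by penalizing the distance between per-class feature centroids of the two domains, with target classes assigned by pseudo-labels (a simplified sketch, not the paper's exact objective):

```python
import numpy as np

def class_conditional_alignment(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """Squared distance between per-class feature centroids of the source and
    (pseudo-labeled) target domains, averaged over classes present in both.
    Unlike marginal alignment, this respects class-conditional structure."""
    losses = []
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) and len(t):
            losses.append(np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2))
    return float(np.mean(losses)) if losses else 0.0
```

Skipping classes absent from either domain is one simple way such an objective stays defined under within-domain class imbalance.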
no code implementations • 12 May 2020 • Saeid Asgari Taghanaki, Mohammad Havaei, Alex Lamb, Aditya Sanghi, Ara Danielyan, Tonya Custis
The latent variables learned by VAEs have seen considerable interest as an unsupervised way of extracting features, which can then be used for downstream tasks.
1 code implementation • 3 Apr 2020 • Mehrdad Noori, Sina Mohammadi, Sina Ghofrani Majelan, Ali Bahri, Mohammad Havaei
To address the second challenge, we propose an Attention-based Multi-level Integrator Module to give the model the ability to assign different weights to multi-level feature maps.
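The module's exact parameterization is in the paper; as a toy sketch of the weighting idea, one can softmax a scalar score per level (in practice produced by a learned branch, here passed in directly) and take a weighted sum of same-shape feature maps:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def integrate_levels(feature_maps, scores):
    """Fuse multi-level feature maps with attention weights: one scalar score
    per level is normalized by softmax, then the (same-shape) maps are
    combined as a weighted sum."""
    weights = softmax(np.asarray(scores, dtype=float))
    stacked = np.stack(feature_maps)            # (levels, H, W, C)
    return np.tensordot(weights, stacked, axes=1)  # (H, W, C)
```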
1 code implementation • 9 Mar 2020 • Qicheng Lao, Mehrzad Mortazavi, Marzieh Tahaei, Francis Dutil, Thomas Fevens, Mohammad Havaei
In this paper, we propose a general framework in continual learning for generative models: Feature-oriented Continual Learning (FoCL).
no code implementations • 9 Mar 2020 • Qicheng Lao, Xiang Jiang, Mohammad Havaei, Yoshua Bengio
Learning in non-stationary environments is one of the biggest challenges in machine learning.
3 code implementations • 29 Nov 2019 • Sina Mohammadi, Mehrdad Noori, Ali Bahri, Sina Ghofrani Majelan, Mohammad Havaei
Benefiting from Fully Convolutional Neural Networks (FCNs), saliency detection methods have achieved promising results.
no code implementations • ICCV 2019 • Qicheng Lao, Mohammad Havaei, Ahmad Pesaranghader, Francis Dutil, Lisa Di Jorio, Thomas Fevens
…and the style, which is usually not well described in the text (e.g., location, quantity, size, etc.).
no code implementations • ICLR 2019 • Xiang Jiang, Mohammad Havaei, Farshid Varno, Gabriel Chartrand, Nicolas Chapados, Stan Matwin
Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning.
no code implementations • 28 Mar 2019 • Saeid Asgari Taghanaki, Mohammad Havaei, Tess Berthier, Francis Dutil, Lisa Di Jorio, Ghassan Hamarneh, Yoshua Bengio
The scarcity of richly annotated medical images is limiting supervised deep learning based solutions to medical image analysis tasks, such as localizing discriminatory radiomic disease signatures.
no code implementations • 3 Jun 2018 • Xiang Jiang, Mohammad Havaei, Gabriel Chartrand, Hassan Chouaib, Thomas Vincent, Andrew Jesson, Nicolas Chapados, Stan Matwin
Building on the Model-Agnostic Meta-Learning (MAML) framework, we introduce the Attentive Task-Agnostic Meta-Learning (ATAML) algorithm for text classification.
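The MAML scaffolding that ATAML extends can be sketched in a few lines; the linear model, toy tasks, and first-order approximation below are illustrative assumptions, and ATAML's specific contribution (keeping attention parameters task-agnostic during adaptation) is only noted in a comment:

```python
import numpy as np

def task_loss_grad(w, X, y):
    """Mean-squared-error gradient for a linear model."""
    return X.T @ (X @ w - y) / len(y)

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.05):
    """First-order MAML: adapt to each task's support set with one inner
    gradient step, then update the shared initialization using gradients
    evaluated after adaptation. (ATAML additionally splits parameters into
    task-agnostic attention and task-specific parts.)"""
    meta_grad = np.zeros_like(w)
    for X_tr, y_tr, X_val, y_val in tasks:
        w_adapted = w - inner_lr * task_loss_grad(w, X_tr, y_tr)  # inner loop
        meta_grad += task_loss_grad(w_adapted, X_val, y_val)      # outer signal
    return w - outer_lr * meta_grad / len(tasks)

# Toy smoke test: tasks share one underlying linear map (noiseless).
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])

def make_task():
    X_tr, X_val = rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
    return X_tr, X_tr @ w_true, X_val, X_val @ w_true

tasks = [make_task() for _ in range(4)]
w = np.zeros(2)
for _ in range(400):
    w = maml_step(w, tasks)
```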
no code implementations • 6 Oct 2017 • Chin-wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad Havaei, Laurent Charlin, Aaron Courville
In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior.
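For context, the two objects under study are the terms of the standard VAE evidence lower bound, where $p(z)$ is the prior and $q_\phi(z\mid x)$ the approximate posterior:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big] \;-\; \mathrm{KL}\big(q_\phi(z\mid x)\,\|\,p(z)\big)
```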
no code implementations • 18 Jul 2016 • Mohammad Havaei, Nicolas Guizard, Hugo Larochelle, Pierre-Marc Jodoin
In this chapter, we provide a survey of CNN methods applied to medical imaging with a focus on brain pathology segmentation.
1 code implementation • 18 Jul 2016 • Mohammad Havaei, Nicolas Guizard, Nicolas Chapados, Yoshua Bengio
We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities.
Ranked #104 on Semantic Segmentation on NYU Depth v2
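The robustness to missing modalities comes from fusing per-modality embeddings through their statistical moments, so any subset of inputs maps to a fixed-size representation; a minimal sketch of such an abstraction step (shapes and the two-moment choice follow the paper's description, the rest is simplified):

```python
import numpy as np

def hemis_abstraction(modality_feats):
    """HeMIS-style abstraction layer: fuse however many per-modality feature
    maps are available into a fixed-size representation via their mean and
    variance across modalities, so missing modalities at test time simply
    drop out of the moment computation."""
    stacked = np.stack(modality_feats)               # (n_available, H, W, C)
    mean = stacked.mean(axis=0)
    var = (stacked.var(axis=0) if len(modality_feats) > 1
           else np.zeros_like(mean))                 # variance undefined for one input
    return np.concatenate([mean, var], axis=-1)      # (H, W, 2C)
```

Because the output shape is independent of how many modalities were supplied, the downstream network never needs retraining for a different input subset.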
no code implementations • 5 Oct 2015 • Mohammad Havaei, Hugo Larochelle, Philippe Poulin, Pierre-Marc Jodoin
Purpose: In this paper, we investigate a framework that, at its core, treats interactive brain tumor segmentation as a machine learning problem.
15 code implementations • 13 May 2015 • Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, Hugo Larochelle
Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN.
Ranked #1 on Brain Tumor Segmentation on BRATS-2013 leaderboard
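The cascade idea above reduces to feeding the first network's probability map back in as extra input channels; a shape-level sketch in NumPy, with trivial placeholder functions standing in for the two CNNs:

```python
import numpy as np

def cascade_predict(stage1, stage2, image):
    """Cascade architecture: the class-probability map produced by a first
    model is concatenated to the input channels of a second model, which can
    refine its predictions using the first model's beliefs as extra context."""
    probs1 = stage1(image)                               # (H, W, n_classes)
    enriched = np.concatenate([image, probs1], axis=-1)  # (H, W, C + n_classes)
    return stage2(enriched)

# Placeholder "models": uniform per-pixel class probabilities, shapes only.
n_classes = 5
stage1 = lambda x: np.full(x.shape[:2] + (n_classes,), 1.0 / n_classes)
stage2 = lambda x: np.full(x.shape[:2] + (n_classes,), 1.0 / n_classes)
out = cascade_predict(stage1, stage2, np.zeros((8, 8, 4)))
```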