no code implementations • 26 Jun 2022 • Shurui Gui, Hao Yuan, Jie Wang, Qicheng Lao, Kang Li, Shuiwang Ji
We investigate the explainability of graph neural networks (GNNs) as a step towards elucidating their working mechanisms.
no code implementations • 15 Jun 2021 • Jiefeng Chen, Yang Guo, Xi Wu, Tianqi Li, Qicheng Lao, Yingyu Liang, Somesh Jha
Compared to traditional "test-time" defenses, these defense mechanisms "dynamically retrain" the model based on test-time input via transductive learning; theoretically, attacking these defenses boils down to bilevel optimization, which appears to raise the difficulty of adaptive attacks.
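The bilevel structure mentioned in the abstract can be written schematically as follows; the notation is illustrative rather than taken from the paper. The inner problem is the defense's transductive retraining on the perturbed test input, and the outer problem is the attacker's loss on the retrained model.

```latex
% Schematic bilevel attack objective (illustrative notation, not the paper's):
% inner level: the defense retrains on the perturbed test input x + \delta;
% outer level: the attacker maximizes the loss of the retrained model.
\max_{\|\delta\|_\infty \le \epsilon}
  \mathcal{L}\!\left(f_{\theta^{*}(\delta)}(x + \delta),\, y\right)
\quad \text{s.t.} \quad
\theta^{*}(\delta) \in \arg\min_{\theta}
  \mathcal{L}_{\mathrm{def}}\!\left(\theta;\, D_{\mathrm{train}},\, x + \delta\right)
```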
no code implementations • 1 Jan 2021 • Xi Wu, Yang Guo, Tianqi Li, Jiefeng Chen, Qicheng Lao, Yingyu Liang, Somesh Jha
On the positive side, we show that, if one is allowed to access the training data, then Domain Adversarial Neural Networks (DANN), an algorithm designed for unsupervised domain adaptation, can provide nontrivial robustness in the test-time maximin threat model against strong transfer attacks and adaptive fixed point attacks.
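For reference, here is a minimal sketch of the DANN ingredient this result relies on: a gradient reversal layer that trains the feature extractor to fool a domain discriminator. Layer sizes and the module layout are illustrative assumptions, not the configuration evaluated in the paper.

```python
# Minimal DANN sketch: gradient reversal pushes features toward domain invariance.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None


class DANN(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)   # label predictor
        self.domain_disc = nn.Linear(feat_dim, 2)          # source vs. target

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        class_logits = self.classifier(z)
        domain_logits = self.domain_disc(GradReverse.apply(z, lambd))
        return class_logits, domain_logits
```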
no code implementations • 15 Dec 2020 • Qicheng Lao, Xiang Jiang, Mohammad Havaei
We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer, as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA), where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner.
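A minimal sketch of the kind of objective the abstract describes, assuming a standard mutual-information estimate on unlabeled target predictions and a KL-based disparity term between the transferred hypotheses; the exact terms are illustrative, not the authors' loss.

```python
# Sketch of an HDMI-style objective (illustrative terms, not the paper's loss).
import torch
import torch.nn.functional as F


def mi_maximization(logits, eps=1e-8):
    """I(x; y_hat) ~= H(marginal prediction) - mean per-sample entropy."""
    p = F.softmax(logits, dim=1)
    ent_per_sample = -(p * torch.log(p + eps)).sum(dim=1).mean()
    p_marginal = p.mean(dim=0)
    ent_marginal = -(p_marginal * torch.log(p_marginal + eps)).sum()
    return ent_marginal - ent_per_sample


def hypothesis_disparity(logits_list, eps=1e-8):
    """Average KL divergence between the ensemble-mean prediction and each hypothesis."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    p_avg = torch.stack(probs).mean(dim=0)
    return torch.stack(
        [F.kl_div(torch.log(p + eps), p_avg, reduction="batchmean") for p in probs]
    ).mean()


def hdmi_style_loss(logits_list, beta=0.1):
    # Minimize: negative mutual information + disparity penalty across hypotheses.
    mi = torch.stack([mi_maximization(l) for l in logits_list]).mean()
    return -mi + beta * hypothesis_disparity(logits_list)
```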
no code implementations • 8 Dec 2020 • Mohammad Havaei, Ximeng Mao, Yiping Wang, Qicheng Lao
Current practices in using cGANs for medical image generation use only a single variable for image generation (i.e., content) and therefore provide little flexibility or control over the generated image.
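A minimal, hypothetical sketch of the alternative this critique points toward: a conditional generator that consumes a content code together with a separate style code, so style can be varied while content is held fixed. The architecture below is an assumption for illustration, not the paper's model.

```python
# Two-variable conditional generator: content code + independent style code.
import torch
import torch.nn as nn


class TwoVariableGenerator(nn.Module):
    def __init__(self, content_dim=32, style_dim=16, img_dim=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),
        )

    def forward(self, content, style):
        # Concatenating the two codes lets the user vary style while holding
        # the medical content fixed (or vice versa).
        return self.net(torch.cat([content, style], dim=1))


# Usage: sample several styles for the same content code.
g = TwoVariableGenerator()
content = torch.randn(1, 32).repeat(4, 1)   # fixed content
styles = torch.randn(4, 16)                 # varied styles
images = g(content, styles)                 # 4 images, same content, different style
```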
1 code implementation • ICML 2020 • Xiang Jiang, Qicheng Lao, Stan Matwin, Mohammad Havaei
We present an approach for unsupervised domain adaptation, with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift, from a class-conditioned domain alignment perspective (a minimal sketch follows the leaderboard entry below).
Ranked #1 on Unsupervised Domain Adaptation on Office-31
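A minimal sketch of class-conditioned alignment via sampling, in the spirit of the abstract above: each mini-batch is drawn from the same set of classes in both domains (using target pseudo-labels), so the alignment compares class-conditional rather than marginal distributions. The sampler is an illustrative assumption, not the released implementation.

```python
# Class-conditioned paired sampling for domain alignment (illustrative sketch).
import random
from collections import defaultdict


def class_conditioned_batch(source, target_pseudo, classes_per_batch=4, per_class=8):
    """source: list of (x, y); target_pseudo: list of (x, pseudo_label)."""
    by_class_src, by_class_tgt = defaultdict(list), defaultdict(list)
    for x, y in source:
        by_class_src[y].append(x)
    for x, y in target_pseudo:
        by_class_tgt[y].append(x)

    shared = [c for c in by_class_src if by_class_tgt[c]]
    picked = random.sample(shared, min(classes_per_batch, len(shared)))

    src_batch, tgt_batch = [], []
    for c in picked:
        # Drawing the same classes (and the same count) from both domains
        # counteracts within-domain imbalance and between-domain label shift.
        src_batch += random.choices(by_class_src[c], k=per_class)
        tgt_batch += random.choices(by_class_tgt[c], k=per_class)
    return src_batch, tgt_batch
```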
1 code implementation • 9 Mar 2020 • Qicheng Lao, Mehrzad Mortazavi, Marzieh Tahaei, Francis Dutil, Thomas Fevens, Mohammad Havaei
In this paper, we propose a general framework in continual learning for generative models: Feature-oriented Continual Learning (FoCL).
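A minimal, hypothetical sketch of a feature-oriented penalty for continual generative modeling: rather than replaying raw samples from earlier tasks, the generator is constrained to keep matching stored feature statistics under a fixed feature extractor. The mean-feature-matching statistic below is an illustrative assumption, not necessarily the FoCL formulation.

```python
# Feature-space retention penalty for a continually trained generator (sketch).
import torch


def feature_matching_penalty(generator, feature_extractor, old_feature_means,
                             z_dim=64, n=128):
    """Penalize drift of generated-feature means away from stored per-task means."""
    z = torch.randn(n, z_dim)
    feats = feature_extractor(generator(z)).mean(dim=0)
    return torch.stack([(feats - m).pow(2).sum() for m in old_feature_means]).mean()


# Usage sketch: total loss = new-task generative loss + lambda * penalty,
# where old_feature_means is a list of feature-mean tensors saved at the end
# of each previous task under the same (frozen) feature extractor.
```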
no code implementations • 9 Mar 2020 • Qicheng Lao, Xiang Jiang, Mohammad Havaei, Yoshua Bengio
Learning in non-stationary environments is one of the biggest challenges in machine learning.
no code implementations • ICCV 2019 • Qicheng Lao, Mohammad Havaei, Ahmad Pesaranghader, Francis Dutil, Lisa Di Jorio, Thomas Fevens
), and the style, which is usually not well described in the text (e.g., location, quantity, size, etc.).
no code implementations • 28 May 2019 • Qicheng Lao, Thomas Fevens
In practice, histopathological diagnosis of tumor malignancy often requires a human expert to scan through histopathological images at multiple magnification levels, after which a final diagnosis can be accurately determined.
no code implementations • 26 Jun 2018 • Qicheng Lao, Thomas Fevens, Boyu Wang
Unlike natural images, medical images often have intrinsic characteristics that can be leveraged for neural network learning.