Search Results for author: Changjian Shui

Found 24 papers, 7 papers with code

Generalizing across Temporal Domains with Koopman Operators

no code implementations · 12 Feb 2024 · Qiuhao Zeng, Wei Wang, Fan Zhou, Gezheng Xu, Ruizhi Pu, Changjian Shui, Christian Gagné, Shichun Yang, Boyu Wang, Charles X. Ling

By employing Koopman operators, we address the time-evolving distributions encountered in temporal domain generalization (TDG): following Koopman theory, measurement functions are sought that establish linear transition relations between evolving domains.

Domain Generalization Generalization Bounds
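The core Koopman idea in the abstract above, a linear operator transporting (lifted) features from one temporal domain to the next, can be sketched as a DMD-style least-squares fit. This is an illustrative sketch only, not the paper's method; all names here are hypothetical.

```python
import numpy as np

def fit_koopman_operator(phi_t, phi_next):
    """Fit a linear operator K with phi_next ≈ phi_t @ K.T (least squares).

    phi_t, phi_next: (n_samples, d) lifted features of domains t and t+1.
    """
    K, *_ = np.linalg.lstsq(phi_t, phi_next, rcond=None)
    return K.T  # (d, d) operator acting on column feature vectors

# Synthetic check: evolve features with a known linear map and recover it.
rng = np.random.default_rng(0)
K_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
phi_t = rng.normal(size=(200, 2))
phi_next = phi_t @ K_true.T
K_hat = fit_koopman_operator(phi_t, phi_next)
print(np.allclose(K_hat, K_true, atol=1e-6))  # → True
```

Once fitted, such an operator can be applied repeatedly to extrapolate feature statistics of future, unseen domains, which is the intuition behind using Koopman theory for temporal domain generalization.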

Hessian Aware Low-Rank Weight Perturbation for Continual Learning

1 code implementation · 26 Nov 2023 · Jiaqi Li, Rui Wang, Yuanhao Lai, Changjian Shui, Sabyasachi Sahoo, Charles X. Ling, Shichun Yang, Boyu Wang, Christian Gagné, Fan Zhou

We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against some recent state-of-the-art methods to demonstrate the effectiveness and scalability of our proposed method.

Continual Learning

Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis

no code implementations · 4 Jul 2023 · Changjian Shui, Justin Szeto, Raghav Mehta, Douglas L. Arnold, Tal Arbel

However, models that are well calibrated overall can still be poorly calibrated for a sub-population, potentially resulting in a clinician unwittingly making poor decisions for this group based on the recommendations of the model.

Attribute Fairness +2

Evaluating the Fairness of Deep Learning Uncertainty Estimates in Medical Image Analysis

no code implementations · 6 Mar 2023 · Raghav Mehta, Changjian Shui, Tal Arbel

Unfortunately, recent studies have indeed shown significant biases in DL models across demographic subgroups (e.g., race, sex, age) in the context of medical image analysis, indicating a lack of fairness in the models.

Fairness Lesion Classification +2

Clinically Plausible Pathology-Anatomy Disentanglement in Patient Brain MRI with Structured Variational Priors

no code implementations · 15 Nov 2022 · Anjun Hu, Jean-Pierre R. Falet, Brennan S. Nichyporuk, Changjian Shui, Douglas L. Arnold, Sotirios A. Tsaftaris, Tal Arbel

We propose a hierarchically structured variational inference model for accurately disentangling observable evidence of disease (e.g., brain lesions or atrophy) from subject-specific anatomy in brain MRIs.

Anatomy Disentanglement +1

On Learning Fairness and Accuracy on Multiple Subgroups

1 code implementation · 19 Oct 2022 · Changjian Shui, Gezheng Xu, Qi Chen, Jiaqi Li, Charles Ling, Tal Arbel, Boyu Wang, Christian Gagné

At the upper level, the fair predictor is updated to stay close to all subgroup-specific predictors.

Fairness
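The bilevel idea in the abstract above can be sketched as follows: the lower level fits one predictor per subgroup, and the upper level pulls a shared fair predictor toward all of them. This is a simplified illustration under assumed linear models, not the paper's algorithm; the proximal-averaging update here is a stand-in.

```python
import numpy as np

def subgroup_predictors(groups):
    """Lower level: closed-form ridge fit of one linear predictor per subgroup.

    groups: list of (X, y) pairs, one per subgroup.
    """
    ws = []
    for X, y in groups:
        w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
        ws.append(w)
    return ws

def fair_update(w_fair, ws, lr=0.5):
    """Upper level: move the fair predictor toward the subgroup predictors."""
    target = np.mean(ws, axis=0)
    return w_fair + lr * (target - w_fair)

rng = np.random.default_rng(1)
groups = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
ws = subgroup_predictors(groups)
w_fair = np.zeros(3)
for _ in range(20):
    w_fair = fair_update(w_fair, ws)
# After enough upper-level steps the fair predictor sits at the
# barycenter of the subgroup-specific predictors.
print(np.allclose(w_fair, np.mean(ws, axis=0), atol=1e-5))  # → True
```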

Information Gain Sampling for Active Learning in Medical Image Classification

no code implementations · 1 Aug 2022 · Raghav Mehta, Changjian Shui, Brennan Nichyporuk, Tal Arbel

This work presents an information-theoretic active learning framework that guides the optimal selection of images from the unlabeled pool to be labeled, based on maximizing the expected information gain (EIG) on an evaluation dataset.

Active Learning Image Classification +3
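Information-gain query selection, as described in the abstract above, can be sketched with a simple uncertainty score. Note the paper computes EIG on an evaluation dataset; the predictive-entropy proxy below is a simplified, hypothetical stand-in for illustration only.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of softmax outputs, shape (n_pool, n_classes)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_queries(probs, k):
    """Return indices of the k pool samples with the highest score."""
    scores = predictive_entropy(probs)
    return np.argsort(-scores)[:k]

probs = np.array([[0.98, 0.02],   # confident prediction -> low score
                  [0.50, 0.50],   # maximally uncertain -> highest score
                  [0.70, 0.30]])
print(select_queries(probs, 2))  # → [1 2]
```

In a full active-learning loop, the selected indices would be sent for annotation, the model retrained, and the pool re-scored each round.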

Evolving Domain Generalization

no code implementations · 31 May 2022 · William Wei Wang, Gezheng Xu, Ruizhi Pu, Jiaqi Li, Fan Zhou, Changjian Shui, Charles Ling, Christian Gagné, Boyu Wang

Domain generalization aims to learn a predictive model from multiple different but related source tasks that can generalize well to a target task without the need of accessing any target data.

Evolving Domain Generalization Meta-Learning

Fair Representation Learning through Implicit Path Alignment

no code implementations · 26 May 2022 · Changjian Shui, Qi Chen, Jiaqi Li, Boyu Wang, Christian Gagné

We consider a fair representation learning perspective, where optimal predictors, on top of the data representation, are ensured to be invariant with respect to different sub-groups.

Fairness Representation Learning

Gap Minimization for Knowledge Sharing and Transfer

no code implementations · 26 Jan 2022 · Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton

Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.

Representation Learning Transfer Learning

Directional Domain Generalization

no code implementations · 29 Sep 2021 · Wei Wang, Jiaqi Li, Ruizhi Pu, Gezheng Xu, Fan Zhou, Changjian Shui, Charles Ling, Boyu Wang

Domain generalization aims to learn a predictive model from multiple different but related source tasks that can generalize well to a target task without the need of accessing any target data.

Domain Generalization Meta-Learning +1

On the benefits of representation regularization in invariance based domain generalization

no code implementations · 30 May 2021 · Changjian Shui, Boyu Wang, Christian Gagné

Our regularization is orthogonal to and can be straightforwardly adopted in existing domain generalization algorithms for invariant representation learning.

Domain Generalization Representation Learning

Aggregating From Multiple Target-Shifted Sources

1 code implementation · 9 May 2021 · Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang

Multi-source domain adaptation aims at leveraging the knowledge from multiple tasks for predicting a related target domain.

Unsupervised Domain Adaptation

Unified Principles For Multi-Source Transfer Learning Under Label Shifts

no code implementations · 1 Jan 2021 · Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang

We study the label shift problem in multi-source transfer learning and derive new generic principles to control the target generalization risk.

Transfer Learning Unsupervised Domain Adaptation

Interventional Domain Adaptation

no code implementations · 7 Nov 2020 · Jun Wen, Changjian Shui, Kun Kuang, Junsong Yuan, Zenan Huang, Zhefeng Gong, Nenggan Zheng

To address this issue, we intervene in the learning of feature discriminability using unlabeled target data, guiding it to discard the domain-specific part and remain safely transferable.

counterfactual Unsupervised Domain Adaptation

Beyond $\mathcal{H}$-Divergence: Domain Adaptation Theory With Jensen-Shannon Divergence

no code implementations · 30 Jul 2020 · Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, Boyu Wang

We reveal the incoherence between the widely-adopted empirical domain adversarial training and its generally-assumed theoretical counterpart based on $\mathcal{H}$-divergence.

Domain Adaptation Transfer Learning
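A well-known connection behind the abstract above: for an optimal domain discriminator, the domain-adversarial objective corresponds (up to constants) to the Jensen-Shannon divergence between source and target feature distributions. Below is a minimal sketch computing JS divergence for two discrete histograms; it illustrates the quantity the theory is stated in, not the paper's analysis.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence JS(p, q) = (KL(p||m) + KL(q||m)) / 2, m = (p+q)/2."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0/b) contributes nothing
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two partially overlapping discrete distributions.
p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(round(js_divergence(p, q), 4))  # → 0.3466  (i.e. 0.5 * ln 2)
```

JS divergence is symmetric and bounded by ln 2 (in nats), unlike the $\mathcal{H}$-divergence, which is one reason the two analyses can diverge from each other.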

Domain Generalization via Optimal Transport with Metric Similarity Learning

no code implementations · 21 Jul 2020 · Fan Zhou, Zhuqing Jiang, Changjian Shui, Boyu Wang, Brahim Chaib-Draa

Previous domain generalization approaches mainly focused on learning invariant features and stacking the learned features from each source domain to generalize to a new target domain, while ignoring label information; this leads to indistinguishable features and an ambiguous classification boundary.

Domain Generalization Metric Learning

Discriminative Active Learning for Domain Adaptation

no code implementations · 24 May 2020 · Fan Zhou, Changjian Shui, Bincheng Huang, Boyu Wang, Brahim Chaib-Draa

To this end, we introduce a discriminative active learning approach for domain adaptation to reduce the efforts of data annotation.

Active Learning Domain Adaptation

Deep Active Learning: Unified and Principled Method for Query and Training

1 code implementation · 20 Nov 2019 · Changjian Shui, Fan Zhou, Christian Gagné, Boyu Wang

In this paper, we propose a unified and principled method for both the querying and training processes in deep batch active learning.

Active Learning

Toward Metrics for Differentiating Out-of-Distribution Sets

1 code implementation · 18 Oct 2019 · Mahdieh Abbasi, Changjian Shui, Arezoo Rajabi, Christian Gagné, Rakesh Bobba

We empirically verify that the most protective OOD sets -- selected according to our metrics -- lead to A-CNNs with significantly lower generalization errors than the A-CNNs trained on the least protective ones.

Out of Distribution (OOD) Detection

A Principled Approach for Learning Task Similarity in Multitask Learning

1 code implementation · 21 Mar 2019 · Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, Christian Gagné

Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks.

Accumulating Knowledge for Lifelong Online Learning

no code implementations · 26 Oct 2018 · Changjian Shui, Ihsen Hedhli, Christian Gagné

We provide a theoretical analysis of this algorithm, with a cumulative error upper bound for each task.

Transfer Learning

Diversity regularization in deep ensembles

no code implementations · 22 Feb 2018 · Changjian Shui, Azadeh Sadat Mozafari, Jonathan Marek, Ihsen Hedhli, Christian Gagné

Calibrating the confidence of supervised learning models is important for a variety of contexts where the certainty over predictions should be reliable.
