Search Results for author: Jichang Li

Found 6 papers, 4 with code

Inter-Domain Mixup for Semi-Supervised Domain Adaptation

no code implementations · 21 Jan 2024 · Jichang Li, Guanbin Li, Yizhou Yu

However, existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains, resulting in label mismatch in the label space during model testing.

Semi-supervised Domain Adaptation · Unsupervised Domain Adaptation
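The title suggests mixing samples across the source and target domains. As a rough illustration only (the paper's exact formulation is not given in this snippet), a standard mixup-style interpolation between a labeled source sample and a labeled target sample might look like:

```python
import numpy as np

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt, alpha=0.5, seed=0):
    """Interpolate one source and one target sample (features and
    soft one-hot labels) with a Beta-sampled mixing coefficient.
    A generic mixup sketch, not the paper's actual method."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt
    return x_mix, y_mix, lam
```

Because the mixed label is a convex combination of the two one-hot labels, label information from both domains contributes to every mixed training example.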

Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation

no code implementations · 21 Jan 2024 · Jichang Li, Guanbin Li, Yizhou Yu

Once the graph has been refined, Adaptive Betweenness Clustering is introduced to facilitate semantic transfer by using across-domain betweenness clustering and within-domain betweenness clustering, thereby propagating semantic label information from labeled samples across domains to unlabeled target data.

Clustering · Semi-supervised Domain Adaptation · +1
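The abstract describes propagating semantic label information from labeled samples (across both domains) to unlabeled target data over a refined graph. A much cruder stand-in for that idea, purely for intuition, is nearest-labeled-neighbor label propagation in feature space:

```python
import numpy as np

def propagate_labels(unlabeled_feats, labeled_feats, labeled_ys):
    """Assign each unlabeled target feature the label of its nearest
    labeled sample (source or target). A simplified illustration of
    semantic transfer; the paper's Adaptive Betweenness Clustering
    instead operates on a refined cross-domain similarity graph."""
    unlabeled_feats = np.asarray(unlabeled_feats, dtype=float)
    labeled_feats = np.asarray(labeled_feats, dtype=float)
    # Pairwise squared Euclidean distances: (n_unlabeled, n_labeled).
    d = ((unlabeled_feats[:, None, :] - labeled_feats[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    return np.asarray(labeled_ys)[nearest]
```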

FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy Labels

1 code implementation · 19 Dec 2023 · Jichang Li, Guanbin Li, Hui Cheng, Zicheng Liao, Yizhou Yu

However, these prior methods do not learn noise filters by exploiting knowledge across all clients, leading to sub-optimal noise-filtering performance and thus harming training stability.

Federated Learning · Learning with noisy labels · +1
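The key idea the abstract points at is a noise filter built from knowledge across all clients rather than per-client. As a heavily simplified sketch (the paper models per-sample losses with a mixture model; here a count-weighted global loss threshold stands in for it):

```python
import numpy as np

def client_loss_stats(losses):
    """Each client summarises its per-sample losses as (mean, count)."""
    return float(np.mean(losses)), len(losses)

def global_filter_threshold(stats):
    """Server aggregates client statistics into one shared threshold
    (a count-weighted mean of client loss means). A stand-in for a
    collaboratively learned federated noise filter."""
    total = sum(n for _, n in stats)
    return sum(m * n for m, n in stats) / total

def filter_clean(losses, threshold):
    """Samples with loss below the global threshold are kept as clean."""
    return np.asarray(losses) < threshold
```

The point of the aggregation step is that every client filters with the same, globally informed criterion instead of one fit only to its own (possibly small or skewed) local data.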

Divide and Adapt: Active Domain Adaptation via Customized Learning

1 code implementation · CVPR 2023 · Duojun Huang, Jichang Li, Weikai Chen, Junshi Huang, Zhenhua Chai, Guanbin Li

To accommodate active learning and domain adaptation, two naturally different tasks, in a collaborative framework, we advocate that a customized learning strategy for the target data is the key to the success of ADA solutions.

Active Learning · Informativeness · +3
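"Customized learning for the target data" implies dividing target samples into subsets that are handled differently. A minimal two-way illustration (the paper uses a finer division; the confidence threshold here is purely illustrative): confident samples can be pseudo-labeled, while uncertain ones become candidates for annotation.

```python
import numpy as np

def divide_target(probs, conf_thresh=0.8):
    """Split target samples by predicted confidence.  Returns indices of
    confident samples (pseudo-label candidates) and uncertain samples
    (active-learning query candidates).  A simplified sketch of
    dividing the target set for customized treatment."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)
    confident = conf >= conf_thresh
    return np.where(confident)[0], np.where(~confident)[0]
```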

Neighborhood Collective Estimation for Noisy Label Identification and Correction

1 code implementation · 5 Aug 2022 · Jichang Li, Guanbin Li, Feng Liu, Yizhou Yu

Specifically, our method is divided into two steps: 1) Neighborhood Collective Noise Verification, which separates all training samples into clean and noisy subsets, and 2) Neighborhood Collective Label Correction, which relabels the noisy samples; auxiliary techniques are then used to assist further model optimization.

Learning with noisy labels · Model Optimization
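The two steps above can be caricatured with a k-nearest-neighbour vote: flag a sample as noisy when its label disagrees with its neighbourhood majority (verification), then relabel it with that majority (correction). This is a simplification; the paper's collective estimation contrasts candidate models over the neighbourhood rather than voting on raw labels.

```python
import numpy as np

def knn_indices(feats, k):
    """Indices of each sample's k nearest neighbours (self excluded)."""
    feats = np.asarray(feats, dtype=float)
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def verify_and_correct(feats, labels, k=3):
    """Step 1: flag samples whose label disagrees with the neighbourhood
    majority as noisy.  Step 2: relabel flagged samples with that
    majority.  Votes use the original labels throughout."""
    orig = np.asarray(labels)
    corrected = orig.copy()
    noisy = np.zeros(len(orig), dtype=bool)
    for i, nb in enumerate(knn_indices(feats, k)):
        majority = np.bincount(orig[nb]).argmax()
        if orig[i] != majority:
            noisy[i] = True
            corrected[i] = majority
    return noisy, corrected
```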

Cross-Domain Adaptive Clustering for Semi-Supervised Domain Adaptation

2 code implementations CVPR 2021 Jichang Li, Guanbin Li, Yemin Shi, Yizhou Yu

Pseudo labeling expands the number of "labeled" samples in each class in the target domain, and thus produces a more robust and powerful cluster core for each class to facilitate adversarial learning.

Clustering · Domain Adaptation · +1
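The pseudo-labeling step described above is typically implemented by keeping only high-confidence model predictions as extra "labeled" target samples. A standard thresholded sketch (the 0.95 threshold is illustrative, not taken from the paper):

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Keep target samples whose top predicted probability clears the
    threshold; return their indices and predicted class labels.  These
    join the labeled set, enlarging each class's cluster core."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.where(keep)[0], probs.argmax(axis=1)[keep]
```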
