Search Results for author: Dapeng Hu

Found 13 papers, 7 papers with code

PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation

no code implementations • 14 Jul 2023 • Dapeng Hu, Jian Liang, Xinchao Wang, Chuan-Sheng Foo

The conventional in-domain calibration method, temperature scaling (TempScal), encounters challenges due to domain distribution shifts and the absence of labeled target domain data.
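
For context, below is a minimal sketch of the TempScal baseline mentioned above: a single scalar temperature is fitted by minimizing NLL on held-out logits. The function name and training loop are illustrative; the key point is the dependence on *labeled* held-out data, which is exactly what is missing in the source-free target domain.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Fit a single scalar temperature T by minimizing NLL on a
    labeled held-out set. Optimizing log T keeps T positive."""
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(logits / log_t.exp(), labels).backward()
        opt.step()
    return log_t.exp().item()

# Usage: T = fit_temperature(val_logits, val_labels)
#        calibrated = F.softmax(test_logits / T, dim=1)
```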

Unsupervised Domain Adaptation

UMAD: Universal Model Adaptation under Domain and Category Shift

no code implementations • 16 Dec 2021 • Jian Liang, Dapeng Hu, Jiashi Feng, Ran He

To achieve bilateral adaptation in the target domain, we further maximize localized mutual information to align known samples with the source classifier, and employ an entropic loss to push unknown samples far away from the source classification boundary.
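
As a rough illustration of these two objectives (not the authors' released code; the known/unknown split and the localization details are simplified away), the mutual-information and entropic terms can be sketched as:

```python
import torch

def entropy(p, eps=1e-6):
    # Shannon entropy of each row of a probability matrix [N, C]
    return -(p * (p + eps).log()).sum(dim=1)

def bilateral_losses(probs_known, probs_unknown):
    # Mutual information I(x; y) = H(E[p]) - E[H(p)]: maximizing it makes
    # predictions on presumed-known samples confident yet class-diverse.
    mi = entropy(probs_known.mean(dim=0, keepdim=True)) - entropy(probs_known).mean()
    loss_known = -mi.squeeze()
    # Entropic loss on presumed-unknown samples: raising their prediction
    # entropy keeps them from being claimed by any source class.
    loss_unknown = -entropy(probs_unknown).mean()
    return loss_known, loss_unknown
```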

Universal Domain Adaptation • Unsupervised Domain Adaptation

How Well Does Self-Supervised Pre-Training Perform with Streaming ImageNet?

no code implementations • NeurIPS Workshop ImageNet_PPF 2021 • Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained.

Self-Supervised Learning

No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data

1 code implementation • NeurIPS 2021 • Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, Jiashi Feng

Motivated by the above findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
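
A minimal sketch of the CCVR idea, under the simplifying assumption of a diagonal Gaussian per class in place of the paper's exact GMM estimation; the `features_by_class` input and the retraining loop are illustrative:

```python
import torch
import torch.nn.functional as F

def ccvr_calibrate(features_by_class, classifier, n_virtual=100, epochs=10, lr=0.01):
    # Fit a diagonal Gaussian to each class's features, then sample
    # virtual representations from it.
    samples, labels = [], []
    for c, feats in features_by_class.items():   # {class_id: [n_c, d] tensor}
        mu, std = feats.mean(dim=0), feats.std(dim=0) + 1e-4
        samples.append(mu + std * torch.randn(n_virtual, feats.size(1)))
        labels.append(torch.full((n_virtual,), c, dtype=torch.long))
    x, y = torch.cat(samples), torch.cat(labels)
    # Fine-tune only the classifier head on the virtual representations.
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(classifier(x), y).backward()
        opt.step()
    return classifier
```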

Classifier Calibration • Federated Learning

How Well Does Self-Supervised Pre-Training Perform with Streaming Data?

no code implementations • ICLR 2022 • Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained.
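
To make the distinction concrete, here is a schematic contrast between joint and streaming (sequential) pre-training; `ssl_update` is an assumed placeholder standing in for any self-supervised objective, and the assumption that past chunks are not revisited is part of the sketch:

```python
def joint_pretrain(model, ssl_update, all_batches, epochs):
    # Joint scenario: the full unlabeled dataset is available up front.
    for _ in range(epochs):
        for batch in all_batches:
            ssl_update(model, batch)   # any self-supervised objective
    return model

def streaming_pretrain(model, ssl_update, chunks, epochs_per_chunk):
    # Streaming scenario: data arrives in sequential chunks, and training
    # always continues from the current weights without revisiting old chunks.
    for chunk in chunks:
        for _ in range(epochs_per_chunk):
            for batch in chunk:
                ssl_update(model, batch)
    return model
```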

Representation Learning • Self-Supervised Learning

DINE: Domain Adaptation from Single and Multiple Black-box Predictors

3 code implementations • CVPR 2022 • Jian Liang, Dapeng Hu, Jiashi Feng, Ran He

To ease the burden of labeling, unsupervised domain adaptation (UDA) aims to transfer knowledge from previous, related labeled datasets (sources) to a new unlabeled dataset (target).

Transductive Learning • Unsupervised Domain Adaptation

Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning

1 code implementation • NeurIPS 2021 • Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng

In this paper, we investigate whether applying contrastive learning to fine-tuning would bring further benefits, and analytically find that optimizing the contrastive loss benefits both discriminative representation learning and model optimization during fine-tuning.
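
A sketch of a label-aware contrastive term of the kind described (a generic supervised contrastive loss, not necessarily the paper's exact formulation), which would be added to the cross-entropy objective during fine-tuning:

```python
import torch
import torch.nn.functional as F

def sup_contrastive_loss(features, labels, temperature=0.1):
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    # Average log-probability of same-class (positive) pairs per anchor.
    pos_count = pos.sum(dim=1)
    valid = pos_count > 0                            # anchors with positives
    loss = -(log_prob * pos).sum(dim=1)[valid] / pos_count[valid]
    return loss.mean()

# Fine-tuning objective (sketch): cross_entropy + lam * sup_contrastive_loss
```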

Contrastive Learning • Image Classification +4

Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer

2 code implementations • 14 Dec 2020 • Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, Jiashi Feng

Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of predictions (labeling information) and then employs semi-supervised learning to improve the accuracy of less-confident predictions in the target domain.
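
A minimal sketch of the confidence-based split described above (the per-class balancing and the specific semi-supervised learner are simplified away; `split_by_confidence` is an illustrative name):

```python
import torch

def split_by_confidence(probs, quantile=0.5):
    # Rank target samples by max softmax confidence and split them.
    conf, pseudo = probs.max(dim=1)
    confident = conf >= conf.quantile(quantile)
    return confident, ~confident, pseudo

# Usage: treat (x[confident], pseudo[confident]) as the labeled split and
# x[~confident] as the unlabeled split inside any SSL algorithm.
```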

Classification • General Classification +3

Domain Adaptation with Auxiliary Target Domain-Oriented Classifier

2 code implementations • CVPR 2021 • Jian Liang, Dapeng Hu, Jiashi Feng

ATDOC alleviates classifier bias by introducing an auxiliary classifier for the target data only, improving the quality of pseudo labels.
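
One way to realize such an auxiliary, target-only classifier is a non-parametric memory bank, sketched below in the spirit of ATDOC's neighborhood-aggregation variant (simplified; function names and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def memory_pseudo_labels(feats, memory_feats, memory_probs, k=5):
    # Cosine similarity between the batch and all stored target features.
    sim = F.normalize(feats, dim=1) @ F.normalize(memory_feats, dim=1).t()
    _, idx = sim.topk(k, dim=1)              # k nearest memory slots
    agg = memory_probs[idx].mean(dim=1)      # aggregate neighbor predictions
    return agg.argmax(dim=1), agg            # hard pseudo labels + soft scores
```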

Domain Adaptation • Transfer Learning

Semantic Domain Adversarial Networks for Unsupervised Domain Adaptation

no code implementations • 30 Mar 2020 • Dapeng Hu, Jian Liang, Qibin Hou, Hanshu Yan, Yunpeng Chen, Shuicheng Yan, Jiashi Feng

To successfully align the multi-modal data structures across domains, recent works exploit discriminative information in the adversarial training process, e.g., using multiple class-wise discriminators or introducing conditional information into the input or output of the domain discriminator.
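
For instance, the "conditional information" route mentioned above can be sketched as a CDAN-style discriminator input built from the outer product of features and predicted class probabilities (an illustrative module, not the paper's own SDAN architecture):

```python
import torch
import torch.nn as nn

class ConditionalDomainDiscriminator(nn.Module):
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),            # domain logit: source vs. target
        )

    def forward(self, feats, probs):
        # Outer product of class probabilities [B, C] and features [B, D]
        # makes the domain alignment class-conditional.
        joint = torch.bmm(probs.unsqueeze(2), feats.unsqueeze(1))  # [B, C, D]
        return self.net(joint.flatten(1))
```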

Object Recognition • Semantic Segmentation +1

Prototype-Assisted Adversarial Learning for Unsupervised Domain Adaptation

no code implementations • 25 Sep 2019 • Dapeng Hu, Jian Liang*, Qibin Hou, Hanshu Yan, Jiashi Feng

Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, and thus fail to sufficiently alleviate the latent mismatch problem.
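
A rough sketch of a more noise-robust conditioning signal along these lines: prediction-weighted class prototypes, which average features over many samples instead of trusting each sample's pseudo label individually (illustrative, not the paper's exact formulation):

```python
import torch

def class_prototypes(features, probs, eps=1e-6):
    # Prototype c = probability-weighted mean of all features, so no single
    # noisy pseudo label dominates the conditioning signal.
    weights = probs / (probs.sum(dim=0, keepdim=True) + eps)  # per-class norm
    return weights.t() @ features          # [num_classes, feat_dim]
```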

Object Recognition • Semantic Segmentation +1
