Search Results for author: Jun Wen

Found 14 papers, 3 papers with code

Discriminative Radial Domain Adaptation

1 code implementation · 1 Jan 2023 · Zenan Huang, Jun Wen, Siheng Chen, Linchao Zhu, Nenggan Zheng

Domain adaptation methods typically reduce domain shift by learning domain-invariant features.

Domain Generalization · Unsupervised Domain Adaptation
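The snippet above names the common recipe of reducing domain shift by making features domain-invariant. As a generic, hedged illustration of the broad statistics-alignment family (not this paper's radial method), here is a toy 1-D version that shifts and rescales source features so their mean and standard deviation match the target domain's; `align_stats` is a hypothetical helper:

```python
import math

def mean_std(xs):
    # Population mean and standard deviation of a 1-D feature list.
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(v)

def align_stats(source_feats, target_feats):
    """Shift/scale source features so their mean and std match the target's."""
    ms, ss = mean_std(source_feats)
    mt, st = mean_std(target_feats)
    return [(x - ms) / (ss or 1.0) * st + mt for x in source_feats]

source = [0.0, 1.0, 2.0, 3.0]
target = [10.0, 12.0, 14.0, 16.0]
aligned = align_stats(source, target)
```

After alignment, the first two moments of the source features match the target's exactly; real methods align richer statistics or learn the invariance adversarially.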

Contrast-reconstruction Representation Learning for Self-supervised Skeleton-based Action Recognition

no code implementations · 22 Nov 2021 · Peng Wang, Jun Wen, Chenyang Si, Yuntao Qian, Liang Wang

Finally, in the Information Fuser, we explore varied strategies to combine the Sequence Reconstructor and Contrastive Motion Learner, and propose to capture postures and motions simultaneously via a knowledge-distillation based fusion strategy that transfers the motion learning from the Contrastive Motion Learner to the Sequence Reconstructor.

Action Recognition · Contrastive Learning · +4
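The knowledge-distillation fusion described in the snippet boils down to a student network matching a teacher's softened output distribution. A minimal generic sketch of that idea (temperature-scaled softmax plus KL divergence; the function names and temperature value are illustrative, not taken from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing "dark knowledge".
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The loss is zero when the student reproduces the teacher's logits and grows as their predictions diverge; in the paper's setting the "teacher" role is played by the Contrastive Motion Learner.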

Semi-Supervised Hypothesis Transfer for Source-Free Domain Adaptation

no code implementations · 14 Jul 2021 · Ning Ma, Jiajun Bu, Lixian Lu, Jun Wen, Zhen Zhang, Sheng Zhou, Xifeng Yan

Domain Adaptation has been widely used to deal with distribution shift in vision, language, multimedia, etc.

Source-Free Domain Adaptation

Interventional Domain Adaptation

no code implementations · 7 Nov 2020 · Jun Wen, Changjian Shui, Kun Kuang, Junsong Yuan, Zenan Huang, Zhefeng Gong, Nenggan Zheng

To address this issue, we intervene in the learning of feature discriminability using unlabeled target data, guiding it to discard the domain-specific part and remain safely transferable.

Counterfactual · Unsupervised Domain Adaptation

Beyond $\mathcal{H}$-Divergence: Domain Adaptation Theory With Jensen-Shannon Divergence

no code implementations · 30 Jul 2020 · Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, Boyu Wang

We reveal the incoherence between the widely-adopted empirical domain adversarial training and its generally-assumed theoretical counterpart based on $\mathcal{H}$-divergence.

Domain Adaptation · Transfer Learning
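For reference, the Jensen-Shannon divergence in the title is defined for discrete distributions $P$ and $Q$ as $\mathrm{JS}(P, Q) = \tfrac{1}{2}\mathrm{KL}(P \,\|\, M) + \tfrac{1}{2}\mathrm{KL}(Q \,\|\, M)$ with $M = \tfrac{1}{2}(P + Q)$. A small sketch of that definition (in nats, i.e. natural log), independent of the paper's theory:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence; terms with p_i = 0 contribute nothing.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (in nats)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike $\mathcal{H}$-divergence, JS divergence is symmetric, and it is bounded above by $\ln 2$, attained for distributions with disjoint support.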

Linear Context Transform Block

no code implementations · 6 Sep 2019 · Dongsheng Ruan, Jun Wen, Nenggan Zheng, Min Zheng

In this work, we first revisit the SE block, and then present a detailed empirical study of the relationship between global context and attention distribution, based on which we propose a simple yet effective module, called Linear Context Transform (LCT) block.

Image Classification · Object Detection · +1
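The SE-style attention the snippet revisits can be sketched as: squeeze each channel to its global average, transform that context, and gate the channel with a sigmoid. Below is a hedged toy version that uses a per-channel linear transform in place of SE's fully connected layers, loosely in the spirit of the LCT block; the weights `w`, `b` and the missing normalization step are illustrative simplifications, not the paper's exact design:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def linear_context_gate(feature_maps, w, b):
    """feature_maps: list of channels, each a flat list of activations.
    Squeeze each channel to its global average, apply a channel-wise
    linear transform, then rescale the channel by a sigmoid gate."""
    out = []
    for ch, wc, bc in zip(feature_maps, w, b):
        context = sum(ch) / len(ch)          # squeeze: global average pool
        gate = sigmoid(wc * context + bc)    # per-channel linear transform + gate
        out.append([a * gate for a in ch])   # excite: rescale the channel
    return out

x = [[1.0, 3.0], [2.0, 2.0]]   # 2 channels, 2 activations each
y = linear_context_gate(x, w=[1.0, 1.0], b=[0.0, 0.0])
```

The design choice the paper studies is exactly this coupling between the global context (`context`) and the resulting attention value (`gate`).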

C^3 Framework: An Open-source PyTorch Code for Crowd Counting

3 code implementations · 5 Jul 2019 · Junyu Gao, Wei Lin, Bin Zhao, Dong Wang, Chenyu Gao, Jun Wen

This technical report attempts to provide efficient and solid kits for the field of crowd counting, denoted as the Crowd Counting Code Framework (C$^3$F).

Crowd Counting

Expected Sarsa($\lambda$) with Control Variate for Variance Reduction

no code implementations · 25 Jun 2019 · Long Yang, Yu Zhang, Jun Wen, Qian Zheng, Pengfei Li, Gang Pan

In this paper, for reducing the variance, we introduce control variate technique to $\mathtt{Expected}$ $\mathtt{Sarsa}$($\lambda$) and propose a tabular $\mathtt{ES}$($\lambda$)-$\mathtt{CV}$ algorithm.

Off-policy evaluation
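The control-variate technique mentioned in the snippet reduces the variance of a Monte Carlo estimate by subtracting a correlated quantity whose expectation is known. A generic illustration on plain Monte Carlo integration (not the paper's ES($\lambda$)-CV algorithm): estimating $E[X^2] = 1/3$ for $X \sim \mathrm{Uniform}(0,1)$, using $X$ itself (with known mean $1/2$) as the control variate.

```python
import random

random.seed(0)

def estimate(n=10000):
    """Samples of X^2 with and without the control variate X - E[X]."""
    plain, cv = [], []
    for _ in range(n):
        x = random.random()
        plain.append(x * x)
        # Subtract c * (X - E[X]); for this problem c = 1 is the optimal
        # coefficient Cov(X^2, X) / Var(X), and the expectation is unchanged.
        cv.append(x * x - 1.0 * (x - 0.5))
    return plain, cv

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

plain, cv = estimate()
```

Both sample sets average to $1/3$, but the control-variate samples have far lower variance ($1/180$ vs. $4/45$ analytically), which is the same effect the paper seeks in the Expected Sarsa($\lambda$) update.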

Bayesian Uncertainty Matching for Unsupervised Domain Adaptation

no code implementations · 24 Jun 2019 · Jun Wen, Nenggan Zheng, Junsong Yuan, Zhefeng Gong, Changyou Chen

By imposing distribution matching on both features and labels (via uncertainty), label distribution mismatching in source and target data is effectively alleviated, encouraging the classifier to produce consistent predictions across domains.

Unsupervised Domain Adaptation
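Uncertainty in the sense used above is commonly quantified as the entropy of the classifier's predictive distribution: confident, near-one-hot predictions have low entropy, ambiguous ones high. A small generic sketch of that measure (not the paper's Bayesian matching scheme):

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.95, 0.03, 0.02]   # near one-hot: low predictive uncertainty
ambiguous = [0.34, 0.33, 0.33]   # near uniform: high predictive uncertainty
```

Matching such uncertainty-weighted distributions across source and target is one way to discourage overconfident, inconsistent predictions on the target domain.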

Exploiting Local Feature Patterns for Unsupervised Domain Adaptation

no code implementations · 12 Nov 2018 · Jun Wen, Risheng Liu, Nenggan Zheng, Qian Zheng, Zhefeng Gong, Junsong Yuan

In this paper, we present a method for learning domain-invariant local feature patterns and jointly aligning holistic and local feature statistics.

Unsupervised Domain Adaptation
