Search Results for author: Xingxu Yao

Found 8 papers, 1 paper with code

Ordinal Label Distribution Learning

no code implementations · ICCV 2023 · Changsong Wen, Xin Zhang, Xingxu Yao, Jufeng Yang

Therefore, we propose a new paradigm, termed ordinal label distribution learning (OLDL).

Age Estimation
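
The excerpt above only names the OLDL paradigm. As a hedged illustration of what an ordinal label-distribution target can look like (a minimal sketch, not the paper's method; the bin count, sigma, and cumulative EMD-style loss are assumptions chosen for age estimation), consider:

```python
import torch
import torch.nn.functional as F

def gaussian_label_distribution(age, num_bins=100, sigma=2.0):
    """Turn a scalar age label into a distribution over ordinal age bins."""
    bins = torch.arange(num_bins, dtype=torch.float32)
    dist = torch.exp(-0.5 * ((bins - age) / sigma) ** 2)
    return dist / dist.sum()

def ordinal_distribution_loss(pred_logits, target_dist):
    """Compare cumulative distributions (EMD-style), so errors between
    distant age bins cost more than errors between neighbouring bins."""
    pred = F.softmax(pred_logits, dim=-1)
    cdf_pred = torch.cumsum(pred, dim=-1)
    cdf_target = torch.cumsum(target_dist, dim=-1)
    return ((cdf_pred - cdf_target) ** 2).mean()

# Example: a 30-year-old sample scored by untrained 100-way logits.
target = gaussian_label_distribution(age=30.0)
logits = torch.randn(1, 100)
loss = ordinal_distribution_loss(logits, target.unsqueeze(0))
```

Unlike a plain KL divergence between distributions, the cumulative comparison respects the ordering of the labels, which is the property the ordinal setting cares about.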

S4OD: Semi-Supervised Learning for Single-Stage Object Detection

no code implementations · 9 Apr 2022 · Yueming Zhang, Xingxu Yao, Chao Liu, Feng Chen, Xiaolin Song, Tengfei Xing, Runbo Hu, Hua Chai, Pengfei Xu, Guoshan Zhang

In this paper, we design a dynamic self-adaptive threshold (DSAT) strategy in the classification branch, which automatically selects pseudo labels to achieve an optimal trade-off between quality and quantity.

Object Detection +3
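
The excerpt above only names the DSAT strategy. As a rough sketch of the general idea of adapting a pseudo-label threshold to the current score distribution rather than fixing it (an assumption-laden illustration, not the paper's DSAT; the quantile and floor parameters are placeholders):

```python
import numpy as np

def select_pseudo_labels(scores, base_quantile=0.8, floor=0.5):
    """Keep unlabeled detections whose classification score exceeds a
    threshold derived from the score distribution itself, so the
    quality/quantity trade-off shifts as the detector improves."""
    scores = np.asarray(scores, dtype=np.float64)
    threshold = max(np.quantile(scores, base_quantile), floor)
    keep = scores >= threshold
    return keep, threshold

# Example: classification scores of detections on one unlabeled image.
scores = [0.97, 0.91, 0.62, 0.40, 0.15]
mask, thr = select_pseudo_labels(scores)
print(thr, mask)  # threshold moves as the score distribution changes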

2nd Place Solution for VisDA 2021 Challenge -- Universally Domain Adaptive Image Recognition

no code implementations · 27 Oct 2021 · Haojin Liao, Xiaolin Song, Sicheng Zhao, Shanghang Zhang, Xiangyu Yue, Xingxu Yao, Yueming Zhang, Tengfei Xing, Pengfei Xu, Qiang Wang

The Visual Domain Adaptation (VisDA) 2021 Challenge calls for unsupervised domain adaptation (UDA) methods that can deal with both input distribution shift and label set variance between the source and target domains.

Universal Domain Adaptation, Unsupervised Domain Adaptation

Multi-Source Domain Adaptation for Object Detection

no code implementations · ICCV 2021 · Xingxu Yao, Sicheng Zhao, Pengfei Xu, Jufeng Yang

To reduce annotation labor associated with object detection, an increasing number of studies focus on transferring the learned knowledge from a labeled source domain to another unlabeled target domain.

Domain Adaptation, Object Detection +3

Emotion-Based End-to-End Matching Between Image and Music in Valence-Arousal Space

1 code implementation · 22 Aug 2020 · Sicheng Zhao, Yaxian Li, Xingxu Yao, Wei-Zhi Nie, Pengfei Xu, Jufeng Yang, Kurt Keutzer

In this paper, we study end-to-end matching between image and music based on emotions in the continuous valence-arousal (VA) space.

Metric Learning
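
The excerpt above describes cross-modal matching in valence-arousal space and is tagged with metric learning. A minimal two-branch sketch of that general setup (assumed architecture and feature sizes chosen for illustration, not the paper's model) could look like:

```python
import torch
import torch.nn as nn

class TwoBranchMatcher(nn.Module):
    """Illustrative two-branch model: projects image and music features
    into a shared embedding space for emotion-based retrieval.
    Dimensions and layer sizes here are placeholders."""
    def __init__(self, img_dim=2048, music_dim=128, embed_dim=64):
        super().__init__()
        self.img_proj = nn.Sequential(nn.Linear(img_dim, embed_dim), nn.ReLU(),
                                      nn.Linear(embed_dim, embed_dim))
        self.music_proj = nn.Sequential(nn.Linear(music_dim, embed_dim), nn.ReLU(),
                                        nn.Linear(embed_dim, embed_dim))

    def forward(self, img_feat, music_feat):
        return self.img_proj(img_feat), self.music_proj(music_feat)

model = TwoBranchMatcher()
triplet = nn.TripletMarginLoss(margin=0.2)

# Anchor images, emotionally matching music clips, and mismatched clips.
img = torch.randn(8, 2048)
music_pos = torch.randn(8, 128)
music_neg = torch.randn(8, 128)

img_emb, pos_emb = model(img, music_pos)
_, neg_emb = model(img, music_neg)
loss = triplet(img_emb, pos_emb, neg_emb)  # pull matching pairs together
```

The triplet objective pulls emotionally matching image-music pairs together in the shared space and pushes mismatched pairs apart, which is the basic metric-learning recipe for this kind of retrieval.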
