Search Results for author: Xiaopeng Yan

Found 12 papers, 7 papers with code

An Exploration of Task-decoupling on Two-stage Neural Post Filter for Real-time Personalized Acoustic Echo Cancellation

no code implementations · 7 Oct 2023 · Zihan Zhang, Jiayao Sun, Xianjun Xia, Ziqian Wang, Xiaopeng Yan, Yijian Xiao, Lei Xie

Utilization of speaker representation has extended the frontier of AEC, thus attracting many researchers' interest in personalized acoustic echo cancellation (PAEC).

Acoustic Echo Cancellation · Speech Enhancement

The NPU-Elevoc Personalized Speech Enhancement System for ICASSP2023 DNS Challenge

1 code implementation · 13 Mar 2023 · Xiaopeng Yan, Yindi Yang, Zhihao Guo, Liangliang Peng, Lei Xie

This paper describes our NPU-Elevoc personalized speech enhancement system (NAPSE) for the 5th Deep Noise Suppression Challenge at ICASSP 2023.

Speech Enhancement

BagFormer: Better Cross-Modal Retrieval via bag-wise interaction

no code implementations · 29 Dec 2022 · Haowen Hou, Xiaopeng Yan, Yigeng Zhang, Fengzong Lian, Zhanhui Kang

In the field of cross-modal retrieval, single-encoder models tend to outperform dual-encoder models, but they suffer from high latency and low throughput.

Cross-Modal Retrieval · Retrieval

Semantically Coherent Out-of-Distribution Detection

2 code implementations · ICCV 2021 · Jingkang Yang, Haoqi Wang, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang, Ziwei Liu

The proposed UDG can not only enrich the semantic knowledge of the model by exploiting unlabeled data in an unsupervised manner, but also distinguish ID/OOD samples to enhance ID classification and OOD detection tasks simultaneously.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection

Progressive Representative Labeling for Deep Semi-Supervised Learning

no code implementations · 13 Aug 2021 · Xiaopeng Yan, Riquan Chen, Litong Feng, Jingkang Yang, Huabin Zheng, Wayne Zhang

In this paper, we propose to label only the most representative samples to expand the labeled set.

Webly Supervised Image Classification with Metadata: Automatic Noisy Label Correction via Visual-Semantic Graph

1 code implementation · 12 Oct 2020 · Jingkang Yang, Weirong Chen, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang

VSGraph-LC starts with anchor selection based on the semantic similarity between metadata and correct label concepts, and then propagates correct labels from the anchors over a visual graph using a graph neural network (GNN).

General Classification · Image Classification · +2
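As a rough illustration only (not the authors' implementation), the anchor-then-propagate idea behind VSGraph-LC can be sketched as plain label propagation on a similarity graph; the cosine-similarity adjacency, uniform initialization, and clamped anchors below are hypothetical stand-ins for the paper's visual graph and GNN.

```python
import numpy as np

def propagate_labels(features, anchor_idx, anchor_labels, n_classes, steps=10):
    """Spread labels from trusted anchor nodes over a cosine-similarity graph.

    Simplified stand-in for graph-based label correction: anchors keep their
    (trusted) labels fixed, while every other node repeatedly averages its
    neighbours' label distributions.
    """
    # Cosine-similarity adjacency, row-normalised into transition weights.
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    adj = norm @ norm.T
    np.fill_diagonal(adj, 0.0)              # no self-loops
    adj = adj / adj.sum(axis=1, keepdims=True)

    # One-hot distributions for anchors; non-anchors start uniform.
    labels = np.full((len(features), n_classes), 1.0 / n_classes)
    labels[anchor_idx] = np.eye(n_classes)[anchor_labels]

    for _ in range(steps):
        labels = adj @ labels
        labels[anchor_idx] = np.eye(n_classes)[anchor_labels]  # clamp anchors
    return labels.argmax(axis=1)
```

With two well-separated feature clusters and one anchor per cluster, the unlabeled points inherit the label of their nearby anchor.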

Meta R-CNN: Towards General Solver for Instance-level Few-shot Learning

no code implementations · 28 Sep 2019 · Xiaopeng Yan, Ziliang Chen, Anni Xu, Xiaoxi Wang, Xiaodan Liang, Liang Lin

Resembling the rapid learning capability of humans, few-shot learning empowers vision systems to understand new concepts by training with few samples.

Few-Shot Learning · Few-Shot Object Detection · +3

Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching

1 code implementation · 8 Jul 2019 · Ziliang Chen, Zhanfu Yang, Xiaoxi Wang, Xiaodan Liang, Xiaopeng Yan, Guanbin Li, Liang Lin

A broad range of cross-$m$-domain generation research boils down to matching a joint distribution with deep generative models (DGMs).

Cost-effective Object Detection: Active Sample Mining with Switchable Selection Criteria

1 code implementation · 30 Jun 2018 · Keze Wang, Liang Lin, Xiaopeng Yan, Ziliang Chen, Dongyu Zhang, Lei Zhang

The proposed process is compatible with mini-batch based training (i.e., using a batch of unlabeled or partially labeled data as a one-time input) for object detection.

Active Learning · object-detection · +2

Towards Human-Machine Cooperation: Self-supervised Sample Mining for Object Detection

no code implementations · CVPR 2018 · Keze Wang, Xiaopeng Yan, Dongyu Zhang, Lei Zhang, Liang Lin

Though quite challenging, leveraging large-scale unlabeled or partially labeled images in a cost-effective way has attracted increasing interest for its great importance to computer vision.

Active Learning · Object · +2
