Search Results for author: Yuehai Wang

Found 8 papers, 3 papers with code

Audio Deepfake Detection with Self-Supervised WavLM and Multi-Fusion Attentive Classifier

no code implementations · 13 Dec 2023 · Yinlin Guo, Haofan Huang, Xi Chen, He Zhao, Yuehai Wang

In this paper, we report our efforts to combine the self-supervised WavLM model and Multi-Fusion Attentive classifier for audio deepfake detection.

DeepFake Detection
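The paper's exact fusion scheme is not reproduced here, but a common way to combine a multi-layer SSL front-end like WavLM with a downstream classifier is a learned softmax-weighted sum over the transformer layers' hidden states. A minimal numpy sketch of that idea (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_ssl_layers(layer_feats, layer_logits):
    """Fuse per-layer SSL features into one utterance embedding.

    layer_feats:  (L, T, D) hidden states from L transformer layers.
    layer_logits: (L,) learnable scores turned into fusion weights.
    Returns a (D,) embedding: softmax-weighted sum over layers,
    then mean pooling over the T time frames.
    """
    w = softmax(layer_logits)                # (L,) fusion weights
    fused = np.tensordot(w, layer_feats, 1)  # (T, D) weighted layer sum
    return fused.mean(axis=0)                # (D,) utterance embedding

# Toy usage: 12 layers, 50 frames, feature dim 8; zero logits = uniform weights.
rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 50, 8))
emb = fuse_ssl_layers(feats, np.zeros(12))
print(emb.shape)  # (8,)
```

With zero logits the fusion reduces to a plain average over layers; training would move the logits toward the layers most useful for spoofing artifacts.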

Masked Acoustic Unit for Mispronunciation Detection and Correction

no code implementations · 12 Aug 2021 · Zhan Zhang, Yuehai Wang, Jianyi Yang

Computer-Assisted Pronunciation Training (CAPT) plays an important role in language learning.

Sentence

Text-Conditioned Transformer for Automatic Pronunciation Error Detection

no code implementations · 28 Aug 2020 · Zhan Zhang, Yuehai Wang, Jianyi Yang

In this paper, we propose to use the target text as an extra condition for the Transformer backbone to handle the APED task.
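One standard way to inject the target text as an extra condition is cross-attention: embeddings of the expected phones act as queries over the acoustic frames, so each phone attends to the audio it should align with, and mismatches surface as errors. A toy numpy sketch of that conditioning step (an illustrative assumption, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_conditioned_attention(text_emb, audio_feats):
    """Cross-attention of target-text tokens over acoustic frames.

    text_emb:    (N, D) embeddings of the expected phone sequence (queries).
    audio_feats: (T, D) frame-level acoustic features (keys and values).
    Returns (N, D): one acoustics-aware vector per expected phone,
    which a detection head could then score as correct / mispronounced.
    """
    d = text_emb.shape[-1]
    scores = text_emb @ audio_feats.T / np.sqrt(d)  # (N, T) alignment scores
    attn = softmax(scores, axis=-1)                 # rows sum to 1
    return attn @ audio_feats                       # (N, D)

out = text_conditioned_attention(np.ones((3, 4)), np.ones((10, 4)))
print(out.shape)  # (3, 4)
```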

Collaborative Distillation for Ultra-Resolution Universal Style Transfer

1 code implementation CVPR 2020 Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang

In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters.

Knowledge Distillation
Style Transfer
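The core of feature-level distillation for a slimmed encoder is making the student's (fewer-channel) feature maps reconstruct the teacher's. A toy loss in that spirit, with a linear projection bridging the channel mismatch (a generic sketch, not the paper's collaborative scheme):

```python
import numpy as np

def feature_distill_loss(student_feat, teacher_feat, proj):
    """MSE between projected student features and teacher features.

    student_feat: (P, C_s) student activations, P = H*W spatial positions.
    teacher_feat: (P, C_t) teacher activations at the same positions.
    proj:         (C_s, C_t) linear map lifting student channels to teacher's.
    """
    lifted = student_feat @ proj                  # (P, C_t)
    return float(np.mean((lifted - teacher_feat) ** 2))

# Sanity check: identical features through an identity projection give zero loss.
f = np.arange(12.0).reshape(4, 3)
loss = feature_distill_loss(f, f, np.eye(3))
print(loss)  # 0.0
```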

Structured Pruning for Efficient ConvNets via Incremental Regularization

no code implementations NIPS Workshop CDNNRIA 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.

Structured Pruning for Efficient ConvNets via Incremental Regularization

1 code implementation · 25 Apr 2018 · Huan Wang, Qiming Zhang, Yuehai Wang, Yu Lu, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance degradation.

Network Pruning
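The "incremental regularization" idea is that the penalty on unimportant filter groups grows gradually during training, driving them toward zero before removal rather than cutting them at once. A toy single-step sketch of that mechanism (the ranking rule, step sizes, and names are illustrative assumptions):

```python
import numpy as np

def incremental_reg_step(weights, penalties, lr=0.1, delta=0.05, keep=2):
    """One toy pruning-by-regularization update.

    weights:   (F, K) float array, one row per conv filter (flattened).
    penalties: (F,) per-filter L2 penalty coefficients, grown over time.
    Filters ranked by L1 norm; penalties on all but the `keep` strongest
    grow by `delta`, then a per-filter weight-decay shrink is applied.
    """
    norms = np.abs(weights).sum(axis=1)
    order = np.argsort(norms)          # weakest filters first
    for i in order[:-keep]:
        penalties[i] += delta          # incrementally raise regularization
    weights = weights / (1.0 + lr * penalties[:, None])  # decay per filter
    return weights, penalties

# A strong and a weak filter; keep=1 so only the weak one is penalized.
w = np.array([[3.0, 3.0], [0.1, 0.1]])
p = np.zeros(2)
for _ in range(5):
    w, p = incremental_reg_step(w, p, keep=1)
```

After a few steps the strong filter is untouched while the weak one shrinks toward zero, at which point it could be structurally removed.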

Structured Probabilistic Pruning for Convolutional Neural Network Acceleration

2 code implementations · 20 Sep 2017 · Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, Structured Probabilistic Pruning (SPP) introduces a pruning probability for each weight and guides pruning by sampling from these probabilities.

Transfer Learning
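The sampling step the abstract describes can be sketched in a few lines: each weight carries a pruning probability, a binary keep-mask is drawn from those probabilities, and a weight masked out in one step can return in a later one (unlike deterministic pruning). A minimal illustration, with the probability-update rule omitted (names are hypothetical):

```python
import numpy as np

def spp_mask(weights, prune_probs, rng):
    """Sample a stochastic pruning mask.

    weights:     weight array of any shape.
    prune_probs: same-shape array of per-weight pruning probabilities;
                 during training these would be raised for weights that
                 repeatedly rank as unimportant, and lowered otherwise.
    Returns the masked weights for this training step.
    """
    keep = rng.random(weights.shape) >= prune_probs  # True = keep this step
    return weights * keep

rng = np.random.default_rng(0)
w = np.ones((4, 4))
kept_all = spp_mask(w, np.zeros((4, 4)), rng)   # prob 0 -> nothing pruned
pruned_all = spp_mask(w, np.ones((4, 4)), rng)  # prob 1 -> everything pruned
```

Once the probabilities saturate at 0 or 1, the mask becomes deterministic and the fully pruned structures can be removed for actual acceleration.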
