Search Results for author: Jeih-weih Hung

Found 20 papers, 2 papers with code

Employing low-pass filtered temporal speech features for the training of ideal ratio mask in speech enhancement

no code implementations · ROCLING 2021 · Yan-Tong Chen, Zi-Qiang Lin, Jeih-weih Hung

Preliminary experiments conducted on a subset of the TIMIT corpus reveal that the proposed method makes the resulting IRM achieve higher speech quality and intelligibility for babble noise-corrupted signals than the original IRM, indicating that training on the low-pass filtered temporal feature sequence can yield a superior IRM network for speech enhancement.

Automatic Speech Recognition · Speech Enhancement +1
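The entry above trains an ideal ratio mask (IRM) on low-pass filtered temporal feature trajectories. A minimal numpy-only sketch of the two ingredients, assuming a standard magnitude-domain IRM and a simple moving-average low-pass filter (the window length and toy data are illustrative choices, not the authors' setup):

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag, eps=1e-8):
    """Standard IRM: sqrt(S^2 / (S^2 + N^2)) per time-frequency bin."""
    return np.sqrt(clean_mag**2 / (clean_mag**2 + noise_mag**2 + eps))

def lowpass_temporal(features, win=5):
    """Moving-average low-pass filter along the time axis (axis 0).
    The window length is an illustrative assumption, not the paper's."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 0, features)

# Toy spectrogram-like features: 100 frames x 40 frequency bins.
rng = np.random.default_rng(0)
clean = np.abs(rng.normal(size=(100, 40)))
noise = np.abs(rng.normal(size=(100, 40)))

irm = ideal_ratio_mask(clean, noise)       # mask values lie in [0, 1]
smoothed = lowpass_temporal(clean)         # smoothed training features
```

In the paper's pipeline, a network would be trained to predict `irm` from the smoothed features; the sketch only shows the mask definition and the temporal filtering step.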

A Preliminary Study of the Application of Discrete Wavelet Transform Features in Conv-TasNet Speech Enhancement Model

no code implementations · ROCLING 2022 · Yan-Tong Chen, Zong-Tai Wu, Jeih-weih Hung

Nowadays, time-domain features have been widely used in speech enhancement (SE) networks, alongside frequency-domain features, to achieve excellent performance in eliminating noise from input utterances.

Speech Enhancement

Cross-domain Single-channel Speech Enhancement Model with Bi-projection Fusion Module for Noise-robust ASR

no code implementations · 26 Aug 2021 · Fu-An Chao, Jeih-weih Hung, Berlin Chen

In recent decades, many studies have suggested that phase information is crucial for speech enhancement (SE), and time-domain single-channel speech enhancement techniques have shown promise in noise suppression and robust automatic speech recognition (ASR).

Automatic Speech Recognition · Speech Enhancement +1

TENET: A Time-reversal Enhancement Network for Noise-robust ASR

1 code implementation · 4 Jul 2021 · Fu-An Chao, Shao-Wei Fan Jiang, Bi-Cheng Yan, Jeih-weih Hung, Berlin Chen

Due to the unprecedented breakthroughs brought about by deep learning, speech enhancement (SE) techniques have been developed rapidly and play an important role prior to acoustic modeling to mitigate noise effects on speech.

Automatic Speech Recognition · Speech Enhancement +1

Speech Enhancement Guided by Contextual Articulatory Information

no code implementations · 15 Nov 2020 · Yen-Ju Lu, Chia-Yu Chang, Cheng Yu, Ching-Feng Liu, Jeih-weih Hung, Shinji Watanabe, Yu Tsao

Previous studies have confirmed that by augmenting acoustic features with the place/manner of articulatory features, the speech enhancement (SE) process can be guided to consider the articulatory properties of the input speech when performing enhancement to attain performance improvements.

Automatic Speech Recognition · Denoising +4
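The entry above guides SE by augmenting acoustic features with place/manner articulatory features. The augmentation itself is typically frame-wise concatenation; a hedged sketch with illustrative dimensions (the feature types and sizes are assumptions, not the paper's configuration):

```python
import numpy as np

# Toy shapes: T frames, F acoustic dims, A broad articulatory dims.
T, F, A = 100, 40, 10
rng = np.random.default_rng(0)
acoustic = rng.normal(size=(T, F))          # e.g. spectral features
articulatory = rng.random(size=(T, A))      # e.g. place/manner posteriors in [0, 1]

# Frame-wise concatenation along the feature axis gives the augmented input.
augmented = np.concatenate([acoustic, articulatory], axis=1)  # shape (T, F + A)
```

The SE network then consumes `augmented` instead of the acoustic features alone, letting enhancement condition on the articulatory context.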

Incorporating Broad Phonetic Information for Speech Enhancement

no code implementations · 13 Aug 2020 · Yen-Ju Lu, Chien-Feng Liao, Xugang Lu, Jeih-weih Hung, Yu Tsao

In noisy conditions, knowing the speech content helps listeners suppress background noise components more effectively and retrieve the pure speech signal.

Denoising · Speech Enhancement

Time-Domain Multi-modal Bone/air Conducted Speech Enhancement

no code implementations · 22 Nov 2019 · Cheng Yu, Kuo-Hsuan Hung, Syu-Siang Wang, Szu-Wei Fu, Yu Tsao, Jeih-weih Hung

Previous studies have proven that integrating video signals, as a complementary modality, can facilitate improved performance for speech enhancement (SE).

Ensemble Learning · Speech Enhancement

Speech Enhancement Based on Reducing the Detail Portion of Speech Spectrograms in Modulation Domain via Discrete Wavelet Transform

1 code implementation · 8 Nov 2018 · Shih-kuang Lee, Syu-Siang Wang, Yu Tsao, Jeih-weih Hung

The presented DWT-based SE method with various scaling factors for the detail part is evaluated on a subset of the Aurora-2 database, and the PESQ metric is used to indicate the quality of the processed speech signals.

Speech Enhancement
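The entry above reduces the detail portion of a discrete wavelet transform decomposition by a scaling factor. A minimal numpy-only sketch of that operation, assuming a single-level Haar DWT on a 1-D signal with an illustrative scaling factor (the paper applies this in the modulation domain of spectrograms, which this toy example does not reproduce):

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar analysis: approximation (a) and detail (d) coefficients."""
    x = x[: len(x) // 2 * 2]            # drop a trailing sample if length is odd
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Perfect-reconstruction synthesis for the Haar pair above."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Toy noisy signal; the scaling factor alpha is an illustrative value.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.3 * rng.normal(size=64)

a, d = haar_dwt(signal)
alpha = 0.5                              # attenuate the detail (high-frequency) part
processed = haar_idwt(a, alpha * d)
```

With `alpha = 1` the synthesis reconstructs the input exactly; values below 1 attenuate the detail coefficients, which is the knob the paper sweeps over.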
