no code implementations • 2 Mar 2023 • Ye-Rin Jeoung, Joon-Young Yang, Jeong-Hwan Choi, Joon-Hyuk Chang
In this study, to enhance the training effectiveness of SA-EEND models, we propose the use of auxiliary losses for the SA heads of the transformer layers.
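The abstract does not give the exact loss form, but the idea of auxiliary losses on intermediate self-attention layers can be sketched as deep supervision: compute the diarization loss not only on the final layer's speaker-activity output but also on each intermediate layer's output, and add the latter with a weight. Below is a minimal NumPy sketch; the binary cross-entropy loss, the per-layer weighting, and the `aux_weight` value are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # binary cross-entropy averaged over frames and speakers
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def total_loss(layer_preds, target, aux_weight=0.3):
    # main diarization loss on the last layer's output, plus
    # weighted auxiliary losses on every intermediate layer's output
    main = bce(layer_preds[-1], target)
    aux = sum(bce(p, target) for p in layer_preds[:-1])
    return main + aux_weight * aux

# toy example: speaker-activity posteriors from 3 transformer layers,
# 10 frames, 2 speakers (all values are synthetic)
rng = np.random.default_rng(0)
target = rng.integers(0, 2, size=(10, 2)).astype(float)
preds = [rng.uniform(0.05, 0.95, size=(10, 2)) for _ in range(3)]
loss = total_loss(preds, target)
```

In this form the auxiliary terms push every layer, not just the last one, toward producing diarization-relevant representations, which is the stated training-effectiveness motivation.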
1 code implementation • 27 Dec 2021 • Joon-Young Yang, Joon-Hyuk Chang
Developing a single-microphone speech denoising or dereverberation front-end for robust automatic speaker verification (ASV) in noisy far-field speaking scenarios is challenging.
no code implementations • 20 Oct 2021 • Mun-Hak Lee, Joon-Hyuk Chang
The remarkable performance of the pre-trained language model (LM) using self-supervised learning has led to a major paradigm shift in the study of natural language processing.
no code implementations • 15 Feb 2021 • Jae-Hong Lee, Joon-Hyuk Chang
In this study, we use attributions to filter out irrelevant parts of the input features, and we verify the effectiveness of this approach by measuring the classification accuracy of a pre-trained DNN.
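The snippet does not specify which attribution method is used; a common choice that fits the description is gradient-times-input, where each input feature's contribution to the predicted class's logit is scored and low-scoring features are masked out before re-classification. The sketch below uses a plain linear classifier so the gradient is exact; the method, the `keep_ratio` parameter, and the helper names are illustrative assumptions.

```python
import numpy as np

def grad_x_input_attribution(W, x, c):
    # gradient-times-input for the logit of class c of a linear
    # classifier with weight matrix W (classes x features):
    # d(logit_c)/dx = W[c], so the attribution is W[c] * x elementwise
    return W[c] * x

def filter_input(x, attribution, keep_ratio=0.5):
    # zero out the input features with the smallest |attribution|,
    # keeping only the fraction deemed most relevant
    k = int(len(x) * keep_ratio)
    idx = np.argsort(np.abs(attribution))[::-1][:k]
    mask = np.zeros_like(x)
    mask[idx] = 1.0
    return x * mask

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))          # 3 classes, 8 features (synthetic)
x = rng.normal(size=8)
c = int(np.argmax(W @ x))            # class predicted on the full input
attr = grad_x_input_attribution(W, x, c)
x_filtered = filter_input(x, attr, keep_ratio=0.5)
c_filtered = int(np.argmax(W @ x_filtered))
```

Measuring how often `c_filtered` matches the label across a test set, as a function of `keep_ratio`, is one way to quantify how much of the input a pre-trained model actually needs.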
no code implementations • Interspeech 2020 • Jung-Hee Kim, Joon-Hyuk Chang
In this paper, a Wave-U-Net based acoustic echo cancellation (AEC) with an attention mechanism is proposed to jointly suppress acoustic echo and background noise.
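One common way to attach attention to a U-Net-style model is an attention gate on the skip connections: a soft mask computed from the encoder (skip) features and the decoder features scales the skip path before concatenation, so echo- or noise-dominated frames can be attenuated. The NumPy sketch below shows this additive-attention gate in isolation; the gate formulation and all shapes are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gated_skip(skip, gating, W_s, W_g, w_a):
    # additive attention gate on a skip connection:
    # a per-frame mask in (0, 1) is computed from the skip features
    # and the decoder (gating) features, then scales the skip path
    e = np.tanh(skip @ W_s + gating @ W_g)   # (T, hidden)
    mask = sigmoid(e @ w_a)                  # (T,)
    return skip * mask[:, None], mask

# toy shapes: T frames, C channels, H hidden units (all synthetic)
T, C, H = 16, 4, 8
rng = np.random.default_rng(2)
skip = rng.normal(size=(T, C))
gating = rng.normal(size=(T, C))
W_s, W_g = rng.normal(size=(C, H)), rng.normal(size=(C, H))
w_a = rng.normal(size=H)
gated, mask = attention_gated_skip(skip, gating, W_s, W_g, w_a)
```

Because the mask is learned jointly with the rest of the network, the same gate can serve both suppression targets: frames dominated by far-end echo or noise receive small mask values, while near-end speech passes through.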
no code implementations • 17 Aug 2016 • Jeehye Lee, Myungin Lee, Joon-Hyuk Chang
Distant speech recognition is a challenge, particularly due to the corruption of speech signals by reverberation caused by large distances between the speaker and microphone.