Search Results for author: Joon-Hyuk Chang

Found 6 papers, 1 paper with code

Improving Transformer-based End-to-End Speaker Diarization by Assigning Auxiliary Losses to Attention Heads

no code implementations • 2 Mar 2023 • Ye-Rin Jeoung, Joon-Young Yang, Jeong-Hwan Choi, Joon-Hyuk Chang

In this study, to enhance the training effectiveness of SA-EEND models, we propose the use of auxiliary losses for the SA heads of the transformer layers.

Action Detection • Activity Detection +2
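As a rough illustration of the auxiliary-loss idea (the weighting scheme and loss values below are hypothetical, not taken from the paper), per-head auxiliary losses can be folded into the main diarization objective as a weighted sum:

```python
# Hedged sketch: combine a primary loss with auxiliary losses computed
# from individual self-attention heads. The aux_weight value is a
# hypothetical hyperparameter, not one reported in the paper.
def combine_losses(main_loss, head_losses, aux_weight=0.1):
    """Total loss = main loss + weighted mean of per-head auxiliary losses."""
    aux = sum(head_losses) / len(head_losses)
    return main_loss + aux_weight * aux
```

In this sketch, each entry of `head_losses` would be a loss attached to one attention head's output; averaging keeps the auxiliary term's scale independent of the number of heads.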

Task-specific Optimization of Virtual Channel Linear Prediction-based Speech Dereverberation Front-End for Far-Field Speaker Verification

1 code implementation • 27 Dec 2021 • Joon-Young Yang, Joon-Hyuk Chang

Developing a single-microphone speech denoising or dereverberation front-end for robust automatic speaker verification (ASV) in noisy far-field speaking scenarios is challenging.

Denoising • Speaker Verification +2

Knowledge distillation from language model to acoustic model: a hierarchical multi-task learning approach

no code implementations • 20 Oct 2021 • Mun-Hak Lee, Joon-Hyuk Chang

The remarkable performance of the pre-trained language model (LM) using self-supervised learning has led to a major paradigm shift in the study of natural language processing.

Knowledge Distillation • Language Modelling +4
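A common building block of knowledge distillation, which this paper applies from a language model to an acoustic model, is a cross-entropy term between temperature-softened teacher and student distributions. The sketch below shows that generic term only (the temperature value and logits are illustrative, and the paper's hierarchical multi-task formulation is not reproduced here):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the softened teacher distribution (targets)
    and the softened student distribution (predictions)."""
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

A higher temperature `T` flattens both distributions, exposing the teacher's relative preferences among non-top classes, which is the signal distillation transfers.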

Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs

no code implementations • 15 Feb 2021 • Jae-Hong Lee, Joon-Hyuk Chang

In this study, we use attributions to filter out irrelevant parts of the input features, and verify the effectiveness of this approach by measuring the classification accuracy of a pre-trained DNN.
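The filtering step can be pictured as masking out the input features with the lowest attribution scores. This is a minimal sketch of that idea only; the `keep_ratio` parameter and top-k selection rule are assumptions for illustration, not details from the paper:

```python
def attribution_mask(features, attributions, keep_ratio=0.5):
    """Keep only the features with the highest attribution scores;
    zero out the rest. keep_ratio is a hypothetical knob controlling
    what fraction of features survives."""
    k = max(1, int(len(features) * keep_ratio))
    top = sorted(range(len(attributions)),
                 key=lambda i: attributions[i], reverse=True)[:k]
    keep = set(top)
    return [x if i in keep else 0.0 for i, x in enumerate(features)]
```

The masked features would then be fed back through the pre-trained DNN, and classification accuracy on the masked inputs indicates whether the removed parts were truly irrelevant.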

Attention Wave-U-Net for Acoustic Echo Cancellation

no code implementations • Interspeech 2020 • Jung-Hee Kim, Joon-Hyuk Chang

In this paper, a Wave-U-Net-based acoustic echo cancellation (AEC) model with an attention mechanism is proposed to jointly suppress acoustic echo and background noise.

Acoustic Echo Cancellation
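One common way attention is added to U-Net-style models is by gating encoder skip connections with learned weights before they reach the decoder. The element-wise sigmoid gating below is a generic sketch of that pattern, not the specific mechanism from this paper:

```python
import math

def attention_gate(skip, gate):
    """Scale skip-connection features element-wise by sigmoid attention
    weights (hypothetical gating; the gate values would normally be
    produced by a small learned layer)."""
    return [s * (1.0 / (1.0 + math.exp(-g))) for s, g in zip(skip, gate)]
```

A gate logit near zero passes roughly half the feature through, while large positive or negative logits open or close the connection, letting the network emphasize echo-relevant channels.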

Ensemble of Jointly Trained Deep Neural Network-Based Acoustic Models for Reverberant Speech Recognition

no code implementations • 17 Aug 2016 • Jeehye Lee, Myungin Lee, Joon-Hyuk Chang

Distant speech recognition is a challenge, particularly due to the corruption of speech signals by reverberation caused by large distances between the speaker and microphone.

Distant Speech Recognition • Speech Recognition
