no code implementations • 30 May 2023 • Jaeuk Byun, Youna Ji, Soo Whan Chung, Soyeon Choe, Min Seok Choi
Our experiments demonstrate that the contextual information provided by the self-supervised speech representation can enhance speech restoration performance in various distortion scenarios, while also increasing robustness to the duration of speech attenuation and to mismatched test conditions.
1 code implementation • 31 Oct 2022 • Robin Scheibler, Youna Ji, Soo-Whan Chung, Jaeuk Byun, Soyeon Choe, Min-Seok Choi
We propose DiffSep, a new single channel source separation method based on score-matching of a stochastic differential equation (SDE).
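The core mechanism behind such score-matching SDE methods is reverse-time sampling: a forward SDE gradually blends the separated sources into the observed mixture, and separation is performed by integrating the reverse SDE from the mixture back toward the sources. A minimal sketch of that reverse step, using Euler-Maruyama integration, is below; the analytic toy score stands in for DiffSep's trained, mixture-conditioned score network and is purely an assumption for illustration.

```python
import numpy as np

def toy_score(x, t):
    # Score of a standard Gaussian: grad_x log p(x) = -x
    # (placeholder assumption, NOT the paper's learned score model)
    return -x

def reverse_sde_sample(x_T, n_steps=200, sigma=1.0, seed=0):
    """Euler-Maruyama integration of the reverse-time SDE
    dx = sigma^2 * score(x, t) dt + sigma dW, from t = 1 down to t = 0."""
    rng = np.random.default_rng(seed)
    x = x_T.astype(float).copy()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        # Drift pulls samples toward high-density regions of the target;
        # the diffusion term injects scaled Gaussian noise each step.
        x = x + (sigma ** 2) * toy_score(x, t) * dt \
            + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x
```

With the toy score, samples initialized far from the origin drift toward the Gaussian target; in the actual method, the analogous reverse process starts from the mixture and drifts toward separated source estimates.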
1 code implementation • 17 Aug 2021 • You Jin Kim, Hee-Soo Heo, Soyeon Choe, Soo-Whan Chung, Yoohwan Kwon, Bong-Jin Lee, Youngki Kwon, Joon Son Chung
Face tracks are extracted from the videos, and active segments are annotated semi-automatically based on the timestamps of VoxConverse.
no code implementations • 14 May 2020 • Soo-Whan Chung, Soyeon Choe, Joon Son Chung, Hong-Goo Kang
The objective of this paper is to separate a target speaker's speech from a mixture of two speakers using a deep audio-visual speech separation network.
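A common formulation for audio-visual target-speaker separation is mask estimation: visual features of the target speaker condition a network that predicts a time-frequency mask applied to the mixture spectrogram. The sketch below is a hypothetical toy of that pattern; the weight matrices are random placeholders standing in for the paper's trained audio-visual network, and all shapes and names are assumptions.

```python
import numpy as np

def extract_target(mix_spec, visual_emb, W_a, W_v, W_m):
    """mix_spec: (T, F) mixture magnitude spectrogram;
    visual_emb: (T, D) per-frame visual (lip) features of the target speaker."""
    # Fuse audio and visual streams into a joint embedding per frame.
    fused = np.tanh(mix_spec @ W_a + visual_emb @ W_v)   # (T, H)
    # Predict a sigmoid mask in (0, 1) for each time-frequency bin.
    mask = 1.0 / (1.0 + np.exp(-(fused @ W_m)))          # (T, F)
    # Masking suppresses the interfering speaker's energy.
    return mask * mix_spec

rng = np.random.default_rng(0)
T, F, D, H = 50, 257, 64, 128
mix = np.abs(rng.standard_normal((T, F)))
vis = rng.standard_normal((T, D))
est = extract_target(mix, vis,
                     0.01 * rng.standard_normal((F, H)),
                     0.01 * rng.standard_normal((D, H)),
                     0.01 * rng.standard_normal((H, F)))
```

Because the mask lies in (0, 1), the estimate is a per-bin attenuation of the mixture; conditioning on the visual stream is what lets the mask follow the target speaker rather than an arbitrary source.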