Search Results for author: WooSeok Shin

Found 5 papers, 3 papers with code

TriAAN-VC: Triple Adaptive Attention Normalization for Any-to-Any Voice Conversion

1 code implementation · 16 Mar 2023 · Hyun Joon Park, Seok Woo Yang, Jin Sob Kim, WooSeok Shin, Sung Won Han

Existing methods do not satisfy these two aspects of VC simultaneously, and their conversion outputs suffer from a trade-off between preserving source content and capturing target characteristics.

Voice Conversion
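As background for "adaptive attention normalization" approaches to voice conversion, the core idea of adaptive (instance) normalization is to strip the source features of their own statistics and re-scale them with the target speaker's statistics. The sketch below is an illustrative assumption about that general mechanism, not TriAAN-VC itself; the function name and shapes are hypothetical.

```python
import numpy as np

def adaptive_norm(source: np.ndarray, target: np.ndarray,
                  eps: float = 1e-5) -> np.ndarray:
    """Replace per-channel statistics of `source` with those of `target`.

    source, target: feature maps of shape (channels, time).
    This is plain adaptive instance normalization, NOT the paper's
    triple adaptive attention normalization.
    """
    s_mean = source.mean(axis=-1, keepdims=True)
    s_std = source.std(axis=-1, keepdims=True)
    t_mean = target.mean(axis=-1, keepdims=True)
    t_std = target.std(axis=-1, keepdims=True)
    # Normalize the source features, then re-inject target statistics.
    return (source - s_mean) / (s_std + eps) * t_std + t_mean
```

After this operation the converted features carry the target's per-channel mean and variance while keeping the source's temporal structure.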

Multi-View Attention Transfer for Efficient Speech Enhancement

no code implementations · 22 Aug 2022 · WooSeok Shin, Hyun Joon Park, Jin Sob Kim, Byung Hoon Lee, Sung Won Han

In this study, we propose multi-view attention transfer (MV-AT), a feature-based distillation method, to obtain efficient time-domain speech enhancement models.

Knowledge Distillation · Speech Enhancement
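For context on feature-based distillation in general: a small "student" model is trained to match the intermediate feature maps of a larger "teacher", in addition to its task loss. The minimal sketch below illustrates that generic loss term; it is an assumption about the family of methods, not the paper's MV-AT, and the function name and shapes are hypothetical.

```python
import numpy as np

def feature_distillation_loss(student_feat: np.ndarray,
                              teacher_feat: np.ndarray) -> float:
    """Mean-squared error between student and teacher feature maps.

    Both arrays are assumed to have the same shape, e.g. (channels, time)
    for a time-domain speech model; in practice a projection layer often
    aligns mismatched dimensions first.
    """
    assert student_feat.shape == teacher_feat.shape
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy usage: a student whose features are a noisy copy of the teacher's.
rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 16))
student = teacher + 0.1 * rng.standard_normal((8, 16))
loss = feature_distillation_loss(student, teacher)
```

During training this term would be added (with a weighting factor) to the student's ordinary enhancement loss.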

MANNER: Multi-view Attention Network for Noise Erasure

1 code implementation · 4 Mar 2022 · Hyun Joon Park, Byung Ha Kang, WooSeok Shin, Jin Sob Kim, Sung Won Han

In the field of speech enhancement, time-domain methods have difficulty achieving both high performance and efficiency.

Speech Enhancement

TRACER: Extreme Attention Guided Salient Object Tracing Network

1 code implementation · 14 Dec 2021 · Min Seok Lee, WooSeok Shin, Sung Won Han

Existing studies on salient object detection (SOD) focus on extracting distinct objects with edge information and aggregating multi-level features to improve SOD performance.

Object Detection · RGB Salient Object Detection · +1
