Search Results for author: Seunghan Yang

Found 18 papers, 2 papers with code

Improving Small Footprint Few-shot Keyword Spotting with Supervision on Auxiliary Data

no code implementations • 31 Aug 2023 • Seunghan Yang, Byeonggeun Kim, Kyuhong Shim, Simyung Chang

Few-shot keyword spotting (FS-KWS) models usually require large-scale annotated datasets to generalize to unseen target keywords.

Keyword Spotting • Multi-Task Learning • +1

Label Shift Adapter for Test-Time Adaptation under Covariate and Label Shifts

no code implementations • ICCV 2023 • Sunghyun Park, Seunghan Yang, Jaegul Choo, Sungrack Yun

Test-time adaptation (TTA) aims to adapt a pre-trained model to the target domain in a batch-by-batch manner during inference.

Test-time Adaptation

Progressive Random Convolutions for Single Domain Generalization

no code implementations • CVPR 2023 • Seokeon Choi, Debasmit Das, Sungha Choi, Seunghan Yang, Hyunsin Park, Sungrack Yun

Single domain generalization aims to train a generalizable model with only one source domain to perform well on arbitrary unseen target domains.

Domain Generalization • Image Augmentation

Scalable Weight Reparametrization for Efficient Transfer Learning

no code implementations • 26 Feb 2023 • Byeonggeun Kim, Jun-Tae Lee, Seunghan Yang, Simyung Chang

Efficient transfer learning reuses a model pre-trained on a larger dataset, repurposing it for downstream tasks while maximizing the reuse of the pre-trained weights.

Keyword Spotting • Transfer Learning

Improving Test-Time Adaptation via Shift-agnostic Weight Regularization and Nearest Source Prototypes

no code implementations • 24 Jul 2022 • Sungha Choi, Seunghan Yang, Seokeon Choi, Sungrack Yun

This paper proposes a novel test-time adaptation strategy that adjusts a model pre-trained on the source domain using only unlabeled online data from the target domain, alleviating the performance degradation caused by the distribution shift between the source and target domains.

Test-time Adaptation

Personalized Keyword Spotting through Multi-task Learning

no code implementations • 28 Jun 2022 • Seunghan Yang, Byeonggeun Kim, Inseop Chung, Simyung Chang

We design two personalized KWS tasks: (1) Target user Biased KWS (TB-KWS) and (2) Target user Only KWS (TO-KWS).

Keyword Spotting • Multi-Task Learning • +1

Domain Agnostic Few-shot Learning for Speaker Verification

no code implementations • 28 Jun 2022 • Seunghan Yang, Debasmit Das, Janghoon Cho, Hyoungwoo Park, Sungrack Yun

Deep learning models for verification systems often fail to generalize to new users and new environments, even though they learn highly discriminative features.

Domain Generalization • Few-Shot Learning • +1

Dummy Prototypical Networks for Few-Shot Open-Set Keyword Spotting

no code implementations • 28 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Inseop Chung, Simyung Chang

We also verify our method on a standard benchmark, miniImageNet, where D-ProtoNets achieves the state-of-the-art open-set detection rate in FSOSR.

Keyword Spotting • Metric Learning • +1

Domain Generalization with Relaxed Instance Frequency-wise Normalization for Multi-device Acoustic Scene Classification

no code implementations • 24 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Hyunsin Park, Jun-Tae Lee, Simyung Chang

When using two-dimensional convolutional neural networks (2D-CNNs) in image processing, domain information can be manipulated through channel statistics, and instance normalization has been a promising way to obtain domain-invariant features.

Acoustic Scene Classification • Domain Generalization • +1
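The channel-statistics idea mentioned in the abstract above can be illustrated with a generic instance-normalization sketch. This is not the paper's Relaxed Instance Frequency-wise Normalization; shapes and names are illustrative only:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization over a batch of 2D feature maps.

    x: array of shape (N, C, H, W). Each (instance, channel) slice is
    normalized by its own mean and standard deviation, removing the
    per-channel statistics that often carry domain (e.g. device) information.
    """
    mean = x.mean(axis=(2, 3), keepdims=True)
    std = x.std(axis=(2, 3), keepdims=True)
    return (x - mean) / (std + eps)

# After normalization, every (instance, channel) map has ~zero mean, unit std,
# regardless of the domain-specific offset and scale of the input.
x = np.random.rand(2, 3, 8, 8) * 5.0 + 10.0
y = instance_norm(x)
```

Removing these statistics entirely can also discard useful content, which is why normalization-based domain generalization methods typically relax or restrict it.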

Distribution Estimation to Automate Transformation Policies for Self-Supervision

no code implementations • 24 Nov 2021 • Seunghan Yang, Debasmit Das, Simyung Chang, Sungrack Yun, Fatih Porikli

However, it is observed that image transformations already present in the dataset might be less effective in learning such self-supervised representations.

Generative Adversarial Network • Self-Supervised Learning

Domain Generalization on Efficient Acoustic Scene Classification using Residual Normalization

no code implementations • 12 Nov 2021 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang

Moreover, we introduce an efficient architecture, BC-ResNet-ASC, a modified version of the baseline architecture with a limited receptive field.

Acoustic Scene Classification • Classification • +5

Towards Robust Domain Generalization in 2D Neural Audio Processing

no code implementations • 29 Sep 2021 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Hyunsin Park, Jun-Tae Lee, Simyung Chang

When using two-dimensional convolutional neural networks (2D-CNNs) in image processing, domain information can be manipulated through channel statistics, and instance normalization has been a promising way to obtain domain-invariant features.

Acoustic Scene Classification • Domain Generalization • +3

Robust Federated Learning with Noisy Labels

1 code implementation • 3 Dec 2020 • Seunghan Yang, Hyoungseob Park, Junyoung Byun, Changick Kim

To solve these problems, we introduce a novel federated learning scheme in which the server cooperates with local models to maintain consistent decision boundaries by interchanging class-wise centroids.

Federated Learning • Learning with noisy labels
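The class-wise centroid interchange described above can be sketched at a high level. This is a generic illustration under assumed shapes and a plain averaging rule, not the paper's exact protocol:

```python
import numpy as np

def local_class_centroids(features, labels, num_classes):
    """Client side: mean feature vector per class (zeros if a class is absent locally)."""
    centroids = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(axis=0)
    return centroids

def aggregate_centroids(client_centroids):
    """Server side: average the class-wise centroids received from all clients,
    yielding global centroids the clients can use as a shared reference."""
    return np.mean(np.stack(client_centroids), axis=0)

# Two toy clients with 2-D features and 2 classes.
c1 = local_class_centroids(np.array([[0., 0.], [2., 2.]]), np.array([0, 0]), 2)
c2 = local_class_centroids(np.array([[4., 4.], [1., 3.]]), np.array([0, 1]), 2)
global_centroids = aggregate_centroids([c1, c2])
```

Exchanging centroids rather than raw features keeps communication light and avoids sharing per-sample data, which is why centroid-style summaries are a common building block in federated settings.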

Arbitrary Style Transfer using Graph Instance Normalization

no code implementations • 6 Oct 2020 • Dongki Jung, Seunghan Yang, Jaehoon Choi, Changick Kim

Style transfer is an image synthesis task that applies the style of one image to another while preserving the content.

Domain Adaptation • Image-to-Image Translation • +2

Associative Partial Domain Adaptation

no code implementations • 7 Aug 2020 • Youngeun Kim, Sungeun Hong, Seunghan Yang, Sungil Kang, Yunho Jeon, Jiwon Kim

Our Associative Partial Domain Adaptation (APDA) utilizes intra-domain association to actively select out non-trivial anomaly samples in each source-private class that sample-level weighting cannot handle.

Partial Domain Adaptation

Partial Domain Adaptation Using Graph Convolutional Networks

no code implementations • 16 May 2020 • Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim

Although existing partial domain adaptation methods effectively down-weight outliers' importance, they do not consider the data structure of each domain, nor do they directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions.

Partial Domain Adaptation
