no code implementations • 31 Aug 2023 • Seunghan Yang, Byeonggeun Kim, Kyuhong Shim, Simyung Chang
Few-shot keyword spotting (FS-KWS) models usually require large-scale annotated datasets to generalize to unseen target keywords.
no code implementations • ICCV 2023 • Sunghyun Park, Seunghan Yang, Jaegul Choo, Sungrack Yun
Test-time adaptation (TTA) aims to adapt a pre-trained model to the target domain in a batch-by-batch manner during inference.
no code implementations • CVPR 2023 • Seokeon Choi, Debasmit Das, Sungha Choi, Seunghan Yang, Hyunsin Park, Sungrack Yun
Single domain generalization aims to train a generalizable model with only one source domain to perform well on arbitrary unseen target domains.
no code implementations • 26 Feb 2023 • Byeonggeun Kim, Jun-Tae Lee, Seunghan Yang, Simyung Chang
Efficient transfer learning repurposes a model pre-trained on a larger dataset for downstream tasks while maximizing the reuse of the pre-trained model.
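As a toy illustration of this reuse (our own minimal sketch, not the paper's method), one can freeze a pre-trained backbone and fit only a lightweight head on its features; the random `W_frozen` backbone and the ridge-regression head below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x, W_frozen):
    # Stand-in for a frozen pre-trained backbone: its weights are never updated.
    return np.tanh(x @ W_frozen)

W_frozen = rng.normal(size=(4, 8))        # "pre-trained" weights, kept fixed
x = rng.normal(size=(64, 4))              # toy downstream inputs
y = (x[:, 0] > 0).astype(float)           # toy downstream labels

feats = pretrained_features(x, W_frozen)
# Only the head is trained (closed-form ridge regression); the backbone is reused untouched.
head = np.linalg.solve(feats.T @ feats + 1e-3 * np.eye(8), feats.T @ y)
pred = (feats @ head > 0.5).astype(float)
```

Training anything beyond the small head is avoided entirely, which is the sense in which reuse of the pre-trained model is maximized.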
no code implementations • 24 Jul 2022 • Sungha Choi, Seunghan Yang, Seokeon Choi, Sungrack Yun
This paper proposes a novel test-time adaptation strategy that adjusts a model pre-trained on the source domain using only unlabeled online data from the target domain, alleviating the performance degradation caused by the distribution shift between the source and target domains.
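As a generic, hedged sketch of this setting (not the strategy proposed in the paper), one common ingredient of test-time adaptation is re-estimating normalization statistics from each unlabeled target batch as it arrives:

```python
import numpy as np

class AdaptiveNormalizer:
    """Illustrative only: blends source-domain feature statistics with
    statistics estimated from each unlabeled target batch at test time."""

    def __init__(self, source_mean, source_var, momentum=0.1):
        self.mean = np.asarray(source_mean, dtype=float)
        self.var = np.asarray(source_var, dtype=float)
        self.momentum = momentum

    def adapt(self, batch):
        # Update running statistics with the current unlabeled target batch,
        # then normalize the batch with the adapted statistics.
        batch = np.asarray(batch, dtype=float)
        self.mean = (1 - self.momentum) * self.mean + self.momentum * batch.mean(axis=0)
        self.var = (1 - self.momentum) * self.var + self.momentum * batch.var(axis=0)
        return (batch - self.mean) / np.sqrt(self.var + 1e-5)

# Example: source stats (mean 0, var 1) meet a shifted target batch.
norm = AdaptiveNormalizer(np.zeros(3), np.ones(3), momentum=0.5)
z = norm.adapt(np.full((10, 3), 4.0))  # running mean moves toward the target batch mean
```

No target labels are needed at any point, matching the unlabeled online setting described above.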
no code implementations • 28 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang
The goal of the task is to design an audio scene classification system for device-imbalanced datasets under model-complexity constraints.
no code implementations • 28 Jun 2022 • Seunghan Yang, Byeonggeun Kim, Inseop Chung, Simyung Chang
We design two personalized KWS tasks: (1) Target user Biased KWS (TB-KWS) and (2) Target user Only KWS (TO-KWS).
no code implementations • 28 Jun 2022 • Seunghan Yang, Debasmit Das, Janghoon Cho, Hyoungwoo Park, Sungrack Yun
Deep learning models for verification systems often fail to generalize to new users and new environments, even though they learn highly discriminative features.
no code implementations • 28 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Inseop Chung, Simyung Chang
We also verify our method on a standard benchmark, miniImageNet, and D-ProtoNets shows the state-of-the-art open-set detection rate in FSOSR.
no code implementations • 24 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Hyunsin Park, Jun-Tae Lee, Simyung Chang
When using two-dimensional convolutional neural networks (2D-CNNs) in image processing, domain information can be manipulated through channel statistics, and instance normalization has been a promising way to obtain domain-invariant features.
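To make the role of instance normalization concrete, here is a minimal NumPy sketch (ours, not the paper's code): normalizing each sample's per-channel statistics makes two inputs that differ only by a channel-wise affine "domain" shift indistinguishable.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (N, C, H, W). Normalize each sample's each channel independently,
    # removing the per-instance channel statistics that carry domain/style info.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

x = np.random.default_rng(1).normal(size=(2, 3, 4, 4))
a = instance_norm(x)          # original "domain"
b = instance_norm(3 * x + 7)  # same content under a channel-wise domain shift
```

After normalization, `a` and `b` coincide: the affine shift that played the role of domain information has been stripped away.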
no code implementations • 24 Nov 2021 • Seunghan Yang, Debasmit Das, Simyung Chang, Sungrack Yun, Fatih Porikli
However, it is observed that image transformations already present in the dataset might be less effective in learning such self-supervised representations.
no code implementations • 12 Nov 2021 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang
Moreover, we introduce an efficient architecture, BC-ResNet-ASC, a modified version of the baseline architecture with a limited receptive field.
no code implementations • 29 Sep 2021 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Hyunsin Park, Jun-Tae Lee, Simyung Chang
When using two-dimensional convolutional neural networks (2D-CNNs) in image processing, domain information can be manipulated through channel statistics, and instance normalization has been a promising way to obtain domain-invariant features.
1 code implementation • 3 Dec 2020 • Seunghan Yang, Hyoungseob Park, Junyoung Byun, Changick Kim
To solve these problems, we introduce a novel federated learning scheme in which the server cooperates with local models to maintain consistent decision boundaries by interchanging class-wise centroids.
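A minimal sketch of the centroid-interchange idea (the function names and the plain averaging below are our illustrative assumptions, not the paper's exact scheme): each client summarizes its data as one centroid per class, and the server aggregates these into global class anchors that can be sent back to keep local decision boundaries consistent.

```python
import numpy as np

def local_centroids(features, labels, num_classes):
    # Client side: one mean feature vector (centroid) per class.
    cents = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

def aggregate_centroids(client_centroids):
    # Server side: average the clients' class-wise centroids into global anchors.
    return np.mean(client_centroids, axis=0)

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 7)          # 3 classes, 7 samples each
feats = rng.normal(size=(21, 5))
c1 = local_centroids(feats, labels, 3)
g = aggregate_centroids(np.stack([c1, c1 + 1.0]))  # two "clients"
```

Only centroids cross the network, not raw samples, which fits the privacy constraints of federated learning.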
no code implementations • 6 Oct 2020 • Dongki Jung, Seunghan Yang, Jaehoon Choi, Changick Kim
Style transfer is an image synthesis task that applies the style of one image to another while preserving the content.
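One common formulation of this statistic swap is adaptive instance normalization (AdaIN); the sketch below is illustrative and not necessarily the mechanism used in the paper.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    # content, style: (C, H, W) feature maps. Per channel: strip the content's
    # statistics, then re-apply the style's statistics.
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu

rng = np.random.default_rng(2)
content = rng.normal(size=(3, 8, 8))
style = 2.0 * rng.normal(size=(3, 8, 8)) + 5.0
out = adain(content, style)  # content layout, style statistics
```

The spatial structure of `content` is untouched; only its channel-wise mean and standard deviation are replaced by those of `style`.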
no code implementations • 7 Aug 2020 • Youngeun Kim, Sungeun Hong, Seunghan Yang, Sungil Kang, Yunho Jeon, Jiwon Kim
Our Associative Partial Domain Adaptation (APDA) utilizes intra-domain association to actively select out non-trivial anomaly samples in each source-private class that sample-level weighting cannot handle.
no code implementations • 16 May 2020 • Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim
Although existing partial domain adaptation methods effectively down-weigh outliers' importance, they neither consider the data structure of each domain nor directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions.
1 code implementation • 12 Oct 2019 • Seunghan Yang, Yoonhyung Kim, Youngeun Kim, Changick Kim
Most previous methods utilize the activation map corresponding to the highest activation source.