no code implementations • 26 Feb 2023 • Byeonggeun Kim, Jun-Tae Lee, Seunghan Yang, Simyung Chang
Efficient transfer learning repurposes a model pre-trained on a larger dataset for downstream tasks, with the aim of maximizing reuse of the pre-trained model.
no code implementations • 30 Nov 2022 • Minseop Park, Jaeseong You, Markus Nagel, Simyung Chang
In this setting, quantization-aware training is observed to overfit the model to the fine-tuning data.
no code implementations • 28 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Inseop Chung, Simyung Chang
We also verify our method on a standard benchmark, miniImageNet, and D-ProtoNets shows the state-of-the-art open-set detection rate in FSOSR.
no code implementations • 28 Jun 2022 • Seunghan Yang, Byeonggeun Kim, Inseop Chung, Simyung Chang
We design two personalized KWS tasks: (1) Target user Biased KWS (TB-KWS) and (2) Target user Only KWS (TO-KWS).
no code implementations • 28 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang
The goal of the task is to design an audio scene classification system for device-imbalanced datasets under model-complexity constraints.
no code implementations • 24 Jun 2022 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Hyunsin Park, JunTae Lee, Simyung Chang
When using two-dimensional convolutional neural networks (2D-CNNs) in image processing, domain information can be manipulated through channel statistics, and instance normalization has been a promising way to obtain domain-invariant features.
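To make the channel-statistics idea concrete, here is a minimal, hypothetical PyTorch sketch of instance normalization (not the paper's code): it removes the per-sample channel statistics that often carry domain- or device-specific information.

```python
import torch

def instance_normalize(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Minimal sketch: instance normalization removes per-sample channel
    statistics (mean/std over the spatial dims), which frequently encode
    domain- or device-specific information rather than content."""
    # x: (batch, channels, height, width); normalize over (H, W) per sample.
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True)
    return (x - mean) / (std + eps)
```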
no code implementations • 24 Nov 2021 • Seunghan Yang, Debasmit Das, Simyung Chang, Sungrack Yun, Fatih Porikli
However, it is observed that image transformations already present in the dataset might be less effective in learning such self-supervised representations.
no code implementations • 12 Nov 2021 • Byeonggeun Kim, Seunghan Yang, Jangho Kim, Simyung Chang
Moreover, we introduce an efficient architecture, BC-ResNet-ASC, a modified version of the baseline architecture with a limited receptive field.
no code implementations • 11 Nov 2021 • John Yang, Yash Bhalgat, Simyung Chang, Fatih Porikli, Nojun Kwak
While hand pose estimation is a critical component of most interactive extended reality and gesture recognition systems, contemporary approaches are not optimized for computational and memory efficiency.
no code implementations • 7 Oct 2021 • Simyung Chang, KiYoon Yoo, Jiho Jang, Nojun Kwak
Utilizing SEO for PFL, we also introduce self-evolutionary Pareto networks (SEPNet), enabling the unified model to approximate the entire Pareto front that maximizes the hypervolume.
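Since this entry hinges on hypervolume maximization, a small illustrative sketch of the two-objective hypervolume indicator may help; the function name, the maximization convention, and the reference point below are assumptions for illustration, not the paper's formulation.

```python
def hypervolume_2d(front, ref):
    """Illustrative sketch: hypervolume of a two-objective Pareto front under
    maximization, w.r.t. a reference point `ref` dominated by every point.
    Assumes `front` contains only mutually non-dominated (f1, f2) pairs."""
    pts = sorted(front, reverse=True)  # descending in the first objective
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        # Width of the slab owned by this point along the first objective.
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (x - next_x) * (y - ref[1])
    return hv

# Example: two non-dominated points measured against the origin.
assert hypervolume_2d([(3.0, 1.0), (2.0, 2.0)], ref=(0.0, 0.0)) == 5.0
```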
no code implementations • 25 Jun 2021 • Jangho Kim, Simyung Chang, Nojun Kwak
Unlike traditional pruning and KD, PQK reuses the unimportant weights discarded during pruning to build a teacher network that trains a better student network, without pre-training the teacher model.
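A hedged sketch of the pruning step this builds on: magnitude pruning yields a binary mask, so the masked network can serve as the student while the full network acts as the teacher. The function name and keep criterion are illustrative assumptions, not the paper's implementation.

```python
import torch

def magnitude_mask(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep the largest-magnitude entries of `weight`; everything else is
    'unimportant' and would be pruned away in a standard pipeline."""
    k = int(weight.numel() * keep_ratio)
    # Threshold is the (numel - k)-th smallest magnitude.
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k).values
    return (weight.abs() > threshold).float()

# Usage sketch: the student sees only the important weights, while the
# teacher keeps both important and unimportant ones (no pre-trained teacher).
w = torch.randn(256, 128)
mask = magnitude_mask(w, keep_ratio=0.5)
student_w = w * mask  # pruned network
teacher_w = w         # full network, including the pruned weights
```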
3 code implementations • 8 Jun 2021 • Byeonggeun Kim, Simyung Chang, Jinkyu Lee, Dooyong Sung
We present a broadcasted residual learning method to achieve high accuracy with small model size and computational load.
Ranked #2 on Keyword Spotting on Google Speech Commands
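As a rough illustration of broadcasted residual learning, the following hypothetical PyTorch block averages 2D audio features over frequency, applies cheap temporal convolutions in 1D, and broadcasts the result back over frequency as a residual. The layer choices are assumptions, not the exact BC-ResNet block.

```python
import torch
import torch.nn as nn

class BroadcastedResidualBlock(nn.Module):
    """Sketch of broadcasted residual learning: 2D frequency-wise processing,
    then 1D temporal processing on frequency-averaged features, broadcast
    back over the frequency axis and combined as residuals."""
    def __init__(self, channels: int):
        super().__init__()
        # Frequency-wise depthwise convolution (operates along freq).
        self.freq_dw = nn.Conv2d(channels, channels, kernel_size=(3, 1),
                                 padding=(1, 0), groups=channels)
        # Temporal depthwise convolution on frequency-averaged 1D features.
        self.temp_dw = nn.Conv2d(channels, channels, kernel_size=(1, 3),
                                 padding=(0, 1), groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time)
        f2 = self.freq_dw(x)
        f1 = f2.mean(dim=2, keepdim=True)      # collapse freq -> (B, C, 1, T)
        f1 = self.pointwise(self.temp_dw(f1))
        return x + f2 + f1                     # f1 broadcasts over frequency
```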
no code implementations • 25 Mar 2021 • Jangho Kim, Simyung Chang, Sungrack Yun, Nojun Kwak
We verify the usefulness of PPP on a couple of tasks in computer vision and keyword spotting.
no code implementations • 25 Mar 2021 • Simyung Chang, Hyoungwoo Park, Janghoon Cho, Hyunsin Park, Sungrack Yun, Kyuwoong Hwang
In this work, we introduce SubSpectral Normalization (SSN), which splits the input frequency dimension into several groups (sub-bands) and performs a different normalization for each group.
Ranked #1 on Acoustic Scene Classification on TAU Urban Acoustic Scenes 2019
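SSN lends itself to a compact sketch: fold the sub-bands into the channel dimension so a single batch norm keeps separate statistics and affine parameters per (channel, sub-band) pair. This is a minimal illustration consistent with the description above, not the authors' released code.

```python
import torch
import torch.nn as nn

class SubSpectralNorm(nn.Module):
    """Minimal sketch of SubSpectral Normalization (SSN): split the frequency
    axis into `num_bands` sub-bands and normalize each with its own
    statistics by folding sub-bands into the channel dimension."""
    def __init__(self, channels: int, num_bands: int):
        super().__init__()
        self.num_bands = num_bands
        self.bn = nn.BatchNorm2d(channels * num_bands)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time); freq must divide by num_bands.
        b, c, f, t = x.shape
        x = x.reshape(b, c * self.num_bands, f // self.num_bands, t)
        x = self.bn(x)
        return x.reshape(b, c, f, t)
```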
no code implementations • 15 Jan 2019 • Sang-ho Lee, Simyung Chang, Nojun Kwak
There are methods that reduce this cost by compressing networks or by dynamically varying their computational path according to the input image.
no code implementations • NeurIPS 2018 • Simyung Chang, John Yang, Jaeseok Choi, Nojun Kwak
We introduce Genetic-Gated Networks (G2Ns), simple neural networks that incorporate a gate vector composed of binary genetic genes into the hidden layer(s) of a network.
1 code implementation • ICCV 2019 • Simyung Chang, SeongUk Park, John Yang, Nojun Kwak
Recent advances in image-to-image translation have led to methods that can generate images of multiple domains through a single network.
no code implementations • 11 Nov 2018 • John Yang, Gyujeong Lee, Minsung Hyun, Simyung Chang, Nojun Kwak
We tackle the black-box issue of deep neural networks in the setting of reinforcement learning (RL), where neural agents learn to maximize reward gains in an uncontrollable way.
no code implementations • ECCV 2018 • Simyung Chang, John Yang, SeongUk Park, Nojun Kwak
In this paper, we propose the Broadcasting Convolutional Network (BCN) that extracts key object features from the global field of an entire input image and recognizes their relationship with local features.
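A minimal, hypothetical sketch of the broadcasting idea described above: pool a global descriptor from the entire feature map, broadcast it to every spatial location, and fuse it with the local features. The layer sizes and the concatenation-based fusion are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BroadcastGlobalFeatures(nn.Module):
    """Sketch: extract a global descriptor from the whole feature map and
    broadcast it to all spatial positions so local features can be related
    to global context."""
    def __init__(self, channels: int):
        super().__init__()
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, local_feat: torch.Tensor) -> torch.Tensor:
        # local_feat: (batch, channels, H, W)
        b, c, h, w = local_feat.shape
        global_feat = local_feat.mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
        # Broadcast the global descriptor to every location and fuse.
        fused = torch.cat([local_feat, global_feat.expand(b, c, h, w)], dim=1)
        return self.project(fused)
```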
no code implementations • 5 Sep 2017 • Simyung Chang, Youngjoon Yoo, Jae-Seok Choi, Nojun Kwak
Our method learns hundreds to thousands of times faster than conventional methods by learning only a handful of core cluster information, which shows that deep RL agents can learn effectively through knowledge shared by other agents.