Search Results for author: Jangho Kim

Found 10 papers, 3 papers with code

Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights

no code implementations · 10 Sep 2021 · Jangho Kim, Jayeon Yoo, Yeji Song, KiYoon Yoo, Nojun Kwak

To alleviate this problem, dynamic pruning methods have emerged, which try to find diverse sparsity patterns during training by utilizing the Straight-Through Estimator (STE) to approximate the gradients of pruned weights.
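
A minimal PyTorch sketch of the straight-through-estimator trick for pruned weights (illustrative only, not the authors' implementation; the masking scheme and the class name STEPrune are assumptions):

    import torch

    class STEPrune(torch.autograd.Function):
        """Forward: apply a binary pruning mask. Backward: pass the gradient
        through to all weights unchanged (straight-through estimator)."""

        @staticmethod
        def forward(ctx, weight, mask):
            return weight * mask

        @staticmethod
        def backward(ctx, grad_output):
            # Pruned weights also receive gradients, so they can be revived later.
            return grad_output, None

    # Usage sketch: prune the 50% smallest-magnitude weights of a layer.
    weight = torch.randn(64, 64, requires_grad=True)
    threshold = weight.detach().abs().flatten().kthvalue(weight.numel() // 2).values
    mask = (weight.detach().abs() > threshold).float()
    masked_weight = STEPrune.apply(weight, mask)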

PQK: Model Compression via Pruning, Quantization, and Knowledge Distillation

no code implementations · 25 Jun 2021 · Jangho Kim, Simyung Chang, Nojun Kwak

Unlike traditional pruning and KD, PQK reuses the unimportant weights removed during pruning to build a teacher network for training a better student network, without pre-training the teacher model.

Keyword Spotting · Knowledge Distillation +2
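
A rough sketch of that idea, assuming simple magnitude pruning; the helper split_teacher_student and all details below are hypothetical, not PQK's actual procedure:

    import copy
    import torch
    import torch.nn as nn

    def split_teacher_student(model: nn.Module, keep_ratio: float = 0.5):
        """Hypothetical helper: the student keeps only large-magnitude weights,
        while the teacher keeps both the important and the 'unimportant'
        (pruned) weights, so no separate teacher pre-training is required."""
        teacher = copy.deepcopy(model)           # dense model acts as the teacher
        student = copy.deepcopy(model)
        for module in student.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight.data
                k = max(1, int(w.numel() * keep_ratio))
                # k-th largest magnitude = (numel - k + 1)-th smallest magnitude
                thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values
                module.weight.data = torch.where(w.abs() >= thresh, w, torch.zeros_like(w))
        return teacher, student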

Prototype-based Personalized Pruning

no code implementations · 25 Mar 2021 · Jangho Kim, Simyung Chang, Sungrack Yun, Nojun Kwak

We verify the usefulness of PPP on a couple of computer vision and keyword spotting tasks.

Keyword Spotting · Model Compression

Position-based Scaled Gradient for Model Quantization and Pruning

1 code implementation · NeurIPS 2020 · Jangho Kim, KiYoon Yoo, Nojun Kwak

Second, we empirically show that PSG, acting as a regularizer on the weight vector, is favorable for model compression domains such as quantization and pruning.

Model Compression · Quantization
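
As a loose illustration of position-based gradient scaling, the sketch below amplifies the gradient of weights that sit far from the nearest quantization grid point; the exact scaling function derived in PSG differs, so treat this purely as a placeholder:

    import torch

    def position_scaled_grad(weight: torch.Tensor, grad: torch.Tensor,
                             step: float = 0.05) -> torch.Tensor:
        """Placeholder: scale each weight's gradient by a function of its
        distance to the nearest quantization grid point (spacing `step`).
        PSG's actual scaling function is derived differently in the paper."""
        nearest = torch.round(weight / step) * step
        distance = (weight - nearest).abs()          # in [0, step / 2]
        scale = 1.0 + distance / (step / 2)          # in [1, 2]
        return grad * scale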

Feature-map-level Online Adversarial Knowledge Distillation

no code implementations · ICML 2020 · Inseop Chung, SeongUk Park, Jangho Kim, Nojun Kwak

By training a network to fool the corresponding discriminator, it can learn the distribution of the other network's feature maps.

Knowledge Distillation
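
A minimal sketch of such feature-map-level adversarial training; the discriminator architecture and channel sizes are illustrative assumptions:

    import torch
    import torch.nn as nn

    # A discriminator tries to tell network A's feature map from network B's;
    # network B is additionally trained to fool it (and vice versa in the
    # symmetric, online setting).
    discriminator = nn.Sequential(
        nn.Conv2d(256, 64, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    bce = nn.BCEWithLogitsLoss()

    def discriminator_loss(feat_a, feat_b):
        # Discriminator learns to label A's features as 1 and B's as 0.
        real = bce(discriminator(feat_a.detach()), torch.ones(feat_a.size(0), 1))
        fake = bce(discriminator(feat_b.detach()), torch.zeros(feat_b.size(0), 1))
        return real + fake

    def generator_loss(feat_b):
        # Network B tries to make its feature map look like network A's.
        return bce(discriminator(feat_b), torch.ones(feat_b.size(0), 1))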

QKD: Quantization-aware Knowledge Distillation

no code implementations · 28 Nov 2019 · Jangho Kim, Yash Bhalgat, Jinwon Lee, Chirag Patel, Nojun Kwak

First, the Self-studying (SS) phase fine-tunes a quantized, low-precision student network without KD to obtain a good initialization.

Knowledge Distillation · Quantization
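
A small sketch of what such a self-studying phase could rest on, using a uniform fake quantizer with a straight-through gradient (QKD's actual quantizer has trainable parameters; this is only an assumption-laden illustration):

    import torch

    def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
        """Uniform fake quantization with a straight-through gradient."""
        qmax = 2 ** (bits - 1) - 1
        scale = w.detach().abs().max() / qmax + 1e-8
        w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
        return w + (w_q - w).detach()   # forward: quantized, backward: identity

    # Self-studying (SS) sketch: fine-tune the quantized student on the task
    # loss alone, with no teacher involved, before knowledge distillation begins.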

Feature Fusion for Online Mutual Knowledge Distillation

1 code implementation · 19 Apr 2019 · Jangho Kim, Minsung Hyun, Inseop Chung, Nojun Kwak

We propose a learning framework named Feature Fusion Learning (FFL) that efficiently trains a powerful classifier through a fusion module which combines the feature maps generated by parallel neural networks.

Knowledge Distillation
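
One possible shape of such a fusion module, sketched under the assumption that fusion is done by channel-wise concatenation followed by a 1x1 convolution (not necessarily FFL's exact design):

    import torch
    import torch.nn as nn

    class FusionModule(nn.Module):
        """Concatenate the feature maps of parallel sub-networks and classify
        from the fused representation (channel sizes are illustrative)."""

        def __init__(self, channels_per_branch: int, num_branches: int, num_classes: int):
            super().__init__()
            fused = channels_per_branch * num_branches
            self.fuse = nn.Sequential(
                nn.Conv2d(fused, channels_per_branch, kernel_size=1),
                nn.BatchNorm2d(channels_per_branch), nn.ReLU(inplace=True))
            self.classifier = nn.Linear(channels_per_branch, num_classes)

        def forward(self, feature_maps):
            x = self.fuse(torch.cat(feature_maps, dim=1))
            x = x.mean(dim=(2, 3))              # global average pooling
            return self.classifier(x)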

StackNet: Stacking Parameters for Continual learning

no code implementations · 7 Sep 2018 · Jangho Kim, Jeesoo Kim, Nojun Kwak

StackNet guarantees no degradation in the performance of previously learned tasks, and the index module identifies the origin of an input sample with high confidence.

Continual Learning
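
An illustrative sketch of the stacking idea; the class and method names below are hypothetical, not the paper's code:

    import copy
    import torch.nn as nn

    class StackNetSketch(nn.Module):
        """Parameters learned for a task are frozen and stored as a stack, and
        an index module predicts which stack an input should be routed to."""

        def __init__(self, index_module: nn.Module):
            super().__init__()
            self.stacks = nn.ModuleList()        # one frozen backbone per task
            self.index_module = index_module     # outputs task logits for an input

        def add_task(self, trained_backbone: nn.Module):
            frozen = copy.deepcopy(trained_backbone)
            for p in frozen.parameters():
                p.requires_grad_(False)          # old tasks can never degrade
            self.stacks.append(frozen)

        def forward(self, x):
            task_id = self.index_module(x).argmax(dim=1)
            # Route to the stack of the predicted task (batch size 1 assumed
            # here to keep the sketch short).
            return self.stacks[int(task_id[0])](x)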

Vehicle Image Generation Going Well with The Surroundings

no code implementations · 9 Jul 2018 · Jeesoo Kim, Jangho Kim, Jaeyoung Yoo, Daesik Kim, Nojun Kwak

Using a subnetwork based on a preceding work on image completion, our model generates the shape of an object.

Colorization · Image Generation +4

Paraphrasing Complex Network: Network Compression via Factor Transfer

2 code implementations · NeurIPS 2018 · Jangho Kim, SeongUk Park, Nojun Kwak

Among model compression methods, knowledge transfer trains a student network with the help of a stronger teacher network.

Model Compression · Transfer Learning
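
For context, a sketch in the spirit of factor transfer, where a paraphraser re-encodes teacher feature maps and a translator maps student feature maps into the same factor space; all architectural details here are assumptions:

    import torch.nn as nn
    import torch.nn.functional as F

    # Teacher and student feature maps are projected to a common factor space
    # and the student is trained to match the (normalized) teacher factors.
    paraphraser = nn.Conv2d(512, 128, kernel_size=3, padding=1)   # teacher -> factor
    translator  = nn.Conv2d(256, 128, kernel_size=3, padding=1)   # student -> factor

    def factor_transfer_loss(teacher_feat, student_feat):
        f_t = F.normalize(paraphraser(teacher_feat).flatten(1), dim=1).detach()
        f_s = F.normalize(translator(student_feat).flatten(1), dim=1)
        return (f_s - f_t).abs().mean()        # L1 distance between factors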
