Search Results for author: Jihun Oh

Found 3 papers, 1 paper with code

A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models

no code implementations • 30 Nov 2020 • Jeong-Hoe Ku, Jihun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee

This paper aims to provide a selective survey of the knowledge distillation (KD) framework so that researchers and practitioners can take advantage of it for developing new, optimized models in the deep neural network field.

Knowledge Distillation · Model Compression · +1
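For context, a minimal sketch of the classic soft-target distillation loss that KD frameworks such as the ones surveyed here typically build on; the temperature T and weighting alpha are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic KD loss: blend a soft-target KL term with hard-label cross-entropy.

    T (temperature) and alpha (soft/hard weighting) are illustrative choices,
    not values from the surveyed paper.
    """
    # Soft targets: teacher and student class distributions at temperature T
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd_term = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)

    # Hard-label term: ordinary cross-entropy on the ground-truth labels
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```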

Weight Equalizing Shift Scaler-Coupled Post-training Quantization

no code implementations • 13 Aug 2020 • Jihun Oh, SangJeong Lee, Meejeong Park, Pooni Walagaurav, Kiseok Kwon

As a result, our proposed method achieved a top-1 accuracy of 69.78%~70.96% on MobileNets and showed robust performance across varying network models and tasks, which is competitive with channel-wise quantization results.

Quantization
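For reference, a textbook sketch of symmetric per-tensor (layer-wise) post-training weight quantization, the baseline whose per-channel range imbalance motivates equalization-style methods; this is not the paper's weight equalizing shift scaler, and the bit width and tensor shape are illustrative assumptions.

```python
import numpy as np

def quantize_per_tensor(weights, num_bits=8):
    """Generic symmetric per-tensor (layer-wise) weight quantization.

    A textbook PTQ baseline for illustration only; NOT the weight equalizing
    shift scaler proposed in the paper.
    """
    qmax = 2 ** (num_bits - 1) - 1                # e.g. 127 for int8
    scale = np.abs(weights).max() / qmax          # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    """Map integer weights back to float for accuracy evaluation."""
    return q.astype(np.float32) * scale

# A single per-tensor scale is dominated by the widest channel, which is the
# range-imbalance problem that per-channel or equalization methods address.
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
qw, s = quantize_per_tensor(w)
print("max reconstruction error:", np.abs(dequantize(qw, s) - w).max())
```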

Advancing GraphSAGE with A Data-Driven Node Sampling

1 code implementation • 29 Apr 2019 • Jihun Oh, Kyunghyun Cho, Joan Bruna

As an efficient and scalable graph neural network, GraphSAGE has enabled an inductive capability for inferring unseen nodes or graphs by aggregating subsampled local neighborhoods and by learning in a mini-batch gradient descent fashion.

General Classification · Node Classification
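For context, a minimal sketch of the uniform neighbor subsampling and mean aggregation used by vanilla GraphSAGE, which is the step the paper's data-driven sampler would replace; the toy graph, feature size, and sample budget k are illustrative assumptions.

```python
import random
import numpy as np

def sample_neighbors(adj, node, k):
    """Uniformly subsample up to k neighbors of a node (vanilla GraphSAGE).

    The paper replaces this uniform sampler with a learned, data-driven one;
    this sketch only shows the baseline behaviour.
    """
    neigh = adj.get(node, [])
    if len(neigh) <= k:
        return neigh
    return random.sample(neigh, k)

def mean_aggregate(features, adj, node, k=5):
    """One GraphSAGE-style layer: average sampled neighbor features with the node's own."""
    sampled = sample_neighbors(adj, node, k)
    stacked = np.stack([features[node]] + [features[n] for n in sampled])
    return stacked.mean(axis=0)

# Toy graph: node -> list of neighbors, with random 8-dimensional features.
adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2]}
features = {n: np.random.randn(8).astype(np.float32) for n in adj}
print(mean_aggregate(features, adj, 0, k=2))
```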
