Search Results for author: Eunhyeok Park

Found 7 papers, 4 papers with code

MEANTIME: Mixture of Attention Mechanisms with Multi-temporal Embeddings for Sequential Recommendation

1 code implementation • 19 Aug 2020 • Sung Min Cho, Eunhyeok Park, Sungjoo Yoo

Following the convention from language processing, most of these models rely on a simple positional embedding to exploit the sequential nature of the user's history.
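As a minimal sketch of the positional-embedding idea mentioned above (made-up shapes, not the MEANTIME architecture itself):

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8  # hypothetical history length and embedding size

# Embeddings of the items in a user's interaction history.
item_emb = rng.normal(size=(seq_len, d_model))

# One (normally learnable) vector per position; adding it to the item
# embedding is the "simple positional embedding" that encodes order.
pos_emb = rng.normal(size=(seq_len, d_model))

model_input = item_emb + pos_emb  # fed to the attention layers
```

Because the sum differs per position, the otherwise order-agnostic attention mechanism can distinguish where in the history each item occurred.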

PROFIT: A Novel Training Method for sub-4-bit MobileNet Models

1 code implementation • ECCV 2020 • Eunhyeok Park, Sungjoo Yoo

In the ablation study on 3-bit quantization of MobileNet-v3, our proposed method outperforms the state-of-the-art method by a large margin of 12.86% top-1 accuracy.


Tag2Pix: Line Art Colorization Using Text Tag With SECat and Changing Loss

2 code implementations • ICCV 2019 • Hyunsu Kim, Ho Young Jhoo, Eunhyeok Park, Sungjoo Yoo

We propose a GAN approach to line art colorization, called Tag2Pix, which takes as input a grayscale line art and color tag information and produces a high-quality colored image.

Line Art Colorization

Precision Highway for Ultra Low-Precision Quantization

no code implementations • ICLR 2019 • Eunhyeok Park, Dongyoung Kim, Sungjoo Yoo, Peter Vajda

We also report that the proposed method significantly outperforms the existing method in the 2-bit quantization of an LSTM for language modeling.

Language Modelling • Quantization

Value-aware Quantization for Training and Inference of Neural Networks

no code implementations • ECCV 2018 • Eunhyeok Park, Sungjoo Yoo, Peter Vajda

We propose a novel value-aware quantization which applies aggressively reduced precision to the majority of data while separately handling a small amount of large data in high precision, which reduces total quantization errors under very low precision.
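A rough sketch of the value-aware idea described above: the bulk of the data goes through a uniform low-precision grid, while a small fraction of large-magnitude values is kept at full precision. The function name, threshold rule, and parameters are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def value_aware_quantize(x, bits=3, outlier_frac=0.01):
    """Quantize most values to low precision; keep the largest-magnitude
    fraction (the "large data") at full precision."""
    x = np.asarray(x, dtype=np.float64)
    flat = np.abs(x).ravel()
    k = max(1, int(len(flat) * outlier_frac))
    threshold = np.partition(flat, -k)[-k]     # magnitude cutoff for outliers
    outlier_mask = np.abs(x) >= threshold

    # Uniform low-precision quantization for the non-outlier majority.
    levels = 2 ** bits - 1
    scale = threshold if threshold > 0 else 1.0
    half = levels // 2
    q = np.round(np.clip(x / scale, -1, 1) * half) / half * scale

    # Outliers pass through at full precision.
    return np.where(outlier_mask, x, q)
```

The point of the split is that quantization error is bounded by the (small) outlier threshold rather than by the full dynamic range, so the majority of values see far less error at the same bit width.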


Weighted-Entropy-Based Quantization for Deep Neural Networks

no code implementations • CVPR 2017 • Eunhyeok Park, Junwhan Ahn, Sungjoo Yoo

Quantization is considered one of the most effective methods for reducing the inference cost of neural network models deployed to mobile and embedded systems, which have tight resource constraints.
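As a generic baseline illustrating what quantization does to weights (plain uniform quantization, not the weighted-entropy scheme this paper proposes):

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Map float weights onto 2**bits evenly spaced levels and back.
    A generic baseline for comparison, not the paper's method."""
    w = np.asarray(w, dtype=np.float64)
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard against constant input
    codes = np.round((w - lo) / scale)          # integer codes 0 .. 2**bits-1
    return codes * scale + lo                   # dequantized approximation

w = np.linspace(-1.0, 1.0, 7)
q2 = uniform_quantize(w, bits=2)                # only 4 representable values
```

Storing the small integer codes instead of 32-bit floats is what cuts memory and compute cost at inference time.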

Image Classification • Language Modelling • +2

Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications

8 code implementations • 20 Nov 2015 • Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin

Although the latest high-end smartphones have powerful CPUs and GPUs, running deep convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices remains challenging.
