no code implementations • 31 Jan 2024 • Geonung Kim, Beomsu Kim, Eunhyeok Park, Sunghyun Cho
As recent advancements in large-scale Text-to-Image (T2I) diffusion models have yielded remarkably high-quality image generation, diverse downstream Image-to-Image (I2I) applications have emerged.
no code implementations • 6 Dec 2023 • Junhyuk So, Jungwon Lee, Eunhyeok Park
The substantial computational costs of diffusion models, especially due to the repeated denoising steps necessary for high-quality image generation, present a major obstacle to their widespread adoption.
2 code implementations • 4 Jun 2023 • Changhun Lee, Jungyu Jin, Taesu Kim, HyungJun Kim, Eunhyeok Park
Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment.
no code implementations • NeurIPS 2023 • Junhyuk So, Jungwon Lee, Daehyun Ahn, HyungJun Kim, Eunhyeok Park
The diffusion model has gained popularity in vision applications due to its remarkable generative performance and versatility.
1 code implementation • ECCV 2022 • Han-Byul Kim, Eunhyeok Park, Sungjoo Yoo
In this paper, we propose Branch-wise Activation-clipping Search Quantization (BASQ), which is a novel quantization method for low-bit activation.
no code implementations • 31 Jul 2022 • Sein Park, Yeongsang Jang, Eunhyeok Park
Robust quantization improves the tolerance of networks for various implementations, allowing reliable output in different bit-widths or fragmented low-precision arithmetic.
1 code implementation • CVPR 2023 • JunCheol Shin, Junhyuk So, Sein Park, Seungyeop Kang, Sungjoo Yoo, Eunhyeok Park
Recently, pseudo-quantization training has been proposed as an alternative approach that updates the learnable parameters using pseudo-quantization noise instead of the straight-through estimator (STE).
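The contrast between STE-based training and pseudo-quantization noise can be illustrated with a minimal NumPy sketch. This is a generic illustration of the idea, not this paper's exact method; the function names and the uniform-noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_ste(w, step):
    # Hard rounding to the quantization grid. The rounding is
    # non-differentiable, so backpropagation would need a
    # straight-through estimator (STE) to pass gradients.
    return np.round(w / step) * step

def pseudo_quantize(w, step):
    # Pseudo-quantization noise (illustrative): replace rounding with
    # additive uniform noise whose width matches one quantization step,
    # keeping the forward pass differentiable w.r.t. w during training.
    noise = rng.uniform(-0.5, 0.5, size=w.shape) * step
    return w + noise
```

During training, the noisy path mimics the statistics of rounding error while allowing ordinary gradient flow; at inference, the hard-rounded weights are used.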
no code implementations • 23 May 2022 • Ilchae Jung, Minji Kim, Eunhyeok Park, Bohyung Han
This paper presents a novel hybrid representation learning framework for streaming data, where an image frame in a video is modeled by an ensemble of two distinct deep neural networks; one is a low-bit quantized network and the other is a lightweight full-precision network.
no code implementations • ICCV 2023 • Changhun Lee, HyungJun Kim, Eunhyeok Park, Jae-Joon Kim
Binary Neural Networks (BNNs) have emerged as a promising solution for reducing the memory footprint and compute costs of deep neural networks, but they suffer from quality degradation due to the limited expressiveness of activations and weights constrained to binary values.
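The binary constraint mentioned above can be sketched as follows: values are mapped to {-1, +1} via the sign function, often with a per-tensor scaling factor to reduce quantization error. This is a sketch of the common BNN formulation, not this paper's specific method.

```python
import numpy as np

def binarize(w):
    # Per-tensor scaling factor: mean of absolute values (a common
    # choice in BNN formulations; illustrative, not this paper's exact scheme).
    alpha = np.abs(w).mean()
    # Constrain every element to a binary value in {-1, +1}.
    b = np.where(w >= 0, 1.0, -1.0)
    return alpha * b
```

With only two possible values per element (up to scaling), multiplications reduce to sign flips, which is the source of both the efficiency gain and the loss of representational freedom.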
no code implementations • 29 Dec 2021 • Sungmin Cho, Hongjun Lim, Keunchan Park, Sungjoo Yoo, Eunhyeok Park
Personalized news recommendation aims to provide attractive articles for readers by predicting their likelihood of clicking on a certain article.
2 code implementations • ICCV 2021 • Hyunyoung Jung, Eunhyeok Park, Sungjoo Yoo
Self-supervised monocular depth estimation has been widely studied, owing to its practical importance and recent promising improvements.
1 code implementation • 19 Aug 2020 • Sung Min Cho, Eunhyeok Park, Sungjoo Yoo
Following the custom from language processing, most of these models rely on a simple positional embedding to exploit the sequential nature of the user's history.
1 code implementation • ECCV 2020 • Eunhyeok Park, Sungjoo Yoo
In the ablation study of the 3-bit quantization of MobileNet-v3, our proposed method outperforms the state-of-the-art method by a large margin of 12.86% in top-1 accuracy.
2 code implementations • ICCV 2019 • Hyunsu Kim, Ho Young Jhoo, Eunhyeok Park, Sungjoo Yoo
A GAN-based approach to line art colorization, called Tag2Pix, is proposed, which takes as input a grayscale line art and color tag information and produces a high-quality colored image.
no code implementations • ICLR 2019 • Eunhyeok Park, Dongyoung Kim, Sungjoo Yoo, Peter Vajda
We also report that the proposed method significantly outperforms the existing method in the 2-bit quantization of an LSTM for language modeling.
no code implementations • ECCV 2018 • Eunhyeok Park, Sungjoo Yoo, Peter Vajda
We propose a novel value-aware quantization which applies aggressively reduced precision to the majority of data while separately handling a small amount of large data in high precision, which reduces total quantization errors under very low precision.
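The idea of value-aware quantization described above, keeping a small fraction of large-magnitude values in high precision while aggressively quantizing the rest, can be sketched in NumPy. This is a minimal illustration of the general idea under assumed parameters (`outlier_frac`, uniform symmetric quantization), not the paper's exact algorithm.

```python
import numpy as np

def value_aware_quantize(x, outlier_frac=0.01, bits=3):
    # Keep the largest-magnitude fraction of values in full precision;
    # quantize the remaining (small) majority to a low bit-width.
    flat = x.flatten()
    k = max(1, int(len(flat) * outlier_frac))
    # Indices of the k largest-magnitude values (kept in high precision).
    outlier_idx = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(len(flat), dtype=bool)
    mask[outlier_idx] = True

    # Uniform symmetric low-bit quantization for the small values.
    small = flat[~mask]
    m = np.abs(small).max() if small.size else 0.0
    scale = m / (2 ** (bits - 1) - 1) if m > 0 else 1.0
    quantized = np.round(small / scale) * scale

    out = flat.copy()
    out[~mask] = quantized
    return out.reshape(x.shape)
```

Because large-magnitude values dominate the quantization error under a shared scale, isolating them lets the low-bit grid cover a much narrower range, reducing total error at very low precision.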
no code implementations • CVPR 2017 • Eunhyeok Park, Junwhan Ahn, Sungjoo Yoo
Quantization is considered one of the most effective methods to optimize the inference cost of neural network models for their deployment to mobile and embedded systems, which have tight resource constraints.
7 code implementations • 20 Nov 2015 • Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin
Although the latest high-end smartphones have powerful CPUs and GPUs, running deeper convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on mobile devices is challenging.